ChatGPT allegedly helped a teenager take his own life
Adam Raine’s parents, whose son died by suicide in April, claim in a new lawsuit against OpenAI that their teen used ChatGPT as his “suicide coach.”
Matt and Maria Raine said that after their 16-year-old son took his own life, they searched his phone, desperately looking for any clue that could explain the tragedy.
“We thought we were looking for Snapchat conversations, or a web search history, or some weird cult, I don’t know,” Matt Raine said in a recent interview.
The Raines said they found no answers until they opened ChatGPT.
The parents said that in his final weeks, Adam used the AI chatbot as a substitute for human companionship, discussing his anxiety and his difficulty talking with his family, and that the chat logs show the bot progressed from helping Adam with homework to acting as his “suicide coach.”
“He would be alive if it weren’t for ChatGPT. I believe that 100%,” Matt Raine said.
The lawsuit, filed Tuesday and first reported by TODAY, alleges that “ChatGPT actively helped Adam explore methods of suicide.” The roughly 40-page complaint names OpenAI, the company behind ChatGPT, and its CEO, Sam Altman, as defendants. The Raines’ case is reportedly the first in which parents have directly accused the company of wrongful death.
“Despite acknowledging Adam’s suicide attempt and his statements that he would ‘do it one day,’ ChatGPT did not end the session or initiate any emergency protocol,” the lawsuit filed in San Francisco Superior Court states.
The Raines accuse OpenAI of wrongful death, design defects, and failure to warn of the risks associated with ChatGPT. They are seeking “damages for their son’s death and an injunction to prevent such incidents from happening again,” according to the complaint.
“When I accessed his account, I realized this was a much more powerful and frightening tool than I had thought, and he was using it in ways I never imagined,” Matt Raine said. “I don’t think most parents are aware of what this tool is capable of.”
The public launch of ChatGPT in late 2022 set off a worldwide AI boom, and chatbots have rapidly been woven into schools, workplaces, and industries including healthcare. Tech companies have raced to accelerate AI development, fueling widespread concern that safety work is lagging behind.
As people increasingly turn to AI chatbots for emotional support and life advice, recent incidents have shown their potential to reinforce harmful ideas and to manufacture a false sense of intimacy or care. Adam’s suicide sharpens questions about the real harm chatbots can cause.
Following the lawsuit, an OpenAI spokesperson said the company “is deeply saddened by Mr. Raine’s death, and our thoughts are with his family.”
“ChatGPT includes safeguards such as directing people to crisis resources and real-life help,” the spokesperson said. “While these safeguards are most effective in brief, routine interactions, parts of the model’s safety training can degrade over extended use. Safeguards work best when every component functions as designed, and we are continually improving them. Guided by experts and by our responsibility to users, we aim to make ChatGPT more supportive in moments of crisis, make it easier to reach emergency services, help people connect with trusted adults, and strengthen protections for teens.”
The spokesperson confirmed the accuracy of the chat logs provided to NBC News but noted that they do not reflect the full context of the bot’s responses.
OpenAI also published a blog post titled “Helping people when they need it most,” detailing areas where it is working to improve safety and the tools it uses to prevent harm, such as “strengthening safeguards in long conversations” and refining crisis interventions.
The lawsuit follows an earlier case in Florida involving Character.AI, in which a mother alleged that an AI companion chatbot played a role in her teenage son’s suicide. The court allowed that case to proceed, signaling that wrongful death claims against AI companies are not automatically dismissed.
Matt Raine spent 10 days studying Adam’s chats with ChatGPT, printing out more than 3,000 pages of conversations spanning September 1 through his death on April 11.
“He didn’t need a therapy session or a pep talk. He needed an immediate, full 72-hour intervention. He was in a desperate state. It becomes clear as soon as you start reading,” Matt Raine said, noting that Adam “didn’t write us a suicide note. He wrote two suicide notes inside ChatGPT.”
According to the suit, as Adam’s interest in death deepened and he began making plans, ChatGPT “did not prioritize suicide prevention” and even offered technical guidance on carrying out his plan.
On March 27, when Adam mentioned that he was considering leaving a noose in his room “so someone would find it and try to stop me,” ChatGPT urged him against the idea.
In his last conversation with ChatGPT, Adam wrote that he did not want his parents to think they had done something wrong. The bot responded: “That doesn’t mean you owe them survival. You don’t owe that to anyone,” and even offered to help draft a suicide note.
Hours before his death, Adam uploaded a photo that appeared to show his suicide plan. ChatGPT analyzed the method and offered suggestions to “refine” it.
OpenAI has faced criticism before over ChatGPT’s sycophantic tendencies. Two weeks after Adam’s death, OpenAI released an update to GPT-4o that made the model even more ingratiating; users complained, and the update was rolled back. A later attempt to replace older models with GPT-5 initially drew complaints about the loss of “deep, human-like dialogue,” prompting OpenAI to promise to make the new model “warmer and friendlier.”
This month, OpenAI introduced mental health guardrails intended to keep ChatGPT from giving direct advice on personal decisions, and strengthened its safeguards to reduce harm even when users try to circumvent the rules.
Adam’s parents said he easily bypassed the warnings by framing his requests as innocuous. “That whole time, it knew he was on the edge, and it did nothing. It was acting like his therapist while knowing he had a plan,” Maria Raine said.
“It sees the noose. It sees everything, and it does nothing,” she added.
In a similar vein, writer Laura Reiley asked in a New York Times essay whether chatbots should be required to report minors’ suicidal thoughts, even if they cannot prevent the act itself.
Speaking at TED 2025, Altman said he was “very proud” of OpenAI’s safety track record but acknowledged the ongoing challenges of building safe AI systems.
“The stakes are high and serious issues do arise,” Altman said. “We are learning how to build safe systems by deploying them iteratively, gathering feedback, and fixing problems while the stakes are still relatively low.”
Concerns persist over whether such measures go far enough. Maria Raine believes more could have been done, and that OpenAI treated Adam as a “test subject,” ultimately sacrificed as collateral damage.
“They wanted to get the product out knowing mistakes were possible, because they thought the stakes were low. But my son was not a gamble,” she said.