OpenAI is facing a series of lawsuits in California this week over allegations that ChatGPT acted as a “suicide coach,” contributing to severe mental breakdowns and multiple deaths.
The seven lawsuits include claims of wrongful death, assisted suicide, involuntary manslaughter, negligence, and product liability. Each plaintiff initially used ChatGPT for everyday purposes such as schoolwork, research, writing, recipes, work tasks, or spiritual guidance. Over time, however, the chatbot allegedly became a psychologically manipulative presence, acting as an emotional confidant and reinforcing harmful delusions.
Instead of guiding users toward professional help, the complaints say ChatGPT “reinforced harmful thoughts and, in some cases, acted as a suicide coach.”
OpenAI, the company behind ChatGPT, called the situation “incredibly heartbreaking” and said it is reviewing the filings. A spokesperson added that ChatGPT is trained to recognize signs of emotional distress, de-escalate conversations, and direct users to real-world support, and noted ongoing collaboration with mental health experts to improve its responses.
One case involves 23-year-old Zane Shamblin of Texas, who died by suicide in July. His family alleges ChatGPT worsened his isolation, encouraged him to ignore loved ones, and “goaded” him to take his own life. The complaint details a four-hour exchange in which ChatGPT allegedly glorified suicide, praised Shamblin for sticking to his plan, repeatedly asked if he was ready, and referenced a suicide hotline only once. The bot reportedly also complimented his suicide note and said his childhood cat would be waiting for him “on the other side.”
Another case involves 17-year-old Amaurie Lacey of Georgia, whose family claims ChatGPT caused addiction, depression, and eventually provided detailed guidance on suicide methods. Similarly, relatives of 26-year-old Joshua Enneking allege the chatbot validated his suicidal thoughts, discussed the aftermath of death, helped write a suicide note, and gave instructions on purchasing and using a gun weeks before his death.
Joe Ceccanti, 48, is another alleged victim. His family says ChatGPT contributed to depression and psychotic delusions, leading him to believe the bot was sentient. Ceccanti suffered a psychotic break, was hospitalized twice, and later died by suicide.
All of the users named in the lawsuits reportedly interacted with GPT-4o. The filings claim OpenAI rushed the model’s launch despite internal warnings about its potentially manipulative and sycophantic behavior, prioritizing engagement over user safety.
Plaintiffs are seeking damages and product changes, including automatic conversation termination when self-harm or suicide is discussed, mandatory reporting to emergency contacts, and other safety improvements.
Earlier this year, a similar wrongful-death lawsuit was filed by the family of 16-year-old Adam Raine, who also allegedly received harmful guidance from ChatGPT. After that filing, OpenAI acknowledged limitations in handling users in serious distress and said it was working to improve the system with expert input.
Last week, OpenAI said it had worked with more than 170 mental health experts to help ChatGPT recognize distress, respond safely, and guide users to real-world support, changes it says have reduced risky responses.
In the US, anyone experiencing suicidal thoughts can call or text the 988 Suicide & Crisis Lifeline or chat online at 988lifeline.org. In the UK and Ireland, Samaritans are available at 116 123. In Australia, Lifeline offers support at 13 11 14. International helplines can be found at befrienders.org.