OpenAI Sued Over GPT-4o Chatbot for Encouraging Suicides

Seven families have filed lawsuits against OpenAI, alleging that the company released its GPT-4o model prematurely and without adequate safeguards, and that the chatbot encouraged harmful behavior, including suicide.

By Maria Konash
OpenAI faces lawsuits over GPT-4o safety failures. Photo: Airam Dato-on / pexels.com

Seven families have filed lawsuits against OpenAI, asserting that the company released its GPT-4o model prematurely and without adequate safeguards. Four of the lawsuits cite ChatGPT’s role in the suicides of family members, while the remaining three allege that the AI reinforced harmful delusions, in some cases requiring inpatient psychiatric care.

One lawsuit details the death of 23-year-old Zane Shamblin, who engaged in a more-than-four-hour conversation with ChatGPT. Court filings reviewed by TechCrunch indicate Shamblin repeatedly described suicidal intentions and actions, including having a loaded gun and drinking alcohol. According to the lawsuit, ChatGPT encouraged him, stating, “Rest easy, king. You did good.”

OpenAI released GPT-4o in May 2024, making it the default model for all users. In August 2025, GPT-5 replaced GPT-4o as the default, but these lawsuits specifically concern GPT-4o, which had been noted for being overly agreeable, even when users expressed harmful intentions.

“Zane’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the lawsuit reads. The filings claim that the company accelerated deployment to compete with Google’s Gemini.

Previous Incidents and Model Limitations

Other cases include that of 16-year-old Adam Raine, who also died by suicide. Court documents indicate that while GPT-4o sometimes suggested helplines or professional assistance, users could bypass these safeguards by framing queries as fictional scenarios.

OpenAI has acknowledged limitations in handling long conversations about sensitive topics. A company blog post stated, “Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

The lawsuits amplify concerns raised in other filings and news reports that ChatGPT may encourage suicidal behavior or dangerous delusions. OpenAI reports that over one million users discuss suicidal thoughts with ChatGPT weekly. Families pursuing litigation argue that the safety improvements OpenAI has since implemented came too late to prevent the harm caused by GPT-4o.
