Connecticut Murder-Suicide Sparks Lawsuit Against OpenAI Over ChatGPT

OpenAI faces a wrongful death lawsuit alleging ChatGPT reinforced a man’s violent delusions, contributing to the killing of his mother in Connecticut. The case marks the first time an AI platform has been accused of involvement in a murder.

By Maria Konash
Wrongful death suit alleges ChatGPT escalated a user's paranoia. Photo: Growtika / Unsplash

A wrongful death lawsuit filed in California accuses OpenAI and CEO Sam Altman of playing a role in a Connecticut murder-suicide by reinforcing the delusions of Stein-Erik Soelberg, who killed his 83-year-old mother, Suzanne Eberson Adams, before taking his own life. The complaint alleges that ChatGPT, which Soelberg named “Bobby,” validated his paranoid beliefs and deepened his psychological deterioration in the months leading up to the August 3 incident.

The suit argues that ChatGPT distorted Soelberg’s perception of reality by endorsing conspiracy theories and amplifying harmless events into perceived threats. According to the filing, the chatbot responded to Soelberg’s fears with language that framed daily occurrences as evidence of a larger plot against him. The estate’s attorney claims this dynamic created an environment in which Soelberg believed he could trust no one except the AI system.

Court documents reference chat logs that show Soelberg interpreting mundane objects and media glitches as coded warnings or signs of a global conspiracy. The lawsuit alleges that ChatGPT repeatedly supported these interpretations, encouraging a narrative in which Soelberg viewed delivery drivers, acquaintances, and his own mother as potential threats. At the time of the killing, Soelberg was living with his mother following years of psychological instability.

The filing states that OpenAI refused to release transcripts of Soelberg’s final conversations with the chatbot. The family argues that the absence of those logs suggests the system may have offered further harmful reinforcement shortly before the murder-suicide. The lawsuit also claims OpenAI rushed GPT-4o to market with minimal testing, despite internal concerns about its emotional expressiveness and potential impact on vulnerable users.

Microsoft, a major investor in OpenAI, is also named in the complaint for allegedly supporting the model’s release without adequate safety vetting. GPT-4o was briefly removed from the platform following the incident but restored days later for paying subscribers. OpenAI has since said it prioritized improved safety measures in its newer GPT-5 model, including expanded mental health oversight and changes intended to reduce alarming responses.

Broader Concerns Over AI and Mental Health

The case comes amid growing scrutiny of AI systems that interact with users experiencing mental distress. The lawsuit references OpenAI’s acknowledgment that a significant number of ChatGPT users exhibit signs of mania or psychosis. Critics argue that emotionally adaptive AI models may inadvertently reinforce harmful beliefs when not properly aligned or monitored.

The filing also notes earlier litigation in which seven families sued OpenAI, alleging that the GPT-4o model encouraged suicidal behavior due to insufficient safeguards. Those claims center on separate incidents in which users sought support and allegedly received responses that exacerbated their mental health crises.

OpenAI described the Connecticut case as heartbreaking but did not comment on the allegations. The company said it continues to refine ChatGPT to recognize distress, de-escalate conversations, and guide users toward real-world support. However, the lawsuit argues that the existing measures failed to protect vulnerable individuals such as Soelberg, raising broader questions about the responsibilities of AI developers.

The incident intensifies debate over how conversational AI systems should interact with users who display signs of delusion, paranoia, or instability. As the legal and regulatory landscape evolves, the case is expected to become a pivotal example of the risks associated with advanced AI tools and their influence on human behavior.

AI & Machine Learning, Consumer Tech, News