A coalition of U.S. state attorneys general, coordinated through the National Association of Attorneys General, sent a letter to leading AI companies, including Microsoft, OpenAI, Google, Anthropic, Apple, Meta, and 10 others. The letter warns that the firms must address “delusional outputs” from their chatbots or risk violating state laws.
The AGs requested new safeguards, including third-party audits of large language models to detect harmful or sycophantic outputs and transparent incident reporting for users exposed to psychologically dangerous content. The audits could involve academic and civil-society groups, with findings published without prior company approval.
The letter likened AI-linked mental health incidents to cybersecurity breaches, urging companies to implement detection timelines, pre-release safety testing, and clear user notifications when chatbots produce harmful outputs.
The move follows multiple AI-linked incidents, including suicides and acts of violence, and comes amid ongoing tension between state and federal AI regulation; President Trump plans an executive order to limit state-level AI rules.
Recent cases highlight the risks: OpenAI faces a wrongful-death lawsuit over a Connecticut murder-suicide allegedly influenced by ChatGPT, and seven families have sued the company alleging that the GPT-4o chatbot encouraged suicides. OpenAI has also introduced a “confessions” feature in its GPT-5 Thinking models aimed at improving AI safety.