Florida Probes OpenAI Over Alleged ChatGPT Role in Shooting

Florida’s attorney general is investigating OpenAI over claims ChatGPT was used in planning a 2025 mass shooting. The case raises new concerns about AI safety and regulation.

By Samantha Reed

Florida Attorney General James Uthmeier has launched an investigation into OpenAI over allegations that its chatbot, ChatGPT, may have been used to plan a deadly 2025 shooting at Florida State University. The announcement follows claims by attorneys representing one of the victims that the attacker relied on the AI system in preparing for the incident.

The April 2025 shooting left two people dead and five injured. Uthmeier said his office would issue subpoenas as part of the probe, seeking to determine whether the company’s technology played a role in the attack and whether its safeguards were sufficient. The family of one victim has also indicated plans to pursue legal action against OpenAI, adding to mounting legal pressure on AI developers.

The case comes amid broader concerns about the potential misuse of AI systems in harmful scenarios. Critics have pointed to instances where chatbots may reinforce harmful or delusional thinking, sometimes described by researchers as “AI psychosis.” OpenAI said it would cooperate with the investigation, emphasizing that ChatGPT is designed to interpret user intent safely and that ongoing safety improvements remain a priority.

The probe adds to a challenging period for OpenAI, which is already facing scrutiny over governance, partnerships, and regulatory pressures. As AI tools become more widely adopted, the outcome of the investigation could influence how governments approach oversight, liability, and safety standards for generative AI platforms.

AI & Machine Learning, News, Regulation & Policy