OpenAI disclosed that it has banned multiple ChatGPT accounts misused for cybercrime, scams, and covert influence operations. Some accounts were linked to Chinese law enforcement, while others ran romance scams or impersonated law firms and U.S. officials.
OpenAI reported that certain accounts originating in China used its models to request sensitive information about U.S. citizens, federal buildings, and online forums, and to generate emails targeting state-level officials and policy analysts. One account was tied to an influence operation aimed at Japanese Prime Minister Sanae Takaichi.
A separate cluster of accounts ran a dating scam targeting Indonesian users, generating promotional content and pressuring victims into making large payments. Other accounts impersonated attorneys and law enforcement officers to defraud individuals.
OpenAI emphasized that these actions represent a small fraction of overall activity but illustrate how AI tools can be exploited for fraud, political manipulation, and cybercrime. The company said it continues to monitor and remove accounts that violate its usage policies.
Relatedly, Anthropic recently reported large-scale campaigns by DeepSeek, Moonshot, and MiniMax attempting to extract capabilities from its Claude AI via fraudulent accounts, highlighting broader national security and AI safety risks across the industry.