Anthropic disclosed that three Chinese companies – DeepSeek, Moonshot, and MiniMax – misused its Claude chatbot to extract capabilities for their own AI models. The campaign spanned more than 16 million interactions across roughly 24,000 fake accounts, violating Anthropic's terms of service and regional access restrictions.
The companies employed a technique called "distillation," in which a newer model is trained on the outputs of a larger, established model, transferring the established model's knowledge and capabilities. DeepSeek focused on reasoning tasks and on creating censorship-safe alternatives to sensitive queries. Moonshot targeted agentic reasoning, coding, and data analysis, while MiniMax pursued agentic coding, tool use, and orchestration. Anthropic detected MiniMax's activity while the campaign was still ongoing, noting that the company quickly adapted to exploit newly released models.
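The article does not detail the exact mechanism used, but in the classic formulation of distillation (Hinton et al.), a student model is trained to match the teacher's softened output distribution via a temperature-scaled KL-divergence loss. The sketch below illustrates that objective on toy logits; the function names and example values are illustrative only, not anything from the reported campaign, which allegedly harvested model responses through ordinary chat interactions rather than raw logits.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T produces a softer distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets,
    scaled by T^2 as in the standard distillation objective."""
    p = softmax(teacher_logits, temperature)  # teacher's soft labels
    q = softmax(student_logits, temperature)  # student's predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2

# The loss is minimized as the student's distribution approaches the teacher's.
loss = distillation_loss([3.0, 1.0, 0.2], [1.0, 1.5, 0.5])
print(round(loss, 4))
```

In practice, API-based distillation skips logits entirely: the teacher's text responses are collected at scale and used as supervised fine-tuning data for the student, which is why large volumes of automated queries can serve as a training pipeline.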
Anthropic warned that such campaigns are becoming increasingly sophisticated and pose broader risks to AI safety and intellectual property. DeepSeek, Moonshot, and MiniMax did not respond to requests for comment. The disclosure follows a similar report by OpenAI earlier this month regarding exploitation of its AI models by Chinese firms.