Anthropic Chief Executive Dario Amodei has criticized OpenAI’s agreement with the U.S. Department of Defense, describing the company’s safety commitments as “safety theater” in a memo to employees. The comments, reported by The Information, reflect escalating tensions between leading AI developers over how their technology should be used in military contexts.
The dispute follows failed negotiations between Anthropic and the Pentagon over the military’s access to the company’s AI systems. Anthropic had previously secured a $200 million contract with the Department of Defense but declined to expand the partnership after the agency requested broader access to its technology.
According to people familiar with the talks, Anthropic asked the government to formally confirm that its models would not be used to enable mass domestic surveillance of U.S. citizens or to power fully autonomous weapons systems.
When the companies could not reach an agreement, the Pentagon instead signed a deal with OpenAI to deploy its AI models across defense infrastructure.
Debate Over Safeguards and “Lawful Use”
OpenAI said its contract with the Defense Department includes safeguards similar to the restrictions Anthropic had proposed. The company stated that its systems would not be used for intentional domestic surveillance of U.S. persons and that the agreement explicitly acknowledges those limitations.
However, Amodei argued in his memo that OpenAI’s characterization of the deal is misleading. He wrote that the company accepted the agreement’s terms largely to avoid internal employee pushback rather than to secure meaningful safeguards.
Amodei also criticized the contract language allowing AI systems to be used for “all lawful purposes,” a phrase Anthropic had rejected during negotiations. Critics have pointed out that legal frameworks can evolve, meaning activities considered unlawful today could become permissible in the future.
OpenAI responded in a blog post that the Defense Department has stated it does not intend to deploy AI for mass surveillance of Americans or for fully autonomous weapons systems.
Industry and Public Reaction
The dispute has become one of the most visible public disagreements among leading AI companies over military deployments of generative AI technologies.
Amodei suggested in his internal message that public sentiment may be shifting in Anthropic’s favor. Data from market intelligence firms showed a sharp increase in ChatGPT app uninstallations after OpenAI announced its Pentagon agreement, while Anthropic’s Claude application climbed in download rankings.
The controversy comes as OpenAI explores defense partnerships beyond the United States. The company is considering a deal to deploy its AI models across NATO’s unclassified networks, highlighting how Western defense organizations are increasingly integrating generative AI even as debates over safeguards and governance intensify within the industry.