Anthropic has filed a lawsuit against the U.S. Department of Defense and other federal agencies after the Trump administration labeled the artificial intelligence company a “supply chain risk.” The designation followed a breakdown in negotiations between the Pentagon and the developer of the Claude AI models over restrictions on military use of its technology.
In its legal filing, Anthropic described the government’s actions as “unprecedented and unlawful,” arguing that the designation and the directive requiring federal agencies to stop using its technology lack legal authority and proper due process.
“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement.
The Pentagon declined to comment on the litigation. A White House spokesperson defended the administration’s position, stating that the government would not allow technology companies to dictate how military tools are used.
Dispute Over Military Use of AI
The conflict stems from negotiations to update the Pentagon’s contract with Anthropic. During the talks, the company asked the Defense Department to formally commit to two restrictions: that its AI systems would not be used for mass domestic surveillance of U.S. citizens and would not power fully autonomous weapons systems.
Defense officials rejected the conditions, insisting that the military must retain the ability to use AI for “all lawful purposes,” particularly in national security emergencies. The Pentagon has previously stated it does not intend to use AI for domestic surveillance or autonomous weapons but argued it could not accept restrictions imposed by a private company.
Following the breakdown in talks, the Trump administration ordered federal agencies and military contractors on February 27 to halt business with Anthropic. Defense Secretary Pete Hegseth also designated the company a supply chain risk, a classification typically applied to companies linked to foreign adversaries.
The designation limits Anthropic’s ability to work with companies that maintain contracts with the Department of Defense.
Claims of Constitutional Violations
Anthropic’s lawsuit alleges that the government’s actions constitute retaliation for the company’s First Amendment-protected speech. The filing also claims the administration exceeded its authority by directing federal agencies to cease using Anthropic’s technology without proper legal justification.
The company is seeking injunctive relief to prevent enforcement of the directive. According to the filing, the government’s actions place “hundreds of millions of dollars” in contracts at risk and could damage Anthropic’s reputation and commercial relationships.
Chief Executive Dario Amodei said the official designation letter suggests that contractors may still use Claude outside work directly tied to Pentagon contracts. The company has previously said it would challenge the classification in court, arguing that it sets a dangerous precedent for U.S. technology firms negotiating with government agencies.
Industry Impact and Public Reaction
The dispute has quickly escalated into one of the most significant confrontations between the U.S. government and an AI company over the limits of military technology deployment. Shortly after the administration’s directive, OpenAI reached a separate agreement with the Pentagon to deploy its models within defense infrastructure.
At the same time, Anthropic’s public profile has risen amid the conflict. The company’s Claude application recently overtook ChatGPT in Apple’s U.S. App Store rankings, and Anthropic said more than one million new users are signing up for the platform each day.