Anthropic asked a U.S. appeals court to pause the Pentagon’s designation of the company as a supply-chain risk while the dispute undergoes judicial review. The request was filed with the U.S. Court of Appeals for the District of Columbia Circuit after the U.S. Defense Department barred its agencies and contractors from using Anthropic’s AI systems.
The conflict stems from a dispute over safeguards governing how the U.S. military could deploy Anthropic’s artificial intelligence models. Defense Secretary Pete Hegseth labeled the company a supply-chain risk following the disagreement, effectively restricting its technology from Pentagon contracts.
Anthropic argued in its filing that the designation could cause “irreparable harm” to the business. According to the company’s legal submission, more than 100 enterprise customers have already contacted the firm regarding the implications of the decision.
The company estimates that the designation could jeopardize hundreds of millions, or even billions, of dollars in potential 2026 revenue as the legal battle over AI governance and military use continues.
Meanwhile, Anthropic’s Claude chatbot recently climbed to No. 1 on the U.S. App Store amid backlash over rival AI companies’ partnerships with the Pentagon, a controversy that drove a surge in Claude downloads and a rise in ChatGPT uninstallations.