A federal appeals court in Washington, D.C., has denied Anthropic’s request to temporarily block a Pentagon decision labeling the company a supply chain risk, allowing restrictions on its AI technology to remain in place during ongoing litigation. The ruling marks a significant setback for the AI firm as it challenges the U.S. Department of Defense’s determination, which effectively bars its Claude models from being used in defense-related contracts.
The court said the balance of harms favored the government, citing national security concerns tied to how the Department of Defense procures and deploys AI during an active military conflict. While acknowledging that Anthropic could suffer financial damage, the judges characterized the impact as limited compared with the broader implications for military operations. As a result, defense contractors must continue certifying that they do not use Anthropic’s technology in work tied to the Pentagon.
The decision creates a split legal landscape for the company. In a separate case, a federal judge in San Francisco recently issued a preliminary injunction preventing the Trump administration from enforcing a broader ban on Anthropic’s Claude models across government agencies. That means Anthropic can still work with non-defense federal entities while the case proceeds, even as it remains excluded from Department of Defense contracts.
The dispute stems from a March designation by the Pentagon that labeled Anthropic a supply chain risk, a classification historically applied to foreign adversaries rather than U.S. companies. The move followed a directive from President Donald Trump ordering federal agencies to cease using Anthropic’s technology, with a phased transition period. The decision surprised many in Washington, where Anthropic’s models had already been integrated into several government systems, including classified defense networks.
A Clash Over Control and Use
At the heart of the conflict is a disagreement over how Anthropic’s AI models can be used. The Pentagon reportedly sought broad access to the company’s technology for all lawful purposes, while Anthropic pushed for restrictions to prevent applications such as fully autonomous weapons or domestic surveillance. Negotiations broke down, leading to the current legal battle.
Anthropic has argued that the designation is unconstitutional and retaliatory, while the government maintains it is necessary for national security. The appeals court rejected claims that the company’s free speech rights were being curtailed, noting no clear evidence that its expression had been restricted during the dispute.
High Stakes for AI and Government
The case highlights growing tensions between AI developers and government agencies over control, ethics, and national security. As AI systems become more embedded in defense and intelligence operations, questions around access, oversight, and acceptable use are becoming more urgent.
Anthropic, which signed a $200 million Pentagon contract last year, now faces the prospect of losing a key government customer while the case proceeds. The court signaled the need for a swift resolution, acknowledging the potential harm to the company while emphasizing the importance of maintaining government authority over military technology decisions.
The outcome of the case could set a precedent for how AI companies engage with defense agencies, particularly as governments seek deeper integration of advanced AI systems into critical operations.