U.S. Judge Raises Doubts Over Pentagon Blacklisting of Anthropic

A U.S. judge signaled skepticism toward the Pentagon’s blacklisting of Anthropic, suggesting the move may be punitive rather than based on national security risks.

By Samantha Reed

A U.S. federal judge raised concerns about the Pentagon's decision to blacklist Anthropic, suggesting the move may have been intended to penalize the company rather than to address legitimate national security risks. The comments came during a hearing on Anthropic's request to temporarily block the designation.

U.S. District Judge Rita Lin said the government's action "looks like an attempt to cripple Anthropic," pointing to the possibility that the designation was linked to the company's public stance on AI safety. Anthropic has argued that the decision was triggered by its refusal to allow its Claude models to be used for autonomous weapons or domestic surveillance.

The Department of Defense labeled Anthropic a supply chain risk, a designation typically reserved for entities that could expose military systems to sabotage or foreign interference. The move marks the first time a U.S. company has publicly received this classification under federal procurement rules.

Anthropic claims the designation violates its constitutional rights, including free speech and due process, and could result in billions of dollars in lost contracts. The company has also argued that current AI systems are not sufficiently reliable for use in high-risk military applications.

Government lawyers defended the decision, citing concerns about potential control over deployed systems and the risk of future disruptions or modifications. They argued the designation is necessary to protect operational security.
