Anthropic has secured an early legal victory in its dispute with the U.S. government after a federal judge blocked the Pentagon's effort to halt government use of its artificial intelligence tools.
Judge Rita Lin ruled that directives issued by President Donald Trump and Defense Secretary Pete Hegseth, which sought to immediately suspend the use of Anthropic’s systems across government agencies, could not be enforced while the case proceeds.
In her decision, the judge wrote that the government’s actions appeared aimed at “crippling” the company and suppressing public debate over how its technology was being used by the military. She described the move as potentially constituting “classic First Amendment retaliation.”
Continued Use of AI in Government
The ruling allows Anthropic’s products, including its Claude AI models, to remain in use across federal agencies and by contractors working with the Department of Defense, averting immediate disruption to systems already embedded in government workflows.
Anthropic had filed the lawsuit earlier this month after being designated a “supply chain risk” by the Pentagon, a classification that would have barred its technology from government use. The designation followed public criticism of the company by senior officials.
The case highlights the growing reliance of government agencies on AI tools for tasks such as data analysis, operational planning, and software development. Removing such systems would have required a complex and potentially lengthy transition to alternative providers.
Legal and Strategic Implications
The court’s decision underscores the legal complexities surrounding government intervention in the rapidly evolving AI sector. It also raises questions about how national security concerns intersect with constitutional protections and commercial competition.
Anthropic said it was pleased with the ruling but emphasized its intention to continue working with government partners to ensure safe and reliable AI deployment.
The dispute reflects broader tensions between policymakers and technology companies over the use of advanced AI systems in sensitive environments. As AI becomes more integrated into defense and national security operations, the regulatory and legal frameworks governing its use are still evolving.
The outcome of the case could have wider implications for how the U.S. government evaluates and restricts technology providers, particularly in areas involving emerging technologies and strategic competition.
For now, Anthropic’s tools will remain operational within government systems, maintaining continuity for users while the legal process moves forward.