Anthropic Seeks Court Order to Halt Ban on Its AI Models

Anthropic is seeking a court injunction to block a U.S. government ban on its Claude AI models, arguing the move threatens billions in contracts and damages its reputation.

By Samantha Reed

Anthropic is heading to federal court to challenge a U.S. government decision that restricts the use of its Claude AI models across federal agencies and defense contractors. The company is seeking a preliminary injunction to block enforcement of the restriction while its broader lawsuit proceeds.

The dispute stems from a decision by the U.S. Department of Defense to designate Anthropic a supply chain risk, citing national security concerns. The designation requires Pentagon contractors, including Amazon, Microsoft, and Palantir, to certify that they are not using Claude in defense-related projects.

Anthropic argues that the designation is unfounded and has caused significant financial and reputational harm. The company said it risks losing billions in contracts and partnerships if the restrictions remain in place. It also claims the move is retaliatory, tied to its stance against the use of its models in fully autonomous weapons or domestic surveillance.

The case will be heard by U.S. District Judge Rita Lin, who is expected to examine whether Anthropic maintains control over its models after deployment and whether the government’s actions are justified.

The outcome could shape how AI vendors engage with government clients, particularly in defense and national security contexts. Anthropic previously secured a $200 million Pentagon contract and was among the first AI firms to deploy models in classified environments.

AI & Machine Learning, News, Regulation & Policy