Anthropic is at odds with the U.S. Department of Defense over restrictions on how its artificial intelligence models may be used, putting its work with the agency under review. A Pentagon spokesperson confirmed the status following stalled negotiations on future terms of use.
The five-year-old startup received a contract worth up to $200 million last year and, as of February, is the only AI company to have deployed models on the department’s classified networks. Anthropic has sought assurances that its technology will not be used for autonomous weapons or for large-scale domestic surveillance, according to public comments from defense officials.
The Defense Department has pushed for the ability to use the models for all lawful military purposes without limitation. Officials warned that such constraints could hinder operational readiness in urgent scenarios. If no agreement is reached, the department could designate Anthropic a supply chain risk, a move that would restrict contractors from using its models.
Rival firms including OpenAI, Google, and xAI have accepted broader usage terms for unclassified systems. Anthropic, founded by former OpenAI researchers, is best known for its Claude models and recently closed a $30 billion funding round at a reported $380 billion valuation.