Anthropic Faces Supply Chain Risk Designation from U.S.

The U.S. Department of War is moving to designate Anthropic a supply chain risk over restrictions on mass domestic surveillance and autonomous weapons, the AI company said. Anthropic says its commercial services remain unaffected.

By Daniel Mercer

Anthropic said the U.S. Department of War, led by Secretary Pete Hegseth, is moving to designate the company a supply chain risk. The decision follows months of negotiations over two exceptions Anthropic requested to otherwise lawful uses of its AI model, Claude: barring its use for mass domestic surveillance and for fully autonomous weapons.

Anthropic emphasized that the exceptions have not affected any government missions to date and that the designation would be unprecedented for an American company. The company intends to challenge any formal supply chain risk designation in court.

The designation, if adopted, would only impact the use of Claude on Department of War contracts. Commercial customers and individual users would continue to have full access to Anthropic’s AI services. The company reiterated its commitment to supporting U.S. warfighters while protecting fundamental rights and maintaining safe AI practices.

Anthropic’s announcement comes amid broader tensions around U.S. military AI contracts, following recent developments involving other AI providers such as OpenAI and xAI. The dispute highlights ongoing debates over responsible AI deployment in national security contexts.
