Anthropic Stands Firm on AI Military Usage Limits

Anthropic will not ease restrictions on how its AI models can be used by the U.S. military, even after a high‑level meeting with the Pentagon.

By Samantha Reed

Anthropic has reaffirmed that it will not relax usage restrictions on its artificial intelligence technology for military applications, according to people familiar with the matter. The company held to this position even after a meeting with U.S. Defense Secretary Pete Hegseth aimed at resolving a protracted dispute over Pentagon demands for broader access to its models.

Anthropic, maker of the Claude large language model, prohibits use of its technology in fully autonomous weapon targeting and domestic surveillance, citing ethical concerns. Pentagon officials argue that U.S. law, not company policies, should govern military use and have given the company until Friday at 5 p.m. to agree to revise its safeguards. 

Defense officials have warned that refusal could lead to punitive steps including contract cancellation, designation as a supply‑chain risk, or invoking the Defense Production Act to compel compliance. Other AI firms including OpenAI, Google, and Elon Musk’s xAI have already agreed to broader military integration terms. 

Anthropic said conversations remain constructive and that it seeks to support national security missions within the limits of what its models can responsibly deliver. The clash underscores intensifying debates over ethical constraints on AI deployment in defense contexts.
