Pentagon Flags Anthropic as National Security Risk

The U.S. Department of Defense labeled Anthropic a national security risk, citing concerns over AI usage restrictions during military operations. The dispute is now headed to federal court.

By Samantha Reed

The U.S. Department of Defense has designated Anthropic as an “unacceptable risk to national security,” escalating a legal dispute over the company’s role in military AI systems. The designation follows a lawsuit filed by Anthropic challenging the Pentagon’s decision and seeking to block its enforcement.

In a federal court filing, the Pentagon argued that Anthropic’s internal usage restrictions, described as corporate “red lines,” could interfere with military operations. Officials expressed concern that the company might limit or alter its AI systems during active missions if its policies were violated.

Anthropic previously secured a $200 million contract with the Pentagon to deploy AI technologies in classified environments. However, the company resisted certain use cases, including mass surveillance of U.S. citizens and involvement in lethal targeting decisions. Defense officials argued that operational control should not be constrained by private sector policies.

The case has drawn support for Anthropic from several technology firms, advocacy groups, and employees of major AI companies. Anthropic claims the Pentagon’s actions violate its constitutional rights and unfairly penalize the company for its ethical standards.

A federal court hearing on the matter is scheduled for next week.
