The National Security Agency is reportedly using a restricted artificial intelligence model from Anthropic, despite ongoing tensions between the company and the U.S. defense establishment. According to reports, the agency has access to Mythos Preview, a cybersecurity-focused AI system that Anthropic chose not to release publicly due to concerns about its potential misuse.
Anthropic introduced Mythos earlier this month as a frontier model designed to identify vulnerabilities in digital systems. Access has been limited to roughly 40 organizations, only a small subset of which have been publicly disclosed. The NSA is said to be using the tool to scan for exploitable weaknesses, and the U.K.'s AI Security Institute has separately confirmed its own access.
The development is notable given that the Department of Defense recently labeled Anthropic a "supply chain risk" after the company declined to grant unrestricted access to its models. The dispute reflects broader disagreements over how AI should be used in areas such as surveillance and autonomous weapons.
At the same time, Anthropic appears to be engaging more directly with U.S. policymakers. CEO Dario Amodei recently met with senior White House officials, signaling a possible shift in relations.
The situation highlights the growing complexity of AI adoption in national security, where agencies may rely on advanced tools even as policymakers debate their risks and governance.