The U.S. Department of Defense is facing internal resistance as it moves to phase out artificial intelligence tools developed by Anthropic, following a decision to classify the company as a supply-chain risk.
Defense Secretary Pete Hegseth issued the designation on March 3 after a dispute with Anthropic over usage guardrails for its AI systems. The order bars the Pentagon and its contractors from using Anthropic’s technology, including its widely adopted Claude model, and allows a six-month transition period.
However, military personnel, IT staff, and contractors say the directive is proving difficult to implement. Many users have grown reliant on Anthropic’s tools and view them as more effective than competing systems. Some are delaying compliance, while others expect that the ban will eventually be reversed.
Operational Dependence on AI Tools
Anthropic’s AI systems have become embedded in military workflows, supporting tasks ranging from data analysis to operational planning. Claude was the first AI model approved for use on classified Pentagon networks, and adoption expanded rapidly following a $200 million defense contract awarded in 2025.
Users say the tools have significantly improved efficiency, particularly in handling large datasets and automating repetitive processes. In some cases, developers relied on Anthropic’s Claude Code tool to generate software and build automated workflows.
With the phase-out underway, some of these processes are reverting to manual methods. One official said tasks previously handled by AI, such as querying large datasets, are now being performed with traditional tools like spreadsheets, slowing workflows and reducing productivity.
Replacing Anthropic’s systems is also technically complex. Contractors note that recertifying alternative AI models for use on classified networks could take between 12 and 18 months. This process includes rigorous security and compliance checks, making a rapid transition unlikely.
Cost, Complexity, and Strategic Uncertainty
The removal of Anthropic’s tools is expected to carry both financial and operational costs. Systems built around Claude may require partial redesign, particularly in cases where workflows and prompts were tailored to its architecture.
For example, software platforms used for intelligence analysis and targeting operations rely on AI-driven workflows that would need to be rebuilt using alternative models. Contractors say this process could delay projects and reduce efficiency in the short term.
At the same time, Pentagon officials and contractors are weighing whether to fully transition to other providers, such as OpenAI, Google, or xAI, or to adopt a more gradual approach. Some agencies are reportedly slowing their phase-out efforts in anticipation of a potential resolution between the government and Anthropic.
The situation highlights a broader challenge in AI adoption within government systems: balancing security concerns with operational effectiveness. As AI tools become more deeply integrated into critical workflows, replacing them can create significant disruption.
The Pentagon’s experience underscores how quickly AI technologies have moved from experimental tools to essential infrastructure. It also illustrates the growing tension between policy decisions and the practical realities of deploying advanced AI systems at scale.
