U.S. military forces reportedly employed Anthropic’s Claude AI language model during a major joint operation against Iran, just days after President Donald Trump’s administration ordered all federal agencies to cease use of Anthropic’s technology. Sources familiar with the matter told media outlets that U.S. Central Command used Claude for intelligence analysis, target identification, and battlefield scenario simulations tied to the March 1 strikes.
The air operation, conducted in coordination with Israeli forces, marked one of the most significant U.S. military actions in the Middle East in years. Claude’s involvement in the mission underscores how deeply the model had been integrated into military planning processes and classified defense systems, making rapid removal difficult.
Political and Tech Sector Fallout
On February 27, the Trump administration directed all federal agencies to discontinue using AI tools developed by Anthropic, including its Claude models, citing national security concerns. President Trump described Anthropic’s leadership in sharply critical terms in a social media post, framing the move as necessary to prevent what he called undue influence over military operations. The directive stipulated a six-month phase-out period for agencies, including the Department of Defense, to transition away from the technology.
Defense Secretary Pete Hegseth also designated Anthropic a “supply chain risk,” a label typically applied to firms considered threats to national security, and warned that any continued use could jeopardize future government contracts. The dispute stemmed from Anthropic’s refusal to grant the Pentagon unrestricted access to its models, particularly for tasks lacking stringent safeguards.
Industry analysts note that the military’s continued reliance on Claude, even amid a government ban, reflects how advanced AI tools can become deeply embedded in mission-critical workflows. Claude had been integrated into classified networks and defense analytics through partnerships with third-party platforms, making an abrupt disconnect operationally challenging.
Shift to Alternative AI Providers
As the standoff with Anthropic has escalated, other AI firms have moved to fill the anticipated gap. In the wake of the breakdown in relations, OpenAI announced an agreement with the Pentagon to deploy its AI models, including those underpinning ChatGPT, across classified defense infrastructure. Elon Musk’s xAI has also agreed to make its Grok model available in secure military environments, offering an additional alternative for defense AI workloads.
The clash between the U.S. government and Anthropic highlights broader tensions at the intersection of AI ethics, national security, and the pace at which advanced technologies are adopted in defense contexts.