OpenAI has finalized a deal with the Department of Defense to deploy its AI models, including those powering ChatGPT, in classified U.S. military environments. The announcement followed the collapse of negotiations between Anthropic and the Pentagon, after which President Donald Trump directed federal agencies to stop using Anthropic technology once a six-month transition period ends. Secretary of Defense Pete Hegseth also designated Anthropic a supply-chain risk, citing the company's restrictions on unrestricted military use of its models.
Chief Executive Sam Altman acknowledged the speed of the negotiations, describing the agreement as “definitely rushed” and admitting that “the optics don’t look good.” The move quickly drew scrutiny from media and industry observers, with some questioning how OpenAI secured a deal that Anthropic did not.
Safety and Deployment Measures
In response, OpenAI outlined its safeguards through a blog post and executive commentary. The company emphasized three uses its models will be barred from: mass domestic surveillance, autonomous weapons, and high-stakes automated decisions, such as social credit systems. OpenAI framed its approach as multi-layered, contrasting it with other AI companies that rely primarily on usage policies.
The deployment will occur via cloud infrastructure, with cleared OpenAI personnel overseeing operations. Contractual protections further enforce the safety red lines, the company said. Katrina Mulligan, OpenAI’s head of national security partnerships, highlighted that deployment architecture, rather than contract language alone, prevents models from being integrated directly into weapons systems or operational sensors.
Despite these assurances, analysts have questioned whether compliance with U.S. Executive Order 12333 could permit some domestic surveillance indirectly, since the order governs collection of communications outside the U.S. that can incidentally include information about U.S. persons. OpenAI has stated it does not fully understand why Anthropic could not reach a similar agreement, expressing hope that other labs will consider comparable arrangements in the future.
Industry and Operational Implications
Altman acknowledged backlash over the deal, noting that Anthropic’s Claude briefly surpassed ChatGPT in the Apple App Store rankings following the announcement. He described the agreement as an attempt to de-escalate tensions between the Defense Department and AI companies while preserving safety and ethical boundaries.
The development highlights the operational and political challenges of integrating advanced AI into military workflows. As U.S. defense agencies increasingly rely on AI for intelligence analysis, operational planning, and simulation, companies face scrutiny over ethical use, contractual safeguards, and alignment with government standards.