Google has signed an agreement with the U.S. Department of Defense allowing its artificial intelligence models to be used for classified government work, according to reports. The deal permits the Pentagon to deploy Google’s AI for “any lawful government purpose,” placing the company alongside OpenAI and xAI as key suppliers of AI capabilities for sensitive operations. The agreement reflects the U.S. government’s push to integrate advanced AI into defense systems, including areas such as mission planning and analysis.
The contract builds on a broader effort by the Pentagon to secure partnerships with leading AI developers. In 2025, the Department of Defense signed agreements worth up to $200 million each with companies including Anthropic, OpenAI, and Google. These deals aim to bring advanced AI tools into classified environments, where strict controls typically limit external technologies. Google’s agreement reportedly includes provisions allowing the government to request adjustments to safety settings and filters applied to its AI systems.
While the contract states that AI should not be used for domestic mass surveillance or fully autonomous weapons without human oversight, it also clarifies that Google does not have the authority to veto lawful government decisions. This distinction highlights the balance between corporate AI principles and government operational control. The deal was reported shortly after internal criticism from Google employees, with hundreds signing a letter urging leadership to avoid such military partnerships.
Ethical Tradeoffs
The agreement underscores a growing tension between commercial AI development and ethical commitments. Technology companies have previously outlined principles limiting the use of AI in surveillance and autonomous weapons. However, government contracts often require broader flexibility, particularly in classified contexts. Google's reported terms suggest a willingness to support defense applications while retaining only general commitments to human oversight.
For policymakers and the public, the issue centers on how AI systems are governed once deployed in military settings. Even with stated safeguards, enforcement depends on operational practices rather than contractual language alone. The lack of veto power for the company raises questions about accountability and control over how the technology is ultimately used.
Competitive Positioning
The deal places Google firmly within a competitive group of AI providers supplying the U.S. military. Companies such as OpenAI and xAI have secured similar agreements, reflecting the strategic importance of AI in national defense. Anthropic's position, by contrast, has been less settled, with earlier usage restrictions limiting its role in defense-related work.
Recent comments from Donald Trump suggest that dynamic may be changing. Trump said it is "possible" that Anthropic could reach a new agreement with the Pentagon following what he described as "very good talks," signaling a potential reversal after months of conflict. Earlier disputes centered on Anthropic's insistence on limits around autonomous weapons and domestic surveillance, which led to restrictions on its technology. Any renewed deal would likely preserve those safeguards while restoring Anthropic's access to defense contracts.