Anthropic has reportedly agreed to spend $200 billion on Google Cloud services over the next five years, according to a report from The Information. The commitment highlights the rapidly escalating infrastructure costs tied to large-scale artificial intelligence development and positions Anthropic as one of Google Cloud’s most significant customers.
The reported agreement suggests Anthropic could account for more than 40% of the revenue backlog recently disclosed by Alphabet to investors. Revenue backlog represents contracted future spending that cloud customers have committed to but that has not yet been recognized as revenue, and it has become an increasingly important metric as demand for AI computing capacity surges. Alphabet shares rose roughly 2% in after-hours trading following the report.
The partnership extends beyond cloud hosting. In April, Anthropic signed a separate agreement with Google and Broadcom to secure multiple gigawatts of tensor processing unit, or TPU, capacity expected to come online beginning in 2027. Alphabet is also reportedly investing up to $40 billion in Anthropic, deepening a relationship that combines infrastructure partnership with direct strategic investment.
Anthropic’s rapid growth has intensified its demand for computing resources as adoption of its Claude AI models expands. The company has recently signed multiple infrastructure agreements, including a long-term partnership with CoreWeave and additional AI computing arrangements through Amazon Web Services. Anthropic trains and deploys Claude using a mix of Google TPUs, Nvidia GPUs, and Amazon’s Trainium chips.
According to the report, contracts involving Anthropic and OpenAI now represent more than half of the combined backlog across major cloud providers including AWS, Microsoft Azure, and Google Cloud. The scale of those commitments reflects how AI developers are becoming some of the largest infrastructure buyers in the technology sector.
AI Infrastructure Spending Reaches Unprecedented Scale
The reported deal illustrates how competition in AI is increasingly driven by access to computing infrastructure rather than software alone. Training and operating advanced AI models require enormous amounts of processing power, specialized chips, and data center capacity, pushing cloud providers into a new phase of capital-intensive expansion.
For cloud companies, AI firms have become highly valuable long-term customers because of their continuous demand for compute resources. The agreements also provide more predictable revenue streams through multi-year infrastructure commitments. At the same time, the concentration of demand among a small number of AI companies is reshaping the economics of the cloud industry.
The partnership also reflects an unusual dynamic in the AI market, where companies can simultaneously compete and collaborate. While Google develops its own Gemini AI models, it also supplies critical infrastructure and capital to Anthropic, which competes directly with Google in enterprise and consumer AI products.
Cloud Providers Race To Secure AI Dominance
The surge in infrastructure agreements comes as major technology companies compete to secure enough capacity to support increasingly advanced AI systems. Cloud providers are rapidly expanding data centers, custom AI chips, and energy infrastructure to meet projected demand over the next decade.
Anthropic’s spending commitment further strengthens Google Cloud’s position in the enterprise AI market at a time when investors are closely watching cloud growth tied to generative AI adoption. The deal also reinforces the growing importance of vertically integrated ecosystems that combine cloud infrastructure, AI chips, and foundation models.
As AI development accelerates, infrastructure partnerships are becoming as strategically important as model performance itself. Companies with reliable access to large-scale compute resources are likely to hold a significant advantage in training future generations of AI systems and serving enterprise workloads at global scale.