Broadcom is expanding its footprint in artificial intelligence infrastructure through new agreements with Google and Anthropic, underscoring the growing demand for compute power behind generative AI systems. The company said it will develop future versions of Google’s AI chips while also supporting a major expansion of Anthropic’s access to computing capacity. The updates, disclosed in a regulatory filing, pushed Broadcom shares up about 3% in extended trading.
At the center of the announcement is Broadcom’s continued work on Google’s tensor processing units, or TPUs, custom chips designed to train and run AI models at scale. While the companies have collaborated for years, the latest agreement signals a deeper alignment as competition intensifies among chipmakers and cloud providers. Custom silicon is becoming increasingly important as AI companies look for alternatives to general-purpose graphics processing units.
Broadcom is also scaling its relationship with Anthropic, one of the fastest-growing AI startups. The expanded deal will provide Anthropic with access to roughly 3.5 gigawatts of compute capacity, primarily powered by Google’s TPU infrastructure. That marks a sharp increase from earlier deployments. Broadcom CEO Hock Tan recently said the company had already begun supplying around 1 gigawatt of compute to Anthropic, with demand expected to exceed 3 gigawatts by 2027.
Anthropic’s rapid growth helps explain the scale of the investment. The company said its annualized revenue has surpassed $30 billion, up from about $9 billion at the end of last year. It now counts more than 1,000 enterprise customers spending over $1 million annually — a count that has doubled in just two months. Its Claude chatbot also saw a surge in popularity earlier this year, briefly becoming the most downloaded free app in Apple’s U.S. App Store.
The broader opportunity for Broadcom could be substantial. Analysts at Mizuho estimate the company may generate $21 billion in AI-related revenue from Anthropic in 2026, potentially doubling to $42 billion in 2027. While Broadcom did not disclose financial terms, the projections highlight how central large AI customers are becoming to semiconductor revenue growth.
A Shift Beyond GPUs
The deals also reflect a wider shift in how AI infrastructure is built. For years, companies like Anthropic and OpenAI have relied heavily on Nvidia GPUs accessed through cloud providers such as Amazon, Google, and Microsoft. That model is now evolving.
Broadcom is working with multiple AI developers, including OpenAI, on custom silicon tailored to specific workloads. At the same time, OpenAI has committed to using large volumes of AMD GPUs, signaling a diversification of suppliers. This mix of custom chips and alternative hardware suggests the AI ecosystem is moving toward more specialized, distributed infrastructure strategies.
Scaling the AI Backbone
The expansion of compute capacity into the gigawatt range highlights the industrial scale of modern AI. Training and deploying advanced models now requires vast energy, data center space, and specialized hardware. Much of Anthropic’s new infrastructure is expected to be located in the United States, reflecting both capacity needs and strategic considerations around data and supply chains.
For Broadcom, the partnerships reinforce its transition from a traditional semiconductor supplier into a key enabler of AI platforms. For the industry, they illustrate how the race to build and control AI infrastructure is becoming as critical as the development of the models themselves.