
OpenAI has entered into a long-term partnership with Broadcom to co-develop and deploy custom AI accelerators at a scale of up to 10 gigawatts. The collaboration marks OpenAI's next major step toward securing dedicated hardware for training and running increasingly complex artificial intelligence models.
The partnership will see Broadcom design and manufacture AI accelerators optimized for OpenAI’s large language and multimodal systems. These chips are expected to complement the company’s existing reliance on Nvidia and AMD hardware, creating a diversified supply chain and reducing potential bottlenecks as demand for computing power continues to surge.
Broadcom’s chips will be built on its advanced networking and silicon platforms, integrating memory, interconnect, and AI computation layers into a unified architecture. The first generation of these accelerators is planned for deployment in OpenAI’s next round of global data centers, contributing to what could become one of the world’s largest AI infrastructure networks.
OpenAI said the new hardware is designed to achieve higher energy efficiency and greater training throughput than its existing GPU clusters. The move is part of the company's ongoing effort to control every layer of its computing ecosystem, from chips and interconnects to data center layout and software optimization.
According to Broadcom CEO Hock Tan, the deal represents “a new frontier in AI computing,” combining Broadcom’s long-standing expertise in semiconductors with OpenAI’s leadership in generative models. Executives from both companies emphasized that this collaboration is aimed not just at scaling AI performance, but at improving accessibility and sustainability for enterprise-level AI workloads.
For OpenAI, the initiative fits into a broader trend of vertical integration. The company has already partnered with AMD for next-generation GPU deployments and Oracle for cloud infrastructure expansion. Its new alliance with Broadcom signals a deeper commitment to custom silicon, reflecting a belief that purpose-built AI accelerators will be central to future model development.
The 10-gigawatt buildout, roughly the power draw of several hyperscale data centers, underscores the extraordinary demand driving today's AI industry. By developing its own chip roadmap, OpenAI aims to gain greater autonomy over performance, cost, and supply, key factors as it competes with Google, Anthropic, and other major AI players.
Industry analysts see the deal as a strong signal that OpenAI is preparing for a new generation of models that will require unprecedented computing resources. With this partnership, the company moves closer to its goal of building a fully integrated AI infrastructure capable of supporting global-scale learning systems.