Meta Unveils Meta Compute in Major Push to Expand AI Infrastructure

Meta has introduced a new top-level initiative, Meta Compute, to expand its AI infrastructure and energy capacity at large scale. The effort signals a long-term push to support advanced AI development and global data center growth.

By Maria Konash
Meta has launched Meta Compute, a new initiative aimed at massively scaling its AI infrastructure. Photo: Scott Rodgerson / Unsplash

Meta has established a new top-level initiative called Meta Compute to accelerate the buildout of its artificial intelligence infrastructure and long-term computing capacity. Chief Executive Officer Mark Zuckerberg said the company plans to develop tens of gigawatts of power capacity this decade and potentially hundreds of gigawatts over time, positioning infrastructure as a strategic advantage for future AI systems.

“How we engineer, invest, and partner to build this infrastructure will become a strategic advantage,” Zuckerberg said in a public post. The initiative reflects Meta’s growing focus on securing power, compute capacity, and supplier relationships as demand for large-scale AI training and inference continues to rise.

The company had previously signaled aggressive spending plans. During an earnings call last year, Chief Financial Officer Susan Li said building leading AI infrastructure would be a core advantage for developing advanced models and product experiences. Capital expenditures have already increased as Meta expands data centers and networking capacity to support generative AI services across its platforms.

Energy requirements remain a central challenge for the AI sector. A gigawatt represents one billion watts of electrical power, and a large data center can consume as much electricity as a small city. Industry estimates suggest that US electricity demand from AI data centers could increase sharply over the next decade as hyperscalers expand capacity. To secure long-term power supply, Meta recently signed 20-year agreements to purchase nuclear energy from three Vistra plants and announced plans to develop small modular reactors with Oklo and TerraPower, underscoring how energy procurement has become a critical component of large-scale AI infrastructure strategy.

Leadership and Operational Focus

Meta Compute will be led by Santosh Janardhan and Daniel Gross, with defined responsibilities across engineering, operations, and long-term planning. Janardhan, who has been with Meta since 2009 and currently leads global infrastructure, will continue overseeing technical architecture, software systems, custom silicon programs, developer productivity, and the operation of Meta’s global data center fleet and network.

Gross, who joined Meta last year and previously co-founded Safe Superintelligence with former OpenAI chief scientist Ilya Sutskever, will lead a new group responsible for long-term capacity strategy, supplier partnerships, industry analysis, planning, and business modeling. The role is designed to strengthen Meta’s ability to forecast demand, manage procurement, and secure reliable infrastructure at scale.

The initiative will also involve Dina Powell McCormick, who recently joined Meta as President and Vice Chairman. She will focus on partnerships with governments and sovereign entities to support the development, financing, and deployment of infrastructure projects. These relationships are expected to play a role in securing energy access, permitting, and long-term investment structures as data center footprints expand globally.
