Meta has unveiled four new custom chips designed for artificial intelligence workloads as part of its expanding data center infrastructure. The processors belong to the company’s Meta Training and Inference Accelerator (MTIA) family, a line of chips built to handle specific AI tasks within Meta’s platforms.
The announcement marks the latest step in Meta’s push to reduce reliance on external chip vendors by designing its own silicon optimized for internal workloads. According to Meta Vice President of Engineering Yee Jiun Song, custom chips allow the company to improve price-performance across its data centers while diversifying its hardware supply chain.
“This also provides us with more diversity in terms of silicon supply and insulates us from price changes to some extent,” Song said.
Meta first revealed the MTIA architecture in 2023 and released a second-generation version in 2024. The new lineup significantly expands the platform as the company scales its AI infrastructure to support recommendation systems and generative AI features across its products.
Four New Chips Target Different AI Workloads
The first of the new chips, MTIA 300, has already been deployed in Meta data centers. It is designed to train smaller AI models used for core platform tasks such as ranking and recommendation algorithms. These systems determine which posts, ads, and videos appear in users’ feeds across services including Facebook and Instagram.
Three additional chips are currently in development. The MTIA 400 is nearing deployment after completing testing, while the MTIA 450 and MTIA 500 are scheduled to become operational by 2027.
Unlike the MTIA 300, the upcoming chips will focus on generative AI inference workloads such as generating images and videos from text prompts. However, Meta said the processors will not be used to train large language models.
Song noted that Meta plans to release new chips roughly every six months as the company rapidly expands computing capacity. Each chip generation is expected to remain in service for more than five years.
Data Center Expansion and AI Infrastructure
The new processors will support Meta’s massive data center expansion. The company is currently building a large facility in Louisiana and additional centers in Ohio and Indiana. Reports also indicate that Meta is exploring leasing space at a major AI data center site in Texas.
The custom chips are manufactured by Taiwan Semiconductor Manufacturing Company (TSMC), though Meta did not confirm whether production will occur at TSMC’s new fabrication facilities in Arizona.
Meta’s in-house silicon strategy follows a broader industry trend in which major technology companies develop application-specific integrated circuits, or ASICs, for AI workloads. These chips are typically smaller and more energy-efficient than general-purpose GPUs but are optimized for narrower tasks.
Google pioneered this approach with its Tensor Processing Units in 2015, and Amazon followed with custom chips for its cloud services in 2018. Unlike those companies, Meta uses its MTIA chips exclusively for internal operations rather than offering them through a public cloud platform.
Despite building its own silicon, Meta continues to rely heavily on external hardware suppliers. The company recently signed agreements to deploy millions of Nvidia GPUs and up to six gigawatts of AMD GPUs across its data centers.
Song acknowledged that securing high-bandwidth memory remains a potential constraint as AI infrastructure spending increases across the technology industry. However, he said Meta has diversified its supply chain and believes it has secured the resources required for its planned deployments.
