Meta has signed an agreement with Amazon Web Services to deploy tens of millions of Graviton processor cores, marking a major expansion of the companies' long-standing partnership. The deployment will support Meta's next generation of artificial intelligence systems and is expected to scale further over time. The move positions Meta as one of the largest customers of AWS's custom-designed Graviton chips, and comes as demand for AI infrastructure grows rapidly, particularly for systems that require real-time processing and coordination.
The chips will power a range of workloads across Meta's platforms, including AI systems that handle billions of user interactions. While graphics processing units (GPUs) remain central to training large AI models, the rise of agentic AI has increased demand for CPU-based computing. These workloads include real-time reasoning, code generation, and orchestrating multi-step processes. AWS's Graviton5 is designed for such tasks, featuring 192 cores and significantly expanded cache to improve data flow and reduce latency.
Graviton processors are built on the AWS Nitro System, which combines dedicated hardware and software to deliver high performance and security. The infrastructure also supports features such as low-latency communication between compute instances, enabling distributed AI workloads to run efficiently. Meta has previously relied on AWS services, including large-scale use of its AI tools, and this agreement deepens that relationship. The deployment also aligns with Meta’s strategy to diversify its compute resources as it scales AI capabilities.
Infrastructure Shift
The deal reflects a broader shift in how AI infrastructure is designed. While GPUs dominate model training, many emerging AI applications require sustained, high-volume processing that is better suited to CPUs. Agentic AI systems, which can plan and execute multi-step tasks autonomously, rely heavily on this type of compute. By investing in purpose-built chips like Graviton, companies can optimize performance while managing costs more effectively.
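The orchestration these agentic systems perform is, at bottom, sequential control flow rather than bulk matrix math, which is why it maps to CPUs. A minimal sketch of the pattern, with entirely hypothetical function and step names and no real model or tool calls, might look like:

```python
# Minimal sketch of an agentic loop: plan a goal, then execute the
# resulting steps one at a time. All names here are hypothetical; a
# real agent would call models and external tools at each step.

def plan(goal):
    # A real planner would query a model; here we return fixed steps.
    return ["gather", "reason", "act"]

def execute(step, state):
    # Each step reads and updates shared state. This bookkeeping and
    # coordination is the CPU-side work the article describes.
    state = dict(state)
    state.setdefault("log", []).append(step)
    return state

def run_agent(goal):
    state = {"goal": goal}
    for step in plan(goal):
        state = execute(step, state)
    return state

if __name__ == "__main__":
    result = run_agent("summarize report")
    print(result["log"])  # the steps executed, in order
```

The point of the sketch is that the coordinating loop itself, branching, state bookkeeping, and sequencing, is general-purpose sequential work, which is why sustained agentic workloads favor many-core CPUs over accelerators built for parallel numeric kernels.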
For businesses, this trend signals a more complex infrastructure landscape, where different types of processors are used for specific workloads. It may also influence how cloud providers package and price AI services, as demand grows for specialized compute resources. For end users, improved infrastructure can enable faster and more responsive AI-driven features across platforms.
The Road Ahead
The expansion underscores the increasing importance of custom silicon in the AI race. AWS designs Graviton chips to be more energy-efficient and cost-effective than traditional processors, with the latest generation delivering performance gains of up to 25% over its predecessor. Built on advanced manufacturing processes, these chips help address both cost pressures and sustainability goals as AI workloads scale.
As AI adoption accelerates, infrastructure efficiency is becoming a key competitive factor. Companies like Meta are balancing performance, cost, and energy use while building systems capable of supporting billions of interactions. The partnership with AWS suggests that purpose-built processors will play a larger role in future AI deployments, shaping how large-scale systems are developed and operated.