OpenAI and Amazon Sign $38B Multi-Year Partnership to Power Next-Gen AI

OpenAI and Amazon Web Services have announced a $38 billion multi-year partnership that will provide OpenAI with massive AWS infrastructure to power advanced AI workloads and future models.

By Maria Konash
OpenAI and AWS have signed a $38 billion deal to power ChatGPT and next-generation AI models on Amazon’s cloud, expanding global compute capacity through 2027. Photo: Amazon

OpenAI and Amazon Web Services (AWS) have signed a $38 billion multi-year strategic partnership, one of the largest infrastructure deals in the history of artificial intelligence. The agreement gives OpenAI immediate access to AWS’s world-class computing resources to train, deploy, and scale its most advanced AI systems, including the models behind ChatGPT.

Under the partnership, OpenAI will begin running its workloads on AWS’s specialized compute clusters, including Amazon EC2 UltraServers powered by hundreds of thousands of NVIDIA GB200 and GB300 GPUs. The infrastructure, designed to handle large-scale generative AI and agentic workloads, can scale to tens of millions of CPUs. All capacity is targeted for deployment by the end of 2026, with further expansion planned through 2027 and beyond.
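For a sense of how that kind of GPU capacity appears from the customer side, the minimal sketch below uses boto3 to inventory accelerated EC2 instance types in a region. The "p5*"/"p6*" family filters are assumptions for illustration only, not a description of OpenAI's actual fleet or of the specific UltraServer configurations in the deal.

```python
# Illustrative sketch: list GPU-accelerated EC2 instance types visible in a
# region, the sort of inventory a capacity-planning script might start from.
# The instance-family wildcards below are assumptions; adjust to your account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "instance-type", "Values": ["p5*", "p6*"]}]
)

for page in pages:
    for itype in page["InstanceTypes"]:
        gpu_count = sum(
            gpu.get("Count", 0)
            for gpu in itype.get("GpuInfo", {}).get("Gpus", [])
        )
        print(f'{itype["InstanceType"]}: {gpu_count} GPUs, '
              f'{itype["VCpuInfo"]["DefaultVCpus"]} vCPUs')
```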

AWS has long been recognized for its experience in running large-scale, secure, and reliable AI infrastructure. Its clusters – some exceeding 500,000 chips – will form the backbone of OpenAI’s next phase of growth. The collaboration combines AWS’s global infrastructure expertise with OpenAI’s leading research and product ecosystem to deliver faster, more efficient, and more cost-effective AI models to millions of users worldwide.

“Scaling frontier AI requires massive, reliable compute,” said Sam Altman, co-founder and CEO of OpenAI. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

Matt Garman, CEO of AWS, echoed the sentiment: “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions. The breadth and immediate availability of optimized compute demonstrate why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”

The infrastructure being built for OpenAI features an architectural design optimized for efficiency and low-latency processing. Clustering GPUs within Amazon EC2 UltraServers enables high-speed communication between nodes, supporting training, inference, and large-scale model serving – all on the same tightly integrated network. This allows OpenAI to train next-generation models faster and serve billions of ChatGPT queries with higher performance and stability.
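To illustrate why that tight coupling matters, here is a minimal multi-node training sketch using PyTorch's NCCL backend, where each step's gradient all-reduce travels over the cluster interconnect. It is an illustrative stand-in rather than OpenAI's training stack, and it assumes a launcher such as torchrun has set the usual rendezvous environment variables.

```python
# Illustrative sketch: distributed all-reduce over NCCL. The latency and
# bandwidth of the node-to-node fabric directly bound how quickly each
# training step can synchronize gradients across GPUs.
import os
import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")  # uses the cluster interconnect
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in for per-step gradients: every rank holds a tensor and the
    # all-reduce sums it across all GPUs in the job.
    grads = torch.ones(1024, device="cuda")
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print(f"world size: {dist.get_world_size()}, grads[0]={grads[0].item()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```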

The deal also reflects the growing demand for computing power across the AI industry. Frontier model providers like OpenAI are now competing for scarce GPU resources as they push models toward artificial general intelligence (AGI). By partnering with AWS, OpenAI secures a scalable, long-term supply of compute infrastructure to support its rapid model development cycle and global deployment strategy.

The companies already share a strong technical relationship. Earlier this year, OpenAI’s open-weight foundation models became available on Amazon Bedrock, allowing AWS customers to integrate OpenAI models directly into their workflows for agentic automation, coding, scientific research, and data analysis. OpenAI is now among the most-used model providers on Bedrock, serving enterprise clients like Thomson Reuters, Peloton, and Comscore.
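For developers, invoking one of those open-weight models on Bedrock looks like any other Bedrock call. The sketch below uses boto3's Converse API; the model identifier shown is an assumption for illustration and should be checked against what is actually listed in a given region.

```python
# Illustrative sketch: calling an OpenAI open-weight model through Amazon
# Bedrock's Converse API. The modelId is an assumed example; verify the exact
# identifier and region availability in your own account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="openai.gpt-oss-120b-1:0",  # assumed ID for illustration
    messages=[
        {"role": "user", "content": [{"text": "Summarize this quarter's sales data."}]}
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```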

For OpenAI, the partnership represents a critical step toward diversifying its infrastructure beyond Microsoft’s Azure cloud, reducing dependency while expanding flexibility. For AWS, it signals a deepening role in the global AI race, positioning its infrastructure as the preferred foundation for next-generation intelligence systems.

As OpenAI ramps up training and deployment for its next wave of models, the AWS collaboration could help shape the future of AI scale – where innovation depends as much on compute capacity as on algorithmic breakthroughs.