Anthropic has announced a compute partnership with SpaceX that gives Anthropic access to the full capacity of the Colossus 1 AI data center. The agreement adds more than 300 megawatts of compute power and over 220,000 NVIDIA GPUs, significantly expanding the infrastructure available for Claude models.
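Those two headline figures are at least mutually consistent. The back-of-envelope check below is a rough editorial estimate, not a number from the announcement; only the two "stated" values come from the deal itself.

```python
# Rough consistency check on the announced totals; only the two
# "stated" numbers below come from the announcement itself.
total_power_watts = 300e6   # stated: "more than 300 megawatts"
gpu_count = 220_000         # stated: "over 220,000 NVIDIA GPUs"

watts_per_gpu = total_power_watts / gpu_count
print(f"All-in power budget per GPU: ~{watts_per_gpu:,.0f} W")  # ~1,364 W
# Plausible: an H100 SXM draws roughly 700 W on its own, and
# facility-level overhead (cooling, networking, host CPUs) can
# roughly double the per-accelerator budget.
```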
The additional capacity is already affecting Anthropic’s products. The company said it is doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and Enterprise seat-based plans. It is also removing peak-hour usage reductions for Pro and Max subscribers and substantially increasing API rate limits for Claude Opus models.
According to Anthropic, the SpaceX agreement is part of a broader infrastructure expansion strategy aimed at addressing rising demand for Claude services. The company said the added GPU capacity will directly improve availability for Claude Pro and Claude Max users, who have faced tighter usage restrictions as demand for coding and reasoning workloads increased.
The Colossus 1 facility includes dense deployments of NVIDIA H100, H200, and GB200 accelerators. Anthropic said the compute cluster will support both model training and inference workloads, including Claude Code and API services.
The SpaceX agreement follows several other large-scale infrastructure deals announced by Anthropic this year. These include an agreement with Amazon for up to 5 gigawatts of AI infrastructure, including nearly 1 gigawatt expected online by the end of 2026; a 5-gigawatt partnership with Google and Broadcom beginning in 2027; a strategic infrastructure partnership involving Microsoft and NVIDIA worth up to $30 billion in Azure capacity; and a $50 billion AI infrastructure investment initiative with Fluidstack.
Usage Limits Increase As Demand Surges
The immediate product changes show how tightly compute availability is tied to user experience in large AI systems. Claude Code, which allows developers to use Claude for software engineering workflows, has become one of Anthropic’s most compute-intensive products because coding tasks often require long reasoning chains and repeated iterations.
By raising rate limits and removing peak-hour reductions, Anthropic is effectively signaling that infrastructure constraints had become a bottleneck for paid users. The increase in API capacity also matters for enterprise customers building applications on Claude Opus, Anthropic’s most capable model.
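Higher ceilings reduce, but do not eliminate, the need for client-side rate-limit handling. A minimal sketch using Anthropic’s public Python SDK is shown below; the model ID, retry count, and helper name are illustrative placeholders rather than details from the announcement.

```python
# Minimal sketch of client-side handling for API rate limits,
# using the public Anthropic Python SDK (`pip install anthropic`).
# The model ID below is a placeholder; check current IDs in the docs.
import time
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask_opus(prompt: str, retries: int = 5) -> str:
    """Call an Opus-class model, backing off exponentially on 429s."""
    for attempt in range(retries):
        try:
            message = client.messages.create(
                model="claude-opus-4-20250514",  # placeholder model ID
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return message.content[0].text
        except anthropic.RateLimitError:
            # Raised on HTTP 429; wait 1s, 2s, 4s, ... before retrying.
            time.sleep(2 ** attempt)
    raise RuntimeError("rate limited on every attempt")
```

The SDK also retries transient failures on its own (a `max_retries` option on the client constructor), so explicit backoff like this mainly matters for sustained throttling rather than isolated 429s.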
The company’s reliance on multiple hardware platforms, including AWS Trainium chips, Google TPUs, and NVIDIA GPUs, reflects a broader strategy to diversify compute supply instead of depending on a single cloud or chip provider.
AI Infrastructure Expands Beyond The US
Anthropic also said future infrastructure expansion will increasingly happen internationally, particularly for enterprise customers in regulated industries such as healthcare, government, and financial services. Many of these customers require local hosting to meet data residency and compliance rules.
The company said some of its new inference capacity through Amazon will be deployed in Asia and Europe. Anthropic also emphasized that future expansion will prioritize countries with stable legal frameworks and secure supply chains for networking, hardware, and data center infrastructure.
The announcement also noted continued discussions with SpaceX about orbital AI compute systems. While still experimental, the idea reflects growing concern inside the AI industry that future model development could outgrow the practical limits of terrestrial power, cooling, and land availability.