At Nvidia’s annual GTC developer conference, CEO Jensen Huang said the company expects purchase orders for its Blackwell and Vera Rubin systems to reach $1 trillion by 2027, doubling earlier projections of a $500 billion opportunity.
The updated forecast reflects surging demand for AI infrastructure as companies scale from chatbot deployments to agentic AI systems, which generate significantly more compute-intensive workloads.
“If they could just get more capacity, they could generate more tokens, their revenues would go up,” Huang said during his keynote in San Jose.
Nvidia, now valued at roughly $4.5 trillion, continues to benefit from explosive demand for its GPUs. The company expects 77% year-over-year revenue growth this quarter, extending a streak of strong performance driven by AI adoption.
New Chips, Systems, and Architecture Announced
Huang introduced several new technologies, including the upcoming Vera Rubin platform, expected to launch later this year. The system, made up of 1.3 million components, is designed to deliver 10x better performance per watt compared to the previous generation, an important advancement as energy consumption becomes a key constraint in AI infrastructure.
Nvidia also unveiled the Groq 3 Language Processing Unit (LPU), part of technology acquired through a $20 billion deal. The chip is designed to enhance inference performance and will ship in the third quarter.
A new Groq LPX rack, housing 256 LPUs, will work alongside Vera Rubin systems to significantly boost efficiency. According to Huang, the setup can improve tokens-per-watt performance by up to 35x.
Looking further ahead, Nvidia previewed Kyber, its next-generation rack architecture, which integrates 144 GPUs in a vertical configuration to increase density and reduce latency. Kyber is expected to debut in Vera Rubin Ultra systems in 2027.
Focus on Agentic AI and Developer Ecosystem
Huang highlighted the rapid rise of agent-based AI systems, pointing to the growing popularity of the open-source project OpenClaw. He introduced NemoClaw, a new reference stack designed to help developers build enterprise-ready AI agents using Nvidia infrastructure.
“It finds OpenClaw, it downloads it. It builds you an AI agent,” Huang said.
The announcements underscore Nvidia’s strategy to support the full AI stack: from chips and data center systems to developer tools and agent frameworks.
Expanding Into Autonomous Systems
Beyond data centers, Nvidia continues to push into autonomous systems. Huang said ride-hailing company Uber plans to deploy fleets powered by Nvidia’s Drive AV software across 28 cities globally by 2028, starting with Los Angeles and San Francisco next year.
Automakers including Nissan, BYD, Geely, Isuzu, and Hyundai are also developing Level 4 autonomous vehicles using Nvidia’s Drive Hyperion platform, while additional partners are building autonomous buses powered by Nvidia’s AGX Thor chip.
The keynote reflects Nvidia’s growing role at the center of the AI ecosystem, as demand accelerates across enterprise software, autonomous systems, and next-generation computing infrastructure.