Alibaba Unveils New AI Chip Built for the Next Generation of AI Agents
Alibaba has unveiled a new CPU designed for AI agents, focusing on inference and customizable workloads. The chip reflects China’s push to build domestic AI infrastructure.
Lace has secured $40 million to advance helium atom beam lithography, a technology that could significantly shrink chip features beyond current limits. The approach targets next-generation AI chip manufacturing.
Helion Energy is in early talks to supply fusion power to OpenAI, signaling growing energy demands from AI infrastructure. The proposed deal highlights ambitious scaling plans for fusion energy production.
Elon Musk announced plans for Terafab, a dual chip-factory project from Tesla and SpaceX that would produce AI chips for vehicles, robots, and space-based data centers.
Nvidia will supply Amazon Web Services with 1 million GPUs by 2027, expanding AI infrastructure through a multi-year chip and networking deal.
Alibaba reported a sharp profit decline and missed revenue expectations, as heavy AI and cloud investments weighed on quarterly results.
Surging investment in AI data centers is fueling demand for skilled trade workers, creating labor shortages and rising wages. The trend highlights the physical infrastructure behind AI growth.
Alibaba and Baidu are increasing cloud service prices by up to 34 percent as AI demand drives higher infrastructure and computing costs. The changes take effect April 18.
Nvidia CEO Jensen Huang said demand for Blackwell and Vera Rubin systems could reach $1 trillion by 2027, as the company unveiled new chips, racks, and AI infrastructure at GTC.
Nvidia CEO Jensen Huang will deliver the keynote at the GTC 2026 conference, where investors expect new AI product announcements and demand outlook updates.
Nebius has signed a long-term AI infrastructure agreement with Meta worth up to $27 billion, providing large-scale compute capacity powered by Nvidia’s Vera Rubin platform.
Amazon and Cerebras have partnered to combine their AI chips in a new AWS service designed to accelerate inference for chatbots, coding tools, and other generative AI applications.
Palantir and Nvidia unveiled a sovereign AI OS reference architecture designed to deliver turnkey AI data centers. The platform integrates Nvidia Blackwell systems with Palantir’s enterprise AI software stack.
Meta introduced four new in-house MTIA chips designed for AI training and inference as the company accelerates data center expansion. The chips aim to improve performance and reduce reliance on external hardware suppliers.
Nvidia and Nebius have formed a strategic partnership to build hyperscale AI cloud infrastructure, with Nvidia investing $2 billion to support gigawatt-scale AI computing capacity.