Nvidia has announced a strategic partnership with Marvell Technology, backed by a $2 billion investment, to expand its AI infrastructure ecosystem and accelerate development of next-generation computing systems.
The collaboration integrates Marvell’s custom silicon and networking capabilities with Nvidia’s NVLink Fusion platform, a rack-scale architecture designed to support custom AI infrastructure. The partnership reflects growing demand for scalable systems capable of handling increasingly complex AI workloads.
The move comes as companies across the industry race to build “AI factories,” large-scale computing environments optimized for training and deploying advanced models.
Expanding the NVLink Ecosystem
Under the agreement, Marvell will develop custom XPUs (application-specific AI accelerators) and networking technologies compatible with Nvidia’s NVLink Fusion platform. Nvidia will provide core components, including its Vera CPU, NVLink interconnect, Spectrum-X Ethernet switches, and networking products such as ConnectX NICs and BlueField DPUs.
The integration allows customers to design semi-custom AI systems while remaining fully compatible with Nvidia’s broader ecosystem. This approach supports heterogeneous computing environments, where different types of processors and accelerators work together within a unified architecture.
By enabling tighter integration between custom silicon and Nvidia’s infrastructure, the companies aim to provide greater flexibility for enterprises building specialized AI systems.
Focus on Networking and AI at Scale
The partnership also includes joint development of advanced networking technologies, particularly in silicon photonics and optical interconnects. These components are critical for improving data transfer speeds and reducing latency in large-scale AI deployments.
In addition, Nvidia and Marvell plan to collaborate on AI-RAN technology, which applies AI to telecommunications networks, including 5G and future 6G systems. The goal is to transform telecom infrastructure into distributed AI computing platforms.
As AI workloads increasingly rely on distributed systems, high-speed connectivity and efficient data movement are becoming central to performance and cost efficiency.
Strategic Bet on AI Infrastructure
Nvidia’s investment underscores the importance of partnerships in scaling AI infrastructure beyond standalone chips. As demand for inference and model deployment grows, companies are focusing on building integrated systems that combine compute, networking, and storage.
The collaboration with Marvell positions Nvidia to strengthen its role not only as a GPU provider but as a broader infrastructure platform. It also highlights the increasing role of custom silicon and specialized hardware in meeting the needs of enterprise AI applications.