Runway has unveiled a real-time AI video generation model developed in collaboration with Nvidia, marking a significant step toward low-latency generative systems. The research preview, presented at Nvidia GTC, demonstrates the ability to generate high-definition video with a time-to-first-frame of under 100 milliseconds.
The model runs on Nvidia’s Vera Rubin architecture and is designed for instant visual output, enabling more interactive and responsive creative workflows. This marks a shift from traditional video generation pipelines, which typically rely on higher-latency, offline rendering.
Runway said the system contributes to the development of its General World Model, GWM-1, aimed at simulating dynamic environments in real time. The company is focusing on integrating model design with hardware advancements to improve performance and scalability.
The advance highlights growing industry efforts to enable real-time AI applications, including interactive media, simulation, and advanced content-creation tools.