Nvidia has unveiled Lyra 2.0, a new AI model designed to generate explorable 3D environments in real time. The system enables users to move freely through AI-generated worlds while maintaining spatial consistency, addressing a major limitation in existing generative video and 3D models.
Lyra 2.0 builds on recent advances in video-based scene generation, in which AI models create camera-controlled walkthroughs and convert them into 3D environments. Users can navigate these spaces dynamically, with new parts of the world rendered on the fly as they explore. This approach supports real-time interaction and opens the door to applications in simulation, gaming, and robotics.
Today, we released Lyra 2.0, a framework for generating persistent, explorable 3D worlds at scale, from NVIDIA Research.
Generating large-scale, complex environments is difficult for AI models. Current models often “forget” what spaces look like and lose track of movement over… pic.twitter.com/l6oTNMl5mV
— NVIDIA AI Developer (@NVIDIAAIDev) April 15, 2026
A key differentiator is Lyra 2.0’s ability to maintain consistent geometry over long sequences. Competing systems often struggle to track motion over time, producing visual artifacts such as shifting objects, blurring, or inconsistent scene reconstruction. Nvidia’s model is designed to overcome these issues, enabling stable navigation across complex environments without degradation.
Fixing Drift and Inconsistency in 3D Generation
Lyra 2.0 tackles two core technical challenges: motion drift and spatial inconsistency. In many generative systems, small errors accumulate as the model produces frames over time, eventually distorting the scene. At the same time, previously generated areas may be forgotten, causing the model to recreate them inaccurately when revisited.
To address this, Lyra 2.0 maintains a form of spatial memory by storing per-frame geometry. This allows the model to reference previously seen areas and preserve structural consistency. It also uses a training approach that exposes the system to its own imperfect outputs, helping it learn to correct errors instead of amplifying them.
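Nvidia has not published the details of this memory mechanism, but the general idea of caching generated geometry by camera position so revisited areas stay consistent can be sketched in a few lines. Everything here (the `SpatialMemory` class, the grid-cell quantization, the `generate_frame` helper) is illustrative, not Lyra 2.0's actual implementation:

```python
import math

class SpatialMemory:
    """Toy spatial memory: caches generated geometry per quantized camera
    position. Illustrative only -- Lyra 2.0's real mechanism is not public."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = {}  # (i, j, k) grid cell -> stored geometry

    def _key(self, position):
        # Quantize a 3D camera position into a grid cell.
        return tuple(math.floor(c / self.cell_size) for c in position)

    def lookup(self, position):
        # Return previously generated geometry for this region, if any.
        return self.cells.get(self._key(position))

    def store(self, position, geometry):
        # Remember what was generated here so revisits stay consistent.
        self.cells[self._key(position)] = geometry

def generate_frame(memory, position, generator):
    """Reuse cached geometry when revisiting a region; otherwise generate
    fresh content and cache it for future visits."""
    cached = memory.lookup(position)
    if cached is not None:
        return cached  # revisit: render the remembered structure
    geometry = generator(position)
    memory.store(position, geometry)
    return geometry
```

The key property is that two visits to the same region return identical geometry, rather than asking the generator to reinvent the scene from scratch each time.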
The result is a system capable of generating longer, more coherent 3D sequences, even when users move in different directions or revisit earlier parts of the environment.
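The training idea of exposing a model to its own imperfect outputs is reminiscent of scheduled sampling: during a rollout, the model sometimes receives its own previous prediction rather than clean ground truth as input, so it learns to recover from its own drift. The sketch below uses a toy scalar "scene state" and a hypothetical `model_step` function; it is an assumption-laden illustration of the general technique, not Nvidia's procedure:

```python
import random

def train_step(model_step, ground_truth_seq, self_feed_prob, rng=random.random):
    """Roll out a sequence, sometimes feeding the model its own previous
    output instead of ground truth, and return the mean squared error.
    Toy sketch of scheduled-sampling-style training."""
    losses = []
    prev = ground_truth_seq[0]
    for target in ground_truth_seq[1:]:
        pred = model_step(prev)
        losses.append((pred - target) ** 2)
        # With probability self_feed_prob, the next step sees the model's
        # own (possibly imperfect) prediction rather than ground truth.
        prev = pred if rng() < self_feed_prob else target
    return sum(losses) / len(losses)
```

With a slightly biased model, always self-feeding yields a higher loss than pure teacher forcing, because each step compounds the previous step's error; training against that compounded error is what pushes the model to correct drift rather than amplify it.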
Real-Time Exploration and Simulation Potential
Lyra 2.0 includes an interactive interface that lets users explore generated environments freely, rather than following a fixed path. As users move, the system continuously expands the world, generating new regions while maintaining alignment with previously created structures.
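A common way to frame this kind of on-demand expansion is chunk streaming: as the camera moves, any world chunk entering its view radius that has not been generated yet is generated once, while existing chunks are left untouched so revisited areas stay aligned. The 2D top-down sketch below is a generic illustration under that assumption; the function names and chunking scheme are hypothetical, not Lyra 2.0's interface:

```python
def visible_chunks(camera_xy, radius, chunk=4.0):
    """All grid chunks within `radius` of the camera (2D top-down sketch)."""
    cx, cy = int(camera_xy[0] // chunk), int(camera_xy[1] // chunk)
    r = int(radius // chunk) + 1
    return {(cx + dx, cy + dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)}

def explore(path, world, generate, radius=6.0):
    """Walk the camera along `path`, generating only chunks not seen before.

    `world` maps chunk coordinates to generated content; existing entries
    are never regenerated, which keeps revisited areas consistent with what
    the user saw earlier. Toy sketch, not Lyra 2.0's actual API.
    """
    for pos in path:
        for coords in visible_chunks(pos, radius):
            if coords not in world:
                world[coords] = generate(coords)
    return world
```

Walking forward and then retracing the same path triggers no regeneration on the way back, which is exactly the persistence property the article describes.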
The generated environments can also be exported into simulation tools such as NVIDIA Isaac Sim, making them suitable for robotics training and testing. This could reduce the time and cost required to build large-scale simulation environments.
The release highlights Nvidia’s broader push into generative AI for spatial computing. By combining video generation with 3D reconstruction and real-time interaction, Lyra 2.0 moves closer to enabling fully AI-generated virtual worlds that can be explored, manipulated, and deployed across industries.