Google DeepMind’s Project Genie Lets Users Explore AI-Generated Worlds

Google DeepMind has begun rolling out Project Genie, an experimental interactive prototype that lets users create and explore AI-generated worlds powered by its Genie 3 world model.

By Maria Konash
Google DeepMind introduces Project Genie, a Genie 3–based prototype that builds interactive worlds on the fly. Photo: Google

Google DeepMind has started rolling out access to Project Genie, an experimental interactive prototype that allows users to create, explore, and remix AI-generated worlds. The tool is powered by Genie 3, the company’s general-purpose world model, and is available initially to Google AI Ultra subscribers in the United States aged 18 and over.

The launch follows a limited preview of Genie 3 shared with trusted testers in August. Those early users created a wide range of interactive environments and identified new applications for the technology, prompting DeepMind to broaden access through a dedicated prototype focused on immersive world creation.

World models are designed to simulate environments by predicting how they evolve and how actions affect them. While DeepMind has previously built agents for closed systems such as chess and Go, the company views general-purpose world models as a key step toward artificial general intelligence. Genie 3 differs from earlier approaches by generating environments dynamically in real time, rather than relying on static scenes or pre-rendered paths.

Interactive World Creation

Project Genie is delivered as a web-based prototype and combines Genie 3 with other Google AI systems, including Gemini and Nano Banana Pro. The experience centers on three core capabilities: world sketching, world exploration, and world remixing.

Users can begin by prompting with text, generated images, or uploaded visuals to define a setting, a character, and a mode of movement, such as walking, flying, or driving. For more precise control, Nano Banana Pro allows users to preview and adjust the visual structure of a world before entering it and to choose a first-person or third-person perspective.

Once inside, the environment is fully navigable. As users move, the system generates new terrain and scenes on the fly, responding to direction, camera changes, and interactions. Existing worlds can also be remixed by building on original prompts, and users can browse curated examples or randomized environments for inspiration. Completed sessions can be exported as short videos.

DeepMind said the prototype reflects progress in consistency and physical simulation, enabling applications across fields such as robotics research, animation, fictional storytelling, and the exploration of real or historical locations.

Limitations and Responsible Use

Project Genie remains an early-stage research prototype and includes several constraints. Generated worlds may not always match prompts or real-world physics, character control can be inconsistent, and individual generations are limited to 60 seconds. Some features previously discussed for Genie 3, such as prompt-driven events that alter environments mid-exploration, are not yet available.

DeepMind emphasized that Project Genie is being released through Google Labs to better understand how people use world models and where improvements are needed. The company said feedback from users of its most advanced AI tier will inform future development.

Access to Project Genie is rolling out gradually to U.S.-based Google AI Ultra subscribers, with plans to expand to additional regions over time. DeepMind said its longer-term goal is to make world model technology more broadly accessible as capabilities mature and reliability improves.
