Luma, an AI video and 3D model startup backed by Andreessen Horowitz, has released a new model called Ray3 Modify that lets creators edit existing video footage while preserving the original human performance. The model enables users to transform characters, environments, and scenes using reference images, without requiring a physical reshoot.
Ray3 Modify is designed to address a key limitation of generative video tools: the loss of natural motion and emotional delivery. Luma said the model retains an actor's timing, movement, eye line, and expression while allowing visual changes such as new characters, costumes, or locations. Creators can also supply start and end reference frames to guide transitions and maintain continuity between scenes.
The model supports character references that allow consistent identity and appearance across shots, making it suitable for brand, film, and studio production workflows. Ray3 Modify is available through Luma’s Dream Machine platform and builds on video modification features introduced earlier this year.
The release follows Luma’s recently announced $900 million funding round led by Humain, an AI firm backed by Saudi Arabia’s Public Investment Fund. Existing investors including a16z, Amplify Partners, and Matrix Partners also participated. Luma plans to use the funding to expand infrastructure, including a large-scale AI compute cluster in Saudi Arabia.