Google has expanded access to its Lyria 3 Pro music generation model, integrating the system across multiple products to support creators, developers, and enterprises. The release builds on earlier versions, extending generated tracks to as long as three minutes and improving control over musical structure.
Lyria 3 Pro introduces enhanced capabilities for composing music with defined elements such as intros, verses, choruses, and bridges. The model is designed to better interpret musical prompts, allowing users to generate more complex and structured compositions across a range of styles.
The technology is now available through platforms including Vertex AI, where it is offered in public preview for enterprise use cases such as game development and media production. It is also integrated into Google AI Studio and the Gemini API, enabling developers to incorporate music generation into applications and creative tools.
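To make the developer integration concrete, the sketch below shows how an application might assemble a request for a structured music generation call. This is a hypothetical illustration only: the model identifier, endpoint fields, and parameter names are placeholders invented for this example, not the documented Gemini API surface, which developers should consult directly.

```python
# Hypothetical sketch of a text-to-music request payload. The model name
# ("lyria-3-pro") and all field names are illustrative placeholders, not
# the documented Gemini API schema.

def build_music_request(prompt, duration_seconds=180, sections=None):
    """Assemble a JSON-serializable request body for a music generation call.

    `sections` lets the caller hint at song structure (intro, verse,
    chorus, bridge), mirroring the structural control described in the
    announcement.
    """
    if not 0 < duration_seconds <= 180:
        # Three-minute ceiling, per the announced track-length limit.
        raise ValueError("duration must be between 1 and 180 seconds")
    body = {
        "model": "lyria-3-pro",  # placeholder identifier
        "prompt": prompt,
        "duration_seconds": duration_seconds,
    }
    if sections:
        body["structure"] = list(sections)
    return body


request = build_music_request(
    "Upbeat synth-pop with a bright chorus",
    duration_seconds=120,
    sections=["intro", "verse", "chorus", "bridge", "chorus"],
)
```

A real integration would send this body to the Gemini API or Vertex AI and handle the returned audio; the point here is only that structural hints and a duration cap are the kinds of parameters such a call would carry.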
Consumer-facing products are also receiving updates. Lyria 3 Pro is being rolled out in the Gemini app for paid users and integrated into Google Vids, an AI-powered video creation platform. Additionally, the model is part of ProducerAI, a collaborative tool aimed at supporting musicians and producers in iterative workflows.
Google emphasized responsible deployment, noting that the system does not replicate specific artists and includes safeguards such as content filtering and SynthID watermarking to identify AI-generated audio. The model is trained on licensed and permitted data sources.
The expansion reflects broader competition in generative AI, as companies invest in multimodal capabilities that extend beyond text and images into audio and video production. It also follows Google’s recent move to expand Stitch into a full AI-native design platform, introducing tools that convert natural language into interactive UI prototypes, further strengthening its position across creative AI workflows.