
Google introduced Lyria 3, its latest AI‑driven music generation model, positioning it as a “musical collaborator” that composes tracks from user‑provided prompts. The announcement highlights the model’s ability to interpret nuanced textual instructions and turn them into coherent, high‑quality audio that flows naturally. Lyria 3 also expands beyond text, letting creators convert images into bespoke soundscapes, a feature aimed at brands seeking audio identities tied to their visual assets. Users can pick specific genres, blend multiple styles, and fine‑tune dynamics, tempo, and vocal characteristics, including realistic multilingual singing with granular control over the vocal language, which opens the tool to global applications. The rollout emphasizes practical workflow: once a track matches the creator’s vision, it can be exported as a high‑quality audio file carrying a watermark that identifies it as AI‑generated. Google frames the tool as a fast track for marketers, filmmakers, and independent musicians to prototype or fully produce music without traditional studio resources. If adopted widely, Lyria 3 could lower barriers to professional‑grade music production, accelerate content pipelines, and reshape licensing models, while also raising questions about attribution and the future role of human composers in commercial media.

In a behind‑the‑scenes tour of Google DeepMind’s robotics lab, host Hannah Fry and Director of Robotics Kanishka Rao showcase the latest generation of general‑purpose robots built on large multimodal models. The discussion frames the shift from narrowly programmed manipulators to...