Google sued over Lyria 3 AI music model as Gemma 4 gains NPU support in AICore
Photo by Rubaitul Azad (unsplash.com/@rubaitulazad) on Unsplash
Reports indicate Google is facing a lawsuit over its Lyria 3 AI music model just as Gemma 4 gains NPU support in AICore, highlighting escalating legal and technical battles in the AI arena.
Key Facts
- Key company: Google
Google’s Gemma 4 model has just been extended to run on neural‑processing units (NPUs) through the AICore runtime, according to a pull‑request merged into the LiteRT‑LM repository on GitHub. The change, submitted by Google AI Edge engineers, adds NPU‑specific kernels and a hardware‑abstraction layer that enable Gemma 4 to offload matrix‑multiply operations to on‑device accelerators, reducing latency and power consumption for edge deployments. The commit log notes that the update “adds NPU support for AICore for Gemma4 model,” and the discussion thread on Hacker News (ID 47308640) records a single up‑vote and no comments, suggesting the change has so far drawn little public discussion (GitHub pull request, 2024).
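The mechanism described in the pull request, a hardware‑abstraction layer that routes matrix‑multiply operations to an NPU when one is present and falls back to the CPU otherwise, can be sketched in a few lines. This is an illustrative toy, not the LiteRT‑LM or AICore API; all names here (`NpuBackend`, `CpuBackend`, `HardwareAbstractionLayer`) are hypothetical.

```python
import numpy as np


class CpuBackend:
    """Fallback backend: a plain CPU matrix multiply."""
    name = "cpu"

    def matmul(self, a, b):
        return np.matmul(a, b)


class NpuBackend:
    """Stand-in for an on-device accelerator kernel.

    A real runtime would call into vendor NPU drivers here; this sketch
    just delegates to NumPy so it stays runnable anywhere.
    """
    name = "npu"

    def __init__(self, available: bool):
        self.available = available

    def matmul(self, a, b):
        return np.matmul(a, b)


class HardwareAbstractionLayer:
    """Selects a backend per device, in the spirit of the AICore change:
    offload matmul-heavy layers to the NPU if one is available,
    otherwise fall back to the CPU."""

    def __init__(self, npu: NpuBackend, cpu: CpuBackend):
        self.backend = npu if npu.available else cpu

    def matmul(self, a, b):
        return self.backend.matmul(a, b)


# Usage: on a device with an NPU, matmuls are routed to the accelerator.
hal = HardwareAbstractionLayer(NpuBackend(available=True), CpuBackend())
out = hal.matmul(np.ones((2, 3)), np.ones((3, 4)))
print(hal.backend.name, out.shape)
```

The point of such a layer is that model code calls one `matmul` entry point and never needs to know which accelerator, if any, is underneath, which is what lets a single Gemma 4 build target both NPU-equipped and CPU-only devices.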
The timing of the NPU integration coincides with a newly filed lawsuit alleging that Google’s Lyria 3 AI music generation model infringes on copyrighted works. National Today reported that the complaint, filed in a U.S. district court, accuses Google of using protected musical compositions without permission to train Lyria 3, a generative model that powers music‑creation features in Google’s ecosystem. The filing claims that Google’s data‑scraping practices violated the plaintiffs’ intellectual‑property rights and seeks injunctive relief as well as monetary damages (National Today, 2024). Google has not publicly responded to the allegations, and the case is still in the early stages of litigation.
The juxtaposition of these two developments underscores the dual pressures facing Google’s AI division: accelerating hardware‑optimized model deployment while navigating increasingly litigious terrain. The NPU support for Gemma 4 is part of a broader strategy to push large language models (LLMs) to the edge, where latency‑sensitive applications—such as real‑time translation, on‑device assistants, and low‑power inference—benefit from specialized accelerators. By embedding NPU compatibility directly into the AICore runtime, Google aims to reduce reliance on cloud‑based inference, a move that could also mitigate some regulatory exposure by keeping data processing local.
Meanwhile, the Lyria 3 lawsuit highlights the growing scrutiny of data‑centric AI training pipelines. As other firms, including Meta, announce open‑weight models like Llama 3 and launch ChatGPT‑style interfaces (Ars Technica, 2024), the industry’s reliance on large, scraped datasets is coming under legal challenge. The outcome of the Lyria 3 case could set precedents for how companies source training material, potentially influencing future model‑training practices across the sector. For now, Google’s engineering teams continue to push technical boundaries with Gemma 4, while its legal teams prepare to defend Lyria 3 against the copyright claims.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.