Google launches Gemma 4 AI models for data centers and smartphones, sparks privacy concerns

Published by
SectorHQ Editorial

While Google touts Gemma 4 as a breakthrough for data centers and smartphones, privacy advocates warn the rollout could expose user data, according to reports.

Key Facts

  • Key company: Google

Google’s Gemma 4 family, unveiled in a joint briefing by CEO Sundar Pichai and DeepMind chief Demis Hassabis, is positioned as the company’s first AI stack built to run both at scale in data centers and on‑device on smartphones, according to an MSN report. The models, which the executives said are “optimised for latency and power efficiency,” are intended to let Google’s cloud customers and Android OEMs run large‑language‑model workloads without relying on external APIs, a move that could tighten the firm’s grip on the rapidly expanding generative‑AI market.

The rollout, however, has reignited scrutiny over Google’s data‑handling practices. In September 2024, Ph.D. candidate Amandla Thomas‑Johnson, who was studying in the United States on a student visa, attended a brief pro‑Palestinian protest at Cornell University. In April 2025, Immigration and Customs Enforcement (ICE) issued an administrative subpoena for his personal data, and Google complied the following month, providing the information without offering Thomas‑Johnson any opportunity to contest the request. The incident breached a promise Google made nearly a decade ago to notify users before handing over data to law‑enforcement agencies, the Electronic Frontier Foundation (EFF) noted in filings with the California and New York attorneys general. The EFF’s complaints allege deceptive trade practices, arguing that Google’s failure to warn its users undermines the privacy assurances that have long been a selling point for its services.

The Gemma 4 launch arrives at a moment when regulators and civil‑rights groups are tightening the spotlight on tech firms’ compliance with subpoenas. The EFF’s action underscores a broader tension: while Google touts the new models as a privacy‑by‑design solution for on‑device inference, the company’s recent history of handing over data without notice could erode user confidence in those very safeguards. Analysts cited in the MSN report point out that the ability to run AI locally on smartphones could reduce the volume of data transmitted to Google’s servers, yet the precedent set by the Thomas‑Johnson case suggests that “data residency” may not translate into legal protection when authorities demand information.

From a market perspective, Gemma 4’s dual‑deployment strategy could help Google recapture enterprise customers who have migrated to competitors offering on‑premise AI solutions. The report notes that the models are engineered to compete with offerings from Microsoft’s Azure OpenAI service and Amazon’s Bedrock, both of which emphasize flexible deployment options. If Google can convince enterprise buyers that its on‑device capabilities are both performant and insulated from external data requests, the company may shore up a revenue stream that has been slipping as cloud AI spend diversifies across vendors.

Nevertheless, the privacy controversy may temper adoption, especially among organizations with stringent data‑governance mandates. The EFF’s complaints, filed on behalf of Thomas‑Johnson, could prompt state‑level investigations that force Google to revise its subpoena‑response protocols. Until such regulatory clarity emerges, the promise of Gemma 4’s “secure, low‑latency inference” may remain contested by the very users it is designed to protect.

Sources

  • MSN (primary source, independent coverage)

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
