Google rolls out Gemini‑powered Ask Maps and AI upgrades across Workspace and Maps apps.
More than 300 million locations now speak Gemini: Google's new Ask Maps feature lets users pose plain-language queries like "Where's a lit tennis court tonight?" The Decoder reports that the feature is rolling out in Maps, alongside related Gemini upgrades across Workspace and other Google apps.
Key Facts
- Key company: Google
Google has begun embedding its Gemini large language model across the core Google Maps and Workspace suites, turning static search and productivity tools into conversational assistants. In Maps, the new "Ask Maps" feature lets users pose natural-language queries—such as "Is there a lit tennis court nearby tonight?"—and receive results drawn from a database of more than 300 million locations and 500 million user-generated reviews, according to The Decoder. Responses appear on a personalized map that reflects prior searches and saved places, and users can instantly book a table, share a venue, or launch turn-by-turn navigation. The rollout begins in the United States and India on both Android and iOS, with Google promising broader availability later this year.
Within Google Workspace, Gemini powers a suite of AI-enhanced capabilities spanning Docs, Slides, Sheets, and Drive. ITPro reports that the update adds generative-text assistance, data-analysis suggestions, and automated design recommendations directly inside the familiar editing interfaces. For example, Docs can now draft sections of a report from a brief outline, while Sheets can generate formulas or visualizations from natural-language prompts. The integration is built on the same Gemini model that underpins Ask Maps, giving users a consistent conversational experience across productivity and navigation contexts.
Developers also gain new tooling to harness Gemini's capabilities. Google's developer blog announced "Plan mode" in the Gemini command-line interface (CLI), a feature that lets engineers script multi-step interactions with the model and preview execution plans before committing to costly API calls. The blog post notes that Plan mode is intended to reduce latency and token consumption for complex workflows, a move that aligns with Google's broader strategy of making Gemini more accessible to third-party applications.
The simultaneous launch of consumer-facing and developer-oriented Gemini features underscores Google's effort to cement its AI leadership amid intensifying competition from Microsoft's Azure OpenAI services and Amazon's Bedrock platform. VentureBeat's coverage of Google Cloud's new "AI Agent Space" highlights the growing pressure on cloud providers to offer turnkey AI agents, and the Gemini rollout reads as a parallel push to embed AI directly into Google's flagship products. By leveraging its massive location data and entrenched productivity suite, Google aims to create a seamless, cross-product AI experience that keeps users within its ecosystem while differentiating itself from rivals that rely on external large language models.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.