
Gemini's Task Automation Delivers Impressive Results Despite Slow, Clunky Performance

Published by
SectorHQ Editorial
Photo by Mezidi Zineb (unsplash.com/@mezidi_zineb) on Unsplash

Nine minutes. That’s how long Gemini’s task automation took to order dinner, yet The Verge reports the AI still feels like “the future” despite its slow, clunky performance.

Key Facts

  • Key company: Gemini

Gemini’s task‑automation feature, which lets the large language model control native Android apps, is now live on Google’s Pixel 10 Pro and, surprisingly, on Samsung’s Galaxy S26 Ultra. According to a hands‑on review by The Verge, the system can navigate food‑delivery and rideshare apps, select menu items, and fill out checkout screens without user intervention—but it does so at a crawl. The reviewer timed a dinner order on Uber Eats at nine minutes from launch to the final confirmation screen, a duration that nonetheless “still feels like the future” despite the obvious latency (The Verge).

The same article notes that Gemini’s workflow is deliberately background‑oriented: the AI runs while the user continues other phone activities, and it only surfaces a thin status bar indicating its current step—e.g., “Selecting a second portion of Chicken Teriyaki for the combo.” This design choice mitigates the impact of the slow pace, but it also exposes the brittleness of the system. The reviewer observed Gemini stumbling over a side‑dish that was visually prominent on the screen, describing the experience as “like watching a horror movie” because the model repeatedly missed the element before finally correcting itself. Wired’s coverage of the feature echoes this sentiment, emphasizing that the automation can “book you an Uber or order a DoorDash meal” but that the underlying process remains “clunky” and requires user oversight at the final confirmation stage (Wired).

Accuracy, however, appears to be Gemini’s strongest suit. Across five days of testing, The Verge reports that the AI rarely mis‑ordered items and that any errors were self‑corrected before the user was prompted to confirm. The reviewer never saw Gemini complete a transaction without a manual “tap‑to‑confirm”—a safeguard consistent with Google’s beta‑stage rollout and the cautious approach seen in other AI‑driven assistants. TechCrunch’s brief on the feature underscores this point, noting that the automation is limited to a “handful of food delivery and rideshare services” and remains in beta, suggesting that Google is deliberately restricting scope while it refines reliability (TechCrunch).

From a market perspective, Gemini’s automation marks Google’s first functional on‑device AI assistant that operates beyond conversational queries. The Verge highlights that this is “the first time I’ve seen a true AI assistant actually working on a phone—not in a keynote presentation or a carefully controlled demo.” If the technology can be streamlined, it could reshape how users interact with mobile commerce, potentially reducing friction in app‑based transactions and opening new revenue streams for Google through deeper integration with partner services. However, the current performance gap—nine minutes for a simple dinner order—signals that significant engineering work remains before the feature can compete with the immediacy of human users or even existing voice assistants that rely on pre‑programmed shortcuts.

Analysts will likely watch Google’s next iteration for improvements in speed and broader app compatibility. The Verge’s reviewer hints that future updates may expand the automation beyond the current limited set of services, while Wired’s piece suggests that the “wild” early version is a proof‑of‑concept rather than a polished product. Until latency is cut down to seconds rather than minutes, Gemini’s task automation will remain a novelty for power users willing to tolerate the wait for the sake of experiencing a glimpse of AI‑driven mobile productivity.

