Google unlocks Gemini CLI with new Skills, Hooks, and Plan Mode
Photo by Kai Wenzel (unsplash.com/@kai_wenzel) on Unsplash
According to a recent report, Google’s Gemini CLI now offers “Skills,” “Hooks,” and a “Plan Mode,” letting developers steer AI agents through complex, multi‑step projects with confidence.
Key Facts
- Key company: Google (product: Gemini CLI)
Google’s Gemini CLI now ships with three “power‑user” extensions that let developers shape AI agents more predictably. In a March 20 demo, Jack Wotherspoon walked through “Hooks,” which run scripted checks at defined lifecycle points—such as verifying a local dev server before the agent proceeds—giving developers a deterministic safety net (Greg Baugues, Google AI). The same session introduced “Skills,” modular knowledge packs that load on demand to avoid “context bloat”; a built‑in skill creator even walks users through constructing new skills via an interactive interview (“Create a docs‑writer skill for this project”). Finally, the preview‑only “Plan Mode” turns Gemini CLI into a read‑only researcher that drafts a structured execution plan, pauses for user approval, and only then switches to write mode (Baugues).
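To make the Hooks idea concrete, here is a minimal, purely illustrative sketch of the kind of deterministic check a pre-step hook might run — in this case, verifying that a local dev server is accepting connections before the agent proceeds. The function name, host, and port are assumptions for demonstration, not part of the actual Hooks API.

```python
# Illustrative sketch only: Hooks run scripted checks at lifecycle points.
# This stand-in shows one such check; names and port are assumptions.
import socket

def dev_server_ready(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP server is accepting connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # A hook script could exit non-zero here to block the agent's next step.
    ok = dev_server_ready("127.0.0.1", 3000)
    print("dev server up" if ok else "dev server down")
```

In a real hook, a failed check would signal the CLI to halt rather than let the agent continue against a dead server.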
Beyond the new primitives, the demo showcased a complete end‑to‑end build in 20 minutes. Using React, Three.js, and Firebase, the team assembled “Memory Wall,” a digital bulletin board that went from an empty repository to a live‑deployed app in a single session. The rapid prototype served as a sandbox for testing the Hooks‑driven dev‑server guard, the Three.js‑specific Skill, and the “Ask User” tool, which injects interactive prompts (multiple‑choice, yes/no) into the agent’s workflow to confirm intent before any code changes are made (Baugues). The combination of these features is designed to bridge the gap between AI autonomy and developer control, a recurring pain point cited by early adopters of code‑generating agents.
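The "Ask User" pattern described above — pausing the agent for a multiple-choice or yes/no confirmation before code changes — can be sketched roughly as follows. This is a hypothetical stand-in to show the interaction shape; the function name and signature are assumptions, not the tool's real interface.

```python
# Hypothetical sketch of an "Ask User"-style confirmation step.
# The injectable `reader` makes the prompt loop testable; that design
# choice is ours, not Gemini CLI's.
from typing import Callable, Sequence

def ask_user(question: str, choices: Sequence[str],
             reader: Callable[[str], str] = input) -> str:
    """Present numbered choices and return the selected choice string."""
    menu = "\n".join(f"  {i + 1}) {c}" for i, c in enumerate(choices))
    while True:
        answer = reader(f"{question}\n{menu}\n> ").strip()
        if answer.isdigit() and 1 <= int(answer) <= len(choices):
            return choices[int(answer) - 1]

if __name__ == "__main__":
    picked = ask_user("Apply these code changes?", ["yes", "no"],
                      reader=lambda _: "1")
    print(picked)
```

An agent workflow would call something like this before any write operation, proceeding only on an explicit "yes".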
The enhancements, however, arrive amid emerging security concerns. Ars Technica reported that researchers crafted an exploit within 48 hours that leveraged Gemini CLI’s default configuration to execute arbitrary commands and exfiltrate data from a victim’s system (Ars Technica). The flaw stems from the tool’s ability to run shell commands on the host, underscoring the value of the newly added Hooks, which can act as “security guards” that block unsafe operations. Google’s documentation now recommends enabling such Hooks by default and running the CLI in a sandboxed environment until the vulnerability is fully patched.
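A "security guard" hook of the kind described above could, for example, screen a proposed shell command against a deny-list before the agent is allowed to run it. The sketch below is purely illustrative — the patterns, function name, and deny-list approach are our assumptions, not Google's documented mitigation.

```python
# Illustrative only: a guard hook scanning proposed shell commands
# against a small deny-list. Patterns here are examples, not exhaustive.
import re

UNSAFE_PATTERNS = [
    r"\brm\s+-rf\s+/",        # recursive delete from the filesystem root
    r"curl\s+[^|]*\|\s*sh",   # piping a download straight into a shell
    r"\bchmod\s+777\b",       # world-writable permissions
]

def is_command_safe(cmd: str) -> bool:
    """Return False if the command matches any deny-list pattern."""
    return not any(re.search(p, cmd) for p in UNSAFE_PATTERNS)
```

A real deployment would pair such screening with sandboxing rather than rely on pattern-matching alone, since deny-lists are easy to evade.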
Open‑source availability adds both opportunity and risk. The Decoder highlighted that Gemini CLI’s codebase is publicly hosted on GitHub, with “Help Wanted” labels inviting community contributions (The Decoder). This openness accelerates feature iteration—evidenced by the rapid rollout of Skills and Plan Mode—but also expands the attack surface, making timely community review essential. Google has positioned the CLI as a developer‑first interface for Gemini, aiming to embed AI assistance directly into existing command‑line workflows while preserving manual oversight through Hooks, Skills, and Plan Mode.
Overall, the new Gemini CLI toolkit marks a shift from experimental AI coding assistants toward production‑grade tooling. By giving developers deterministic controls, on‑demand expertise, and a pre‑execution planning stage, Google hopes to make multi‑step AI‑driven projects more reliable. Yet the recent security findings serve as a reminder that tighter integration also demands rigorous safeguards, a balance the upcoming stable release will need to strike.
Sources
No primary source found (coverage-based)
- Dev.to AI Tag