Google’s Antigravity Platform Crumbles Under 70+ Vulnerabilities, Prompting Mass Bans
Photo by BoliviaInteligente (unsplash.com/@boliviainteligente) on Unsplash
Developers expected seamless, AI-powered coding from Google's Antigravity IDE. Instead, a recent report uncovered more than 70 flaws, and Google's response has been a wave of HTTP 403 bans.
Key Facts
- Key company: Google
Google’s Antigravity IDE has become a cautionary tale for AI‑assisted development tools after a security researcher publicly disclosed more than 70 critical flaws in version 1.107.0. Dmitry Labintcev’s audit, posted on March 5, 2026, describes how the platform’s architecture leaks authentication tokens via Windows Management Instrumentation (WMI) and exposes an unprotected named‑pipe IPC channel, effectively turning any local process into a remote‑code execution vector (Labintcev). The report notes that these vulnerabilities allow immediate exfiltration of SSH keys, cloud provider credentials, DPAPI master keys, and Chrome session cookies, all without requiring elevated privileges.
The most egregious issue is the CSRF token leak. Antigravity secures its gRPC calls with a csrfToken that is passed as a command-line argument to extension_server.exe. Because Windows permits any process to read the command line of another process, a simple WMI query (e.g., Get-CimInstance Win32_Process) can harvest the token in real time (Labintcev). With that token, an attacker can connect to the IDE's IPC endpoint, the named pipe \\.\pipe\antigravity_ipc, which, according to the same audit, lacks any Access Control List (ACL) restrictions. The combination of token exposure and an open pipe creates a "full-blown backdoor" that bypasses the IDE's UI entirely, allowing malicious code to be injected into the developer's environment (Labintcev).
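To see why a command-line token is effectively public on Windows, consider the following minimal sketch. It simulates the output of a WMI query such as Get-CimInstance Win32_Process and shows that harvesting the secret reduces to a string search over command lines. The flag name --csrf_token and the regex are illustrative assumptions; the audit only states that the token appears as an argument to extension_server.exe.

```python
import re

# Windows exposes every process's command line to any other local process
# (e.g., via the Win32_Process.CommandLine WMI property). If a secret is
# placed on the command line, any unprivileged process can read it.
# NOTE: the exact flag name is an assumption for illustration.
TOKEN_RE = re.compile(r"--csrf[_-]?token[= ]([A-Za-z0-9+/=_-]+)", re.IGNORECASE)

def harvest_tokens(process_cmdlines):
    """Scan (image_name, command_line) pairs and return any leaked tokens."""
    leaks = {}
    for image, cmdline in process_cmdlines:
        match = TOKEN_RE.search(cmdline)
        if match:
            leaks[image] = match.group(1)
    return leaks

# Simulated process snapshot, shaped like what
#   Get-CimInstance Win32_Process | Select Name, CommandLine
# would return:
snapshot = [
    ("notepad.exe", "notepad.exe C:\\notes.txt"),
    ("extension_server.exe", "extension_server.exe --csrf_token=abc123XYZ"),
]
print(harvest_tokens(snapshot))  # -> {'extension_server.exe': 'abc123XYZ'}
```

No elevated privileges are involved at any point, which is precisely why passing secrets via argv is considered a design flaw rather than a local-access technicality.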
Google’s internal Vulnerability Response Program (VRP) initially classified the findings as a P2 priority, but later downgraded them to “Infeasible to fix,” arguing that an attacker would already need local access to exploit the flaws (Labintcev). This stance runs counter to industry best practices, which treat local privilege escalation as a core security requirement for development tools, especially given the prevalence of supply‑chain attacks that can drop malicious npm or pip packages onto a developer’s machine. The VRP’s rationale—that the IDE cannot protect against processes already running on the host—effectively absolves Google of responsibility for a product that is supposed to be a secure sandbox for code generation.
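The standard mitigation those best practices point to is simple: never place secrets in argv. A common alternative, sketched below, is to generate the token in the parent process and hand it to the child through its environment, which, unlike a command line, is not readable by arbitrary local processes. This is an illustration of the general technique, not a description of how Antigravity works; the variable name is hypothetical.

```python
import os
import secrets
import subprocess
import sys

def run_with_secret(child_args):
    """Spawn a child process, passing a per-session token via its
    environment rather than its command line."""
    token = secrets.token_urlsafe(32)  # per-session secret
    # ANTIGRAVITY_CSRF_TOKEN is an illustrative, hypothetical variable name.
    env = dict(os.environ, ANTIGRAVITY_CSRF_TOKEN=token)
    result = subprocess.run(child_args, env=env,
                            capture_output=True, text=True)
    return token, result.stdout.strip()

# The child reads the secret from its own environment; the parent's
# command line never contains it.
token, echoed = run_with_secret([
    sys.executable, "-c",
    "import os; print(os.environ['ANTIGRAVITY_CSRF_TOKEN'])",
])
assert echoed == token  # the child received the secret; argv never exposed it
```

A hardened design would go further, attaching a restrictive ACL to the named pipe itself so that only the intended processes can connect at all, but keeping the token out of argv closes the WMI harvesting path described above.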
Rather than re‑architect the IPC mechanism or sandbox the extension server, Google appears to have opted for a “scorched‑earth” mitigation. Within days of the report’s release, thousands of users—including those on the premium Antigravity Ultra plan—were hit with HTTP 403 Forbidden and 400 Bad Request responses. Labintcev characterizes these bans as a cost‑saving measure, noting that Google is silencing researchers and power users instead of addressing the underlying architectural weaknesses (Labintcev). The mass bans have effectively shut down large segments of the Antigravity community, raising questions about the company’s commitment to transparency and product stability.
The performance complaints that preceded the security fallout further underscore the platform’s systemic problems. Labintcev observed that Antigravity exhibits Chrome‑like memory leaks, with RAM consumption climbing steadily over a session and causing severe degradation after a few hours of use (Labintcev). This “Performance Paradox” suggests that Google prioritized rapid AI‑driven code generation at the expense of both efficiency and security, delivering an IDE that is neither fast nor safe for professional developers.
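The leak pattern described, RAM climbing steadily over a session instead of plateauing after warm-up, is straightforward to detect from periodic resident-set-size (RSS) samples. The heuristic below is an illustrative sketch of that pattern, not Labintcev's methodology; the thresholds and sample values are invented for demonstration.

```python
def looks_like_leak(rss_samples_mb, warmup=3, growth_threshold=1.5):
    """Return True if post-warmup RSS never shrinks and ends at least
    growth_threshold times its post-warmup starting value, i.e. the
    classic steady-climb signature of a memory leak."""
    steady = rss_samples_mb[warmup:]  # ignore initial warm-up allocations
    if len(steady) < 2:
        return False
    never_shrinks = all(b >= a for a, b in zip(steady, steady[1:]))
    return never_shrinks and steady[-1] >= growth_threshold * steady[0]

# Hypothetical hourly RSS samples (MB) for two sessions:
healthy = [900, 1400, 1600, 1650, 1660, 1655, 1662]  # plateaus after warm-up
leaky   = [900, 1400, 1600, 1900, 2400, 3100, 4000]  # climbs for hours
print(looks_like_leak(healthy), looks_like_leak(leaky))  # -> False True
```

In practice the samples would come from a process monitor polling the IDE's RSS; the point of the sketch is that "severe degradation after a few hours" corresponds to a measurable, monotonic growth curve rather than a subjective impression.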
VentureBeat’s earlier coverage highlighted Antigravity’s “agent‑first” architecture and its ambition to outpace competitors such as OpenAI’s Codex and Anthropic’s Claude (VentureBeat). However, the recent audit reveals that the underlying design choices—exposing critical tokens and IPC channels without proper isolation—render those ambitions moot. Until Google either overhauls the platform’s security model or provides a clear remediation roadmap, Antigravity is likely to remain a “house of cards” that erodes trust in AI‑powered development environments.
Sources
No primary source found (coverage-based)
- Dev.to AI Tag
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.