
Anthropic leaks 512,000 lines of Claude code, exposing three attack paths for AI desktop

Published by
SectorHQ Editorial


Anthropic accidentally shipped a 59.8 MB source map in version 2.1.88 of its Claude Code npm package, leaking 512,000 lines of TypeScript across 1,906 files and exposing three attack paths, VentureBeat reports.

Key Facts

  • Key company: Anthropic
  • Also mentioned: GitHub, OpenAI, Google

Anthropic’s mistake has rippled through the AI coding community. Within minutes of the source map being published, developers forked the 512,000 lines of TypeScript and uploaded them to GitHub, where the repository became the platform’s fastest‑ever download, according to The Guardian. The leak revealed the full permission model, every Bash security validator, 44 unreleased feature flags, and references to future models that Anthropic has not announced. Security researcher Chaofan Shou flagged three concrete attack paths in the code: a privileged‑execution flaw, an unchecked remote‑command interface, and a sandbox‑escape vector, detailed in a VentureBeat post on April 1.

Anthropic moved quickly to contain the breach. The company filed a U.S. copyright takedown request that forced GitHub to remove roughly 8,100 repositories, including legitimate forks of its own public Claude Code repo, TechCrunch reported. The takedown notice also hit thousands of third‑party copies that had been mirrored after the leak, prompting angry developers to complain on social media.

The exposed code also sheds light on new desktop‑control features Anthropic rolled out last month. Claude Code and Claude Cowork now let the model open apps, navigate browsers, and fill spreadsheets on macOS and Windows, a capability that originated with Vercept AI, which Anthropic acquired about four weeks ago, The Decoder notes. The same research preview includes “Dispatch,” a remote‑control tool that lets users operate their own machines from any location.

Enterprise security teams are scrambling to reassess risk. VentureBeat’s “five actions” guide urges firms to audit any Claude‑based agents, disable the newly discovered privileged‑execution path, and monitor for unauthorized remote commands. The leak removes a layer of obscurity that previously shielded Anthropic’s internal safeguards, making it easier for threat actors to weaponize the desktop‑control functions.
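The first step in that checklist is knowing which machines actually run the affected release. A minimal sketch of that inventory check is below; the package name and version come from the article, but the sample package.json contents and the helper function are illustrative, not an official Anthropic or VentureBeat tool.

```python
# Sketch: flag installs of the npm package release that shipped the source map.
# Package name and affected version are taken from the reporting; the sample
# package.json below is purely illustrative.
import json

AFFECTED_PACKAGE = "@anthropic-ai/claude-code"
AFFECTED_VERSION = "2.1.88"

def is_affected(package_json_text: str) -> bool:
    """Return True if a package.json describes the affected release."""
    meta = json.loads(package_json_text)
    return (meta.get("name") == AFFECTED_PACKAGE
            and meta.get("version") == AFFECTED_VERSION)

# Illustrative manifest, as it might appear under node_modules/.
sample = '{"name": "@anthropic-ai/claude-code", "version": "2.1.88"}'
print(is_affected(sample))  # → True
```

In practice a team would run a check like this against every `node_modules` tree and lockfile on its fleet, then pin or upgrade past the affected version.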

Anthropic confirmed the source map was accidentally included in version 2.1.88 of its @anthropic-ai/claude-code npm package and said no customer data or model weights were exposed, per a company statement cited by The Guardian. The incident highlights how a single packaging error can open a floodgate of vulnerabilities in rapidly evolving AI tooling.
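Why a single stray file leaks everything: the JavaScript source-map format has an optional `sourcesContent` field that embeds the original source files verbatim, so anyone holding the `.map` can reconstruct the pre-compilation TypeScript. The sketch below demonstrates the mechanism on a hand-made map; the file paths and contents are invented for illustration and are not from Anthropic's actual package.

```python
# Sketch of why a bundled .map file leaks source: the source-map v3 format's
# optional "sourcesContent" array carries the original files verbatim.
# This map is a hand-made illustration, not Anthropic's real file.
import json

sample_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/permissions.ts", "src/sandbox.ts"],   # invented paths
    "sourcesContent": [
        "export const PRIVILEGED = true;  // illustrative TypeScript",
        "export function escapeCheck() { /* illustrative */ }",
    ],
})

def recover_sources(map_text: str) -> dict:
    """Pair each original source path with its embedded source text."""
    m = json.loads(map_text)
    return dict(zip(m["sources"], m.get("sourcesContent") or []))

for path, src in recover_sources(sample_map).items():
    print(path, "->", len(src), "chars")
```

Stripping source maps from published packages (or at least omitting `sourcesContent`) is the standard way to avoid exactly this exposure.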

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
