
Anthropic Blocks Claude Subscriptions for OpenClaw, Highlighting AI Agent Security Risks

Published by
SectorHQ Editorial

Photo by Steve Johnson on Unsplash

512,000 lines. That’s the size of the TypeScript source exposed when Anthropic’s Claude npm package shipped a source‑map file, prompting the firm to block subscription access to OpenClaw and flag AI‑agent security risks, reports indicate.

Key Facts

  • Key company: Anthropic

Anthropic’s decision to block Claude subscriptions for OpenClaw came on the heels of a startling code leak that exposed more than half a million lines of the company’s TypeScript source. Security researcher Chaofan Shou discovered the .map file in the @anthropic/claude-code npm package (v2.1.88) on March 31, 2026, quickly reconstructed the full codebase, and published it online. The reconstruction revealed not just the sheer size of the package but also the inner workings of Claude’s “undercover mode,” a subsystem designed to prevent the model from spilling internal prompts, tool definitions, and operational details, according to Kang’s analysis on April 4, 2026 【source】. The leak itself was a routine packaging mistake, but its contents laid bare the very mechanisms Anthropic relies on to guard against prompt‑extraction attacks, highlighting a paradox: the company built walls around what the AI could say while inadvertently shipping the entire blueprint to anyone who installed the package.
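The mechanics of such a leak are mundane: a JavaScript source map’s `sourcesContent` array can embed the full original source of every file that went into a bundle, so anyone holding the `.map` file can write the originals back to disk. Below is a minimal sketch of that reconstruction step; the file names and helper function are hypothetical illustrations, not the tool Shou used.

```python
# Minimal sketch: recovering embedded sources from a shipped .map file.
# File names here are hypothetical; in a published npm package the .map
# would typically sit next to the bundled .js output.
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Write each source embedded in a source map's `sourcesContent`
    array out to disk, returning the paths written."""
    source_map = json.loads(Path(map_path).read_text())
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    written = []
    for name, content in zip(sources, contents):
        if content is None:
            continue  # this source was referenced but not embedded
        # Crude path sanitation so "../" entries stay inside out_dir.
        dest = Path(out_dir) / name.replace("../", "").lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

Because `sourcesContent` preserves comments and string literals verbatim, a bundler configured to emit it ships the annotated source itself, not merely a mapping of line numbers.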

In a swift response, Anthropic announced that, effective 12 p.m. PT/3 p.m. ET on Saturday, April 4, 2026, subscribers to Claude Pro and Claude Max would no longer be able to connect their subscriptions to third‑party agentic tools such as OpenClaw — a move the firm justified as a “capacity” issue. Boris Cherny, head of Claude Code, explained on X that the subscription plans were never designed for the heavy, continuous usage patterns of external agents, and that Anthropic needed to “prioritize our customers using our products and API” — as reported by VentureBeat 【source】. The company’s email to affected users hinted that the decision was also driven by the strain these integrations placed on Anthropic’s compute and engineering resources, though it left open whether Claude Team and Enterprise customers would face the same restrictions.

For developers who rely on Claude’s models—Opus, Sonnet, and Haiku—to power autonomous agents, the new policy forces a pivot to Anthropic’s pay‑as‑you‑go API, which bills per token rather than offering the flat‑rate flexibility of the subscription tiers. This shift not only adds a layer of cost uncertainty but also reintroduces the very “undercover mode” concerns the leak exposed: the API surface now includes the full suite of internal tools, making it easier for malicious actors to probe the system for hidden prompts or tool definitions. As Kang noted, the source map’s “sourcesContent” array contained every comment, string, and internal function, effectively handing over the playbook for how Claude defends itself against prompt‑extraction attacks — a risk that the new billing model may not fully mitigate.
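The cost uncertainty is easy to quantify in outline. The sketch below compares a flat subscription against per‑token API billing for an always‑on agent; every number in it (request volume, token counts, the $10‑per‑million‑token rate, the $100 flat tier) is a hypothetical placeholder, not Anthropic’s actual pricing.

```python
# Illustrative only: flat-rate subscription vs. pay-as-you-go API
# billing. All rates and volumes are hypothetical placeholders.
def monthly_api_cost(requests_per_day: int, tokens_per_request: int,
                     price_per_mtok: float, days: int = 30) -> float:
    """Estimated monthly spend when billed per million tokens."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * price_per_mtok

# A continuously running agent making 2,000 calls/day at ~4,000
# tokens each, at an assumed $10 per million tokens:
agent_cost = monthly_api_cost(2_000, 4_000, price_per_mtok=10.0)
flat_subscription = 100.0  # assumed flat monthly tier for comparison
```

Under these made‑up numbers the agent consumes 240 million tokens a month, costing roughly $2,400 against a $100 flat tier, which illustrates why continuous agentic workloads sit uneasily on subscription plans priced for interactive use.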

The broader AI‑agent ecosystem is taking note. OpenClaw’s community, which has built a marketplace of plug‑and‑play agents, now faces a fragmentation risk as developers scramble to re‑architect their pipelines around Anthropic’s token‑based pricing or switch to alternative models from competitors like OpenAI or Google. While Anthropic’s move underscores the growing pains of scaling agentic AI—balancing open‑source flexibility with proprietary safeguards—it also serves as a cautionary tale about the hidden costs of packaging decisions. As the industry pushes toward more autonomous agents, the line between “security through obscurity” and transparent, auditable code will become a decisive factor in who can safely build the next generation of AI assistants.


Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
