Anthropic Cracks Down on OpenClaw, Forces Users to Pay Extra After Claude Mythos Leak

Published by
SectorHQ Editorial

Anthropic is forcing OpenClaw users to pay extra after a draft blog post about the upcoming Claude Mythos model leaked from the company's Sanity.io CMS, an exposure that Iter reports surfaced through sources Roy Paz and Alexandre Pauwels.

Key Facts

  • Key company: Anthropic

Anthropic’s decision to charge OpenClaw users a separate “pay‑as‑you‑go” fee stems directly from a security incident that exposed a draft blog post about the upcoming Claude Mythos model. According to an investigation published by Iter, the leak originated from Anthropic’s headless CMS, Sanity.io, which serves as the backend for the company’s website. The draft, identified by the internal document ID `featureMythos`, was retrieved via a GROQ query that pulls the first matching record (`*[_id == "featureMythos"][0]`). The query appears in a cached copy of the page hosted by M1Astra, revealing that the CMS project ID is `4zrzovbb` and that the API endpoint was accessed without authentication—a default configuration in Sanity that lets anyone read all published content in a project. Iter credited the finding to two sources, Roy Paz and Alexandre Pauwels, and concluded that the draft was inadvertently exposed through the public Sanity API rather than through a deliberate internal breach.
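The exposure pattern described above is straightforward to reproduce in outline. The sketch below builds the public Sanity query URL for the GROQ filter quoted in the article; the project ID comes from the cached page, while the dataset name `production` is an assumption (it is Sanity's default), and the API version shown is simply a valid one, not necessarily the one involved.

```python
from urllib.parse import quote

def sanity_query_url(project_id: str, dataset: str, groq: str,
                     api_version: str = "v2021-10-21") -> str:
    # Sanity exposes a read-only query endpoint per project; when the
    # dataset has the default public visibility, no auth header is needed.
    return (f"https://{project_id}.api.sanity.io/{api_version}"
            f"/data/query/{dataset}?query={quote(groq)}")

# Project ID from the cached page; "production" is an assumed dataset name.
url = sanity_query_url("4zrzovbb", "production", '*[_id == "featureMythos"][0]')
print(url)
```

Anyone with this URL can issue a plain GET request and receive the document as JSON, which is why an unrestricted draft stored in a public dataset is effectively published.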

The technical details of the exposure have immediate product‑level ramifications. As reported by The Verge, Anthropic will cease to count third‑party tool usage against existing Claude subscription limits beginning April 4, 2026, at 3 p.m. ET. Users of OpenClaw—a third‑party interface that integrates Claude’s language model—must now purchase separate usage bundles or rely on an API key, both of which are billed independently of the standard Claude subscription. The policy shift is framed as a “pay‑as‑you‑go option,” but the timing suggests a direct response to the leak, which highlighted the fragility of Anthropic’s content‑delivery pipeline and the potential for unintended data exposure when third‑party integrations are allowed to consume subscription‑level resources.

From an architectural standpoint, Anthropic’s reliance on Sanity’s default read permissions created a single point of failure. Sanity.io’s API model separates content creation (authenticated) from content delivery (publicly readable), a design that simplifies static site generation but requires explicit configuration to restrict draft or unpublished documents. In Anthropic’s case, the draft blog post was stored alongside published assets under the same project ID, and the lack of a read‑only token for internal use meant that any request to the Sanity endpoint could retrieve the document. The cached image URLs—served through `www-cdn.anthropic.com`, a proxy to `cdn.sanity.io`—further expose the internal project identifier, making it trivial for an external actor to construct a query that pulls the draft content.
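The identifier leakage described above can be sketched concretely: Sanity image asset URLs embed the project ID and dataset in their path, so even a first-party CDN proxy passes them through. The asset filename below is hypothetical; only the `cdn.sanity.io` path layout and the project ID come from the reporting.

```python
import re

def sanity_project_from_image_url(url: str):
    # Sanity serves image assets as
    #   https://cdn.sanity.io/images/<projectId>/<dataset>/<assetId>-<dims>.<ext>
    # so the project ID and dataset are readable from any asset URL,
    # even one reached through a proxy such as www-cdn.anthropic.com.
    m = re.search(r"cdn\.sanity\.io/images/([a-z0-9]+)/([\w-]+)/", url)
    return m.groups() if m else None

# Hypothetical asset path for illustration.
print(sanity_project_from_image_url(
    "https://cdn.sanity.io/images/4zrzovbb/production/abc123-800x600.png"))
```

With the project ID recovered this way, constructing the query URL for any document in a public dataset is trivial.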

Anthropic’s response, as outlined by Claude Code executive Boris Cherny in an email reported by The Verge, includes the rollout of discounted usage bundles and the option to use a Claude API key for OpenClaw access. This move not only monetizes the third‑party integration but also forces OpenClaw developers to adopt a more controlled access pattern that bypasses the ambiguous subscription accounting previously used. By requiring an API key, Anthropic can enforce stricter rate limiting and audit trails at the API gateway level, reducing the risk that future CMS misconfigurations will leak privileged content. The shift also aligns with Anthropic’s broader strategy to consolidate usage under its own tooling, such as the newly announced Claude Cowork platform, which may eventually replace third‑party bridges like OpenClaw.

The incident underscores a broader lesson for AI companies that expose large language models through both internal and external interfaces: content management systems must be hardened against accidental disclosure, especially when draft materials contain roadmap information that can influence market perception. As Anthropic tightens its integration policies, developers of third‑party tools will need to adapt to more granular authentication schemes and anticipate additional costs for accessing premium model capabilities. The technical community will be watching closely to see whether other AI firms, such as OpenAI and Google DeepMind, revise their own CMS and API configurations in the wake of Anthropic’s leak‑driven policy change.

Sources

Primary source: Iter
Independent coverage: The Verge

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
