Anthropic Cuts Off Paid Users Without Notice, Ignoring Support Requests

Published by SectorHQ Editorial

According to a recent report, Anthropic abruptly disabled shell/bash execution for a paying Claude subscriber, cutting off paid features without notice, explanation, or response to support tickets.

Anthropic’s unilateral restriction of shell/bash execution on a paid Claude account surfaced when an engineer at a cybersecurity firm noticed the loss of functionality across all Claude sessions, including Claude Code, without any prior warning. The user captured the altered system prompt injected at the deployment level, which explicitly disables command-line operations (see the screenshots posted on Reddit), and confirmed that the change persisted after reinstalling Claude Code on a fresh Hetzner server with the same credentials. A subsequent purchase of a new Anthropic account showed no such limitation, indicating the restriction was applied selectively by Anthropic rather than resulting from a third-party intrusion.
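The report does not describe exactly how the user compared prompts across sessions, but once the system prompt has been captured from both an unrestricted and a restricted session, isolating an injected clause reduces to a plain text diff. The following is a minimal sketch, not the user's actual method, assuming both captures have been saved to local files (the filenames are hypothetical); it uses only Python's standard library.

import difflib
from pathlib import Path

# Hypothetical filenames: system prompts captured from an unrestricted
# session and from the restricted account, saved as plain text.
baseline = Path("system_prompt_baseline.txt").read_text().splitlines()
restricted = Path("system_prompt_restricted.txt").read_text().splitlines()

# Lines prefixed "+" in a unified diff appear only in the restricted
# capture, so any deployment-level injection stands out directly.
for line in difflib.unified_diff(
    baseline, restricted,
    fromfile="baseline", tofile="restricted", lineterm="",
):
    print(line)

In an incident like the one described above, the "+" lines would correspond to the injected text disabling command-line operations that the user screenshotted.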

The affected subscriber has filed multiple support tickets, submitted the standard appeal form, and completed Anthropic’s account-block questionnaire, yet reports “radio silence” from the company while continuing to be billed at the full subscription rate. According to the user’s account, the service remains otherwise functional (chat, web search, and file creation work as expected), but the core feature used for daily security research, shell execution, has been silently stripped. The user acknowledges that Anthropic’s Terms of Service permit the company to “modify, suspend, or discontinue the Services…without notice,” but contends that a partial downgrade while still charging full price falls into a gray area not explicitly covered by the agreement.

Anthropic has not publicly commented on the incident, but the company’s recent transparency reports show that it processes tens of thousands of appeals each month, suggesting the infrastructure to address such cases exists. The lack of response also stands in contrast to Anthropic’s own policy, which commits the company to notifying users of suspensions or terminations via its T&S Support Center. In this instance, the user discovered the restriction only after prompting Claude to reveal its system prompt, implying the change was never communicated as a formal suspension.

Industry observers note that Anthropic has been under pressure from Chinese AI firms that allegedly harvested Claude data to train their own models, a claim detailed in recent coverage by The Information and VentureBeat. While the user speculates that automated classifiers may have flagged legitimate security‑research activity as a policy violation, no official rationale has been provided. The incident raises broader concerns about how AI service providers enforce usage policies on enterprise customers, especially when essential features are removed without notice, potentially jeopardizing workflows that depend on those capabilities.

The episode underscores a tension between Anthropic’s stated commitment to responsible AI deployment and its operational practices. As the company continues to monetize Claude through enterprise subscriptions, stakeholders will likely demand clearer communication protocols and more transparent enforcement mechanisms to avoid eroding trust among paying users who rely on the platform for critical security tasks.

Sources

Primary source

No primary source found (coverage-based)

Other signals
  • Reddit - r/ClaudeAI

