Claude Code Slashes 20x Plan to 5x, Overhauls Growth Strategy
While users expected Anthropic’s 20x plan to deliver twentyfold gains, reports indicate it behaves more like a 5x plan: one user hit 75% of the weekly session limit within 72 hours of a reset, despite deliberately efficient work habits.
Key Facts
- Key company: Anthropic (product: Claude Code)
Anthropic’s “Claude Code” 20× plan, marketed as a twenty‑fold boost in token capacity, is behaving more like a five‑fold upgrade, according to a user report posted on GitHub. The author notes that within 72 hours of a weekly reset the plan’s session limit was 75% consumed, despite a reduced overall workload and more efficient prompting habits such as routing simple tasks to the Sonnet model and using skills instead of paste‑based macros. The rapid depletion “feels crazy,” the user writes, and casts doubt on the value proposition of the premium tier.
The root cause, identified by the same contributor, is a silent switch to Anthropic’s 1 million‑token context model in version 2.1.63 of Claude Code. Earlier releases (2.1.62 and prior) default to a 200k‑token window, but the newer build defaults to the 1M context without displaying the “(1M context)” label in the UI. The bug causes sessions to bill at the extended‑context rate even while the user sees only the standard “Opus 4.6 + ultrathink” status line. Tests comparing versions show that a default 2.1.63 session reports roughly 5% context usage for 50k tokens in a 1M window, whereas rolling back to 2.1.58 restores the expected readout of 24% for a comparable 48k tokens in a 200k window.
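The version-to-version discrepancy is consistent with the displayed percentage reflecting context-window fill rather than quota spend. A minimal sketch of that arithmetic, using the token counts quoted in the report:

```python
# Context-window fill for the two Claude Code versions described above.
# Token counts and window sizes are the figures quoted in the user report.

def context_fill(tokens_used: int, window_size: int) -> float:
    """Return context usage as a percentage of the window."""
    return 100 * tokens_used / window_size

# v2.1.63 default: 1M-token window
v263 = context_fill(50_000, 1_000_000)
# v2.1.58 default: 200k-token window
v258 = context_fill(48_000, 200_000)

print(f"2.1.63 shows {v263:.0f}% | 2.1.58 shows {v258:.0f}%")
```

The same ~50k tokens look almost empty in a 1M window (5%) but substantial in a 200k window (24%), which is why the silent default change was easy to miss in the UI.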
Anthropic has not issued an official statement, but a community workaround is already documented: adding the environment variable `CLAUDE_CODE_DISABLE_1M_CONTEXT=1` via `~/.claude/settings.json` forces the client to stay on the 200k‑context model, immediately raising the displayed usage percentage from 5% to 28% for identical token counts. The fix also restores the missing UI indicator, allowing users to verify the context size with the `/context` command. Multiple GitHub bug reports corroborate the issue and agree that version 2.1.63 is the only release where the silent 1M default occurs.
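Based on the workaround described above, the variable would sit under the `env` key of `~/.claude/settings.json`, which Claude Code uses to inject environment variables into sessions. A sketch (verify the key placement against the settings documentation for your installed version):

```json
{
  "env": {
    "CLAUDE_CODE_DISABLE_1M_CONTEXT": "1"
  }
}
```

After restarting the client, running `/context` should report a 200k window rather than 1M.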
The pricing implications are significant. Anthropic’s extended‑context tier carries a premium per‑token rate, meaning that users unknowingly on the 1M model pay more for the same amount of generated text. This explains the sharp usage increase observed by the reporter, who now feels the 20× plan is “throwing money away.” The lack of an opt‑out in the default configuration undermines the transparency expected from a paid enterprise service, especially as competitors such as OpenAI and Google emphasize clear billing dashboards.
Industry commentary has highlighted the broader risk of hidden consumption models. The Register has previously covered Claude’s pricing quirks, noting that enterprise customers often grapple with opaque token accounting. Forbes, meanwhile, has promoted Claude Cowork as a productivity tool, but the current bug casts a shadow over those claims, suggesting that “smart work habits” alone cannot offset unintended cost escalations. As Anthropic works to patch the issue, users are advised to pin their client to a stable update channel and verify context settings before committing to the 20× tier.
Sources
No primary source found (coverage-based)
- Reddit - r/ClaudeAI
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.