
Anthropic doubles Claude's off‑peak usage limits as it rolls out 1‑million token context

Published by
SectorHQ Editorial


Anthropic doubled Claude's off‑peak usage limits on Friday as it launched a production 1‑million‑token context window for Opus 4.6 and Sonnet 4.6, according to developer Martin Alderson.

Key Facts

  • Key company: Anthropic

Anthropic’s decision to double off‑peak usage limits comes just days after it rolled out a production‑grade 1‑million‑token context window for its Opus 4.6 and Sonnet 4.6 models, a move that Martin Alderson described as “a big breakthrough” because it expands the effective reading length to roughly 1,000–2,000 pages of text, equivalent to four or five novels (Martin Alderson). The new context length dwarfs the 4,096‑token window of GPT‑3.5 in late 2022 and the roughly 200K‑token windows that emerged over the intervening years, positioning Claude as one of the few frontier models that can sustain such breadth without the “context rot” that typically degrades quality in longer sessions (Martin Alderson). By making this capability generally available, Anthropic is signaling that it can now handle complex, multi‑document tasks, such as legal reviews, extensive codebases, or long‑form research, without constantly truncating or restarting sessions.
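The “1,000–2,000 pages” figure follows from a simple back‑of‑envelope conversion. As a sketch (the per‑page token density is an assumption on our part, not a number from the article): a dense printed page holds on the order of 1,000 tokens, a sparse one closer to 500.

```python
# Rough sanity check of the 1,000-2,000 page claim for a 1M-token window.
# ASSUMPTION: 500-1,000 tokens per printed page, depending on text density.
CONTEXT_TOKENS = 1_000_000

def pages_in_context(tokens_per_page: int) -> int:
    """How many pages of a given density fit in the context window."""
    return CONTEXT_TOKENS // tokens_per_page

dense_estimate = pages_in_context(1_000)   # dense pages
sparse_estimate = pages_in_context(500)    # sparse pages
print(dense_estimate, sparse_estimate)     # → 1000 2000
```

Under those assumptions the window spans 1,000 to 2,000 pages, matching Alderson’s characterization.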

The promotion, which runs from March 13 to March 27, automatically doubles usage limits for all non‑Enterprise users (Free, Pro, Max, and Team plans) during a six‑hour window each weekday (8 AM–2 PM ET). According to Engadget, the boost applies across Claude’s web, desktop, and mobile interfaces, as well as its integrations with Cowork, Claude Code, Claude for Excel, and Claude for PowerPoint, and requires no manual activation (Engadget). By targeting a broader audience than its December year‑end promotion, when only Pro, Max 5×, and Max 20× subscribers benefited, Anthropic appears to be leveraging the new context length as a hook to retain and grow its user base amid heightened competition.

The timing of the offer is also strategic. After Anthropic refused to relax certain safety safeguards for the U.S. Department of Defense, the Pentagon labeled the company a “supply‑chain risk” and terminated its contract, while OpenAI secured a separate DoD deal (The Verge; Wired). The fallout sparked a wave of user backlash against ChatGPT and prompted many to migrate to Claude, a trend that Anthropic acknowledges by framing the promotion as “a small thank you to everyone using Claude” (Engadget). The company’s willingness to double limits for a wide swath of customers—excluding only Enterprise accounts—suggests an effort to cement loyalty among the newly attracted cohort and to differentiate its service from OpenAI’s more restrictive usage policies.

From a technical perspective, the 1M‑token window also reduces the need for developers to chunk inputs, a common workaround that adds latency and complexity. While earlier attempts at long contexts, such as Google’s Gemini, suffered from quality degradation, Anthropic’s implementation appears to maintain quality, according to Alderson’s early experiments. This advancement aligns with broader industry trends that see improvements not just in model size but also in cost efficiency and speed, as exemplified by the low‑cost Qwen 3.5 models referenced in his analysis. By coupling a substantial context increase with a temporary usage boost, Anthropic is positioning Claude as a more viable alternative for enterprise‑level workloads that demand deep, uninterrupted reasoning.
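The chunking workaround mentioned above typically looks like the following: a long document is split into overlapping pieces sized to fit a smaller context window, and each piece is processed separately. This is a minimal sketch; the function names and the 4‑characters‑per‑token heuristic are our illustrative assumptions, not any particular SDK’s API.

```python
# Sketch of the chunk-and-process pattern that a 1M-token window largely
# makes unnecessary for single documents.
# ASSUMPTION: ~4 characters per token, a common rough heuristic for English.
CHARS_PER_TOKEN = 4

def chunk_text(text: str, max_tokens: int = 8_000, overlap_tokens: int = 200):
    """Split text into overlapping chunks that each fit max_tokens."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    step = max_chars - overlap_tokens * CHARS_PER_TOKEN  # advance, keeping overlap
    return [text[i:i + max_chars] for i in range(0, len(text), step)]

# A ~100k-character document needs several overlapping calls under an
# 8k-token window; with a 1M-token window it fits in a single request.
doc = "x" * 100_000
chunks = chunk_text(doc)
print(len(chunks))
```

Each extra chunk means another round trip and another opportunity for the model to lose cross‑chunk context, which is exactly the latency and complexity cost the larger window avoids.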

Analysts will likely watch how the off‑peak promotion translates into sustained usage once the two‑week window closes. If the doubled limits drive higher engagement and showcase the practical benefits of a million‑token context, Anthropic could leverage the data to argue for premium pricing or to attract new enterprise contracts—especially as rivals scramble to match the same breadth of context. For now, the combination of a record‑setting context window and a generous, automatically applied usage boost marks a decisive push by Anthropic to capitalize on its recent surge in popularity and to counter the competitive pressure from OpenAI and other AI providers.

Sources

  • Martin Alderson
  • Engadget
  • The Verge
  • Wired

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
