Anthropic Says It Is Using Only 20% of Opus 4.6, Prompting Calls for Full Deployment
While Opus 4.6’s sharper replies and cleaner code suggest a full‑scale upgrade, Anthropic is reportedly surfacing only 20% of the model’s capabilities, leaving the bulk (Effort controls, Agent Teams, and adaptive features) largely unused, Augmentedswe reports.
Key Facts
- Key company: Anthropic
Anthropic’s rollout of Opus 4.6 has sparked a debate over utilization depth, with the company reportedly exposing only a fraction of the model’s new capabilities. According to Augmentedswe, the upgrade’s headline improvements—sharper replies and cleaner code—represent merely “the 20%” of the value proposition, while the remaining 80% resides in a suite of workflow‑changing features such as Effort controls, Agent Teams, /insights, adaptive thinking, and a 200‑page system card that have yet to see widespread adoption (Augmentedswe, Mar 02 2026).
The most visible of these additions is the Effort parameter, a four‑level knob that lets developers dictate how much “thinking” Claude applies to each request. The default “high” setting, which was inherited from Opus 4.5, now consumes tokens on tasks that could be handled with a lighter touch, inflating cost and latency. By selecting “low” or “medium,” users can suppress deep reasoning for simple lookups or routine code generation, while “max” unlocks unrestricted deliberation for genuinely hard problems (Augmentedswe). Anthropic’s own documentation stresses that “Opus 4.6 often thinks more deeply and more carefully revisits its reasoning before settling on an answer,” and recommends dialing effort down to “medium” when the model is overthinking (Augmentedswe). The API call syntax has been updated accordingly, with the former budget_tokens field deprecated in favor of the new effort field (Augmentedswe).
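To make the reported change concrete, the sketch below builds a request body with the new per‑request effort knob in place of the deprecated budget_tokens field. This is illustrative only: the field name "effort", its four values ("low", "medium", "high", "max"), and the placeholder model id come from the Augmentedswe report, not from verified Anthropic documentation, and no network call is made.

```python
# Hypothetical sketch of a Messages-style request body after the reported
# switch from budget_tokens to effort. Field names and accepted values are
# assumptions based on the Augmentedswe report, not official docs.

def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a request body using the reported per-request effort knob."""
    allowed = {"low", "medium", "high", "max"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "claude-opus-4-6",   # placeholder model id
        "max_tokens": 1024,
        "effort": effort,             # replaces the deprecated budget_tokens
        "messages": [{"role": "user", "content": prompt}],
    }

# Routine lookup: suppress deep reasoning to cut cost and latency.
cheap = build_request("What does HTTP 404 mean?", effort="low")

# Genuinely hard problem: unlock unrestricted deliberation.
deep = build_request("Find the race condition in this scheduler.", effort="max")
```

The validation step matters because, per Anthropic’s reported guidance, the choice of effort now rests with the caller rather than the model, so a misspelled level should fail loudly instead of silently falling back to a default.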
Beyond token budgeting, Opus 4.6 introduces Agent Teams, a framework for orchestrating multiple Claude instances as sub‑agents that can collaborate on complex tasks. This capability is intended to replace ad‑hoc chaining of prompts with a more structured, stateful workflow, but Anthropic has not yet published concrete usage metrics. The same source notes that the model’s “adaptive thinking” engine now allocates computational resources internally, further reducing the need for manual token limits (Augmentedswe). Together, these tools aim to shift control from the model to the developer, allowing finer‑grained cost management and more predictable performance across heterogeneous workloads.
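Since Anthropic has not published a concrete Agent Teams interface, the following is a generic model of the idea rather than the real API: a team object fans tasks out to named sub‑agents, each configured with its own effort level, and records every step in shared state instead of relying on ad‑hoc prompt chaining. All class and method names here are invented for illustration.

```python
# Illustrative only: a generic stand-in for the Agent Teams concept.
# SubAgent.run fakes a model call; in a real system it would invoke Claude
# with the agent's configured effort level.

from dataclasses import dataclass, field

@dataclass
class SubAgent:
    name: str
    effort: str  # "low" | "medium" | "high" | "max"

    def run(self, task: str) -> str:
        # Stand-in for a real model call.
        return f"[{self.name}/{self.effort}] {task}"

@dataclass
class AgentTeam:
    agents: dict = field(default_factory=dict)
    transcript: list = field(default_factory=list)  # shared, stateful log

    def add(self, agent: SubAgent) -> None:
        self.agents[agent.name] = agent

    def dispatch(self, name: str, task: str) -> str:
        result = self.agents[name].run(task)
        self.transcript.append(result)  # every step is recorded in order
        return result

team = AgentTeam()
team.add(SubAgent("triage", effort="low"))    # trivial queries stay cheap
team.add(SubAgent("debugger", effort="max"))  # hard problems get full depth
team.dispatch("triage", "Classify this bug report")
team.dispatch("debugger", "Reproduce and fix the deadlock")
```

The point of the sketch is the structure, not the names: per‑agent effort settings plus a shared transcript are what distinguish this pattern from chaining independent prompts by hand.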
Industry observers have taken note of the gap between the model’s advertised potential and its current deployment. VentureBeat’s coverage of Anthropic’s integration with Microsoft Excel and PowerPoint highlights the company’s broader push to embed Claude’s shared context across productivity suites, yet it does not mention whether the new Effort or Agent Teams features are being leveraged in those integrations (VentureBeat). ZDNet’s reporting on the Claude Enterprise plan similarly emphasizes scale and security but remains silent on the operational rollout of Opus 4.6’s advanced controls (ZDNet). The silence suggests that many enterprise customers may still be operating with the legacy default settings, reaping only the surface‑level performance gains.
The call for “full deployment” is gaining traction among developers who have already begun experimenting with the Effort API. Early adopters report immediate speed improvements when switching to “low” for sub‑agents handling trivial queries, and markedly deeper reasoning when invoking “max” on hard debugging sessions (Augmentedswe). However, the learning curve associated with configuring Agent Teams and interpreting the extensive system card could be a barrier to broader adoption, especially for teams accustomed to the simpler Opus 4.5 interface. Anthropic’s internal guidance, as quoted by Augmentedswe, stresses that the responsibility for setting effort now lies with the user, not the model—a paradigm shift that may require new tooling and operational best practices.
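The routing practice early adopters describe can be reduced to a small lookup: classify the task roughly, then pick an effort tier. The categories and mapping below are illustrative assumptions that mirror the reported pattern (low for trivial queries, max for hard debugging), not Anthropic guidance.

```python
# Hypothetical effort router. Task categories and their tiers are
# illustrative assumptions, not official recommendations.

EFFORT_BY_TASK = {
    "lookup": "low",        # simple factual queries
    "codegen": "medium",    # routine code generation
    "analysis": "high",     # careful reasoning, the inherited default
    "debugging": "max",     # genuinely hard problems
}

def pick_effort(task_kind: str) -> str:
    # Fall back to "medium" for unknown categories, echoing the reported
    # advice to dial effort down when the model is overthinking.
    return EFFORT_BY_TASK.get(task_kind, "medium")
```

Centralizing the choice in one function gives teams a single place to tune cost against depth as they learn where each workload actually lands.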
If Anthropic can translate the latent 80% of Opus 4.6 into measurable productivity gains, the upgrade could redefine how enterprises balance AI capability against cost. Until then, the model’s headline benchmarks will continue to mask a substantial underutilization of its most transformative features, prompting both customers and analysts to press the company for clearer rollout roadmaps and usage analytics.
Sources
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.