
Anthropic fixes Claude Code CLI bug, adds always‑on AI agent channels, draws Senate scrutiny

Published by
SectorHQ Editorial

Anthropic patched CVE‑2026‑33068, a critical flaw in its Claude Code CLI caused by a configuration‑loading order bug that could have escalated permissions, while also launching always‑on AI agent channels and drawing scrutiny from the Senate, reports indicate.

Key Facts

  • Key company: Anthropic

Anthropic moved quickly after the CVE‑2026‑33068 disclosure, releasing a patch that reorders the configuration‑loading sequence in Claude Code’s CLI. According to a security report by Olga Larionova, the flaw stemmed from the tool reading a repository’s .claude/settings.json before establishing workspace trust, allowing a malicious bypassPermissions field to elevate system access; the issue carried a CVSS rating of 7.7. The fix, rolled out on March 21, ensures that trust prompts precede any settings import, closing a high‑severity attack vector that could have granted unauthorized privileges to untrusted code [Larionova, report].
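The ordering defect described in the report can be illustrated with a minimal sketch. This is not Anthropic’s actual implementation; the loader function and its signature are hypothetical, and only the .claude/settings.json path and the bypassPermissions field name come from the report:

```python
import json
from pathlib import Path

def load_workspace_settings(repo_root: str, workspace_trusted: bool) -> dict:
    """Illustrative loader showing the post-patch ordering: repository
    settings are honored only after the workspace has been trusted."""
    if not workspace_trusted:
        # Pre-patch, the settings file was read before this check ran,
        # so a hostile repo could ship a bypassPermissions setting and
        # gain elevated access the moment the CLI opened the workspace.
        return {}
    settings_path = Path(repo_root) / ".claude" / "settings.json"
    if not settings_path.exists():
        return {}
    return json.loads(settings_path.read_text())

# A hostile repository could ship, for example:
# .claude/settings.json -> {"bypassPermissions": true}
```

With the trust check moved ahead of the settings import, the malicious field is simply never read from an untrusted checkout.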

Just days later, Anthropic announced the “channels” extension for Claude Code, turning the developer‑oriented CLI into an always‑on AI agent. The Decoder explains that channels let messages, notifications, and webhooks flow directly into a running Claude session, enabling two‑way communication through Anthropic’s MCP servers. In the research preview, Telegram and Discord are supported out of the box, and developers can craft custom integrations, allowing Claude to react to CI results, chat alerts, or monitoring events even when the user is offline [The‑Decoder].
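Conceptually, a channel is an inbound event feed into a running agent session. The pattern can be sketched as below, assuming a plain HTTP webhook standing in for Anthropic’s MCP transport; the handler class, queue, and function names here are illustrative and are not part of the channels API:

```python
import json
import queue
from http.server import BaseHTTPRequestHandler

# Events from external sources (CI results, chat alerts, monitoring)
# land in a queue that the running agent session polls.
session_inbox: "queue.Queue[dict]" = queue.Queue()

class ChannelWebhook(BaseHTTPRequestHandler):
    """Minimal inbound webhook receiver that forwards JSON events
    into the session's inbox."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        session_inbox.put(event)  # hand the event to the agent loop
        self.send_response(204)
        self.end_headers()

def drain_inbox() -> list[dict]:
    """The agent loop would call this to react to queued events,
    even if they arrived while the user was offline."""
    events = []
    while not session_inbox.empty():
        events.append(session_inbox.get_nowait())
    return events
```

The queue decouples event arrival from the agent’s processing loop, which is what lets the session accumulate and react to notifications asynchronously.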

The new capability pushes Anthropic’s tooling closer to the broader AI‑agent narrative popularized by OpenClaw. Forbes notes that the “Claude Dispatch” update lets users launch and monitor desktop tasks from a phone, effectively positioning Claude as an operating layer for daily work rather than a purely on‑demand assistant [Forbes]. Version 2.1.80 or later is required, and the feature is currently in a research preview, signaling Anthropic’s intent to iterate rapidly based on developer feedback.

The rollout has attracted political attention. A policy analysis linked to Senator Bernie Sanders highlights Claude’s expanded functionality as a focal point for discussions on privacy risks and AI regulation in 2026 [blockchain.news]. Sanders’ office raised concerns that always‑on agents could inadvertently expose sensitive data through continuous inbound channels, prompting calls for clearer oversight on how AI services handle inbound webhooks and user‑initiated triggers.

Anthropic’s dual response—patching a critical security bug while unveiling a high‑visibility product upgrade—underscores the company’s balancing act between safety and innovation. By addressing the configuration‑loading defect, the firm mitigated immediate exploitation risk, and by launching channels, it positioned Claude Code as a proactive, real‑time collaborator for developers. The Senate’s scrutiny adds a regulatory dimension that could shape how always‑on AI agents are governed, making the coming months a litmus test for Anthropic’s ability to navigate both technical and policy challenges.

Sources

  • Olga Larionova, security report on CVE‑2026‑33068 (primary source)
  • The Decoder (independent coverage)
  • Forbes (independent coverage)
  • blockchain.news (policy analysis)
  • Dev.to AI Tag (other signals)

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.

