Anthropic’s Claude Code faces login glitches as new Code Review feature rolls out
Reports indicate more than ten attempts were needed to purge a community‑managed plugin that silently opens browsers, grants shell access, and persists across five layers.
Key Facts
- Key company: Anthropic
Anthropic’s rollout of the Code Review feature for Claude Code has been marred by a service outage that left many developers unable to log in or facing sluggish performance, according to reports from 9to5Mac. The company confirmed that the disruption affected both the Claude Code web interface and the Claude.ai portal, and it has since marked the incidents as resolved. Users who attempted to access the platform this morning encountered repeated authentication failures, prompting Anthropic to open a public outage tracker and invite developers to subscribe for updates. The timing of the outage is notable because it coincided with the launch of Code Review, a research‑preview tool aimed at automating deep, multi‑agent pull‑request analyses for Team and Enterprise customers.
Code Review is positioned as a response to a dramatic rise in code‑output volume—Anthropic engineers say output has grown 200% over the past year—creating a bottleneck in human review capacity. In internal testing, the feature boosted substantive review comments on pull requests from 16% to 54% and produced findings that were deemed incorrect by engineers less than 1% of the time. For large changes exceeding 1,000 lines, the AI surfaced issues on 84% of submissions, averaging 7.5 problems per PR. The service is deliberately built for depth rather than speed, with average review times of roughly 20 minutes and a cost range of $15–$25 per review, according to Anthropic’s own blog post. While the AI does not auto‑approve changes, it is intended to “close the gap” so human reviewers can keep pace with rapid shipping cycles.
The outage, however, has raised questions about the reliability of Anthropic’s new offering, especially in light of a separate security concern that surfaced in the Claude Code plugin marketplace. A community‑managed plugin dubbed “Serena” was found to execute unpinned third‑party code on every session, open browsers without consent, and retain shell access across five persistence layers, making it effectively impossible to uninstall. The vulnerability was detailed in a public PSA that warned a compromised GitHub repository could automatically compromise every user who installed the plugin. The report urged Anthropic to establish a bug‑bounty program and highlighted the difficulty of removing the malicious component, which required more than ten attempts. Although the plugin issue is distinct from the login glitch, both incidents underscore the challenges Anthropic faces in scaling a secure, enterprise‑grade AI development platform.
TechCrunch corroborated the breadth of the outage, noting that Claude’s service disruption was “widespread” and that the company was actively investigating the root cause. ZDNet’s coverage of the Code Review launch emphasized the tool’s reliance on AI agents to detect bugs that might otherwise lead to costly production incidents, reinforcing Anthropic’s narrative that automated, high‑fidelity reviews can mitigate risk. Yet the simultaneous occurrence of performance problems and a high‑risk plugin vulnerability may temper enthusiasm among developers who are weighing Claude Code against competing solutions from Microsoft, Google, and open‑source alternatives. As Anthropic works to restore full functionality and tighten its plugin vetting processes, the episode serves as a reminder that the race to embed AI deeper into software development pipelines is as much about operational resilience as it is about technological innovation.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.