Claude Code Deploys Parallel AI Agents, Figma Integration, and Star Chamber Multi‑LLM
While developers once relied on manual code reviews, Anthropic’s Claude Code now runs parallel AI agents that automatically spot bugs, security gaps and regressions, The Decoder reports.
Key Facts
- Key product: Claude Code
- Also mentioned: Anthropic, Figma
Claude Code’s newest capabilities push the platform from a single‑model assistant into a multi‑agent development hub. According to The Decoder, Anthropic’s parallel AI agents now run automatically on every pull request, scanning for bugs, security gaps and regressions in real time. The research preview, currently limited to Team and Enterprise customers, has already shifted the distribution of code‑review outcomes: before the rollout only 16% of changes earned substantive comments, while after deployment that figure has risen to 54% (The Decoder). For large changes exceeding 1,000 lines, the agents flag problems in 84% of cases, a jump that Anthropic says helped lift overall code output per developer by 200% over the past year.
The parallel‑agent architecture is complemented by a new “Star Chamber” skill that aggregates feedback from multiple large language models. As Peter Wilson explains in his Technical Content blog, the Star Chamber runs the same code review across several LLM providers and then builds a consensus view, highlighting where models agree, where they diverge, and where each offers unique insights. Wilson argues that relying on a single model leaves blind spots—patterns it may overlook or confident hallucinations it might produce—so the multi‑LLM approach gives developers a more robust, structured assessment of code quality.
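The internals of the Star Chamber skill are not public, but the agree/diverge/unique breakdown Wilson describes can be sketched as a simple aggregation over findings returned by several reviewers. The model names and findings below are hypothetical, and this is only a minimal illustration of the consensus idea, not Anthropic’s implementation:

```python
from collections import Counter

def consensus_review(reviews, quorum=2):
    """Bucket findings from several LLM reviewers into three groups:
    those every model raised (consensus), those a quorum but not all
    raised (disputed), and those only a minority raised (unique)."""
    counts = Counter()
    for findings in reviews.values():
        counts.update(set(findings))  # count each finding once per model
    total = len(reviews)
    return {
        "consensus": sorted(f for f, c in counts.items() if c == total),
        "disputed": sorted(f for f, c in counts.items() if quorum <= c < total),
        "unique": sorted(f for f, c in counts.items() if c < quorum),
    }

# Hypothetical findings, as each reviewer model might report them:
reviews = {
    "model_a": ["sql-injection in query()", "missing null check"],
    "model_b": ["sql-injection in query()", "unused import"],
    "model_c": ["sql-injection in query()", "missing null check"],
}
result = consensus_review(reviews)
```

Here the injection finding lands in the consensus bucket because all three models flagged it, while the single-model "unused import" surfaces as a unique insight rather than being lost, which is the blind-spot protection Wilson argues for.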
Beyond review, Claude Code now reaches into design workflows through a native Figma integration. Bora’s DesignExplained guide shows how developers can push a running application directly from the terminal into editable Figma layers, producing real component trees rather than static screenshots. The workflow reverses the traditional “design‑first” pipeline, allowing code‑first teams to generate up‑to‑date design documentation that stays in sync with the codebase. This capability, highlighted in VentureBeat’s coverage of Anthropic’s broader embedding of Slack, Asana and Figma inside Claude, is positioned as a productivity boost for solo builders and small teams that need rapid design hand‑offs without rebuilding UI screens from scratch.
Anthropic also introduced a framework for “custom skills,” enabling developers to codify repeatable command sets in a .claude/skills/ directory. As detailed in the How‑to‑Build Claude Code Custom Skills report, a skill such as /code-review can be defined once in a SKILL.md file and invoked repeatedly across projects, eliminating the need to re‑type lengthy prompts. The same mechanism supports other automation, like a /secret-scanner skill that flags hard‑coded credentials. By treating these skills as reusable modules, Anthropic aims to lower the friction of integrating AI assistance into existing CI/CD pipelines.
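As a rough illustration of the mechanism, a /code-review skill could live at .claude/skills/code-review/SKILL.md with YAML frontmatter naming and describing it. The review steps in the body are hypothetical, written here only to show the shape of such a file:

```markdown
---
name: code-review
description: Review a diff for bugs, security gaps, and regressions.
---

# Code Review

When this skill is invoked, review the current changes:

1. Scan the diff for logic bugs and missing error handling.
2. Flag hard-coded credentials or other security issues.
3. Summarize findings grouped by severity.
```

Because the instructions are stored once in the repository, the same review prompt travels with the codebase and can be triggered from any session instead of being re-typed.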
Taken together, the parallel agents, multi‑model Star Chamber, Figma bridge and extensible skill system signal Anthropic’s ambition to make Claude Code the central command center for software development. While the research preview is still limited in scope, early metrics suggest a tangible uplift in code quality and developer throughput. If the adoption curve mirrors the 200 % increase in output reported by Anthropic, enterprises could see a measurable reduction in manual review bottlenecks, a shift that may redefine how development teams allocate engineering resources in the next wave of AI‑augmented software delivery.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.