Anthropic partners with Google to boost Claude on next‑gen TPUs, while Claude Code adds a cloud‑based "Ultraplan" workflow

Published by
SectorHQ Editorial


While Anthropic has long run Claude on limited TPU resources, 9to5Google reports that the company has sealed a deal with Google and Broadcom for multiple gigawatts of next‑generation TPU capacity, potentially online by 2027, to power future Claude models.

Key Facts

  • Key company: Anthropic
  • Also mentioned: Google, Broadcom

Anthropic’s recent partnership with Google and Broadcom promises to lift the compute ceiling for its flagship Claude models by delivering “multiple gigawatts of next‑generation TPU capacity” that could become operational as early as 2027, according to the company’s blog post cited by 9to5Google. The agreement builds on Anthropic’s November 2025 commitment to invest $50 billion in U.S. computing infrastructure, with the bulk of the new hardware slated for domestic data centers. By tying Claude’s frontier models to Google’s upcoming TPU generation, which is expected to deliver higher FLOP‑per‑watt efficiency and larger on‑chip memory than current chips, Anthropic aims to meet “extraordinary demand from customers worldwide,” the phrase the firm uses to describe a surge in enterprise contracts that has more than doubled the number of accounts spending over $1 million annually in the past two months.

The TPU expansion dovetails with Anthropic’s parallel push on the software side: the release of Claude Code v2.1.92, announced on the r/ClaudeAI subreddit and reported by media.patentllm.org, introduces a beta command called “/ultraplan.” The feature lets developers draft end‑to‑end development plans in the cloud, review them in a browser‑based interface with inline comments, and then execute the plans either remotely on Anthropic’s infrastructure or locally via a command‑line client. The design addresses a well‑known friction point in AI‑assisted coding, keeping the model’s suggested architecture aligned with the developer’s intent, by providing a visual inspection step that can catch logical errors before any code is written.

From a systems perspective, the synergy between the new TPU capacity and the Ultraplan workflow could reshape Anthropic’s development pipeline. The next‑gen TPUs are expected to support larger model contexts and higher batch sizes, which in turn enable Claude Code to generate more comprehensive plan drafts without sacrificing latency. The browser‑based review layer, meanwhile, offloads the human‑in‑the‑loop verification to a lightweight front‑end, reducing the need for round‑trip API calls that would otherwise consume additional compute cycles. In practice, a developer could submit a high‑level specification, receive a multi‑step plan rendered in the Ultraplan UI, annotate or adjust any step, and then trigger a single execution pass that leverages the gigawatt‑scale TPU pool to synthesize code, run tests, and even provision cloud resources—all within a unified session.
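The plan‑draft, review, and execute stages described above can be sketched as a simple loop. This is an illustrative mock‑up only: the types and function names (`PlanStep`, `draft_plan`, `review`, `execute`) are invented for this sketch and are not Anthropic’s actual Ultraplan API, whose interface has not been published in the sources cited here.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a plan/review/execute workflow of the kind the
# article describes. All names below are illustrative assumptions, not
# Anthropic's real API.

@dataclass
class PlanStep:
    description: str
    approved: bool = False
    comment: str = ""  # inline reviewer annotation

@dataclass
class Plan:
    spec: str
    steps: list[PlanStep] = field(default_factory=list)

def draft_plan(spec: str) -> Plan:
    """Stand-in for the model call that expands a spec into plan steps."""
    return Plan(spec, [PlanStep(f"Step {i}: implement part of '{spec}'")
                       for i in range(1, 4)])

def review(plan: Plan) -> None:
    """Human-in-the-loop pass: annotate and approve steps before execution."""
    for step in plan.steps:
        step.approved = True  # in a real UI this would be a per-step decision

def execute(plan: Plan) -> list[str]:
    """Run only the approved steps, whether locally or on remote compute."""
    return [s.description for s in plan.steps if s.approved]

plan = draft_plan("Add rate limiting to the API gateway")
review(plan)
results = execute(plan)
```

The point of the structure is that `execute` never sees an unapproved step, which mirrors the article’s claim that the visual review layer catches logical errors before any code is generated.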

Benchmark data from the same media.patentllm.org report highlights that Claude Code’s recent enhancements are already competitive with open‑source alternatives. An open‑source AI system named ATLAS outperformed Claude Sonnet on a $500‑GPU coding benchmark, but Claude Code’s new Ultraplan capabilities aim to close that gap by integrating planning and execution more tightly than a pure inference model can. While the report does not provide a direct performance comparison between Claude Code and ATLAS, the emphasis on “cloud‑based AI development” suggests that Anthropic is betting on the scalability afforded by the upcoming TPU fleet to deliver higher throughput and lower time‑to‑solution for complex software projects.

The commercial implications are evident in Anthropic’s disclosed growth metrics. The 9to5Google article notes that demand for Claude services has risen sharply in 2026, with revenue climbing alongside the expanding customer base. By securing a dedicated, high‑performance compute pipeline and augmenting its developer tooling with Ultraplan, Anthropic positions itself to capture a larger slice of the enterprise AI market that is increasingly looking for end‑to‑end solutions rather than isolated model APIs. The combination of hardware acceleration and workflow integration could also serve as a differentiator against rivals such as OpenAI and Google’s own AI offerings, which currently rely on more generalized cloud compute rather than a bespoke, gigawatt‑scale TPU allocation.

In sum, Anthropic’s dual strategy—locking in next‑generation TPU capacity through a multi‑year partnership with Google and Broadcom, and rolling out a sophisticated cloud‑first developer environment via Claude Code’s Ultraplan—reflects a concerted effort to scale both the raw compute and the usability of its AI products. If the planned hardware arrives on schedule and the Ultraplan workflow matures beyond its beta stage, Anthropic could deliver a tightly coupled hardware‑software stack that enables developers to move from high‑level design to production code with unprecedented speed and reliability.

Sources

  • Primary source: 9to5Google
  • Other signals: Dev.to Machine Learning Tag

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.

