Anthropic expands Google‑Broadcom partnership, securing gigawatts of next‑gen TPU compute
Anthropic reports it has signed a new agreement with Google and Broadcom to secure multiple gigawatts of next‑gen TPU capacity, slated to come online in 2027 to power its frontier Claude models.
Key Facts
- Key company: Anthropic
- Also mentioned: Google, Broadcom, OpenAI
Anthropic’s new compute pact represents a step change in the scale of AI hardware available to a private‑sector lab. The company disclosed that the agreement with Google and Broadcom will deliver “multiple gigawatts” of next‑generation Tensor Processing Unit (TPU) capacity, with the first pods slated to become operational in 2027. Power draw on that scale implies aggregate training compute well beyond the TPU v4 clusters that currently power most of Google’s own services. By leveraging Broadcom’s latest silicon interconnects, the partnership promises lower latency and higher bandwidth across the TPU fabric, a critical factor for training Claude’s frontier models, which reportedly exceed 200 billion parameters. The architecture will also support “model‑parallel” pipelines that split a single neural net across dozens of chips, letting Anthropic push model depth and width without hitting the memory ceiling that throttles most contemporary training runs.
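The model‑parallel pipelines described above can be sketched in plain Python: one layer’s weight matrix is split column‑wise across simulated devices, each device computes its shard of the output, and the shards are concatenated. The device count and tensor shapes here are illustrative assumptions, not details of Anthropic’s or Google’s actual stack.

```python
import numpy as np

def column_parallel_matmul(x, w, num_devices):
    """Sketch of tensor (model) parallelism: the weight matrix is
    split column-wise across `num_devices` simulated devices, so no
    single device needs to hold the full layer in memory."""
    shards = np.split(w, num_devices, axis=1)          # one weight shard per device
    partial_outputs = [x @ shard for shard in shards]  # local matmul on each device
    return np.concatenate(partial_outputs, axis=1)     # gather shards of the output

# Toy layer: 8-dim input, 16-dim output, sharded over 4 "devices".
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))
w = rng.standard_normal((8, 16))

sharded = column_parallel_matmul(x, w, num_devices=4)
```

On real hardware the concatenation step becomes a collective (an all‑gather) over the chip interconnect, which is why fabric latency and bandwidth matter so much at this scale.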
The timing of the compute expansion dovetails with a dramatic surge in Anthropic’s commercial traction. According to the company’s own announcement, run‑rate revenue has already crossed the $30 billion threshold, up from roughly $9 billion at the close of 2025. More than 500 enterprise clients now spend over $1 million each on Claude‑based services, a metric the firm highlighted when it closed its Series G round in February. The new TPU capacity is explicitly framed as a response to “extraordinary demand from customers worldwide,” suggesting that Anthropic expects its API and hosted‑Claude offerings to scale at a rate comparable to the early‑stage growth of OpenAI’s ChatGPT platform. By locking in hardware supply years in advance, Anthropic mitigates the risk of capacity bottlenecks that have plagued other AI firms during peak training cycles.
From a technical standpoint, the next‑generation TPUs will likely incorporate Google’s “v5” design, which introduces a 3‑D stacked memory hierarchy and a revised systolic array that can execute mixed‑precision (FP8/FP16) matrix multiplications with up to 30 % higher throughput than the v4 line. Broadcom’s contribution centers on its silicon‑photonic interconnects, which replace traditional copper links and improve signal integrity across the multi‑chip module. This architecture enables “tensor‑wide” data movement, a prerequisite for the massive data parallelism required by Claude’s upcoming “Claude‑X” series, which Anthropic has hinted will push model sizes into the trillion‑parameter regime. The combination of higher compute density and lower communication latency directly addresses the “memory wall” that has limited scaling efficiency in earlier TPU deployments.
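The mixed‑precision arithmetic mentioned above is commonly emulated in software as low‑precision operands with higher‑precision accumulation. The NumPy sketch below illustrates that general pattern, not the TPU’s actual datapath; NumPy has no FP8 type, so float16 stands in for the low‑precision formats.

```python
import numpy as np

def mixed_precision_matmul(a, b):
    """Illustrative mixed-precision matmul: operands are quantized to
    float16 (standing in for the FP8/FP16 formats mentioned above),
    then the products are accumulated in float32 to limit rounding
    error -- the usual low-precision-multiply, wider-accumulate
    convention of systolic arrays."""
    a16 = a.astype(np.float16)  # low-precision stored operands
    b16 = b.astype(np.float16)
    # Upcast before the matmul so accumulation happens in float32.
    return a16.astype(np.float32) @ b16.astype(np.float32)

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 64))
b = rng.standard_normal((64, 3))

approx = mixed_precision_matmul(a, b)
exact = a @ b                               # float64 reference
max_err = float(np.max(np.abs(approx - exact)))
```

The error introduced by the float16 quantization stays small relative to the output magnitudes, which is why hardware vendors trade precision for throughput in exactly this way.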
Strategically, the deal underscores Anthropic’s commitment to an infrastructure‑first growth model, mirroring the approach taken by its larger rival OpenAI, which has also deepened ties with major cloud providers. Bloomberg reported that Anthropic, OpenAI and Google are jointly sharing intelligence to block Chinese model‑distillation efforts, indicating that the compute partnership may have a geopolitical dimension as well (Bloomberg, 2026‑04‑06). By securing a dedicated pipeline of cutting‑edge TPU hardware, Anthropic not only safeguards its own training roadmap but also positions itself as a de facto custodian of next‑gen AI compute in the United States. The move could force competitors to either accelerate their own hardware alliances or risk falling behind in the race to deliver ever‑larger, more capable foundation models.
Sources
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.