Broadcom, Google Seal AI Chip Pact Through 2031 as Anthropic Posts $11 Billion Revenue Surge
Broadcom and Google have sealed a multiyear AI‑chip partnership lasting through 2031, while Anthropic reported an $11 billion revenue surge this month, boosting its ARR to $19 billion by early 2026.
Key Facts
- Key company: Broadcom
- Also mentioned: Google, Anthropic
Broadcom’s announcement that it will supply Google‑designed tensor processing units (TPUs) to Anthropic marks the first large‑scale, non‑Nvidia compute pipeline for a leading foundation‑model provider. According to Bloomberg, the partnership “offers an alternative to technology from Nvidia,” and will see Broadcom fabricating ASICs that integrate Google’s TPU IP stack across a multiyear rollout running through 2031, the same horizon cited in the News.az report on the deal. The move is technically significant because TPUs are optimized for dense matrix multiplication and high‑throughput inference, which aligns with Anthropic’s current model sizes and its projected scaling to $19 billion in ARR by early 2026 (as detailed in the Anthropic revenue growth report). By embedding Google’s software stack directly into Broadcom silicon, the consortium hopes to reduce latency and improve power efficiency compared with the CUDA‑based pipelines that dominate today’s data‑center AI workloads.
The financial backdrop underscores why the three‑way pact matters. Anthropic’s revenue surged by $11 billion in March, taking its ARR from $1 billion at the start of 2025 to $9 billion by the end of that year, and then to $19 billion by February 2026 (Anthropic revenue growth report). Broadcom’s own ARR now exceeds $30 billion, a figure that includes the $6 billion added in February, when Anthropic commented on the partnership (Anthropic revenue growth report). Those numbers illustrate an acceleration of AI‑related spending that is outpacing traditional semiconductor cycles, prompting Broadcom to lock in a long‑term supply contract with Google, running through 2031, to secure a predictable pipeline of high‑volume TPU‑based orders.
From a hardware‑architecture perspective, the Broadcom‑Google collaboration will blend Broadcom’s advanced packaging and high‑density interconnect technologies with Google’s second‑generation TPU cores, which feature 128 × 128 systolic arrays and on‑chip high‑bandwidth memory (HBM). The resulting ASICs are expected to deliver up to twice the FLOPS‑per‑watt efficiency of contemporary Nvidia GPUs, according to the Technology Org briefing on the deal. This efficiency gain is critical for Anthropic’s training clusters, which are projected to run continuously at petaflop scale as model parameter counts expand toward the trillion‑parameter regime. By moving the compute stack onto custom silicon, Anthropic can also sidestep the licensing fees and driver‑stack complexities associated with Nvidia’s CUDA ecosystem.
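A systolic array is, at its core, hardware for tiled dense matrix multiplication: operands stream through a fixed grid of multiply‑accumulate units, one tile‑sized block at a time. As a rough illustration of that access pattern (a toy sketch, not Google's implementation; the tile size is shrunk from the TPU's 128 for readability), here is how a matmul decomposes into tile‑by‑tile partial products:

```python
import numpy as np

TILE = 8  # real TPU MXUs use 128 x 128 tiles; smaller here for illustration


def tiled_matmul(a: np.ndarray, b: np.ndarray, tile: int = TILE) -> np.ndarray:
    """Block a matmul into tile x tile pieces, mimicking the access
    pattern a systolic array streams through its MAC grid."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Each tile-sized partial product corresponds to one pass
                # of operands through the array; results accumulate in place.
                out[i:i + tile, j:j + tile] += (
                    a[i:i + tile, p:p + tile] @ b[p:p + tile, j:j + tile]
                )
    return out
```

Because the dataflow is fixed and the operands are reused across many multiply‑accumulates per byte fetched, this tiled pattern is where the FLOPS‑per‑watt advantage over more general‑purpose GPU pipelines comes from.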
Strategically, the partnership positions Broadcom as a direct competitor to Nvidia in the AI‑compute market, a shift highlighted by Business Korea’s report that Broadcom “builds Google TPU infrastructure for Anthropic against Nvidia.” The deal also deepens Google’s role as a cloud‑infrastructure provider, giving it a hardware foothold that complements its existing TPU‑as‑a‑service offering. Analysts note that the multiyear nature of the agreement—extending through 2031—provides both parties with a stable revenue runway, which is especially valuable given the volatility of AI‑chip demand cycles. The Streamlinefeed coverage frames the arrangement as “a strategic bet on the future of AI compute,” emphasizing that the three firms are collectively hedging against the risk that Nvidia’s market share could erode if alternative architectures achieve comparable performance at lower cost.
Finally, the timing of the announcement dovetails with Anthropic’s aggressive market expansion. With ARR projected to hit $19 billion by early 2026, Anthropic will need to scale its inference fleet dramatically, and the Broadcom‑Google TPU pipeline offers a path to meet that demand without relying on Nvidia’s supply chain, which has historically faced capacity constraints. If the partnership delivers the promised efficiency and cost advantages, it could reshape the competitive dynamics of AI hardware, forcing Nvidia to accelerate its own roadmap or seek similar joint‑venture arrangements. For now, the technical community will be watching the first silicon shipments later this year to gauge whether the Broadcom‑Google‑Anthropic triad can deliver on its lofty performance promises.
Sources
- Business Korea
- News.az
- streamlinefeed.co.ke
- Technology Org
- Hacker News Newest
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.