
Broadcom eyes Anthropic's $30bn run rate as it adopts new Google TPU

Published by
SectorHQ Editorial

Broadcom announced it will supply next‑generation AI and datacenter networking chips to Google, enabling Anthropic’s planned consumption of 3.5 GW of new Google TPUs, while the startup reports a $30 bn revenue run rate, The Register reports.

Key Facts

  • Key company: Broadcom
  • Also mentioned: Google, Anthropic

Broadcom’s regulatory filing reveals that the “Long Term Agreement” with Google will see the semiconductor maker design and fabricate custom Tensor Processing Units (TPUs) for the search giant’s next‑generation AI racks. Hock Tan, Broadcom’s chief executive, has argued that hyperscalers lack the in‑house expertise to produce such bespoke accelerators, and he projects that the company’s AI‑chip business could generate more than $100 billion in revenue by 2027 if it captures the bulk of the market (The Register). The filing does not disclose the exact silicon architecture, but it confirms that Broadcom will provide both the ASICs and the high‑speed networking silicon required to stitch together multi‑petaflop clusters, a combination that has historically been Google’s domain.

The second component of the filing is a “Supply Assurance Agreement” that obligates Broadcom to deliver networking ASICs, optical transceivers, and other rack‑level components to Google through 2031. This long‑term commitment is intended to guarantee the availability of the interconnect fabric that underpins the massive TPU pods, which will be provisioned at a combined power envelope of 3.5 GW for Anthropic alone starting in 2027 (The Register). By securing the networking stack from a single supplier, Google can maintain tight latency budgets and deterministic bandwidth, crucial for the transformer‑based workloads that dominate Anthropic’s Claude models.

Anthropic’s own disclosure in the same filing shows a dramatic acceleration in its commercial traction: the startup’s run‑rate revenue has surpassed $30 billion, up from roughly $9 billion at the end of 2025 (The Register). The company attributes this growth to a doubling of its enterprise customer base—from 500 to over 1,000 businesses each spending more than $1 million annually—within a two‑month window. Anthropic plans to consume the Broadcom‑supplied TPUs via Google Cloud, but it also retains a heterogeneous compute strategy, leveraging AWS Trainium, Nvidia GPUs, and its own internal clusters to “match workloads to the chips best suited for them” (The Register). This multi‑vendor approach mitigates the risk of over‑reliance on a single accelerator family, a point Broadcom explicitly flagged as a risk factor in its filing.

From a technical perspective, the 3.5 GW figure translates to roughly 1.2 million TPU‑class accelerators, assuming an all‑in power draw of about 2.9 kW per accelerator (chip plus cooling and rack overhead). The networking ASICs that Broadcom will provide are expected to support 400 Gb/s Ethernet or higher, enabling the dense all‑to‑all mesh required for model‑parallel training of Claude‑scale models. By integrating its own silicon‑level power management and silicon‑interposer technology, Broadcom can reduce the overall rack footprint and improve thermal density, a critical factor when scaling to gigawatt‑scale deployments.
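As a sanity check on the headline numbers, the sketch below divides the power envelope by an assumed all‑in per‑accelerator budget; the 2.9 kW figure is an illustrative assumption chosen to reproduce the ~1.2 million count, not a disclosed specification.

```python
# Back-of-the-envelope estimate: how many accelerators fit in a power envelope.
# The 2.9 kW per-accelerator budget (chip plus cooling and rack overhead)
# is an illustrative assumption, not a figure from the filing.

def accelerator_count(power_envelope_gw: float, watts_per_accel: float) -> int:
    """Approximate number of accelerators a given power envelope supports."""
    return int(power_envelope_gw * 1e9 / watts_per_accel)

count = accelerator_count(3.5, 2_900)
print(f"{count:,} accelerators")  # roughly 1.2 million
```

Varying the per‑accelerator budget between 2 kW and 4 kW moves the estimate between roughly 0.9 million and 1.75 million units, so the headline figure is sensitive to what overhead is counted inside the envelope.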

The partnership also underscores a broader industry shift: hyperscalers are outsourcing custom accelerator development to specialist chip designers rather than building in‑house design teams. Broadcom’s claim that “hyperscalers don’t have the skill to create custom accelerators” (The Register) reflects a strategic bet that the company’s expertise in high‑performance networking and silicon‑photonic interconnects will become a moat against rivals such as Nvidia and AMD, which are also courting Google’s AI hardware pipeline. If Broadcom’s $100 billion revenue forecast for 2027 materializes, it would position the firm as a dominant supplier not only for Google’s TPU ecosystem but also for the broader AI compute market that Anthropic and other large language model providers are rapidly expanding.

Sources

Primary source: The Register

