AMD, Broadcom and Nvidia Team with Hyperscalers to Shape Future AI Cluster Interconnects
Photo by Possessed Photography on Unsplash
Earlier AI clusters got by with sub-terabit links, but a new alliance of AMD, Broadcom and Nvidia with the major hyperscalers promises optical interconnects that could eventually reach 3.2 Tb/s, a development reports indicate would benefit Meta, Microsoft and OpenAI.
Key Facts
- Key company: Nvidia
- Also mentioned: AMD, Broadcom
The consortium’s first concrete milestone is a joint development roadmap that targets a 3.2‑terabit‑per‑second (Tb/s) optical link for next‑generation AI clusters, a figure that would dwarf today’s sub‑terabit fabrics, according to a report by Tom’s Hardware. The blueprint calls for AMD’s silicon‑photonic transceivers, Broadcom’s high‑density optical modules, and Nvidia’s GPU‑centric networking stack to be co‑designed with the cloud operators’ own data‑center engineers. Meta, Microsoft and OpenAI have already pledged to integrate the prototype into their upcoming hyperscale deployments, giving the alliance a clear path from lab to production.
Industry analysts see the move as a response to the bandwidth crunch that has emerged as large language models scale. Current interconnects, which typically top out at 400-800 Gb/s per link, force data-center operators to over-provision compute or accept latency penalties when sharding models across dozens of nodes. By pushing the optical layer to 3.2 Tb/s, the partnership aims to cut the number of required links by a factor of four, thereby reducing both power consumption and cabling complexity, a benefit that "could translate into measurable cost savings at scale," the Tom's Hardware piece notes.
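The factor-of-four claim follows directly from the cited figures, and can be sanity-checked with a back-of-the-envelope calculation. The sketch below uses the article's per-link speeds (800 Gb/s today, 3.2 Tb/s target); the 102.4 Tb/s per-node aggregate is an illustrative assumption, not a figure from the report.

```python
import math

def links_needed(aggregate_gbps: float, per_link_gbps: float) -> int:
    """Smallest number of links whose combined capacity meets the aggregate target."""
    return math.ceil(aggregate_gbps / per_link_gbps)

# Hypothetical per-node aggregate bandwidth, chosen for illustration only.
AGGREGATE_GBPS = 102_400

today = links_needed(AGGREGATE_GBPS, 800)     # links at today's 800 Gb/s
future = links_needed(AGGREGATE_GBPS, 3_200)  # links at the 3.2 Tb/s target

print(today, future, today // future)  # 128 32 4
```

Whatever aggregate figure is assumed, quadrupling per-link speed divides the link count (and the associated transceivers and cabling) by four, which is the cost and power argument the analysts are making.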
The technical collaboration also dovetails with broader ecosystem trends. Reuters has highlighted Nvidia’s recent software‑centric partnerships, such as its work with Palantir and CenterPoint Energy to accelerate AI data‑center construction, underscoring the chipmaker’s strategy of bundling hardware with orchestration tools. While the current AMD‑Broadcom‑Nvidia effort is hardware‑first, the same report suggests that the consortium will eventually embed Nvidia’s networking software stack to manage the ultra‑high‑speed links, echoing the company’s approach in other verticals.
If the 3.2 Tb/s target materializes, it would give the three vendors a competitive edge over rivals still reliant on legacy Ethernet standards. The alliance’s timing is notable: Meta, Microsoft and OpenAI are in the midst of expanding their AI training clusters to meet the demand for ever larger models, and each has publicly signaled a willingness to adopt new interconnect technologies that can keep pace. By aligning their roadmaps now, AMD, Broadcom and Nvidia are positioning themselves as the default supply chain for the next wave of AI infrastructure, a stance that could reshape procurement decisions across the hyperscale market for years to come.
Sources
- Tom's Hardware
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.