Nvidia Secures Deal to Deliver One Million Chips to Amazon’s Cloud by 2027

Published by
SectorHQ Editorial
Photo by Road Ahead (unsplash.com/@roadahead_2223) on Unsplash

One million GPUs. That’s the volume Nvidia will ship to Amazon’s cloud by 2027 under a deal that also bundles its Spectrum networking silicon and newly released Groq chips, Businesstimes reports.

Key Facts

  • Key company: Nvidia
  • Also mentioned: Amazon

Nvidia’s agreement with Amazon Web Services (AWS) marks the most extensive single‑customer GPU supply contract announced this year: the chipmaker has committed to ship one million graphics processing units (GPUs) to the cloud provider by the end of 2027, according to a Reuters interview with Ian Buck, Nvidia’s vice‑president of hyperscale and high‑performance computing. The rollout will begin in 2024 and run through the 2027 fiscal year, aligning with CEO Jensen Huang’s projection that the company’s Rubin and Blackwell chip families could generate roughly $1 trillion in sales across the broader market (Reuters). The financial terms of the deal were not disclosed, but the transaction is broader than a pure GPU sale: it also bundles Nvidia’s Spectrum networking silicon, the newly launched Groq inference chips, and the ConnectX and Spectrum‑X networking platforms that will be integrated into AWS data centres (Reuters).

The inclusion of Groq chips is a strategic move aimed at strengthening inference workloads, the stage where trained AI models generate outputs for end users. Buck emphasized that “inference is hard. It’s wickedly hard,” and that achieving top‑tier performance “is not a one‑chip pony” (Reuters). AWS plans to deploy a suite of seven Nvidia chips for inference, with Groq serving as a core component alongside six other Nvidia silicon families. This multi‑chip approach is intended to reduce latency and improve throughput for high‑volume AI services such as generative text, image synthesis, and real‑time recommendation engines, which dominate AWS’s enterprise AI portfolio.

Beyond compute, the deal deepens Nvidia’s networking partnership with AWS. The ConnectX and Spectrum‑X solutions will replace or augment AWS’s custom‑built networking fabric, enabling higher bandwidth and lower latency across the cloud provider’s AI‑focused clusters (Reuters). Buck noted that while AWS has historically relied on its own networking hardware, the collaboration will focus on “important workloads and biggest customers across AI,” suggesting that the joint deployment will target high‑value, latency‑sensitive applications for enterprises and SaaS providers (Reuters). This also signals a shift in AWS’s hardware strategy, which has previously emphasized in‑house silicon development to control costs and performance.

The timing of the agreement dovetails with Nvidia’s broader market expansion following its $17 billion licensing deal with an AI chip startup late last year, which paved the way for the Groq line (Reuters). The licensing arrangement gave Nvidia access to the startup’s inference‑optimised architecture, allowing it to integrate Groq into its portfolio without building the technology from scratch. Analysts have noted that the combined hardware stack of GPUs, specialized inference chips, and high‑performance networking positions Nvidia as a one‑stop shop for hyperscale cloud operators seeking to scale AI workloads efficiently (TechCrunch). While the exact revenue impact of the AWS contract remains opaque, the scale of a million‑GPU commitment suggests a multi‑billion‑dollar contribution to Nvidia’s top line over the next four years.

Industry observers view the Nvidia‑AWS pact as a bellwether for the next wave of AI infrastructure spending. With hyperscale providers projected to double their AI‑focused compute capacity by 2028, securing a reliable supply of cutting‑edge GPUs and complementary silicon is critical to meeting demand (Reuters). The deal also underscores the growing importance of inference as a revenue driver, complementing the traditionally higher‑margin training market that has dominated Nvidia’s growth narrative. As AWS integrates Nvidia’s full stack into its services, customers can expect more seamless scaling of AI models, potentially accelerating the adoption of generative AI across sectors ranging from finance to healthcare.

