Micron Starts High‑Volume Production of 36 GB HBM4 and 28 GB/s PCIe Gen 6 SSDs for Nvidia
Micron has begun high‑volume production of 36 GB HBM4 memory and 28 GB/s PCIe Gen 6 SSDs for Nvidia, delivering a 2.3× bandwidth uplift over HBM3E and more than 20% better power efficiency, Tom's Hardware reports.
Micron’s announcement at Nvidia’s GTC 2026 marks the first time a single memory supplier has moved three distinct products—36 GB HBM4 DRAM, a PCIe Gen 6 SSD, and a SOCAMM2 module—into volume shipment for the same GPU platform. The HBM4 stack, a 12‑hi configuration, pushes pin speeds past 11 Gb/s, delivering more than 2.8 TB/s of aggregate bandwidth. Micron’s internal power‑efficiency calculator shows that, versus its own 36 GB HBM3E part, the new HBM4 offers a 2.3× bandwidth uplift while consuming over 20 % less power per gigabyte transferred (Tom’s Hardware). Those gains are critical for Nvidia’s Vera Rubin architecture, which targets multi‑petaflop AI workloads that are increasingly bandwidth‑bound.
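The headline bandwidth figure can be sanity-checked with simple arithmetic. A minimal sketch, assuming the 2048-bit per-stack interface defined by the JEDEC HBM4 standard (the article states only the pin speed and the aggregate number):

```python
# Back-of-envelope check of the per-stack HBM4 bandwidth figure.
# Assumption (not stated in the article): a 2048-bit interface per
# stack, as specified by the JEDEC HBM4 standard.
INTERFACE_WIDTH_BITS = 2048
PIN_SPEED_GBPS = 11.0  # Gb/s per pin; Micron says "past 11 Gb/s"

# Aggregate bandwidth = pins * per-pin rate, converted from Gb/s to GB/s.
bandwidth_gbs = INTERFACE_WIDTH_BITS * PIN_SPEED_GBPS / 8
print(f"Per-stack bandwidth: {bandwidth_gbs / 1000:.2f} TB/s")  # ~2.82 TB/s
```

The result, about 2.82 TB/s, matches the "more than 2.8 TB/s" Micron quotes, suggesting the pin speed and interface width are being multiplied straightforwardly.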
The PCIe Gen 6 SSD, also entering high‑volume production, is positioned as the industry's first data‑center drive built on the PCIe 6.0 interface, rated at up to 28 GB/s of sequential throughput. PCIe 6.0 doubles the per‑lane signaling rate to 64 GT/s from PCIe 5.0's 32 GT/s, so a four‑lane Gen 6 link carries roughly 32 GB/s of raw bandwidth in each direction, twice what the Gen 5 links behind most current enterprise SSDs can deliver. Micron's press release ties the SSD's launch directly to the Vera Rubin platform, indicating that Nvidia intends to pair the drive with its next‑generation GPUs to reduce data‑movement latency in large‑scale training clusters (Wccftech).
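The link math above can be sketched as follows. The x4 lane count is an assumption typical of data‑center SSDs, and the calculation ignores FLIT/protocol overhead, which PCIe 6.0 keeps small:

```python
# Rough PCIe link-bandwidth math. PCIe 6.0 signals at 64 GT/s per
# lane (PAM4), double PCIe 5.0's 32 GT/s. FLIT encoding overhead is
# ignored here; the x4 lane count is an assumed typical SSD config.
def raw_link_gbs(gt_per_s: float, lanes: int) -> float:
    """Raw one-direction link bandwidth in GB/s, ignoring protocol overhead."""
    return gt_per_s * lanes / 8

gen5_x4 = raw_link_gbs(32, 4)  # 16 GB/s
gen6_x4 = raw_link_gbs(64, 4)  # 32 GB/s
print(f"Gen 5 x4: {gen5_x4:.0f} GB/s, Gen 6 x4: {gen6_x4:.0f} GB/s")
```

On these numbers, a 28 GB/s drive comes close to saturating a Gen 6 x4 link, which is why the interface generation matters for the drive's headline throughput.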
The SOCAMM2 module, a 192 GB low‑power DRAM module designed to attach high‑density memory closely to the host processor, rounds out Micron's portfolio for Vera Rubin. Although the coverage does not provide detailed performance metrics, the module's inclusion signals Nvidia's push toward tighter memory‑compute integration. Placing dense, low‑power memory on a compact module near the processor cuts board‑level signaling overhead and improves deterministic latency, attributes that are increasingly valuable for transformer‑based models that demand rapid, low‑latency access to massive parameter sets.
Micron’s simultaneous volume rollout of all three components underscores a broader industry trend: memory vendors are moving from incremental DRAM upgrades to holistic, platform‑level solutions that address bandwidth, power, and integration challenges in one package. Samsung’s recent launch of a 12‑stack HBM3E, cited by ZDNet, illustrates the competitive pressure to deliver higher‑density, higher‑speed stacks (ZDNet). Micron’s claim of a 2.3× bandwidth increase over HBM3E places it ahead of that baseline, suggesting that the company is leveraging its 3‑D stacking expertise to capture a larger share of the AI‑focused memory market.
Analysts have long warned that AI accelerators will be limited not by compute cores but by the ability to feed data fast enough. Micron’s HBM4, with its 2.8 TB/s bandwidth and improved power profile, directly addresses that bottleneck, while the Gen 6 SSD provides a complementary high‑throughput storage tier. If Nvidia’s Vera Rubin platform can successfully integrate these components, it could set a new performance ceiling for data‑center AI training, forcing rivals to accelerate their own memory‑centric roadmaps. The next quarter will reveal whether customers adopt the full stack at scale, but the technical specifications disclosed by Micron and corroborated by Tom’s Hardware suggest a decisive step forward for memory‑driven AI performance.
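The bandwidth-bottleneck argument can be made concrete with a roofline-style calculation. The compute figure below is a hypothetical multi-petaflop accelerator, not a disclosed Vera Rubin spec, and the 8-stack package is an assumption; only the per-stack bandwidth comes from Micron's figures:

```python
# Roofline-style sketch of why memory bandwidth gates AI accelerators.
# PEAK_FLOPS and STACKS are illustrative assumptions, not disclosed
# Vera Rubin specs; only the per-stack HBM4 bandwidth is Micron's.
PEAK_FLOPS = 4e15     # hypothetical 4 PFLOP/s of low-precision compute
HBM4_STACK_TBS = 2.8  # TB/s per HBM4 stack (Micron figure)
STACKS = 8            # assumed stacks per GPU package

mem_bw_bytes = HBM4_STACK_TBS * STACKS * 1e12  # total bytes/s
balance = PEAK_FLOPS / mem_bw_bytes            # FLOPs needed per byte moved
print(f"Machine balance: {balance:.0f} FLOPs/byte")
```

Any kernel whose arithmetic intensity falls below that ratio is bandwidth-bound: it idles the compute units waiting on memory. Raising stack bandwidth, as HBM4 does, lowers the threshold and lets more workloads run compute-limited.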