Nvidia upgrades DGX Station with GB300 Blackwell Ultra chip, unveils 1TB HBM4E Rubin
Nvidia unveiled an upgraded DGX Station powered by the new GB300 “Blackwell Ultra” desktop superchip, delivering 748 GB of memory, 20 PFLOPs of AI compute, and a 1 TB HBM4E Rubin memory module, Wccftech reports.
Key Facts
- Key company: Nvidia
Nvidia’s GTC 2026 showcase revealed that the upgraded DGX Station now ships with the GB300 “Blackwell Ultra” desktop superchip, a step up from the GB200 that debuted a year earlier. The GB300 integrates a new generation of Tensor cores and a larger on‑die cache, allowing the workstation to reach 20 PFLOPs of AI compute while supporting a combined 748 GB of memory across its HBM4E Rubin module and DDR5 system RAM. According to Wccftech, the 1 TB HBM4E Rubin memory is the first of its kind, offering roughly double the bandwidth of the previous HBM3 stack and enabling larger model contexts without the need for off‑chip paging.
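As a back‑of‑the‑envelope illustration (not a figure from the source), the 748 GB pool roughly bounds which model sizes can be held resident without paging; the precision widths below are common quantization assumptions, not Nvidia specifications:

```python
# Rough sizing sketch: what parameter counts fit in a 748 GB memory pool?
# Bytes-per-parameter figures are illustrative assumptions, not Nvidia specs.
BYTES_PER_PARAM = {"fp16": 2, "fp8": 1}
POOL_GB = 748

def max_params_billions(precision: str, pool_gb: int = POOL_GB) -> float:
    """Upper bound on parameter count (billions) that fits in the pool,
    ignoring activations, KV cache, and framework overhead."""
    return pool_gb * 1e9 / BYTES_PER_PARAM[precision] / 1e9

print(f"fp16: ~{max_params_billions('fp16'):.0f}B parameters")
print(f"fp8:  ~{max_params_billions('fp8'):.0f}B parameters")
```

In practice, activation memory and key‑value caches consume a sizable share of the pool, so the realistic ceiling sits well below these raw bounds.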
The architecture of the Blackwell Ultra chip is built around Nvidia’s “Kyber” interconnect, a high‑speed fabric that links multiple GPUs and memory subsystems within a single chassis. Wccftech notes that the Rubin Ultra tray, demonstrated at the same event, slots directly into Kyber racks, providing a unified memory pool that can be addressed by all GPUs in the station. This design reduces latency for tensor operations and supports the emerging trend of “model‑parallel” training where massive neural networks are split across several accelerators. The 1 TB HBM4E stack delivers up to 4 TB/s of memory bandwidth, a figure that rivals the throughput of high‑end data‑center servers while retaining a desktop form factor.
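For autoregressive decoding, memory bandwidth rather than compute is often the binding constraint, since every generated token must stream the full weight set once. A minimal estimate, using the article's 4 TB/s figure and a hypothetical model size chosen for illustration:

```python
# Memory-bound decode estimate: bandwidth / weight-bytes bounds the
# single-stream token rate. The 4 TB/s figure is from the report;
# the 200B-parameter fp16 model is a hypothetical example.
BANDWIDTH_TBPS = 4.0                 # HBM4E stack bandwidth, per the report
WEIGHT_BYTES = 200e9 * 2             # assumed 200B-param model at fp16

tokens_per_sec = BANDWIDTH_TBPS * 1e12 / WEIGHT_BYTES
print(f"~{tokens_per_sec:.1f} tokens/s upper bound (single stream)")
```

Batching multiple requests amortizes each weight read across many tokens, which is why unified‑memory designs like the Kyber fabric matter for serving throughput.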
In addition to the raw compute boost, Nvidia introduced the RTX PRO 4500 Blackwell Server Edition as a complementary single‑slot GPU for enterprise workloads. As reported by Wccftech, the RTX PRO 4500 packs more than 10,000 cores and 32 GB of GDDR7 memory, targeting inference and mixed‑precision tasks that benefit from the same Blackwell micro‑architecture but with a lower power envelope. The presence of both the GB300 workstation chip and the RTX PRO 4500 in Nvidia’s portfolio underscores a strategy to cover the full spectrum of AI development—from research‑grade training on the DGX Station to production‑grade inference on rack‑mounted servers.
Financial analysts see the hardware rollout as a catalyst for Nvidia’s broader revenue ambitions. Bloomberg reported that Nvidia’s CEO reiterated a $1 trillion AI‑chip revenue target by 2027, a forecast that hinges on scaling both training and inference markets. Reuters echoed this outlook, emphasizing that the company is “betting on AI inference as a trillion‑dollar opportunity.” The new DGX Station, with its unprecedented memory capacity and compute density, is positioned to capture a slice of the enterprise AI spend that traditionally flows to larger data‑center solutions, offering a turnkey workstation for labs and small‑to‑medium businesses that need on‑premise performance without the overhead of a full server farm.
The launch also marks a shift in Nvidia’s manufacturing strategy for the Chinese market. Reuters disclosed that Nvidia is restarting production of a China‑specific AI chip variant, a move that could broaden the addressable market for Blackwell‑based products despite ongoing geopolitical constraints. By aligning the GB300’s capabilities with a localized supply chain, Nvidia aims to mitigate export restrictions while still delivering the same high‑performance features to Chinese customers. This dual‑track approach may help sustain the projected revenue growth outlined in the company’s trillion‑dollar forecast.
Overall, the GB300‑powered DGX Station represents a convergence of memory, interconnect, and compute advancements that compress data‑center‑level performance into a desktop chassis. With 20 PFLOPs of AI compute, 1 TB of HBM4E memory, and a Kyber‑enabled fabric, the workstation is poised to become a reference platform for developers pushing the limits of large‑scale models. As Nvidia continues to expand its Blackwell ecosystem—from the RTX PRO 4500 server GPU to the Rubin Ultra tray—the company solidifies its dominance across the AI hardware stack, reinforcing the market narrative that its technology will drive the next wave of artificial‑intelligence innovation.
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.