Nvidia chips power OpenAI's next‑gen AI as both firms drive 6G AI‑native platforms.
Photo by Brecht Corbeel (unsplash.com/@brechtcorbeel) on Unsplash
OpenAI has become Nvidia's biggest customer for next‑generation AI chips, reportedly drawing on Groq technology as the two firms jointly develop 6G AI‑native platforms.
Key Facts
- Key company: Nvidia
- Also mentioned: OpenAI, BT Group
OpenAI's partnership with Nvidia has deepened beyond the standard GPU supply chain: the AI lab now relies on Nvidia's next‑generation Groq‑based inference processors to power its upcoming 6G‑ready models, according to a report by eTeknix. The move makes OpenAI Nvidia's largest single customer for the new chip family, a status the two firms are leveraging to co‑develop a suite of AI‑native hardware and software components that will undergird the next generation of wireless networks. OpenAI's engineering teams are reportedly already integrating Groq's low‑latency tensor cores into the inference pipeline for its flagship GPT‑4‑Turbo and Whisper‑2 services, aiming to cut response times by up to 30% while handling the projected 10‑fold surge in token traffic expected from 6G deployments.
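As a rough illustration of what those two figures imply together, Little's law (in‑flight work = arrival rate × mean latency) can sketch the capacity pressure. The baseline numbers below are illustrative assumptions, not figures from either company; only the "up to 30% latency cut" and "10‑fold token traffic" come from the report.

```python
# Back-of-envelope capacity sketch using Little's law: L = X * W,
# where L is concurrent in-flight requests, X is arrival rate, W is mean latency.
# Baseline numbers are illustrative assumptions; only the 30% latency cut and
# 10x traffic multiplier come from the cited report.

def required_concurrency(requests_per_sec: float, latency_sec: float) -> float:
    """Little's law: mean in-flight requests = arrival rate * mean latency."""
    return requests_per_sec * latency_sec

baseline_rps = 10_000.0   # assumed current request rate (illustrative)
baseline_latency = 1.0    # assumed mean response time in seconds (illustrative)

# Report's scenario: 10x traffic, latency cut by up to 30%.
new_rps = baseline_rps * 10
new_latency = baseline_latency * (1 - 0.30)

before = required_concurrency(baseline_rps, baseline_latency)  # 10,000 in-flight
after = required_concurrency(new_rps, new_latency)             # 70,000 in-flight

print(f"In-flight requests: {before:,.0f} -> {after:,.0f} ({after / before:.0f}x)")
```

Even with a 30% latency reduction, a 10‑fold traffic surge still implies roughly 7× the concurrent serving capacity, which is consistent with the article's framing that new inference hardware, not just software tuning, is needed.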
The collaboration dovetails with a broader industry push announced at Mobile World Congress, where Nvidia and a coalition of telecom operators and infrastructure providers (including BT Group, Cisco, Deutsche Telekom, Ericsson, Nokia, SK Telecom, SoftBank Corp., and T‑Mobile) committed to building "open, secure and trustworthy" AI‑native 6G platforms, as detailed in Nvidia's own press release. The consortium plans to embed AI across the radio access network (RAN), edge, and core layers, creating a software‑defined wireless fabric that can support billions of autonomous devices, sensors, and robots. By combining Nvidia's GPU‑accelerated AI cores with OpenAI's large‑scale language models, the alliance hopes to deliver real‑time sensing, decision‑making, and secure communications in a single, unified stack.
Industry analysts note that the Groq technology represents a shift from Nvidia's traditional CUDA‑centric approach toward a more specialized inference engine optimized for transformer workloads. Wired reports that Nvidia's new "Vera Rubin" chips (named after the astronomer) are already in full production and are being positioned as the backbone for AI‑driven edge compute, a claim echoed by CEO Jensen Huang in his CES 2025 keynote. OpenAI's adoption of these chips signals confidence in the performance gains promised by the Groq architecture, especially as the lab seeks to scale its services to meet the anticipated demand of 6G‑enabled applications such as autonomous transportation, remote surgery, and immersive mixed‑reality experiences.
Despite the deepening ties, OpenAI’s relationship with Nvidia is not without friction. A Reuters exclusive notes that OpenAI has expressed dissatisfaction with certain Nvidia inference products, prompting the lab to explore alternative suppliers for specific workloads. Nevertheless, the partnership around Groq chips appears to have mitigated those concerns, with OpenAI designating Nvidia as its primary hardware vendor for the next wave of AI‑native services. The company’s leadership has indicated that the joint development effort will also accelerate the rollout of secure, supply‑chain‑resilient AI modules, a priority highlighted by the telecom consortium’s emphasis on trustworthiness and interoperability.
The stakes for both firms are high. If the joint platform succeeds, it could set the de facto standard for 6G infrastructure, giving Nvidia a foothold in a market projected to generate tens of billions of dollars in annual revenue by the early 2030s. For OpenAI, whose enterprise API revenue reportedly surged to $3.4 billion last year, the partnership offers a path to sustain rapid growth while ensuring that its models remain performant on the ultra‑low‑latency networks that 6G promises. As the consortium moves from announcement to implementation, the next few quarters will reveal whether the Nvidia‑OpenAI synergy can deliver on the promise of an AI‑native, globally connected future.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.