Nvidia Doubles Down on AI Chip Supremacy
Jensen Huang Lays Out Nvidia’s Next AI Play
At the Computex 2024 technology show in Taiwan, Nvidia CEO Jensen Huang detailed the company's aggressive new strategy to stay ahead in the increasingly fierce AI chip race. The centerpiece was the unveiling of the next-generation Rubin platform, which will pair Rubin AI GPUs with CPUs based on Arm technology and advanced networking gear. Huang also committed to a yearly cadence: a Blackwell Ultra chip in 2025, just a year after the current Blackwell line, followed by the Rubin generation in 2026. These rapid releases reflect Nvidia's urgency as competitors like AMD launch rival AI processors and big tech companies such as Google and Microsoft invest heavily in their own AI chips. Huang is rallying the ecosystem to stick with Nvidia's CUDA software and hardware stack by delivering unmatched performance and iteration speed.
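For readers unfamiliar with what committing to CUDA means in practice, the sketch below shows a minimal GPU kernel written against Nvidia's standard CUDA C++ runtime API. The kernel name, array size, and launch configuration are illustrative choices for this article, not anything Nvidia announced at Computex.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA kernel: each GPU thread scales one element of an array.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= factor;
    }
}

int main() {
    const int n = 1 << 20;                      // 1M elements (illustrative size)
    float *d_data = nullptr;

    cudaMalloc(&d_data, n * sizeof(float));     // allocate memory on the GPU
    cudaMemset(d_data, 0, n * sizeof(float));   // zero-initialize it

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n); // launch the kernel across the GPU
    cudaDeviceSynchronize();                     // wait for the GPU to finish

    cudaFree(d_data);
    printf("kernel finished\n");
    return 0;
}
```

Because code like this targets CUDA directly, moving it to a rival accelerator typically means rewriting or porting it, which is the switching cost Nvidia's ecosystem strategy leans on.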
Keeping Rivals at Bay with Speed and Ecosystem Advantage
With Nvidia’s market cap soaring past $2.8 trillion and its chips powering everything from OpenAI’s models to Microsoft’s AI workloads, Huang emphasized the importance of rapid innovation. Central to Nvidia’s plan is continuous optimization of its platform architecture, which combines chips, systems, and an extensive software framework (CUDA, deep learning tools, and networking APIs) to deliver scalable performance. While AMD and Intel pitch open standards, and hyperscalers like Amazon and Meta push custom chips for cost and efficiency, Nvidia aims to hold its premium position by being the most advanced and most tightly integrated option. The company also introduced new systems like the GB200-based DGX supercomputer and networking tools built with