NVIDIA Unveils Next-Generation AI Data Center GPUs at CES 2026

At CES 2026, NVIDIA unveiled its Vera Rubin platform, a next-generation architecture for AI data centers. The company says the platform will help data centers run larger AI workloads as demand keeps growing worldwide. Vera Rubin is a step up from NVIDIA's previous Blackwell generation and, according to industry reports, underscores the company's continued leadership in intelligent infrastructure.

Introducing the Vera Rubin platform

Named after astronomer Vera Rubin, the platform brings together specialized components into a large-scale AI system. At its heart are the Rubin GPUs, which can deliver up to 50 petaflops of AI compute for inference, roughly five times the performance of the previous Blackwell architecture.
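
As a back-of-envelope reading of those figures (not an NVIDIA-published comparison), dividing the quoted 50 petaflops by the stated five-times factor implies a Blackwell inference baseline of roughly 10 petaflops per GPU; the real comparison depends on details such as numeric precision and sparsity, which are not broken out here.

```python
# Back-of-envelope check of the figures quoted above.
# Assumption (not from NVIDIA's spec sheet): the "five times" factor is
# per GPU and measured at the same precision and sparsity settings.
RUBIN_INFERENCE_PFLOPS = 50   # "up to 50 petaflops" per Rubin GPU, as reported
SPEEDUP_VS_BLACKWELL = 5      # "five times" the Blackwell figure, as reported

implied_blackwell_pflops = RUBIN_INFERENCE_PFLOPS / SPEEDUP_VS_BLACKWELL
print(f"Implied Blackwell inference baseline: ~{implied_blackwell_pflops:.0f} petaflops per GPU")
```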

The Rubin GPUs are paired with the Vera CPU, which has a high core count and is designed to handle heavy data processing efficiently, making it well suited to the demands of modern artificial intelligence, based on NVIDIA’s official updates.

The Vera Rubin architecture also stands out for how it is designed: NVIDIA engineered the GPUs, CPUs, networking, and data processing units to work together as one system. That integration helps the platform deliver stronger performance, run more efficiently, and handle many tasks at the same time.

Efficiency and scalability

The Vera Rubin platform is not only about raw computing power; it is also about efficiency and cost as deployments scale up. Compared with Blackwell, the platform can train models with fewer GPUs, which means faster results and a lower cost per token.
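
To make the cost-per-token reasoning concrete, here is a minimal sketch with entirely hypothetical numbers; the GPU counts, hourly prices, and throughput below are illustrative placeholders, not NVIDIA figures. It simply shows how serving the same throughput with fewer GPUs lowers the cost per generated token.

```python
# Hypothetical illustration of the cost-per-token argument.
# None of these numbers come from NVIDIA; they are placeholders.

def cost_per_million_tokens(num_gpus: int, gpu_hourly_cost: float,
                            tokens_per_second: float) -> float:
    """Cost in dollars to generate one million tokens on a GPU cluster."""
    cluster_cost_per_hour = num_gpus * gpu_hourly_cost
    tokens_per_hour = tokens_per_second * 3600
    return cluster_cost_per_hour / tokens_per_hour * 1_000_000

# The same target throughput served by two hypothetical clusters:
# a larger one of older GPUs and a smaller one of newer, pricier GPUs.
target_throughput = 200_000  # tokens per second

older_cluster = cost_per_million_tokens(num_gpus=128, gpu_hourly_cost=6.0,
                                        tokens_per_second=target_throughput)
newer_cluster = cost_per_million_tokens(num_gpus=64, gpu_hourly_cost=8.0,
                                        tokens_per_second=target_throughput)

print(f"128-GPU cluster: ${older_cluster:.2f} per million tokens")
print(f" 64-GPU cluster: ${newer_cluster:.2f} per million tokens")
```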

Several optimizations help here, including higher memory bandwidth, advanced networking such as NVLink 6 and Spectrum-X Ethernet Photonics, and updated storage solutions. Together they allow the platform to perform well while using less energy and reducing hardware costs, based on NVIDIA’s official updates.

This efficiency matters because AI models keep getting larger. For agentic systems that act on their own and large language models that rely on many parameters and quick responses, efficiency becomes critical.

CES demonstration and industry adoption

At CES, NVIDIA demonstrated the Vera Rubin NVL72 rack-scale system, which uses liquid cooling to manage heat efficiently and packs a large number of Rubin GPUs and Vera CPUs into a single, very dense rack.
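
The report does not spell out the rack-level math, but a rough estimate follows from the figures above. Assuming the NVL72 name indicates 72 Rubin GPUs per rack (an assumption, not a stated specification) and using the quoted 50 petaflops per GPU, the aggregate inference compute lands in the low exaflops per rack.

```python
# Rough rack-level estimate built only from numbers quoted in this article.
# Assumptions: "NVL72" means 72 GPUs per rack (not confirmed here), and the
# per-GPU inference figure scales linearly across the rack.
GPUS_PER_RACK = 72
PFLOPS_PER_GPU = 50  # "up to 50 petaflops" of AI inference compute per Rubin GPU

rack_pflops = GPUS_PER_RACK * PFLOPS_PER_GPU
print(f"Estimated rack-scale inference compute: {rack_pflops} petaflops "
      f"(~{rack_pflops / 1000:.1f} exaflops)")
```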

The Vera Rubin NVL72 rack-scale system is already being adopted, with availability expected through cloud partners such as Microsoft Azure and AWS in the second half of 2026.

Industry analysts are paying close attention to how this technology will change the way companies use artificial intelligence, because the platform could become the foundation for AI services worldwide.

The way AI is changing our world

AI is growing faster than ever, taking on work that once seemed out of reach and helping people get things done, be creative, and solve problems in new ways. The Vera Rubin platform stands out for the speed and efficiency with which it handles demanding workloads, allowing large companies to run AI models more quickly and at lower cost.

The platform also moves NVIDIA beyond simply selling graphics processing units and positions the company as a key driver of new artificial intelligence technology, giving it an important role in shaping the future of AI.

Conclusion

The Vera Rubin platform marks an important moment for AI infrastructure. It brings together strong computing performance with practical efficiency, making it easier for data centers to scale as AI workloads grow. According to industry reports, platforms like Vera Rubin could support real progress across industries, helping teams run larger models and carry out more advanced research.
