NVIDIA CES 2026: DLSS 4.5, Neural Shaders, and the Vera Rubin AI Supercomputer
Key Takeaways
- ✓ DLSS 4.5 introduces 6X Multi-Frame Generation—inserting up to 5 AI frames between each rendered frame
- ✓ RTX Neural Shaders bring film-quality materials and lighting to real-time rendering
- ✓ Vera Rubin NVL72 AI supercomputer promises 5x faster inference and 10x lower cost per token than Blackwell
- ✓ G-SYNC Pulsar delivers smoother motion through variable frequency backlight strobing
- âś“ Jensen Huang declares the future of graphics is neural rendering, not traditional rasterization
NVIDIA’s Vision for the Future of Graphics
At CES 2026, NVIDIA CEO Jensen Huang made a bold declaration: the future of graphics is neural rendering.
The company’s keynote and subsequent announcements revealed a fundamental shift in how GPUs will handle both gaming and AI workloads—with major implications for creators working with AI video generation tools.
DLSS 4.5: 6X Multi-Frame Generation
The headline feature for gamers and creators alike is DLSS 4.5, which introduces a second-generation transformer model and dramatically expanded Multi-Frame Generation.
What’s New in DLSS 4.5
| Feature | DLSS 4.0 | DLSS 4.5 |
|---|---|---|
| Multi-Frame Generation | 3X (2 frames inserted) | 6X (5 frames inserted) |
| AI Model | First-gen transformer | Second-gen transformer |
| Latency Impact | Moderate | Reduced via optimization |
| Image Quality | High | Higher with better temporal stability |
The 6X Multi-Frame Generation can insert up to five AI-generated frames between each traditionally rendered frame. In demonstrations, NVIDIA showed Black Myth: Wukong running at 246 fps with only 53ms latency on an RTX 5090 using 6X MFG.
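For a back-of-the-envelope sense of what 6X means, the displayed frame rate is roughly the traditionally rendered rate multiplied by the MFG factor (1 rendered frame plus 5 generated frames per cycle). The sketch below is illustrative arithmetic only, not NVIDIA's published breakdown of that demo.

```python
# Rough arithmetic for Multi-Frame Generation (illustrative only).
# With 6X MFG, each rendered frame is followed by 5 AI-generated frames,
# so the displayed rate is roughly (rendered rate) * 6.

def displayed_fps(rendered_fps: float, mfg_factor: int) -> float:
    """Displayed frame rate when every rendered frame yields (mfg_factor - 1) generated frames."""
    return rendered_fps * mfg_factor

# Working backwards from the demo numbers (246 fps at 6X), the underlying
# render rate would be about 41 fps -- an inference, not an official figure.
print(displayed_fps(41, 6))   # -> 246
print(246 / 6)                # -> 41.0 rendered frames per second
```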
Why This Matters for AI Video
Multi-Frame Generation technology shares DNA with AI video generation:
- Frame interpolation: The same concepts power tools like Runway’s frame expansion (a minimal baseline is sketched after this list)
- Temporal consistency: Maintaining coherent motion across generated frames is crucial for both gaming and AI video
- Real-time inference: The techniques developed here will eventually enable faster local AI video generation
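As a point of reference for the frame-interpolation parallel above, the simplest possible interpolator is a linear blend between two frames. Production systems such as DLSS MFG or Runway's frame expansion rely on motion estimation and learned models rather than this naive blend; the snippet below is only a conceptual baseline using NumPy.

```python
import numpy as np

def blend_frames(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Naive linear interpolation between two frames at time t in [0, 1].

    Real multi-frame generation uses motion vectors and learned models to
    avoid the ghosting this simple cross-fade produces on moving objects.
    """
    return ((1.0 - t) * frame_a.astype(np.float32)
            + t * frame_b.astype(np.float32)).astype(frame_a.dtype)

# Insert 5 intermediate frames between two rendered frames (the 6X pattern).
frame_a = np.zeros((720, 1280, 3), dtype=np.uint8)
frame_b = np.full((720, 1280, 3), 255, dtype=np.uint8)
intermediates = [blend_frames(frame_a, frame_b, i / 6) for i in range(1, 6)]
print(len(intermediates))  # -> 5
```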
RTX Neural Shaders: Film-Quality Rendering
Perhaps the most significant long-term announcement is RTX Neural Shaders—a fundamental reimagining of how materials and lighting are rendered.
Traditional vs. Neural Rendering
Traditional rendering uses mathematical approximations to simulate how light interacts with surfaces. Neural rendering replaces these approximations with AI models trained on real-world physics.
The result: materials that look indistinguishable from reality, with accurate subsurface scattering, complex reflections, and realistic shadows.
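One way to picture the difference: instead of evaluating an analytic shading formula, a neural shader evaluates a small network that maps shading inputs (surface normal, light and view directions, a material descriptor) to a color. The sketch below is a toy, untrained stand-in for that idea, not NVIDIA's actual RTX Neural Shaders implementation; the network weights are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def analytic_shader(normal, light_dir):
    """Traditional approximation: a simple Lambertian diffuse term."""
    return np.clip(np.dot(normal, light_dir), 0.0, 1.0) * np.array([0.8, 0.6, 0.5])

# A tiny MLP standing in for a neural material: shading inputs -> RGB.
# In practice such a network would be trained against path-traced or
# measured reference data; these random weights are purely illustrative.
W1 = rng.normal(size=(6, 32))
W2 = rng.normal(size=(32, 3))

def neural_shader(normal, light_dir):
    x = np.concatenate([normal, light_dir])        # 6 shading inputs
    hidden = np.maximum(x @ W1, 0.0)               # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)))    # RGB in [0, 1]

n = np.array([0.0, 0.0, 1.0])
l = np.array([0.3, 0.3, 0.9]) / np.linalg.norm([0.3, 0.3, 0.9])
print(analytic_shader(n, l), neural_shader(n, l))
```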
Applications Beyond Gaming
RTX Neural Shaders will impact:
- Virtual production: Real-time backgrounds that match physical camera footage
- AI video generation: Better training data and more realistic outputs
- 3D asset creation: Faster iteration on photorealistic materials
- Architectural visualization: Real-time walkthroughs with film-quality lighting
“The RTX 5090 could represent the pinnacle of traditional raster-based GPUs. The future is neural rendering.” — Jensen Huang, NVIDIA CEO
Vera Rubin NVL72: The Next AI Supercomputer
Looking beyond consumer GPUs, NVIDIA unveiled the Vera Rubin NVL72—its next-generation AI supercomputer architecture.
Vera Rubin Specifications
| Component | Details |
|---|---|
| GPU | Rubin GPU (50 PFLOPS inference, 35 PFLOPS training per chip) |
| CPU | Vera CPU |
| Interconnect | NVLink 6 |
| Networking | ConnectX-9 SuperNIC, BlueField-4 DPU |
| Performance | 5x greater inference vs. Blackwell |
| Efficiency | 10x lower cost per token vs. Blackwell |
| Availability | Second half of 2026 |
What This Means for AI Tools
The Vera Rubin architecture will power the next generation of cloud AI services. For creators, this translates to:
- Faster cloud generation: Tools like Runway, Pika, and Sora will run on more powerful hardware
- Lower costs: 10x efficiency improvement could mean lower pricing or more generous free tiers (a rough cost illustration follows this list)
- More complex models: Higher performance enables longer videos, higher resolutions, and more parameters
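To make the efficiency claim concrete, here is a hypothetical comparison. The baseline price and token count below are assumptions chosen for illustration; only the 10x ratio comes from NVIDIA's Vera Rubin claim.

```python
# Hypothetical illustration of a 10x cost-per-token improvement.
# The baseline price and workload size are made-up numbers; only the
# 10x ratio comes from NVIDIA's Vera Rubin claim.

blackwell_cost_per_million_tokens = 2.00      # USD, assumed baseline
rubin_cost_per_million_tokens = blackwell_cost_per_million_tokens / 10

tokens_for_one_video_job = 50_000_000         # assumed workload size

def job_cost(cost_per_million: float, tokens: int) -> float:
    return cost_per_million * tokens / 1_000_000

print(job_cost(blackwell_cost_per_million_tokens, tokens_for_one_video_job))  # 100.0
print(job_cost(rubin_cost_per_million_tokens, tokens_for_one_video_job))      # 10.0
```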
G-SYNC Pulsar: Smoother Motion
NVIDIA also unveiled G-SYNC Pulsar, the next evolution of Variable Refresh Rate technology.
G-SYNC Pulsar combines adaptive sync with variable frequency backlight strobing, reducing motion blur while maintaining smooth frame delivery. For video editors and creators who spend hours reviewing footage, this means less eye strain and more accurate motion assessment.
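A simplified way to think about variable frequency strobing: the backlight pulse has to track the varying frame interval so blur reduction and perceived brightness stay consistent as the refresh rate changes. The sketch below assumes a constant strobe duty cycle across frame times, which is an assumption for illustration, not a description of how Pulsar actually modulates the backlight.

```python
# Simplified model of variable-frequency backlight strobing (illustrative).
# Assumes the strobe stays lit for a fixed fraction of each frame interval,
# which is a simplification of whatever modulation G-SYNC Pulsar really uses.

STROBE_DUTY_CYCLE = 0.2  # assumed fraction of each frame the backlight is on

def strobe_pulse_ms(frame_rate_hz: float, duty_cycle: float = STROBE_DUTY_CYCLE) -> float:
    frame_time_ms = 1000.0 / frame_rate_hz
    return frame_time_ms * duty_cycle

for fps in (60, 120, 240):
    print(fps, "Hz ->", round(strobe_pulse_ms(fps), 2), "ms pulse")
```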
The Neural Rendering Future
Jensen Huang’s keynote emphasized a clear message: traditional rendering has peaked.
The path forward is neural rendering—using AI to generate pixels rather than calculating them mathematically. This has profound implications:
Short Term (2026-2027)
- RTX Neural Shaders in major game engines
- Hybrid workflows combining traditional and neural rendering
- Improved AI video upscaling and enhancement
Medium Term (2027-2029)
- Real-time neural rendering for virtual production
- AI-generated environments indistinguishable from reality
- Unified workflows for gaming, film, and interactive media
Long Term (2030+)
- Fully neural graphics pipelines
- Real-time generation of photorealistic content from text descriptions
- The line between “rendered” and “generated” disappears
What Creators Should Watch
For AI Video Creators
| Announcement | Impact |
|---|---|
| DLSS 4.5 | Better understanding of frame interpolation techniques |
| Neural Shaders | Higher-quality training data for future AI models |
| Vera Rubin | Faster, cheaper cloud AI video generation |
For Content Producers
| Announcement | Impact |
|---|---|
| G-SYNC Pulsar | Better monitoring for video editing |
| Neural Shaders | Real-time virtual production improvements |
| RTX 5090 | More powerful local AI workflows |
RTX 5090: The Last Raster GPU?
The GeForce RTX 5090, announced alongside these features, may represent a turning point. Jensen Huang hinted that it could be “the pinnacle of traditional raster-based GPUs”—suggesting future generations will be primarily neural rendering devices.
For now, the RTX 5090 offers:
- Full DLSS 4.5 support with 6X Multi-Frame Generation
- RTX Neural Shaders
- Improved local AI inference for tools like ComfyUI
- Better performance for AI video upscaling and enhancement
Timeline Summary
| Feature | Availability |
|---|---|
| DLSS 4.5 | Available with RTX 50 Series |
| RTX Neural Shaders | Rolling out in 2026 |
| G-SYNC Pulsar | Available with compatible monitors |
| GeForce RTX 5090 | Launching Q1 2026 |
| Vera Rubin NVL72 | Second half 2026 |
FAQ
What is DLSS 4.5?
DLSS 4.5 is NVIDIA's latest Deep Learning Super Sampling technology, featuring a second-generation transformer model and 6X Multi-Frame Generation that can insert up to 5 AI-generated frames between each rendered frame.
What are RTX Neural Shaders?
RTX Neural Shaders use AI to render materials and lighting in real-time, replacing traditional mathematical approximations with neural networks trained on real-world physics. The result is film-quality visuals in real-time applications.
When will Vera Rubin be available?
The Vera Rubin NVL72 AI supercomputer is scheduled for the second half of 2026. It promises 5x greater inference performance and 10x lower cost per token compared to the current Blackwell architecture.
How does this affect AI video generation?
These advancements impact AI video in multiple ways: improved frame interpolation techniques (DLSS 4.5), higher-quality training data (Neural Shaders), and faster/cheaper cloud inference (Vera Rubin) will all benefit AI video tools.
Is the RTX 5090 good for AI video creation?
Yes. The RTX 5090 supports DLSS 4.5, RTX Neural Shaders, and improved local AI inference. It's particularly beneficial for local AI workflows using tools like ComfyUI and for AI video upscaling.
Sources
- Tom’s Hardware: NVIDIA Launches Vera Rubin NVL72
- NVIDIA GeForce News: CES 2026 RTX Announcements
- PC Gamer: NVIDIA DLSS 4.5 Multi-Frame Generation
- Tom’s Hardware: Jensen Huang on Neural Rendering