
Neural rendering might be the name of the game in future.
Yes, I know: more AI. Love it or hate it, that’s the way things are going. For us gamers, it started with upscaling, then frame gen, then Multi Frame Gen, and soon, it seems, fully AI-generated frames.
At GDC today Nvidia announced that “neural shading support will come to DirectX preview in April, unlocking the power of AI Tensor Cores in NVIDIA GeForce RTX GPUs inside of graphics shaders used to program video games…
“Nvidia RTX Neural Shaders SDK enables developers to train their game data and shader code on an RTX AI PC and accelerate their neural representations and model weights with Nvidia Tensor Cores at runtime. This significantly enhances the performance of neural rendering techniques, allowing for faster and more efficient real-time rendering with Tensor Cores.”
In other words, AI will be used not just to interpolate frames and generate new ones based on a traditionally rendered frame but also to help render that original frame. It’s AI being added to another step of the rendering pipeline.
The end goal, presumably, is to have the game engine tell the GPU about the primary in-game qualities (objects, movement, and so on) and have AI flesh out the rest of the picture.
It’s difficult to imagine how that could work without any information on how to flesh out said picture, but that’s where the “game data and shader code” training comes in: developers can give the AI model a good idea of what things should look like when rendered, and then when players actually fire up the game, the model can do its damnedest to replicate that.
As Nvidia’s Blackwell white paper explains: “Rather than writing complex shader code to describe these [shader] functions, developers train AI models to approximate the result that the shader code would have computed.”
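If that sounds abstract, here’s a minimal sketch of the general idea in Python with PyTorch. To be clear, this is not Nvidia’s SDK (which works on actual HLSL shader code and runs inference on Tensor Cores): a tiny neural network is trained to reproduce the output of a toy procedural “shader” function, then evaluated in its place at “runtime”. The reference_shader function, the network size, and the training setup are all invented purely for illustration.

```python
import torch
import torch.nn as nn

# Toy "shader": a procedural colour function we'd normally compute per
# pixel. A stand-in for real shader code, invented for this example.
def reference_shader(uv: torch.Tensor) -> torch.Tensor:
    u, v = uv[:, 0], uv[:, 1]
    r = 0.5 + 0.5 * torch.sin(10.0 * u)
    g = 0.5 + 0.5 * torch.cos(10.0 * v)
    b = torch.sqrt(u * v)
    return torch.stack([r, g, b], dim=1)

# A small MLP that will learn to approximate the shader's output.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

# "Training": sample random screen-space coordinates and fit the
# network to what the shader code would have computed for them.
for step in range(2000):
    uv = torch.rand(1024, 2)
    target = reference_shader(uv)
    loss = nn.functional.mse_loss(model(uv), target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# "Runtime": evaluate the network instead of the shader. The
# matrix maths in model(uv) is the kind of work Tensor Cores
# would accelerate on RTX hardware.
with torch.no_grad():
    uv = torch.rand(4, 2)
    print("shader:", reference_shader(uv))
    print("neural:", model(uv))
```

The trade-off is the same one driving the whole announcement: you swap exact per-pixel computation for a learned approximation whose cost is dominated by matrix maths, which is exactly what Tensor Cores are built for.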
This will presumably be tailored to Blackwell, given that Nvidia worked with Microsoft to develop the Cooperative Vectors API, though Nvidia does say that “some of [the developer-created neural shaders] will also run on prior generation GPUs.”
We already had an idea that this was in the works: back in December 2024, we saw Inno3D speak about “Neural Rendering Capabilities” in its then-upcoming graphics cards. We’d also seen mention of neural rendering from Nvidia before, but not in a context that could actually be implemented in games just yet.
Nvidia VP of Developer Technology John Spitzer calls this “the future of graphics”, and Microsoft Direct3D dev manager Shawn Hargreaves seems to agree, saying that Microsoft’s addition of “Cooperative Vectors support to DirectX and HLSL… will advance the future of graphics programming by enabling neural rendering across the gaming industry.”
It’s almost a reflex for me to be sceptical of anything AI, but I must remember that my scepticism over frame gen has slowly abated. I remember seeing character hands moving through in-game HUDs and writing off DLSS 3 frame gen when it launched, but now those problems are rare and even latency isn’t half-bad if you have a high baseline frame rate.
So I’ll try to keep my mind open to at least the possibility that this could actually be a step forward. At any rate, we’ll find out before long—just a few weeks until devs can start trying it out.