Nvidia’s keynote speech at Computex 2024 contained nothing about its next generation of GeForce graphics cards, so for the time being, we’re left to browse the usual sources of leaks and rumours to build a picture of what’s coming. The latest of these suggests that the RTX 5090 will use a 512-bit wide memory bus, while the rest of the lineup sticks with the same bus widths as the current RTX 40-series.
The source of said rumour is Kopite7kimi on X, who has a pretty good reputation for making accurate predictions and statements about future developments in GPUs. In a recent post, the leaker set down the memory configurations for the five Blackwell GPU variants expected to be launched later this year (though some of them may not be announced until 2025).
First up is the GB202, which will undoubtedly be used in the RTX 5090 and a raft of professional-grade graphics cards. The biggest GPUs always have the widest memory bus, so that all those shaders can be kept busy with data—and in the case of the Blackwell monster, it’s being claimed that it will sport a 512-bit wide memory bus and GDDR7 VRAM chips.
Pair that with Micron’s slowest GDDR7 chips, which run at 28 Gbps, and you’re looking at an aggregate bandwidth of around 1.8 TB/s, roughly 78% more than the RTX 4090’s just-over 1 TB/s. Even if the RTX 5090 ‘only’ sports a 384-bit bus, it would still have 33% more bandwidth thanks to the use of faster GDDR7 (the RTX 4090 uses 21 Gbps GDDR6X).
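If you want to check the napkin maths yourself, here’s a quick Python sketch. The bus widths and data rates are the rumoured and known figures quoted above, so treat the outputs as theoretical peak bandwidth estimates, not confirmed specs:

```python
# Peak bandwidth (GB/s) = bus width in bits / 8 bits-per-byte * per-pin data rate (Gbps)
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

rtx_4090 = peak_bandwidth_gbs(384, 21)        # 1008 GB/s with 21 Gbps GDDR6X
rumoured_5090 = peak_bandwidth_gbs(512, 28)   # 1792 GB/s with 28 Gbps GDDR7
narrower_5090 = peak_bandwidth_gbs(384, 28)   # 1344 GB/s if it 'only' gets a 384-bit bus

print(f"{rumoured_5090 / rtx_4090 - 1:.0%}")  # ~78% more than the RTX 4090
print(f"{narrower_5090 / rtx_4090 - 1:.0%}")  # ~33% more on a 384-bit bus
```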
Kopite7kimi suggests the other GPU variants keep the same memory bus widths, though: the GB203 is 256-bit, the GB205 is 192-bit, and the bottom-end GB206 and GB207 are both 128-bit. That’s the same as the AD103, AD104, AD106, and AD107. However, the use of GDDR7 across most of the graphics cards that will use these GPUs should still deliver a considerable uplift in bandwidth.
GB202 12*8 512-bit GDDR7
GB203 7*6 256-bit GDDR7
GB205 5*5 192-bit GDDR7
GB206 3*6 128-bit GDDR7
GB207 2*5 128-bit GDDR6
Kopite7kimi on X, June 11, 2024
It’s worth noting that the width of a memory bus doesn’t just impact VRAM bandwidth, it also determines how much VRAM can be fitted to the graphics card. At the moment, all of Micron’s GDDR7 modules are 32 bits wide and have a density of 16Gb (2GB per chip), so a 256-bit bus would top out at 16GB.
So other than the RTX 5090, none of the forthcoming Blackwell cards will be sporting more VRAM than the current Ada Lovelace models, assuming those specs are correct. If the successor to the RTX 4090 does have a 512-bit memory bus, we could be looking at a graphics card with 32GB of VRAM. Yeah, best start saving now…
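The capacity side is the same sort of back-of-the-envelope sum. The sketch below assumes one 16Gb (2GB) GDDR7 chip per 32-bit channel and ignores clamshell (double-sided) memory layouts:

```python
# Max VRAM from bus width, assuming one 32-bit wide, 16Gb (2GB) GDDR7 chip per channel
def max_vram_gb(bus_width_bits: int, chip_capacity_gb: int = 2) -> int:
    channels = bus_width_bits // 32       # one chip per 32-bit memory channel
    return channels * chip_capacity_gb    # ignores clamshell (double-sided) layouts

for bus in (512, 384, 256, 192, 128):
    print(f"{bus}-bit bus -> {max_vram_gb(bus)} GB")
# 512 -> 32 GB, 384 -> 24 GB, 256 -> 16 GB, 192 -> 12 GB, 128 -> 8 GB
```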
Something else that Kopite7kimi set out in the post was the internal configuration of the shader blocks in each chip. For example, the 12*8 for the GB202 refers to the number of GPCs (Graphics Processing Clusters) and how many TPCs (Texture Processing Clusters) sit inside each GPC.
The AD102 is a 12*6 configuration, so on paper, the GB202 has a third more TPCs than the RTX 4090’s chip. That doesn’t automatically mean a third more shaders, though, because the GPC*TPC figure doesn’t tell you how many SMs (Streaming Multiprocessors) are in each TPC, nor how many shaders are in each SM. Nvidia could also lean on higher clock speeds for part of the generational uplift.
In Ada Lovelace chips, there are two SMs per TPC and 128 shaders per SM. Nvidia could use more SMs per TPC, more shaders per SM, or a combination of both in Blackwell GPUs. But now we’re in the realm of rampant guessing, so it’s best to ignore all that until we know more.
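Still, for a sense of scale, here’s what that guessing game looks like as arithmetic. The Ada figures are the real ones; the Blackwell line simply assumes the GPC*TPC split from the leak plus Ada-like ratios of two SMs per TPC and 128 shaders per SM, which is pure speculation at this point:

```python
# Shader count = GPCs * TPCs per GPC * SMs per TPC * shaders per SM
def shader_count(gpcs: int, tpcs_per_gpc: int,
                 sms_per_tpc: int = 2, shaders_per_sm: int = 128) -> int:
    return gpcs * tpcs_per_gpc * sms_per_tpc * shaders_per_sm

print(shader_count(12, 6))  # AD102, full die: 18,432 shaders
print(shader_count(12, 8))  # GB202 *if* it keeps Ada's ratios: 24,576 shaders (speculative)
```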
Nvidia’s market share of discrete GPUs, both add-in cards and laptop chips, is so large that it could release a new round of graphics processors that aren’t really that much faster than their predecessors, and still sell a bucket load of them.
It may turn out that Blackwell GPUs aren’t fundamentally much faster than Ada Lovelace ones, but thanks to more VRAM bandwidth and perhaps more cache, better AI features, and so on, the RTX 50-series could still be notably better than RTX 40-series cards.
Time will tell, of course, but for now, all we can do is speculate on the rumours.