AMD’s next top-of-the-line GPU may not be a multi-chip compute monster, and that’s better for everyone
AMD’s top-of-the-line RDNA 3 GPU may no longer be a multi-chip GPU, at least from a graphics compute standpoint. The company’s next-generation flagship, the Navi 31 chip, had been expected to be the first to bring the chiplet design of its latest Ryzen CPUs over to its graphics cards.
If the latest rumors are accurate, that’s no longer the case, and honestly, that’s a relief.
As AMD’s next-generation GPUs approach their expected October release date and begin shipping to partners, we’re getting ever closer to the leak-hype event horizon: that place where fact and fiction start to merge, and it all turns into a frenzy of fake numbers sprinkled with a little lighthearted truth.
But that doesn’t mean things are in any way settled. The Twitter-and-YouTube leak machine has been churning away, and AMD’s flagship Navi 31 GPU has been given so many theoretical and fanciful specs that it’s hard to track where the general consensus sits right now.
It started out as a 92 TFLOP beast with around 15,360 shaders arranged across a pair of graphics compute dies (GCDs), and those specs have since been pared back to 72 TFLOPs and 12,288 shaders. Now we’re hearing that all the noise about dual graphics chip designs is wrong, and that the multi-chip reality is more about satellite cache dies than extra compute chips.
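As a sanity check on those rumored figures, a quick back-of-envelope calculation shows the clock speeds they imply, using the standard formula for FP32 throughput (shaders × 2 ops per clock for a fused multiply-add × clock speed). The clock speeds here are derived from the rumored numbers, not confirmed specs:

```python
# Back-of-envelope FP32 throughput check for the rumored Navi 31 specs.
# Assumes the conventional formula: TFLOPs = shaders * 2 ops/clock (FMA) * clock (GHz) / 1000.
# Shader and TFLOP figures are the rumored values quoted above.

def tflops(shaders: int, clock_ghz: float) -> float:
    """FP32 teraflops, assuming one fused multiply-add (2 ops) per shader per clock."""
    return shaders * 2 * clock_ghz / 1000

# Clock speed implied by the older 92 TFLOP / 15,360-shader rumor:
print(92_000 / (15_360 * 2))  # ~2.99 GHz
# Clock speed implied by the revised 72 TFLOP / 12,288-shader figure:
print(72_000 / (12_288 * 2))  # ~2.93 GHz
```

Notably, both rumor sets imply clocks close to 3 GHz, so the headline TFLOP drop comes almost entirely from the reduced shader count.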
The latest video from Red Gaming Tech, seemingly corroborated by a Twitter leaker, suggests that the entire 12,288 shader count will sit on a single 5nm GCD, with a total of six 6nm memory cache dies (MCDs) arranged around it, or possibly stacked on top of it.
If AMD could create a GPU compute chiplet that could sit alongside other GPU compute chiplets in a single package and be completely invisible to the system, I would love it. For a gaming graphics card, however, that’s a tall order. For data center machines and systems running purely compute-based workloads (such as render farms), doubling up on GPU silicon and spreading tasks across many different chips is feasible. Having different pieces of silicon render different frames, or different parts of a frame, in a game is another issue entirely.
And, so far, that has proven beyond our GPU tech overlords. We used to have SLI and CrossFire, but developers struggled to implement the technology to great effect, and even when they did manage to make games run faster on multiple GPUs, the scaling was far from linear.
You’d pay twice as much for two graphics cards to get maybe 30-40% higher frame rates, in some games. Sometimes more, sometimes less. It was a lottery, a lot of hard work for developers, and a technology the industry eventually abandoned entirely.
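The article’s ballpark figures make the value problem easy to quantify. A minimal sketch, using the ~35% speedup and 2x cost quoted above (illustrative numbers, not benchmarks):

```python
# Rough cost-effectiveness of SLI/CrossFire-style dual-card scaling,
# using the article's ballpark figures rather than measured benchmarks.

def perf_per_dollar(speedup: float, cost_multiplier: float) -> float:
    """Relative performance per dollar versus a single card (baseline = 1.0)."""
    return speedup / cost_multiplier

single = perf_per_dollar(1.0, 1.0)   # one card: the baseline
dual = perf_per_dollar(1.35, 2.0)    # ~35% faster for twice the money
print(dual / single)                 # ~0.68: roughly two-thirds the value
```

In other words, even a mid-range 35% uplift meant paying full price for about two-thirds of the per-dollar performance, which is why the lottery rarely paid out.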
The holy grail is to make a multi-GPU system or chip invisible, so your operating system and the applications running on it see it as one graphics card.
When we first heard rumors that Navi 31 would be an MCM GPU, it was a hope backed up by leaks and job listings. Though it was hard to say whether the feeling was more hope or fear.

Someone will do it eventually, and it felt like it was about time for AMD to bring its Zen 2 chiplet expertise to GPUs. But it also felt like AMD would be jumping in first, ready to absorb the inevitable growing pains of adopting a new technology: any unexpected latency issues across different games, and the countless configuration conflicts of gaming PCs. The hope was always that throwing a full complement of shaders into the MCM mix would overcome any bottlenecks.
But now the multi-chip design seems to be purely about cache dies, potentially acting as memory controllers in their own right, which would make things far more straightforward from a systems perspective and limit the unforeseen problems that could occur.
And it should still make for powerful AMD graphics cards.
Nvidia’s new Ada Lovelace GPUs will have to watch out, because this generation looks like being a strong one for AMD. And we might actually be able to buy the cards this time around, although the prices may still be high.