Nvidia Ampere follow-up reportedly named Hopper

While we’re basking in the glow of a fresh set of benchmarks on a newly revealed architecture, Nvidia and AMD are deep into the next one—and the one after that. Nvidia’s next-next architecture, it appears, will be called Hopper, according to a reliable leaker on Twitter.

We’re on Turing right now, and we’ve known for a while that the next architecture will likely be called Ampere, named for French physicist André-Marie Ampère (and namesake of the unit of electrical current). According to current rumors, Ampere is set to release sometime in 2020.

After that will come Hopper, named for American computer scientist Grace Hopper. Hopper programmed the Harvard Mark I computer during World War II and led the team that created one of the first compilers, work that paved the way for COBOL, one of the earliest and longest-lived high-level programming languages.

According to Twitter user @kopite7kimi (via TechPowerUp), the name isn’t the only thing that will be new about the Hopper chips: the leaker also claims Hopper will introduce MCM (Multi-Chip Module) GPU packages. Nvidia has been researching MCM architectures for some time.

A primer on MCM GPUs

Here’s the short version of how they theoretically work: single-chip, or “monolithic,” GPUs have a practical upper limit on how big and complex they can get. Transistors can only shrink so far, and the larger and denser a die becomes, the harder it is to get usable yields from the silicon wafers that processors are cut from.

An MCM architecture breaks the work up into smaller, less-complex GPU Modules, or GPMs, that are easier to manufacture and would likely produce higher yields—and thus less waste and maybe even lower costs. In a paper published in June 2017, Nvidia proposed an MCM GPU design that is 45.5% faster than the largest implementable (i.e. realistically manufacturable) GPU and within 10% of an optimal, hypothetical—and “unbuildable,” per Nvidia—monolithic GPU. The same paper found that the optimized MCM design was also 27% faster than a multi-GPU system with the same total number of streaming multiprocessors and the same DRAM bandwidth.
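To see why smaller dies help, here’s a rough back-of-the-envelope sketch using a simple Poisson defect-density yield model. The defect density and die sizes below are made up for illustration; this is our own sketch, not a calculation from Nvidia’s paper:

```python
# Back-of-the-envelope sketch: why smaller GPU Modules (GPMs) should yield
# better than one huge monolithic die. Assumes a simple Poisson defect model,
# yield = exp(-defect_density * die_area); all numbers here are hypothetical.
import math

DEFECT_DENSITY = 0.1  # hypothetical defects per square centimeter of wafer

def die_yield(area_cm2: float) -> float:
    """Expected fraction of defect-free dies at a given die area."""
    return math.exp(-DEFECT_DENSITY * area_cm2)

monolithic_area = 8.0           # a hypothetical ~800 mm^2 monolithic GPU die
gpm_area = monolithic_area / 4  # the same silicon split into four smaller GPMs

print(f"monolithic die yield: {die_yield(monolithic_area):.1%}")  # ~44.9%
print(f"single GPM yield:     {die_yield(gpm_area):.1%}")         # ~81.9%
# A defect now scraps one small GPM instead of one enormous die, so far less
# wafer area is wasted per flaw, which is where the cost advantage comes from.
```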

This is all rumor and conjecture right now, but it seems like a sensible next step, considering that transistors physically cannot get much smaller until we master carbon nanotubes. Hopper is far enough off that it’s no reason to hold off on picking up a new GPU, but it is something to look forward to in the coming years.

Comments

Bob

I don’t like it. NVidia is just going to kill it off in Season 3, and vaporize it into another dimension!

Neutronbeam

And after Hopper comes Pinky, Flopears, and Cuddlenose.

psuedonymous

I propose giving all future Nvidia and AMD GPU architectures excessively saccharine names, in order to either eliminate fanboy “X is better than Y” rants out of sheer embarrassment, or at the very least make them a lot more entertaining for the rest of us. It’d take some real dedication to spam “Flopsy Wopsy absolutely dominates Fluffy Wuffy in Furmark!” in seriousness.

Anonymous Coward

So they’ve done multi-GPU for many years, but I guess this will be coupled together at a lower level and operate more effectively as a single GPU. Why hasn’t this already been done? Is the trick putting stuff onto a package together, and that’s it?

Krogoth

Physical and economic realities have finally caught up with the manufacturing side. It is starting to become cheaper and easier to bolt together a bunch of smaller dies in an MCM than to attempt to create a single, massive piece of silicon.

Anonymous Coward

So they will succeed in making MCM GPUs behave as monolithic GPUs, but could not manage it when the silicon was separated by some centimeters (on the same card for example)? The MCM CPUs are not much bothered because the CPU cores are clearly distinct from each other anyway.

Spunjji

As best I understand it, the problem of trying to get traditional multi-GPU setups to behave like a single GPU is primarily one of latency and secondarily one of bandwidth. Having multiple “small” GPU units physically close together, with the kind of high-bandwidth, short-distance links you get from an EMIB-type setup, should allow them to function a lot more like a monolithic GPU than two cards communicating over PCIe ever could. It still won’t be perfect – they seem to be saying a 10% deficit relative to a “true” single GPU here.

Wirko

Meet COBOLUDA, the successor to CUDA!

chuckula

Eh, itsa Ponte Vecchio Nvidia!

Liron

It would be interesting if we had the option to choose between GPUs with more rasteriser modules and fewer raytracer modules, and GPUs with more raytracers and fewer rasterisers.

K-L-Waster

GPUlets, anyone?

JustAnEngineer

Exactly. I believe that Erik’s summary is correct. Smaller GPUlets are easier to fabricate with much higher yields than ginormous monolithic GPUs. The CPU example of Ryzen’s success on a somewhat immature 7nm process provides an excellent demonstration.

chuckula

Get to the Hoppah!

Krogoth

Raja: You said you would cancel me last?

Nvidia: I lied
