Electronics industry standards body JEDEC released an update to the High-Bandwidth Memory (or HBM) standard today. The updated standard, JESD235A, allows for two-high, four-high, or eight-high HBM stacks with capacities ranging from one to eight gigabytes per stack. While this updated HBM standard uses the same 1024-bit-wide, eight-channel interface per stack as the first generation of HBM, each stack can now support transfer speeds up to 256 GB/s. If you're not sure what any of that means, check out our HBM overview.
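Those headline numbers imply a per-pin data rate we can derive ourselves. This is our back-of-the-envelope arithmetic, not a figure quoted by JEDEC:

```python
# Sanity-checking JESD235A's per-stack numbers (our derivation, not JEDEC's).
interface_width_bits = 1024        # pins per stack (eight 128-bit channels)
stack_bandwidth_gb_s = 256         # GB/s per stack under the updated standard

interface_width_bytes = interface_width_bits // 8             # 128 bytes per transfer
transfer_rate_gt_s = stack_bandwidth_gb_s / interface_width_bytes  # transfers per ns
per_pin_gbit_s = transfer_rate_gt_s                           # each pin carries 1 bit/transfer

print(transfer_rate_gt_s, per_pin_gbit_s)
```

In other words, hitting 256 GB/s over a 1024-bit interface means each pin runs at 2 Gb/s, double the 1 Gb/s of first-gen HBM.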
This update is exciting news for next-generation graphics cards and any other products paired with HBM. The first major product with HBM onboard, AMD's Radeon R9 Fury X, uses four HBM stacks that give it an effective 4096-bit path to memory. By our calculations, each of those first-gen HBM stacks has 128 GB/s of bandwidth on tap, so the Fury X has a theoretical 512 GB/s of memory bandwidth available. A similar product with four stacks of this next-generation HBM could enjoy theoretical bandwidth of a terabyte per second or so, if our back-of-the-napkin math is correct. Let that sink in for a moment.
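For the curious, that back-of-the-napkin math looks like this. The four-stack configuration is an assumption borrowed from the Fury X, not anything a future product has confirmed:

```python
# Four-stack bandwidth math: first-gen HBM (as on the Fury X) vs. JESD235A HBM.
STACKS = 4                   # assumes a Fury-X-style four-stack layout
GEN1_PER_STACK_GB_S = 128    # GB/s per first-gen HBM stack
GEN2_PER_STACK_GB_S = 256    # GB/s per stack under JESD235A

fury_x_gb_s = STACKS * GEN1_PER_STACK_GB_S      # 512 GB/s
next_gen_gb_s = STACKS * GEN2_PER_STACK_GB_S    # 1024 GB/s, i.e. about 1 TB/s

print(fury_x_gb_s, next_gen_gb_s)
```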
JEDEC also says the JESD235A standard adds a "pseudo-channel architecture to improve effective bandwidth," as well as a feature to alert controllers when the temperature of the DRAM exceeds safe levels.
These numbers square well with what we know about Nvidia's next-generation Pascal GPU, too. Nvidia has always depicted that chip with four stacks of "3D memory," and it has already said Pascal would come with up to 32GB of RAM onboard. The company has also claimed that the chip's memory bandwidth would be roughly three times that of its Maxwell GPUs. Going by the 336 GB/s theoretical bandwidth of the GeForce Titan X, that claim makes more sense now, as does the 32GB capacity figure.
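Running those claims against the JESD235A numbers shows how closely they line up. Again, this is our arithmetic, not anything Nvidia has published:

```python
# Checking Nvidia's Pascal claims against the JESD235A figures (our math, not Nvidia's).
titan_x_gb_s = 336                        # GB/s, GeForce Titan X theoretical bandwidth
pascal_claim_gb_s = 3 * titan_x_gb_s      # 1008 GB/s, per the "3x Maxwell" claim

four_stacks_gb_s = 4 * 256                # 1024 GB/s from four JESD235A stacks
max_capacity_gb = 4 * 8                   # four eight-high stacks at 8GB each = 32GB

print(pascal_claim_gb_s, four_stacks_gb_s, max_capacity_gb)
```

A claimed 1008 GB/s versus a theoretical 1024 GB/s from four stacks is about as close a match as these things get.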