Hynix slides tease vertically stacked memory with 256GB/s of bandwidth


— 6:00 AM on September 30, 2014

High-Bandwidth Memory, otherwise known as HBM, is a form of stacked DRAM designed to sit on the same package as a processor. Hynix has been working on the technology with AMD, and there's already a JEDEC standard governing the interface. Now, official-looking Hynix presentation slides linked on Reddit provide new insight into how this stacked memory works—and what the future holds.

According to the slides, Hynix's first-gen implementation stacks four DRAM dies on top of a single base layer. The dies are linked by vertical channels called through-silicon vias. By my count, there are 256 of those per slice, each one capable of transmitting at 1Gbps. Across four slices, that adds up to a 1024-bit interface, giving the four-way KGSD, or Known Good Stacked Die, a staggering 128GB/s of total bandwidth. For perspective, consider that the memory interface on the GeForce GTX 750 Ti tops out at just 86GB/s.
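For those who want to check that figure, the arithmetic is simple enough to sketch in a few lines of Python. The only assumption here is that all 256 channels on each of the four slices carry data in parallel at the quoted 1Gbps:

```python
# Back-of-the-envelope HBM bandwidth, assuming every channel moves data in parallel.
def stack_bandwidth_gb_per_s(dies, channels_per_die, gbps_per_channel):
    """Total stack bandwidth in GB/s: interface width times per-pin rate, over 8 bits per byte."""
    interface_width = dies * channels_per_die   # 4 x 256 = 1024 bits
    return interface_width * gbps_per_channel / 8

# First-gen HBM as described in the slides: four dies, 256 channels each, 1Gbps per channel.
print(stack_bandwidth_gb_per_s(4, 256, 1))  # 128.0 GB/s
```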

Hynix is currently layering 256MB dies to form 1GB stacks. The presentation indicates this is only the beginning, though. Hynix plans to push HBM through 2022, and improvements in density and performance are on tap for the next iteration. The slides say the second coming of HBM will double the transfer rate per pin to 2Gbps, pushing the interface to 256GB/s—more bandwidth than even the high-end GeForce GTX 980 and its 224GB/s.
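Run the same back-of-the-envelope math with the doubled per-pin rate, and the 256GB/s figure falls right out:

```python
# Second-gen HBM per the slides: same 1024-bit stacked interface, twice the per-channel rate.
interface_width = 4 * 256          # four dies, 256 channels each
print(interface_width * 2 / 8)     # 2Gbps per channel -> 256.0 GB/s
```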

The next generation will supposedly move to 1GB dies, enabling 4GB stacks. It looks like eight-die configs will also be an option, raising the maximum capacity to 8GB. Doubling the die count doesn't appear to increase the stack's total bandwidth, perhaps due to limitations in the base layer.
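Capacity scales with simple multiplication, while bandwidth apparently stays pinned to the same interface. Here's a quick sketch of both, assuming (as the slides seem to imply) that the eight-die config keeps the 1024-bit, 2Gbps link to the base layer:

```python
# Second-gen capacity: 1GB dies in four- or eight-die stacks.
print(4 * 1)         # 4GB stack
print(8 * 1)         # 8GB stack with an eight-die config

# Bandwidth doesn't budge if the base-layer interface stays at 1024 bits and 2Gbps.
print(1024 * 2 / 8)  # still 256.0 GB/s
```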

Although there's no mention of specific products using HBM, the slides assert that "over 21 design-ins [are] in progress." One of those could be AMD's upcoming Carrizo APU, which is rumored to have stacked, on-package memory.
