JEDEC lets RAM makers bake 12-layer, 24-GB HBM cakes


Here's a topic that we haven't talked about in a while. The JEDEC standards body has taken a pen to the JESD235 HBM standard, scratched out some figures, and put in higher, juicier ones. Enter the JESD235B revision, whose main claim to fame is support for 12-high TSV (through-silicon via) stacks. Along with that improvement, the total per-stack transfer rate is now 307 GB/s, up from the prior revision's 256 GB/s.


AMD Vega 10 GPU flanked by HBM chips

The JESD235B standard uses the same 1024-bit-wide bus as before. However, the per-pin data rate is now 2.4 Gb/s, a boost that translates into the total 307 GB/s available to a single HBM package. On the way to making JESD235B-compliant HBM stacks, RAM makers can use two-, four-, eight-, or 12-layer configurations.
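The bandwidth figure falls straight out of the bus width and per-pin rate. A quick sketch of that arithmetic (the numbers come from the standard as described above; the variable names are ours):

```python
# Back-of-the-envelope check of JESD235B per-stack bandwidth.
bus_width_bits = 1024   # interface width, unchanged from earlier revisions
per_pin_gbps = 2.4      # new per-pin data rate in Gb/s

total_gbps = bus_width_bits * per_pin_gbps   # aggregate rate in Gb/s
total_gBps = total_gbps / 8                  # convert bits to bytes

print(f"{total_gBps} GB/s per stack")        # 307.2 GB/s
```

The same math with the old 2.0 Gb/s per-pin rate yields the previous 256 GB/s ceiling.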

The higher maximum layer count, and the flexibility in choosing it, means that memory makers should be able to build HBM stacks as large as 24 GB in total. We figure that one of the usual suspects in the RAM world will craft such a legendary item sooner or later.
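That 24-GB ceiling implies 16-Gb DRAM dies in a 12-high stack; the per-die density is our inference, not something the press release spells out:

```python
layers = 12              # maximum stack height under JESD235B
die_capacity_gbit = 16   # assumed Gb per DRAM die (consistent with 24-GB stacks)

stack_gbytes = layers * die_capacity_gbit / 8  # bits to bytes
print(f"{stack_gbytes} GB per stack")          # 24.0 GB
```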

The JEDEC press release lets out no further details about who or what is going to use these chips, but the prospect is certainly enticing. We wouldn't be all that surprised to see high-end compute accelerators using this refreshed take on HBM in time. Thanks to TR tipster SH SOTN for the heads-up.
