JEDEC lets RAM makers bake 12-layer, 24-GB HBM cakes

Here's a topic that we haven't talked about in a while. The JEDEC standards body has taken a pen to the JESD235 HBM standard, scratched out some figures, and put in higher, juicier ones. Enter the JESD235B revision, whose main claim to fame is support for 12-high TSV (through-silicon via) stacks. Along with that improvement, the total per-stack transfer rate is now 307 GB/s, up from the previous revision's 256 GB/s.

AMD Vega 10 GPU flanked by HBM chips

The JESD235B standard uses the same 1024-bit-wide bus as before. However, the per-pin data rate is now 2.4 Gb/s, a boost that translates into the 307 GB/s total available to a single HBM package. On the way to making JESD235B-compliant HBM stacks, RAM makers can use two-, four-, eight-, and 12-layer configurations, too.

The higher maximum layer count and the flexibility in stack heights mean that memory makers should be able to build HBM stacks as large as 24 GB apiece. We figure that one of the usual suspects in the RAM world will craft such a legendary item sooner or later.
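
For the curious, the headline numbers fall out of simple arithmetic. Here's a quick back-of-the-envelope sketch in Python; the 16-Gb-per-layer die density is our assumption, since smaller dies would yield smaller stacks:

    # Back-of-the-envelope figures for a JESD235B HBM stack
    bus_width_bits = 1024   # per-stack interface width
    pin_rate_gbps = 2.4     # per-pin transfer rate in Gb/s

    bandwidth_gb_per_s = bus_width_bits * pin_rate_gbps / 8
    print(f"Per-stack bandwidth: {bandwidth_gb_per_s:.1f} GB/s")  # ~307.2 GB/s

    layers = 12             # 12-high TSV stack
    die_density_gbit = 16   # assumed 16-Gb DRAM die per layer

    capacity_gbyte = layers * die_density_gbit / 8
    print(f"Per-stack capacity: {capacity_gbyte:.0f} GB")         # 24 GB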

The JEDEC press release lets out no further details about who or what is going to use these chips, but the prospect is certainly enticing. We wouldn't be all that surprised to see high-end compute accelerators using this refreshed take on HBM in time. Thanks to TR tipster SH SOTN for the heads-up.

Comments closed
    • Chrispy_
    • 10 months ago

    Is HBM2 cost / availability improving yet?

    I still see it only on Vega and lolspensive compute cards from Nvidia. GDDR6 seems to have won the battle, if not the war….

      • Krogoth
      • 10 months ago

      Actually, GDDR6 is pretty much the dead end for GDDRx. HBMx is the future for ultra-high-bandwidth applications (pretty much HPC/general compute, not graphics).

      The main problem with HBM is that the majority of GPU SKUs don’t need the bandwidth yet. GDDR6 isn’t exactly that much cheaper either, for different reasons (you need tons of PCB layers and tracing to handle those insane clockspeeds, and power consumption is becoming more and more of an issue). The memory cartel also has much more at stake in GDDRx, so they are much more reluctant to switch over en masse.

      The only winner is GDDR5 because it is so dang cheap and mature. It will likely continue to be used on lesser SKUs that don’t need the bandwidth that HBM or GDDR6 provide.

        • chuckula
        • 10 months ago

        GDDR6 wasn’t even supposed to exist. HBM or another stacked memory technology was supposed to displace GDDR entirely. The economics haven’t worked out in HBM’s favor yet, though. It’s most certainly cheaper to use GDDR6 in the large majority of consumer products right now, and you’ll see AMD adopt it with Navi; I’ll bet that Intel’s dGPUs will use it too.

          • Krogoth
          • 10 months ago

          GDDR6 is only marginally cheaper. It is the swan song of the GDDRx standard, which has been running into its own walls since GDDR5. This is why memory companies were exploring other technologies.

          HBMx’s problem is mostly that it is a solution looking for a problem in the consumer market. Cost is a secondary issue, which is typical for any new memory technology or standard. GDDR5 had the same problems back when it was young.

          GDDR5 is simply too cheap and good enough for the majority of GPU chips, while high-end stuff can go with either GDDR6 or HBM. Dedicated high-end GPGPU SKUs from both camps have already moved away from GDDRx.

      • dragontamer5788
      • 10 months ago

      HBM2’s issue is that it can’t practically offer less than 256GB/s of bandwidth. So HBM2 can give you ~256GB/s (see the combination Intel + Vega chip), or it can give you ~512GB/s when you go with dual HBM stacks (Vega / lolspensive NVidia cards). The next stop is 768GB/s (3-stack) and then 1TB/s (4-stack).

      But what do you do for your graphics cards which only need ~100GB/s bandwidth?

      In effect, HBM2 is too fast / expensive for lower-end cards. DDR4 is ~25.6GB/s per channel, so you’d need four channels of DDR4-3200 to reach 100GB/s. That’s impractical as well.

      This is where GDDR6 comes in: ~40GB/s to 56GB/s per chip. To reach 100GB/s on low-end cards, you only need 2x GDDR6 chips. Midrange is still practical at ~4x to 6x chips (200GB/s to 300GB/s).

      Going 12x GDDR6 instead of 2x HBM2 probably has power-efficiency issues, but it might still be cheaper overall, since PCBs are way cheaper to design than interposers.
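
      A quick script makes the scaling obvious; the 14 Gb/s GDDR6 data rate and 2.0 Gb/s HBM2 pin rate here are just my assumptions based on typical parts:

          # Rough per-card bandwidth for GDDR6 vs. HBM2 configurations
          gddr6_chip = 14 * 32 / 8      # assumed 14 Gb/s pins on a 32-bit chip = 56 GB/s
          hbm2_stack = 2.0 * 1024 / 8   # assumed 2.0 Gb/s pins on a 1024-bit stack = 256 GB/s

          for chips in (2, 4, 6, 12):
              print(f"{chips:2d}x GDDR6: {chips * gddr6_chip:4.0f} GB/s")
          for stacks in (1, 2, 3, 4):
              print(f"{stacks:2d}x HBM2:  {stacks * hbm2_stack:4.0f} GB/s")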

        • Chrispy_
        • 10 months ago

        That actually explains a lot, thanks.

        • Mr Bill
        • 10 months ago

        This post needs to be the topic of an article. Thank you for pointing this out.

    • albundy
    • 10 months ago

    bring on the $10,000 video cards because the new ram is in short supply as always.

    • Growler
    • 10 months ago

    Would this help SSK’s iTunes performance?

      • morphine
      • 10 months ago

      [quote<]Would this help SSK's [s<]iTunes[/s<] performance?[/quote<] I doubt that even this HBM would.

      • Neutronbeam
      • 10 months ago

      YES, YES IT WOULD GUIZE!

      • sweatshopking
      • 10 months ago

      IT TAKES LITERALLY FOREVER TO PUT SONGS ON MY ONEPLUS

    • Krogoth
    • 10 months ago

    Memory bandwidth, memory bandwidth everywhere.

    • dragontamer5788
    • 10 months ago

    Samsung’s Aquabolt HBM2 stacks were already running at 2.4 Gb/s per pin, or ~307GB/s per stack. I’m guessing Samsung just managed to get their chips written into the HBM2 standard.

      • ImSpartacus
      • 10 months ago

      That’s exactly how it probably worked. I mean, who do you think is writing the standards?

      But it’s not really malicious. We’ve got SK Hynix with 2.4Gbps HBM as well (but they didn’t have a savvy name to go with it!).

      • Antimatter
      • 10 months ago

        Those were 8-layer stacks.

        • JustAnEngineer
        • 10 months ago

        The record is apparently [url=http://www.guinnessworldrecords.com/world-records/greatest-number-of-layers-in-a-layer-cake<]260 layers[/url<], though I must say that those folks seemed to lack precision. [url=http://www.guinnessworldrecords.com/world-records/tallest-cake-<]This effort[/url<] looks more impressive for both scale and assembly skill.

    • Voldenuit
    • 10 months ago

    Is this what was going to be on the $750 BOM Vega that got cancelled?

      • chuckula
      • 10 months ago

      The cake was true.
      The Vega was the lie!

        • Mr Bill
        • 10 months ago

        [url=https://www.youtube.com/watch?v=bSq93Hsn0Bg<]Cut The Cake![/url<]

      • jts888
      • 10 months ago

      Given that RTG has tried to bump virtually everything besides the fixed-function throughput largely perceived to be bottlenecking GCN in games, this would actually not shock me completely. It seems that more geometry and rasterization throughput would need more than the 4 Shader Engines currently allowed though. 🙁
