HBM3 and GDDR6 emerge fresh from the oven of Hot Chips

The Hot Chips conference is going strong in Cupertino, California, and the first juicy tidbits of information have started popping up from folks at the event. According to sites reporting from the conference, SK Hynix and Samsung talked about the new generation of High-Bandwidth Memory, aptly called HBM3.

HBM3 improves on the current-generation HBM2 in nearly every regard. It'll use dies with higher capacity and bandwidth. Each die should now pack 16 Gb (2 GB) per layer, meaning it's possible we'll eventually see graphics cards with up to 64 GB of HBM3 on board. As a point of reference, Nvidia's Tesla P100 accelerator currently uses just 16 GB of HBM2. The bandwidth increase is impressive, too: there'll be at least 512 GB/s per package on tap (up from HBM2's 256 GB/s), which translates into a potential aggregate bandwidth in the range of terabytes per second.
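
Those headline figures multiply out as follows. This is a back-of-the-envelope sketch: the 8-Hi stack height and the four-package card layout are assumptions for illustration, not announced specifications.

```python
# Rough HBM3 math from the figures above. Stack height and package
# count are assumptions, not announced specs.
GB_PER_LAYER = 2          # a 16 Gb die works out to 2 GB
LAYERS_PER_STACK = 8      # assumed 8-Hi stack
STACKS_PER_CARD = 4       # assumed four HBM3 packages per GPU

capacity_gb = GB_PER_LAYER * LAYERS_PER_STACK * STACKS_PER_CARD
aggregate_bw_gbps = 512 * STACKS_PER_CARD  # at least 512 GB/s per package

print(capacity_gb)        # 64 (GB) -- the ceiling cited above
print(aggregate_bw_gbps)  # 2048 (GB/s), i.e. about 2 TB/s
```

With those assumed numbers, the 64 GB capacity and "terabytes per second" aggregate bandwidth both fall out directly.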

There are power savings on offer, too, as HBM3 purports to offer a "much [lower]" core voltage than HBM2's 1.2V. Ars Technica notes that both Hynix and Samsung are already at work on the new memory type. There's no fixed date for mass production yet, but Samsung expects to produce the memory in volume between 2019 and 2020.

While high-end HBM is interesting, it also adds a hefty price premium to any product that includes it. Samsung has a couple of alternatives in the works, though. The company unveiled its plans for a mass-market "low-cost HBM," a variant of HBM2 with a lower bandwidth of about 200 GB/s per stack, achieved with a pin speed of approximately 3 Gb/s. (HBM2 tops out at 256 GB/s per stack.) Samsung says this memory type will come at a fraction of the price of HBM2, thanks to the removal or reduction of features like ECC, buffer dies, and TSVs (through-silicon vias).
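
The pin-speed and bandwidth figures line up if the low-cost variant uses a narrower interface than HBM2's 1024-bit one. A quick sketch of the arithmetic, where the 512-bit width is an assumption on our part:

```python
def stack_bandwidth_gbps(pin_rate_gbps, bus_width_bits):
    """Per-stack bandwidth in GB/s: pin rate (Gb/s) times bus width, over 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

# HBM2: 2 Gb/s pins on its 1024-bit interface -> the 256 GB/s ceiling
print(stack_bandwidth_gbps(2.0, 1024))  # 256.0

# Low-cost HBM: ~3 Gb/s pins on an assumed 512-bit interface lands
# near the quoted ~200 GB/s figure
print(stack_bandwidth_gbps(3.0, 512))   # 192.0
```

Fewer data pins would also mean fewer TSVs to drill, which is consistent with the cost-cutting measures Samsung describes.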

That's not all, though. Samsung also predicts it will begin mass production of GDDR6 (the successor to GDDR5X) come 2018. This memory type should offer 14 Gb/s per pin, up from GDDR5X's 12 Gb/s, along with lower power consumption. It should be noted, though, that GDDR5X chips in current-generation graphics cards don't run anywhere near that theoretical 12 Gb/s limit yet, so there's no telling what the effective speed of shipping GDDR6 parts will be.
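
For a sense of what those per-pin rates mean at the card level, here's the same arithmetic applied to a 256-bit bus, the width of the GTX 1080, which ships with its GDDR5X running at 10 Gb/s. The GDDR6-on-256-bit pairing is purely our assumption for comparison's sake.

```python
def card_bandwidth_gbps(pin_rate_gbps, bus_width_bits):
    # Total card memory bandwidth in GB/s for a given per-pin
    # data rate (Gb/s) and memory bus width (bits)
    return pin_rate_gbps * bus_width_bits / 8

# GDDR5X as it actually ships today: 10 Gb/s on a 256-bit bus
print(card_bandwidth_gbps(10, 256))  # 320.0 GB/s

# GDDR6 at its rated 14 Gb/s on the same (assumed) 256-bit bus
print(card_bandwidth_gbps(14, 256))  # 448.0 GB/s
```

Even without a wider bus, hitting the rated pin speed would be a 40% jump over today's shipping GDDR5X cards.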

Comments closed
    • Mat3
    • 3 years ago

    Capacities per chip are getting to the point where some uses wouldn’t even need to stack the memory (the cheaper version mentioned). Would take more area so bigger interposer, but should easily more than make up for that with higher speeds, better yields and easier cooling on the memory itself.

    • brucethemoose
    • 3 years ago

    Whatever happened to that “Wide I/O” standard Samsung was working on?

    Smartphones/Tablets seem like a perfect place for stacked memory, given their current bandwidth, power and space restrictions.

    • aspect
    • 3 years ago

    How much does HBM even cost per a die?

    • ronch
    • 3 years ago

    "Samsung expects to produce the memory in volume between 2019 and 2020."

    I clicked on this news article intending to post a sarcastic comment about seeing these newfangled stuff in the year 2020, then I saw this. So yeah, I'm psychic.

    • CuttinHobo
    • 3 years ago

    When we get 64GB graphics cards, can we install Batman Arkham Knight in the card’s onboard memory so it finally runs acceptably? D:

      • LostCat
      • 3 years ago

      It already does, just loaded it up yesterday.

    • rudimentary_lathe
    • 3 years ago

    I’m all for rapid tech improvement, but I’d also like the option to own some of this new stuff at a reasonable price at some point. As far as I know HBM2 has not been put into a single GPU yet, and HBM1 was only featured in the Fury cards. At least GDDR5X is out there already at a decent price and in real products. It seems premature to be talking about HBM3.

      • Airmantharp
      • 3 years ago

      GDDR6 will arrive when it’s ready/needed, replacing previous versions either due to price/performance or due to older versions simply not being made any more.

      HBM (any version) is still a luxury item, and really a solution in search of a problem given the work Nvidia (who makes the fastest parts) has done to minimize memory bandwidth needs. Who knows when they’ll actually be held back by 384bit GDDR5X?

        • Waco
        • 3 years ago

        HBM for graphics cards, yes, solution in search of a problem.

        In HPC? HBM isn’t fast enough, still!

      • danazar
      • 3 years ago

      That’s not how technology really works. High performance solutions don’t always get cheaper as time goes on, especially not within a generation. HBM2 is expensive because it’s complex, and GDDR5X is fast enough for consumer use and will always be cheaper to produce.

      If you want cheaper HBM, the way to get there is for it to go through a few iterations while fabbers learn how to lower costs. I mean, this very article mentions that Samsung is working on lower-cost HBM designs, which will be new designs. If you want something you can afford, it’ll come, but not as HBM2.

    • Chrispy_
    • 3 years ago

    I feel so bad for AMD. It put all the work into HBM and launched Fiji, a product that really didn’t benefit from extra memory bandwidth at all.

    Nvidia and Samsung are now running with the investment and AMD still have nothing to show for it other than a whole bunch of invested man-hours that didn’t provide an ROI.

      • bittermann
      • 3 years ago

        These companies promise AMD the world and then cost overruns or low yields bite them big time. It's been that way for years, and AMD bites the bullet because they have no clout. It's sad because a lot of delays are beyond AMD's control, but being fabless means you're at the mercy of these chip manufacturers.

        • Airmantharp
        • 3 years ago

        Nvidia is fabless too- and those companies delivered for AMD in terms of performance, but AMD didn’t deliver a part that could make use of it.

          • bittermann
          • 3 years ago

          Agreed but they were not stuck with only Glo Fo for the longest time…Nvidia had more options and their main focus was on graphics as AMD was also pushing the cpu/apu bandwagon. Not trying to say AMD didn’t make BIG mistakes themselves just offering a different point.

            • Airmantharp
            • 3 years ago

            We’re talking about GPUs and HBM here, AMD has been stuck with TSMC the same way Nvidia has.

            CPUs are a whole different ballgame, and AMD has made far more mistakes there.

            (they are however quite lucky that GloFo is now using a process very similar to Samsung’s, opening up other options for capacity/performance/quality)

      • DPete27
      • 3 years ago

      "Fiji, a product that really didn't benefit from extra memory bandwidth"

      But that's exactly what makes me pessimistic about HBM2 and HBM3. The industry has already shown that, while memory bandwidth can help, it's certainly not something that's in dire need of growing by leaps and bounds. Compression seems to be handling bandwidth needs pretty nicely at the moment. Furthermore, aside from Fiji with HBM and the GTX 1080 with GDDR5X, there's still a large percentage of products that haven't even taken the first step toward increasing their bandwidth with parts that are already available.

      GDDR5X seemed like an awesome product, offering a substantial increase in bandwidth over GDDR5 at much lower cost than HBM. I suspect it just wasn't being produced in sufficient volume by the time the new generation of GPUs launched. Unfortunate.

      • ronch
      • 3 years ago

      Well, they do love to work on things and give them away. Should we rename HBM to ‘OpenGPUMemory’?

      • Mat3
      • 3 years ago

      Why mention Samsung? They'll be providing HBM memory chips, and so far only them and Hynix are doing it. For Nvidia, it sucks they get to come in like leeches and use it after AMD does much of the heavy lifting, but it's a necessary evil.

      Also, while Fiji could have been done with GDDR5, I’m certain the extra bandwidth helped in some cases, and aside from that, it probably helped even more with keeping power in check.

        • Airmantharp
        • 3 years ago

        The only thing that sucks is that AMD put all of the effort into getting HBM1 up and running for zero performance benefit (their biggest deficit, though it did help with TDP and size).

      • psuedonymous
      • 3 years ago

      “Nvidia and Samsung are now running with the investment and AMD still have nothing to show for it”

      Don’t forget that Nvidia and Samsung are ALSO part of the HBM Working Group. AMD are no lone wolf here.

    • maxxcool
    • 3 years ago

    “Mass Production” of unicorn hair…

    • ImSpartacus
    • 3 years ago

    Wait, gddr6?

    Is that just more mature gddr5x? I thought that gddr5x could eventually hit 16 Gbps (twice the max of gddr5 due to double the prefetch or something like that), it’ll take time.

    I mean, didn’t the jedec spec already have 14 Gbps…

    http://www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps

    But then Micron only sampled up to 12 Gbps...

    http://www.anandtech.com/show/10193/micron-begins-to-sample-gddr5x-memory

    Though only the 10 Gbps stuff was mass-produced...

    http://www.anandtech.com/show/10316/micron-confirms-mass-production-of-gddr5x-memory

    So what is this GDDR6? I feel like I'm missing something. :/

      • chuckula
      • 3 years ago

      GDDR6 will probably start at right around the top of GDDR5X’s range similar to how slow DDR4 (2133) is similar to fast DDR3.

      It’s interesting (maybe not in a good way) that more conventional memory technologies do not appear to be going away anytime soon even as we hear more about the fancier stacked memories.

        • maxxcool
        • 3 years ago

        Until we see stacked stuff on the CPU die itself, and on Memory modules in desktop and mid range cards it just is a super niche product.

        It isn't cheap, it's not trivial to produce, and nobody wants to risk bankrolling a 'dot-oh' memory config at super-mainstream production levels that would possibly catalyze the market for adoption.

          • the
          • 3 years ago

          Samsung’s announcement of stacked memory without an IO controller is probably targeted directly at systems where memory can be stacked directly above the logic die. Due to thermal dynamics, these are going to be low end/mobile chips.

        • the
        • 3 years ago

        The low end likely won’t be able to migrate to a stacked memory technology but HBM3 and organic interposers should be far more cost effective than current implementations. The result is likely HBM entering the more mainstream GPU market instead of premium enthusiast cards.

    • Neutronbeam
    • 3 years ago

    Can I get queso and salsa with those chips?

      • chuckula
      • 3 years ago

      You bloody yank.
      I want a proper malt for me chips!

        • Chrispy_
        • 3 years ago

        ‘Opp narf we’d knock yer bloody block off for not puttin’ gravy on them chips.

        Heathens, the lot of yer…..

          • chuckula
          • 3 years ago

          How much longer are we Poutine up with this?!?!

            • anotherengineer
            • 3 years ago

            If it’s Bacon Poutine, forever……….

            • tipoo
            • 3 years ago

            “You’re welcome”
            -Canada

    • chuckula
    • 3 years ago

    "The company unveiled its plans for a mass-market 'low-cost HBM,' a variant of HBM2 with a lower bandwidth of about 200 GB/s per stack, achieved with a pin speed of approximately 3 Gb/s. (HBM2 tops out at 256 GB/s per stack.) Samsung says this memory type will come at a fraction of the price of HBM2, thanks to the removal or reduction of features like ECC, buffer dies, and TSVs (through-silicon vias)."

    That will help, but another bonus would be packaging technologies that don't require a silicon substrate. Did they mention anything about HBM on lower-priced substrates?

      • morphine
      • 3 years ago

      There was mention of an “organic interposer” as a cost-saving measure. Dunno if that’s the same thing, though.

        • chuckula
        • 3 years ago

        Yes that is exactly what I was talking about. Thanks for the info.

        As a point of reference, Knights Landing uses an organic interposer with its HMC stacked memory.

        It will be interesting to see HBM with lower cost packaging.

          • the
          • 3 years ago

          The other benefit of an organic interposer is that they can be larger than silicon interposers. That permits some interesting design scaling when using multiple dies for logic.
