SK Hynix fires up its foundries for 16 Gb/s GDDR6

In some circles, High-Bandwidth Memory (HBM) has already been crowned as the graphics card memory of the future. However, it's not yet time to forget about trusty ol' GDDR. SK Hynix has announced that it will produce GDDR6, the successor to the widely used GDDR5 and GDDR5X, and plans to have the chips available in consumer products sooner than many might expect.

SK Hynix is calling its product the world's fastest 8Gb-capacity DRAM. Considering that its chips operate at 16 Gb/s per pin, that claim might not be bluster, seeing as GDDR5X tends to cap out at 12 Gbps in practice. Samsung might have cause to dispute SK Hynix's claim for first place in the speed ranking, given that it's already working on GDDR6 as well. However the footrace between the companies shakes out, SK Hynix says that users should expect GDDR6 to double the speed of GDDR5 while operating at 10% lower voltage.

So what kind of memory bandwidth will GDDR6 enable in future products? The company is being understandably coy about specifics, but it did tease an upcoming graphics card with a 384-bit memory bus that offers up to 768 GB/s of theoretical bandwidth. For the sake of comparison, Nvidia's top-of-the-line Titan Xp also has a 384-bit memory bus and delivers 547.7 GB/s of theoretical RAM bandwidth.
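
For the curious, the arithmetic behind those figures is straightforward: theoretical bandwidth is just the bus width in bytes multiplied by the per-pin data rate. Here's a minimal sketch in Python, using the numbers above:

    def theoretical_bandwidth_gb_s(bus_width_bits: int, per_pin_gbps: float) -> float:
        # GB/s = (bus width in bits / 8 bits per byte) * per-pin rate in Gb/s
        return bus_width_bits / 8 * per_pin_gbps

    print(theoretical_bandwidth_gb_s(384, 16.0))   # 768.0 -- the teased 384-bit GDDR6 card
    print(theoretical_bandwidth_gb_s(384, 11.4))   # ~547 -- Titan Xp's 11.4 Gb/s GDDR5X (Nvidia's spec of 547.7 GB/s comes from the exact memory clock)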

That said, SK Hynix won't steal a business partner's thunder by announcing a new consumer graphics card in one of its press releases. The company did sketch out a few of its own plans. It will start mass producing GDDR6 this year and says that consumers will see the chips in a high-end graphics card in early 2018. SK Hynix says that it's been collaborating with a "core graphics chipset client," and hopes that its GDDR6 chips will gain acceptance among graphics card vendors as replacements for existing GDDR5 and GDDR5X technologies.

HBM3 is also on the horizon, but considering that early estimates put it in mass production no sooner than 2019 or 2020, SK Hynix might have a window of opportunity for GDDR6 to be the top dog in the market.

Comments closed
    • synthtel2
    • 2 years ago

    I’m assuming this standardizes the QDR that makes GDDR5X tick. I could understand not changing the name there, since it shared so much with GDDR5, but I did kind of expect this to be called GQDR or something like that. Is it actually not QDR, or are things just going to be inaccurately named from here on out (or does anyone not under NDA even know yet)?

      • ImSpartacus
      • 2 years ago

      I still haven’t found any technical write-up as to how GDDR6 substantially differs from GDDR5X.

      They both have the same 16 Gbps long term target, so I can only assume that they both oughta be called “GQDR” or something to that effect (though I can’t claim to have enough technical understanding to make that distinction).

        • Klimax
        • 2 years ago

        IIRC different prefetch and a few other changes. AnandTech had a good write-up on it.
        [url<]http://www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps[/url<]

          • ImSpartacus
          • 2 years ago

          I appreciate you sharing that. That’s the article that posted Micron’s initial 14 Gbps plans (which ended up being too ambitious).

          I’m more interested in the difference between GDDR6 and GDDR5X. The difference between GDDR5 and GDDR5X is relatively well understood and documented, as you pointed out.

    • DPete27
    • 2 years ago

    Early 2018….Volta?

    Man. AMD got on a bad cycle with memory tech. They got in on HBM too early with Fiji. Released Polaris too early to take advantage of GDDR5X. Now they’ll launch Vega on HBM2 when (presumably) cheaper GDDR6 will be right around the corner.

      • ImSpartacus
      • 2 years ago

      The really sad thing is that Vega 10’s HBM2 setup will have less bandwidth than the 1080 Ti’s memory subsystem.

      It’s mostly just dick measuring, not a big enough difference to matter. But the principle of it makes me sad for AMD.

      Nvidia is just killing them with the efficiency of their recent architectures. Nvidia doesn’t need the extra power savings from HBM.

      It’s humiliating for AMD, who needs to chill their halo cards at ~50C with a water cooler to extract every ounce of power savings.

        • DPete27
        • 2 years ago

        To me it’s a cost issue. Props to AMD for using new memory tech, but HBM/HBM2 is still just too expensive for the consumer market. Their profit margins using HBM must be awful. If there’s one thing AMD sorely needs right now, it’s more profit. IMO, profit is most likely to affect R&D, and R&D is what keeps you competitive in these markets. Cutting profit margins compared to your competitors is not good for a long-term business model.

          • ImSpartacus
          • 2 years ago

          Yep, and they are debuting a pro card for only $1000. Hold me.

          • K-L-Waster
          • 2 years ago

          [quote<]Props to AMD for using new memory tech, but HBM/HBM2 is still just too expensive for the consumer market.[/quote<] I would question giving out props, actually. Using something newer or more advanced isn't inherently better -- it's only better if it improves something for the customer. If it costs too much or limits the amount of onboard memory, it darn well better give a measurable improvement in performance or else it's just using "new!!" to give the illusion of improvement.

            • Chrispy_
            • 2 years ago

            Well, HBM2 has been shipping for a while now (Teslas and GP100 cards) and even with very slow 1.4 Gb/s clocks it’s managing 720 GB/s at much lower power consumption than even GDDR5X could ever hope to achieve.

            By the time Vega launches we’re expecting 2 Gb/s clocks, so that should deliver 8GB of very low-latency, tight-timing memory with 512 GB/s of bandwidth.
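
            (For anyone checking the math: each HBM2 stack has a 1024-bit interface, so bandwidth is just stacks × 1024 bits ÷ 8 × the per-pin rate. A quick sketch, assuming GP100’s four stacks and two stacks on Vega:)

                def hbm2_bandwidth_gb_s(stacks: int, per_pin_gbps: float) -> float:
                    # Each HBM2 stack exposes a 1024-bit interface.
                    return stacks * 1024 / 8 * per_pin_gbps

                print(hbm2_bandwidth_gb_s(4, 1.4))  # 716.8 GB/s -- GP100's four stacks at 1.4 Gb/s (~720 GB/s)
                print(hbm2_bandwidth_gb_s(2, 2.0))  # 512.0 GB/s -- two stacks at 2 Gb/s, the expected Vega setup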

            GDDR6 might be “right around the corner” from Samsung and Hynix’s perspective but don’t expect to see it in anything us mere mortals can afford for another 18 months.

            • Kougar
            • 2 years ago

            Could say the same for HBM3 tech though, that’s much much farther out.

            AMD has been paying for joint development of HBM with SK Hynix since day one. They are currently getting deep into HBM3 development, and AMD doesn’t collect royalties on HBM2 parts that NVIDIA sells.

            HBM tech has advantages, but if AMD’s GPUs will not compete at the high-end what is the point of using more expensive HBM2 on them? Especially if NVIDIA can keep its flagship offerings on cheaper GDDR6 while doing it.

    • ImSpartacus
    • 2 years ago

    [quote<]Considering that its chips operate at 16 Gb/s per pin, that claim might not be bluster, seeing as [b<]GDDR5X theoretically caps out at 12 Gbps[/b<].[/quote<] Do you have a source for that? Micron stated here that GDDR5X has a "theoretical" limit of 16 Gbps, with initial efforts at 10-12 Gbps. [url<]http://www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps[/url<] Specifically, this slide: [url<]http://images.anandtech.com/doci/9883/micron_gddr5x_575px.png[/url<] Intuitively, this makes sense since GDDR5 was thought to cap out at 8 Gbps at that time and GDDR5X doubles everything.

      • morphine
      • 2 years ago

      We’ve adjusted the sentence in question. “In theory” wasn’t strictly correct. Even the Titan Xp uses 11.4 Gbps RAM, though. Thanks for the heads-up.

        • ImSpartacus
        • 2 years ago

        You’re right. [url=https://www.micron.com/products/dram/gddr?306=GDDR5X&show=true<]Micron's GDDR5X product catalog only has 10, 11 & 12 Gbps options for sale right now[/url<]. They probably could push it harder (remember, GDDR5 was supposed to top out at 8 Gbps, yet Samsung is shipping 9 Gbps stuff somehow), but it doesn't make sense to continue with old tech when SK Hynix and Samsung both have shiny GDDR6 coming in 2018.

          • morphine
          • 2 years ago

          At this point, it’s kind of moot, yeah. With two huge foundries making GDDR6 already…

            • ImSpartacus
            • 2 years ago

            That’s correct, though I think there’s some medium-term competitive value in Micron having GDDR5X in its lineup as a happy medium between the 8-9 Gbps of GDDR5 and the 14-16 Gbps of GDDR6. Lots of network equipment traditionally eats up high-bandwidth GDDR5 (and presumably will do the same with GDDR5X/GDDR6).

            And to be clear, there are [i<]three[/i<] huge foundries making (or getting very close to making) GDDR6. Weirdly, Micron is actually the most aggressive, [url=http://www.pcworld.com/article/3165301/components-graphics/driven-by-esports-micron-fast-tracks-superfast-gddr6-graphics-memory.html<]claiming they will begin shipping the stuff in 2017[/url<], though I wouldn't expect it to be rated much faster than 14 Gbps initially. We'll see what happens once they all start to sample later in the year.

      • Kougar
      • 2 years ago

      Aye, but David Kanter indicated that clocking GDDR5X higher had diminishing returns due to the underlying architecture. So it’s not the straight linear increase in performance that the clocks would lead us to expect, which is why GDDR6 is needed.

        • ImSpartacus
        • 2 years ago

        Do you have a source for Kanter’s thoughts on that?

          • UberGerbil
          • 2 years ago

          I recall he said it in the podcast, but I think he’s probably mentioned it in a discussion in the forums at realworldtech.com also.

          • Kougar
          • 2 years ago

          It was brought up twice in the TTR podcast with him. Kanter seemed rather eager to get into that specific discussion so I got a bit miffed when Jeff rode him off the subject the first time around. xD

    • ozzuneoj
    • 2 years ago

    Remember when a 3D accelerator (that’s what we used to call them) was advertised as having super fast EDO memory? Remember when DDR vs SDR was the way to tell the difference between a high end and a mainstream card? Remember Turbocache?

    … oh how times have changed.

      • cmrcmk
      • 2 years ago

      Remember when AGP came out and it was a killer feature for any gaming rig? And then when PCIe eclipsed it?

      • f0d
      • 2 years ago

      remember when having 16 colors was a big deal and video memory was measured in kilobytes?

      • chµck
      • 2 years ago

      member when ATI released the 9800 pro?

        • Goty
        • 2 years ago

        I still have my 9700 Pro in a box somewhere.

          • BurntMyBacon
          • 2 years ago

          I still have an All-In-Wonder (ATi 3D Rage II). I have memorialized it in a functional, but offline (and never really used) Win98SE box (K6 233MHz, 128MB RAM, VIA Apollo VP2 Chipset). Oh the nostalgia.

      • jihadjoe
      • 2 years ago

      So remember when fast memory meant a high-end graphics card?

      The more things change, the more they stay the same.

      • Neutronbeam
      • 2 years ago

      I can’t remember when I could remember when. 🙁

      • BurntMyBacon
      • 2 years ago

      [b<]Stop it! You are making me feel old for being around long enough to have explained to others the difference between FPM and EDO when EDO came out![/b<]

        • willmore
        • 2 years ago

        You’re making me feel old that I didn’t even hesitate when I read FPM and EDO and just translated them on the fly in my head. 🙁

      • psuedonymous
      • 2 years ago

      Remember when an AGA chipset was the shit, and we looked down on all the ECS and OCS peasants? DAT HAM-8 mode…

    • tsk
    • 2 years ago

    I wonder if this will be a low cost alternative to HBM2. Do you think HBM will be the norm in 2019?

      • ImSpartacus
      • 2 years ago

      2019? Maybe.

      For 2018, no.

      Nvidia is rumored to debut GV104 as early as Q3 2017. It’s generally understood that it’ll likely use 12 Gbps GDDR5X on a 256-bit bus for 384 GB/s of bandwidth. GV102 will probably follow in early-mid 2018 with 14-16 Gbps GDDR6 on a 384-bit bus for 672-768 GB/s of bandwidth.

      AMD will still be using Vega 10 and its 2 stacks of 2 Gbps HBM2 for 512 GB/s of bandwidth. It’s rumored that Vega 11 could be “half” of a Vega 10 with only one stack of HBM2 and a Polaris 10-like 256 GB/s of bandwidth. Vega 20 is the 2018 refresh of Vega 10, rumored to have 4 stacks of HBM2.
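
      (Quick back-of-the-envelope on those figures, treating the rumored configs above as assumptions:)

          # Bandwidth in GB/s = interface width in bits / 8 * per-pin rate in Gb/s.
          # HBM2 is modeled as 1024 bits per stack.
          rumored_configs = {
              "GV104, 256-bit 12 Gbps GDDR5X":   (256, 12.0),
              "GV102, 384-bit 14 Gbps GDDR6":    (384, 14.0),
              "GV102, 384-bit 16 Gbps GDDR6":    (384, 16.0),
              "Vega 10, 2 HBM2 stacks @ 2 Gbps": (2 * 1024, 2.0),
              "Vega 11, 1 HBM2 stack @ 2 Gbps":  (1 * 1024, 2.0),
          }

          for name, (width_bits, gbps) in rumored_configs.items():
              print(f"{name}: {width_bits / 8 * gbps:.0f} GB/s")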

      So 2018 will be a transition year for both AMD and Nvidia. 2019 is pretty early for a full transition to be finished.

      • cmrcmk
      • 2 years ago

      I expect that until they find a way to get rid of the interposer (unlikely) or greatly reduce its cost, HBMx will only power halo products.

      • DPete27
      • 2 years ago

      Nvidia is already using GDDR5X for a low cost alternative to HBM2, so….

        • ImSpartacus
        • 2 years ago

        GDDR5X is just a temporary stopgap that Micron came up with so they could make some dosh in 2016 & 2017 until GDDR6 ramps up from Sammy, Hynix & Micron in 2018.

        It was supposed to debut at up to 14 Gbps and eventually scale to 16 Gbps, but that didn’t happen, so it needs to be abandoned for a longer-term solution. It was still good enough, though, so mission accomplished for Micron.
