Samsung fires up its foundries for mass production of GDDR6 memory

As cool and exciting as HBM might be, virtually all discrete graphics cards are still using GDDR5 or GDDR5X memory. That's not especially likely to change, either, given the relative complexity and cost of the ultra-wide and super-dense HBM. Most of the next generation of graphics cards is likely to continue using more traditional DRAM packages. Samsung just announced that it's begun mass production of 16-gigabit GDDR6 chips for that task.

Just in case anyone reading this site should struggle with basic arithmetic, these chips store 2 GB of information. That's half the capacity of the usual 4 GB HBM2 stack that we see, although Samsung does offer 8 GB HBM2 chips. Perhaps more interesting is the performance potential of the new DRAM packages. Samsung says they perform at up to 18 Gbps per pin. Multiplying by their 32-bit data path gives us a peak throughput of 72 GB/s on a single chip.
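
For anyone who wants to double-check that arithmetic, here's a minimal Python sketch. The 18 Gb/s pin rate and 32-bit data path come straight from Samsung's announcement; nothing else is assumed:

```python
# Peak per-chip bandwidth for Samsung's 16Gb GDDR6 parts.
pin_rate_gbps = 18  # Samsung's claimed 18 Gb/s per pin
pins = 32           # 32-bit data path per chip

peak_gbps = pin_rate_gbps * pins  # 576 Gb/s across the whole interface
peak_gbyte_per_s = peak_gbps / 8  # 8 bits per byte -> 72 GB/s

print(f"{peak_gbyte_per_s:.0f} GB/s per chip")  # prints "72 GB/s per chip"
```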

Samsung says its new RAM is fabricated on a "10nm-class" process. The new memory only takes 1.35 V to hit that scorching transfer rate, too, while typical GDDR5 requires 1.55 V to do its thing. The chaebol outright says that it expects this memory to feature in "next-generation graphics cards and systems," and we'd be surprised if it didn't show up aboard some fancy new graphics cards before long.

Comments closed
    • Mat3
    • 2 years ago

    GDDR6 is a QDR memory… couldn't they just make HBM3 QDR as well to double its bandwidth?

      • Freon
      • 2 years ago

      Probably at the cost of latency. HBM might have a latency advantage that isn't worth trading away for unnecessarily large raw bandwidth numbers, which have no real impact on performance if bandwidth is already high enough to service whatever chip it's attached to.

      And unnecessarily large bandwidth probably also costs a bit more. Even if it only costs $5 more per GPU, or another $X in R&D and validation, if the actual performance gain is zero you wouldn't do it.

        • Mat3
        • 2 years ago

        True, but GPUs are more tolerant of latency, and the main benefit would be halving the number of memory stacks, which has to be worth something given the current cost and complexity of HBM. Not really an issue now, but sometimes they increase the number of memory chips only for the bandwidth and not the capacity. A mid-range card like the RX 480, for example, doesn't need 8 GB, but that's what you get with a 256-bit bus.

      • psuedonymous
      • 2 years ago

      HBM gains its power and transceiver die area advantages from being a wide-but-simple interface. If you start pushing the per-pin speed up to what GDDR uses, then you lose those advantages and you may as well save a whole lot of money and use a wide GDDR interface in the first place.

    • strangerguy
    • 2 years ago

    My bet is on a GTX 2080 12GB using 6 of those chips on a 192-bit bus.

      • drfish
      • 2 years ago

      Pricing TBD but between $599 and $1399 depending on the value of [insert altcoin here].

      • freebird
      • 2 years ago

      Sounds more reasonable… but it is still going to be the most expensive GDDR6 available for most of 2018.

    • vonWolfhausen
    • 2 years ago

    Anyone have a quick bandwidth comparison of HBM1/2 and GDDR5/6? Google is a bit scatterbrained on the subject.

      • Chrispy_
      • 2 years ago

      Wikipedia: the AMD and Nvidia graphics chip lists. You can't really separate the VRAM from the context of the video card, because the bus width of the card and the frequency it can drive the VRAM at matter immensely.

      Some examples though,

      GTX 1070, GDDR5: 256-bit @ 8 GT/s = 256 GB/s
      GTX 1080, GDDR5X: 256-bit @ 10 GT/s = 320 GB/s
      Fury X, HBM1: 4096-bit @ 1 GT/s = 512 GB/s
      Vega 64, HBM2: 2048-bit @ 1.9 GT/s = 486 GB/s

        • psuedonymous
        • 2 years ago

        To drop in some more:

        GTX 1080 Ti, GDDR5X: 352-bit @ 11 Gbit/s/pin = 484 GB/s
        A hypothetical 8 GB GDDR6 card: 256-bit @ 18 Gbit/s/pin = 576 GB/s
        A hypothetical 10 GB GDDR6 card: 320-bit @ 18 Gbit/s/pin = 720 GB/s
        A hypothetical 11 GB GDDR6 card: 352-bit @ 18 Gbit/s/pin = 792 GB/s
        A hypothetical 16 GB GDDR6 card: 512-bit @ 18 Gbit/s/pin = 1,152 GB/s
        GV100, HBM2: 4096-bit @ 2 Gbit/s/pin = 1,024 GB/s
        A hypothetical 16 GB HBM2 card using Samsung's Aquabolt 8 GB stacks: 2048-bit @ 2.4 Gbit/s/pin = 614.4 GB/s
        A hypothetical 32 GB HBM2 card using Samsung's Aquabolt 8 GB stacks: 4096-bit @ 2.4 Gbit/s/pin = 1,228.8 GB/s
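
        Every number in both lists falls out of the same formula: bus width times per-pin rate, divided by eight bits per byte. Here's a minimal Python sketch (the helper name is ours, and the bus widths and pin rates are just the examples quoted in this thread, not confirmed product specs):

        ```python
        def card_bandwidth_gbyte_per_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
            """Peak memory bandwidth in GB/s: one data pin per bus bit, 8 bits per byte."""
            return bus_width_bits * pin_rate_gbps / 8

        # Reproducing a few of the figures quoted above:
        print(card_bandwidth_gbyte_per_s(352, 11))  # GTX 1080 Ti, GDDR5X -> 484.0
        print(card_bandwidth_gbyte_per_s(256, 18))  # hypothetical GDDR6  -> 576.0
        print(card_bandwidth_gbyte_per_s(4096, 2))  # GV100, HBM2         -> 1024.0
        ```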

    • Chrispy_
    • 2 years ago

    Wait, so we're looking at 8 of these on a mid-level 256-bit card for 576 GB/s?

    The closest equivalent today is the GTX 1080 with 256-bit GDDR5X, and that only manages 320 GB/s. Vega 64 only has 483 GB/s…

    I wonder what the catch is. GDDR5X has double the latency (along with higher cost and lower clocks), which means the net gain was only about a 25% performance increase over GDDR5. I'm assuming that GDDR6 is back to four transfers per clock instead of eight transfers like 5X, but I can't find any details on it. Four transfers is preferable from a clock-speed and latency standpoint at any rate.

      • Waco
      • 2 years ago

      The catch is cost – you’re talking about 16 GB of VRAM there.

      • freebird
      • 2 years ago

      I don't believe a mid-level card will be using 8 of these chips… that would be 16 GB of memory. But since Micron is listing an 8Gb version of GDDR6 (16 Gbps), you could get 512 GB/s.

      We'll probably have to wait and see how the latency part plays out in working cards, but I think it will probably turn out to be about the same latency as GDDR5 with double the bandwidth.

      Micron has a nice PDF on the details of their GDDR5/GDDR5X/GDDR6 products here:
      https://www.micron.com/~/media/documents/products/technical.../tned03_gddr6.pdf

      HBM2 will be coming out this year from both Samsung & SK Hynix @ 2.4 Gbps per pin, or 307.2 GB/s per stack. So you could see cards with 600+ GB/s running HBM2 (2 stacks) this year.

      https://www.anandtech.com/show/12301/samsung-starts-production-of-hbm2-aquabolt-memory-8-gb-24-gbps
      https://techreport.com/news/33101/samsung-juices-its-hbm2-to-2-4-gt-s-and-names-it-aquabolt
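
      The per-stack math works the same way as the per-chip math in the article; here's a minimal sketch, assuming only the standard 1,024-bit HBM2 stack interface and the 2.4 Gbps Aquabolt pin rate from the links above (the helper name is ours):

      ```python
      HBM2_STACK_WIDTH_BITS = 1024  # every HBM2 stack exposes a 1,024-bit interface

      def hbm2_bandwidth_gbyte_per_s(stacks: int, pin_rate_gbps: float) -> float:
          """Peak bandwidth of an HBM2 configuration, in GB/s."""
          return stacks * HBM2_STACK_WIDTH_BITS * pin_rate_gbps / 8

      print(hbm2_bandwidth_gbyte_per_s(1, 2.4))  # 307.2 GB/s per Aquabolt stack
      print(hbm2_bandwidth_gbyte_per_s(2, 2.4))  # 614.4 GB/s, the "600+ GB/s" two-stack card
      ```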

        • Chrispy_
        • 2 years ago

        In response to both you and Waco, what’s so hard to believe about 16GB graphics cards? 4GB cards became mainstream in 2012, 8GB cards in 2015, and 2018 is about when you should be expecting 16GB cards.

        Upper midrange cards have had a 256-bit memory bus for the better part of 11 years (G92 from Nvidia, RV770 from AMD). The fact that a 256-bit bus requires 8 chips is what determines how much RAM a new graphics card will have. Initially, only 2 GB chips will be available, meaning that GDDR6 will appear on 256-bit 16 GB cards, 192-bit 12 GB cards, and perhaps even 128-bit 8 GB cards if GDDR6 is affordable enough to trickle down to the GV106 and the equivalent AMD Polaris replacement.
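
        That relationship between bus width, chip count, and capacity is mechanical; here's a small sketch, assuming one 32-bit GDDR6 package per 32 bits of bus and Samsung's 16Gb (2 GB) parts:

        ```python
        GDDR6_CHIP_WIDTH_BITS = 32  # each GDDR6 package presents a 32-bit interface
        GDDR6_CHIP_CAPACITY_GB = 2  # Samsung's 16Gb chips hold 2 GB apiece

        def card_capacity_gb(bus_width_bits: int) -> int:
            """VRAM that falls out of a given bus width with one rank of 2 GB chips."""
            chips = bus_width_bits // GDDR6_CHIP_WIDTH_BITS
            return chips * GDDR6_CHIP_CAPACITY_GB

        for bus in (256, 192, 128):
            print(f"{bus}-bit bus -> {card_capacity_gb(bus)} GB")
        # 256-bit bus -> 16 GB, 192-bit bus -> 12 GB, 128-bit bus -> 8 GB
        ```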

          • freebird
          • 2 years ago

          COST. You stated a mid-level card… 16 GB for a high-end card? Much more likely.
          8 GB is mostly overkill on the current Polaris cards, but since they have a 256-bit bus it's pretty much 4 or 8 GB. The GTX 1060, with its 192-bit bus, is better off running 6 GB. That saves a little on memory cost, and it's a nice sweet spot for a mid-level card's gaming memory requirements given its GPU capabilities. Maybe there will be an "explosion" of game developers pushing more pixels & textures, but 16 GB for mid-level? I just don't see it this year.

          Additionally, going forward, HOPEFULLY we'll see a game engine developed that supports DX12 multi-adapter out of the box… but just getting games running on DX12 seems to be taking time, so we'll probably be waiting until 2020 for multi-adapter DX12 gaming.

          https://wccftech.com/directx-12-multiadapter-technology-discrete-integrated-gpus-work-coherently-demo-shows-big-performance-gains/
          https://developer.nvidia.com/explicit-multi-gpu-programming-directx-12

            • Chrispy_
            • 2 years ago

            I think you're getting confused by what a mid-level card is. I'm talking upper-mainstream, which for Nvidia has traditionally been the xx104 chips. It's not their top product; the xx102 and xx100 chips sit above it.

            Also, where are you getting the crazy notion that GDDR6 is really expensive? Sure, it's brand new and will carry a premium over existing GDDR5, but it's a replacement for affordable GDDR5 and is being touted as a cheaper, more accessible option than HBM2 or GDDR5X.

            • freebird
            • 2 years ago

            Maybe I am, or maybe you are… mid-level is now "upper mainstream," whatever that means… I would call Polaris and GTX 1060s mid-level.

            That’s opinion.

            And nowhere did I state GDDR6 is "really expensive," but for a mid-level card, memory costs have to be considered. Samsung's top-of-the-line GDDR6 running at 18 Gbps per data line is definitely going to cost more than Micron/SK Hynix GDDR6 running at slower speeds, especially since those companies don't even list GDDR6 at those speeds yet.

            • Chrispy_
            • 2 years ago

            You're really trying to be pedantic here and derail the discussion for some reason, but it's not rocket surgery, ffs.

            Nvidia and AMD have a portfolio of graphics cards that feature a limited number of discrete GPUs every generation. Since Nvidia have more than double the market share of AMD, I’ll use their current generation as the example, but this pattern hasn’t changed for the best part of two decades, okay?

            GP100 high end.
            GP102
            GP104
            GP106
            GP107
            GP108 low end.

            If you wanted to be truly pedantic you could argue that GP107 is mid-level, because it has stuff above and below it in the product stack. For the vast majority of interpretations, though, either GP106 or GP104 is mid-level, and that applies regardless of whether you are talking about price, performance, numbering, portfolio position, or popularity in the Steam surveys.

            Sure, I don't think it's realistic to assume that Nvidia will launch GV107 with GDDR6. What I'm saying is that these first GDDR6 chips will likely go into the first consumer Volta products, which are likely (but not certainly) going to be the GV104-based cards. More powerful cards will follow, just as they have done for years, as will smaller, cheaper cards. In fact, the only time Nvidia has deviated from this recipe in the last 15 years was when it released Maxwell out-of-generation at the end of the 700 series, rather than with the GM204.

            • freebird
            • 2 years ago

            And you are being obtuse. I'm not derailing any discussion: 16 GB of the fastest GDDR6 that will be available in 2018 is not going to be on a mid-level card.

            By your logic, the 1070 should have been released running GDDR5X.

            As I stated, this memory (Samsung 16Gbit, 18Gbps) is going to be the MOST expensive GDDR6 available. It WILL NOT be on a mid-level card with 16GB; not this year.

            Look me up at the end of the year.

            But feel free to change your statement AGAIN and say you were talking about some mid-level card circa 2019 or 2020.

          • Waco
          • 2 years ago

          You asked for the catch, the catch is cost thanks to DRAM prices being stupidly high (which I have to assume trickles down to affect GDDR pricing). I’d love to see 16 GB become somewhat mainstream for VRAM.

          • freebird
          • 2 years ago

          Samsung is NOT the only company going to make GDDR6. These Samsung 16-gigabit GDDR6 chips, with 72 GB/s of bandwidth per chip, are going to be the CREAM of the GDDR6 crop…

          Micron is only listing 8Gb chips running from 12-14 Gb/s,
          and
          "The initial GDDR6 chips from SK Hynix will have an 8 Gb capacity and will feature 12 and 14 GT/s data transfer rates at 1.35 V."
          https://www.anandtech.com/show/11398/sk-hynix-advances-graphics-dram-gddr6-added-to-catalogue-gddr5-gets-faster

          This tells me these Samsung 16Gb chips running at 18 Gb/s are going to be top-of-the-line GDDR6, which will be priced as such by Samsung… they still have to make money on the 10nm process these are produced on. Micron & SK Hynix will probably be using 20nm-16nm processes for their GDDR6 initially.

    • NTMBK
    • 2 years ago

    I feel the need…

    The need for speed!

      • vonWolfhausen
      • 2 years ago
      • Srsly_Bro
      • 2 years ago

      The need to make dumb dumb posts too, but you didn’t disclose that…

        • Shobai
        • 2 years ago

        Bro, srsly?
