JEDEC makes GDDR5X official

Bandwidth ahoy! Hot on the heels of the news that Samsung is producing HBM2 chips, the JEDEC standards body has now published the specification for GDDR5X graphics memory, affectionately known as JESD232.

As the name implies, the new standard is based on the existing GDDR5 spec, but doubles the range of available bandwidth to 10-14 Gb/s per die. Although those numbers aren't quite as stratospheric as HBM2's ludicrous speed, they're certainly nothing to scoff at.

Since GDDR5X uses the same pseudo-open-drain (POD) signaling as its predecessor, it should be relatively easy for GPU makers to take advantage of it. According to earlier rumors, Nvidia and AMD may use GDDR5X in lower-end products and reserve HBM2 for higher-end cards. The release of this standard makes it more likely we'll see a mix of RAM types across next-gen graphics card lineups.
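For a rough sense of what those data rates mean for a whole card (reading them per I/O pin, as a commenter points out below), here's a minimal back-of-the-envelope sketch in Python. The 256-bit bus width and the 7 Gb/s GDDR5 baseline are illustrative assumptions rather than figures from the JESD232 document itself; the arithmetic is simply per-pin rate times bus width divided by eight.

    # Rough card-level bandwidth math (an illustrative sketch, not anything from the JESD232 document)
    def peak_bandwidth_gb_per_s(rate_gbps_per_pin, bus_width_bits):
        """Peak bandwidth in GB/s: per-pin data rate (Gb/s) times bus width (bits), divided by 8 bits per byte."""
        return rate_gbps_per_pin * bus_width_bits / 8

    # Hypothetical 256-bit card, for scale:
    print(peak_bandwidth_gb_per_s(7, 256))   # GDDR5 at its usual 7 Gb/s top bin -> 224.0 GB/s
    print(peak_bandwidth_gb_per_s(10, 256))  # GDDR5X low end                    -> 320.0 GB/s
    print(peak_bandwidth_gb_per_s(14, 256))  # GDDR5X top end                    -> 448.0 GB/s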

Comments closed
    • phileasfogg
    • 4 years ago

    >>>>> As the name implies, the new standard is based on the existing GDDR5 spec, but doubles the range of available bandwidth to 10-14 Gb/s per die.

    I believe that should say “per I/O pin”.

    • UnfriendlyFire
    • 4 years ago

    I envision four tiers of VRAM 3-6 years from now:

    High end: Stacked VRAM (ex: HBM)

    Mid-high: GDDR5X

    Mid-low: GDDR5 (that old GDDR5 inventory isn’t going to turn into profits overnight, you know?)

    Highest volume shipped: 2-4GB DDR3 chips

    Still MIA: Rambus’s VRAM chips

      • ImSpartacus
      • 4 years ago

      4gb ddr3 cards will live forever.

    • the
    • 4 years ago

    So as part of JESD [b<]232[/b<] we can expect speeds of 115,200 bits per second?

    • Chrispy_
    • 4 years ago

    This makes me happy but I suspect we’ll still see some nasty DDR3 options floating around on laptops and cheap graphics cards.

    Manufacturers are always keen to get the very last scrapings out of the bottom of the barrel, as long as it gets something onto a store shelf!

      • MathMan
      • 4 years ago

      It just creates more options at more price points. You make it sound as if that’s a bad thing.

        • UnfriendlyFire
        • 4 years ago

        I think he’s upset that DDR3 VRAM will not disappear anytime soon.

        I have seen a laptop that had an i7, a GT 610M, and 2GB of DDR3 VRAM. Such unbalanced performance; I’m sure the IGP would’ve been sufficient to match the GT 610M.

          • Star Brood
          • 4 years ago

          Stuff like that exists only to lure in novice customers enticed by the video card sticker. It’s almost like false advertising, but it is definitely false advertising when the Best Buy idiot tells them it’s great for gaming.

    • ltcommander.data
    • 4 years ago

    I wonder why they didn’t name it GDDR6. There must have been some heated discussion among GPU marketing teams about whether suffixes or higher numbers are catchier.

      • Airmantharp
      • 4 years ago

      Even-numbered GDDR generations suck more ;).

        • chuckula
        • 4 years ago

        You have a point, given that GDDR4 was the aptly named Sir Not Appearing In Practically Any Products.

        As for GDDR2, I don’t think it even existed on paper*, while GDDR4 actually sort of was a thing that just never took off because of GDDR5.

        * GDDR3 is actually a souped-up variation on regular DDR2, not DDR3.

          • Airmantharp
          • 4 years ago

          I don’t think you can really compare the GDDR generations directly with the DDR generations (some cards did use straight DDR/DDR2 in the early days, and you *can* compare those).

          For instance, GDDR2 didn’t last very long, and GDDR4 may have been based on DDR3; but GDDR and DDR have really diverged in terms of frequency and latency, as graphics cards don’t actually need low-latency RAM given the almost entirely in-order nature of their work, versus the out-of-order nature of CPUs.

            • MathMan
            • 4 years ago

            It’s a myth that SDDR and GDDR have diverged in terms of latency. That’s true in terms of clock cycles, but not in nanoseconds. For a CPU, which runs on a different clock than the memory anyway, it wouldn’t make any difference.

        • ImSpartacus
        • 4 years ago

        I hope hbm doesn’t have the opposite issue…

      • MathMan
      • 4 years ago

      The differences between GDDR5 and GDDR5X don’t seem to be too large. I found this page quite interesting: [url<]http://monitorinsider.com/GDDR5X.html[/url<]

    • DPete27
    • 4 years ago

    What does JEDEC actually do? It seems like they always just double bandwidth. What do they actually contribute?

    Do the publications (which I can’t download because you have to be a member) actually detail a design for said bandwidth increases, or do they just write on paper, “yup, you can push that Thunderbolt port to twice the bandwidth now, you have our blessing”?

      • Meadows
      • 4 years ago

      There must’ve been some sort of fundamental design bottleneck, since GDDR5 can’t reliably go far past 7 Gb/s no matter how you try to overvolt or overclock it. Even the cream of the crop topped out at around 7.2-7.4 Gb/s.

      • NeelyCam
      • 4 years ago

      [quote<]What does JEDEC actually do? It seems like they always just double bandwidth. What do they actually contribute? [/quote<]

      Doubling bandwidth is not at all trivial. Losses in wiring generally get worse when frequency or bandwidth is increased, requiring improved wiring (which has to be specified) or various signal equalization techniques to counter those losses.

      JEDEC members analyze and argue the merits of various approaches to increasing the bandwidth, and then finally agree to make one of them a standard specification. Those publications carefully detail the approach to be used to double the bandwidth, so that products are compatible with each other.

        • DPete27
        • 4 years ago

        So they do essentially create a blueprint for the new standard that fabricators use?

        The fact that doubling bandwidth isn’t trivial is exactly what makes me so suspicious that all they ever seem to do is double bandwidth from the previous generation. You’re telling me they couldn’t get it 2.5x better, etc.? It seems like an artificial limitation on technology.

        Examples:
        SATA 1.5Gbps to SATA 3
        GDDR5 to GDDR5X
        PCIe 2.0 to PCIe 3.0
        Thunderbolt 2 to Thunderbolt 3
        USB 2 to USB 3 was a bit of an outlier
        etc etc

        I understand the importance of industry standards to keep components interchangeable and ensure compatibility. I’m wondering what JEDEC’s contribution to the advancement is. Do they just review controller designs from outside firms that submit them and decide on the best one…or are they laying the groundwork themselves…or?

          • dslegend
          • 4 years ago

          You’re just f’ing with us right?

          • slowriot
          • 4 years ago

          JEDEC is a standards body; it’s literally composed of the various companies that engineer, manufacture, and use these standards.

          I mean, you could have just googled it and checked out the site. For instance, check out the members: [url<]https://www.jedec.org/about-jedec/member-list[/url<]

          • Kougar
          • 4 years ago

          JEDEC is an open standards organization composed of members from within the various industries it serves. Tech companies produce their own design(s), and JEDEC analyzes them to create a single official standard for the entire industry, which all devices and peripherals that connect to it need to adhere to so that everything works as originally intended.

          They do not “create” the technology; they standardize it. The companies that comprise JEDEC (like Intel) are the ones that create said technology. Intel created Thunderbolt, Micron created HMC, and AMD/Hynix are credited for HBM. So generally speaking there’s usually just a single design submitted, and JEDEC simply turns it into an official specification so everyone else can design products around it.

          But to be clear, JEDEC manages memory-related specifications only: mostly RAM and NAND and everything derived from or relating to them. SATA-IO is the standards body that oversees the SATA specifications. Things like the USB standards are created within their own groups/entities/owners, and future spec revisions are decided upon by the members of that entity.

            • DPete27
            • 4 years ago

            Thank you.

            • cygnus1
            • 4 years ago

            Doesn’t membership in JEDEC also include patent licensing? At the very least access to the same FRAND license costs as all the other members?

            • Kougar
            • 4 years ago

            It appears so: [url<]https://www.jedec.org/about-jedec/patent-policy[/url<]

      • spugm1r3
      • 4 years ago

      Imagine our entire country ran on electricity.

      Now imagine if every company had its own version of the 120V plug. If you’ve ever travelled abroad, you’ll understand.

      • Freon
      • 4 years ago

      Turns out there is a lot more to interconnect specs than the speed rating.

    • chuckula
    • 4 years ago

    Good to hear since while HBM2 is certainly the whiz-bang futuristic technology, it won’t be economically feasible for many mainstream parts for a while.

    If my back-of-the-napkin calculations are correct, then a 256-bit GDDR5X interface should deliver bandwidth in the 400-500 GB/sec range, which would put it a little below the current Fury X parts at 512 GB/sec, but not massively behind. That ought to be enough for the next-gen equivalent of a GTX-970/980 or Radeon R9-390.
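
    Spelled out, the napkin math is just the per-pin data rate times the bus width, divided by eight. A quick sketch (the 14 Gb/s figure is the top of the new range, the 384-bit bus is a hypothetical wider configuration, and the Fury X line uses the 512 GB/sec number above, which works out to its 4096-bit HBM interface at 1 Gb/s per pin):

    # Napkin math: peak bandwidth (GB/s) = per-pin rate (Gb/s) * bus width (bits) / 8
    print(14 * 256 / 8)   # 256-bit GDDR5X at 14 Gb/s        -> 448.0 GB/s
    print(14 * 384 / 8)   # hypothetical 384-bit GDDR5X bus  -> 672.0 GB/s
    print(1 * 4096 / 8)   # Fury X: 4096-bit HBM at 1 Gb/s   -> 512.0 GB/s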

      • cobalt
      • 4 years ago

      [quote<]Good to hear since while HBM2 is certainly the whiz-bang futuristic technology, it won't be economically feasible for many mainstream parts for a while.[/quote<]

      Serious question: while that's true right now (new tech, R&D costs, etc.), is there anything fundamental about HBM tech that makes it more expensive? I've been assuming that long term it would be more economical (e.g., it doesn't increase the expense via a bigger die size).

        • chuckula
        • 4 years ago

        [quote<]is there anything fundamental about HBM tech that makes it more expensive? [/quote<]

        From a per-chip memory production standpoint, not much, although stacking chips does require a little more finesse.

        However, from the perspective of assembling the full GPU + memory, there certainly are technical and economic drawbacks to HBM. Technologically, having to align both the GPU and the memory stacks to a high-precision silicon substrate with TSVs is a more complex process than BGA mounts on a regular circuit board. It can certainly be done, but it's more expensive and requires much tighter tolerances during manufacture. If the information available online is correct, Hynix, which is the manufacturer of the HBM1 memory, is actually handling placing the GPU & HBM stacks onto the substrate at its own factories. The board OEMs only receive completed GPU/memory modules from Hynix instead of buying GPU and memory parts separately.

        There's another economic issue too: sourcing memory. Right now a board OEM like Asus/EVGA/etc. goes out and buys the GPUs and goes onto the open market to buy GDDR5 memory with varying speed bins and spot prices based on what the OEM wants to do for its products. However, all that goes away with complex packaged products. There is now only a single source for both the GPU and the RAM that's fully integrated onto a package long before the OEMs ever take delivery. There is far, far less wiggle room for them to work with different memory vendors when the product is completely monolithic, and it doesn't make things cheaper.

          • maxxcool
          • 4 years ago

          Add to that the fact that manufacturing ‘misses’ often get passed on to the buyer when constructing highly complex packages. The source materials will get cheaper over time... but magically the constructed package won’t see much cost reduction.

          • the
          • 4 years ago

          A bit to add about memory sourcing and pricing:

          There is a second supplier of HBM2: Samsung. The problem for graphics card manufacturers is that they themselves cannot source memory separately while AMD and nVidia [i<]can[/i<]. Any savings from putting Hynix and Samsung up against each other in a bidding war means cheaper prices for AMD and nVidia. The actual graphics card manufacturers get the complete package from AMD and nVidia at a fixed price regardless of who made the HBM2 memory. This will lead to more normalization of prices at retail.

          The other thing worth noting is that Samsung as an HBM2 supplier could also manufacture and package all the chips. A one-stop shop for a design can lead to a nice discount. Those additional savings would only go to AMD and nVidia and not card manufacturers.

      • tviceman
      • 4 years ago

      To be exact, a 256-bit bus running 14 Gb/s GDDR5X RAM would have 448 GB/s of bandwidth. Seeing as how GM204 was able to extract 75% more performance than GK104 at 1440p/4K with the same 224 GB/s of bandwidth, you’re right on about next-gen midrange parts having more than ample bandwidth with GDDR5X, at least on Nvidia’s side.

      AMD GCN-based cards seem to need more bandwidth to achieve the same performance (the Hawaii-based 390X has 384 GB/s), so there are more challenges (opportunities?) for AMD to tackle, but 448 GB/s should still be plenty even without too much improvement to current-gen architectures.

        • tviceman
        • 4 years ago

        Also, a GPU equipped with a 384-bit bus would have 672 GB/s. I know HBM is more power-efficient, but for a gaming card, 672 GB/s sounds great to me if it’s cheaper than an equivalent HBM-equipped card!

          • Pwnstar
          • 4 years ago

          That amount of bandwidth is not needed right now. I’m not sure why they would go above a 256-bit bus.

      • ImSpartacus
      • 4 years ago

      Don’t forget to mention the power issues.

      I recall that was one of the considerations that caused hbm to exist at all.

      On its current trajectory, gddr would’ve swallowed an unacceptable portion of the gpu power budget (especially as gaming laptops need to push more pixels while simultaneously using less power).

      I assume that the “power” effect is less noticeable on lower tier gpus that don’t need to fuel 4k displays and so on.

        • chuckula
        • 4 years ago

        If we are talking about 16GB or especially 32GB of GDDR5X at very high speeds trying to approximate a similar HBM2 setup, then power starts to become an issue.

        If we look at most consumer products with lower amounts of GDDR5X, then power will be dominated by the GPU itself and not the memory. For example, I’m sure that the 4GB of HBM in the Fury X operates at a lower power level per unit of bandwidth (e.g. milliwatts/gigabyte/sec) than the GDDR5 in a GTX-980Ti. However, as we’ve seen from the review, the total power draw of the Fury X is still higher.
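
        To make the “per unit of bandwidth” comparison concrete, here’s a tiny sketch. The wattages are purely hypothetical placeholders for illustration, not measured figures; the only real numbers are the Fury X’s 512 GB/s and the 980 Ti’s roughly 336 GB/s of memory bandwidth.

        # Power per unit of bandwidth, in mW per GB/s. The wattages below are PURELY
        # HYPOTHETICAL placeholders; only the bandwidth figures are real.
        def mw_per_gb_per_s(memory_power_watts, bandwidth_gb_per_s):
            return memory_power_watts * 1000 / bandwidth_gb_per_s

        print(mw_per_gb_per_s(15, 512))  # hypothetical 15 W HBM subsystem at 512 GB/s    -> ~29 mW per GB/s
        print(mw_per_gb_per_s(30, 336))  # hypothetical 30 W GDDR5 subsystem at ~336 GB/s -> ~89 mW per GB/s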

          • ImSpartacus
          • 4 years ago

          That’s exactly why GDDR5X is a good fit for that part of the consumer GPU market, and that’s why I suggested that you not forget to mention it when you’re arguing that GDDR5X is a good fit for that part of the consumer GPU market.
