Micron reports early successes in GDDR5X production

Micron Technology posted an early report on its production of GDDR5X chips today, and the results sound good. The company says it's gotten working silicon back from its fabs earlier than expected, and those parts are already hitting 13 Gb/s. That's toward the top end of the expected range for GDDR5X, whose specified transfer rates run from 10 to 14 Gb/s. Micron calls those results "incredibly promising."

The first Micron GDDR5X chips are being produced on the company's 20-nm process, and they'll be 8Gb (1GB) dies. The company expects mass production of these parts to begin this summer, and it'll announce sampling dates for the chips later this spring. Tantalizingly, Micron says that going by its early performance results, we could see GDDR5X chips exceed the 14 Gb/s top of the spec as time goes on. JEDEC standardized GDDR5X graphics memory just a couple of weeks ago, so it's heartening to see production of this improved graphics RAM moving so fast.
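
For a rough sense of what those per-pin rates mean at the card level, here's a quick back-of-the-envelope sketch. The bus widths below are illustrative GPU configurations, not anything Micron has announced:

```python
# Back-of-the-envelope GDDR5X bandwidth math; bus widths are illustrative.
def card_bandwidth_gb_per_s(per_pin_gbps, bus_width_bits):
    """Aggregate memory bandwidth in GB/s for a given per-pin rate and bus width."""
    return per_pin_gbps * bus_width_bits / 8  # 8 bits per byte

for rate in (10, 13, 14):            # GDDR5X per-pin rates in Gb/s
    for width in (128, 256, 384):    # common GPU bus widths in bits
        print(f"{rate} Gb/s x {width}-bit bus -> {card_bandwidth_gb_per_s(rate, width):.0f} GB/s")
```

At 13 Gb/s on a 256-bit bus, for example, that works out to 416 GB/s, versus 224 GB/s for 7 Gb/s GDDR5 on the same bus.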

Comments closed
    • spugm1r3
    • 4 years ago

    I think a lot of people are forgetting that at the bottom of the graphics market is DDR3. It’s not just HBM at the peak and GDDR5 filling everything else. Everyone is aware of the gap between DDR3 and GDDR5. However, stacking memory and requiring it to sit on the same interposer as the GPU drives cost up in multiple areas, making the gap between HBM and GDDR5 even more apparent.

    GDDR5X allows GDDR5 to fill the role of DDR3 while creating a more linear price/performance curve *for the manufacturer*. It also prevents GPU manufacturers from having to consider DDR4 as a stop-gap while the rest of the PC market moves away from DDR3. If you are wondering whether, as a consumer, you should be excited about GDDR5X performance, I think you are asking the wrong question.

      • NTMBK
      • 4 years ago

      GDDR5X drives up board complexity compared to GDDR5 due to denser memory traces, and will probably have lower yields. I don’t see it replacing DDR3 at the low end; I would expect DDR4 to do that.

        • synthtel2
        • 4 years ago

        It’s exactly the inverse of that. GDDR5X can transfer a lot more data per trace, so you don’t need as much board complexity for a given level of performance. I wouldn’t be surprised if the chips themselves are expensive enough to limit use at the low end, but on board complexity it’s a solid win.

          • NTMBK
          • 4 years ago

          For a given level of performance, sure, but overall bandwidth demands are going to go way up due to 14nm GPUs.

            • synthtel2
            • 4 years ago

            Overall bandwidth demands always go way up as the chips get more powerful, and various tech changes have to be made to keep cost and power reasonable. Comparing 28nm/GDDR5 to 14nm/GDDR5X, it looks like a given width of GDDR5X will probably be able to feed a similarly-sized 14nm die about as well as GDDR5 fed a 28nm die. Of course, that means it may not work great at 10nm and beyond, but presumably by that time HBM will be much cheaper.

      • ImSpartacus
      • 4 years ago

      So you’re saying that the reason why we don’t see gddr5 in more low end gpus is its limited supply? And then once gddr5x ramps up, we’ll have enough excess gddr5 production to cheaply put gddr5 in just about every low end gpu?

      If that ends up being the case, then that’s a nice fringe benefit from gddr5x.

        • spugm1r3
        • 4 years ago

        I would take that with a grain of salt. Replacing DDR3 with GDDR5 should bring up the performance of the bottom end, but the gap it may close on pricing will likely occur at the middle to the top of the spectrum (i.e. price shifting up towards the halo products).

        I’m just speculating, but new technologies that result in savings to the manufacturers don’t always result in savings for the end-customer. However, tech is one of those industries where “more of the same” is a death-sentence. If GPU manufacturers are more profitable, expect to see better products.

        • NTMBK
        • 4 years ago

        GDDR5 isn’t particularly supply limited; you wouldn’t see 8GB of it in millions of Playstations if it was. It’s just expensive to manufacture.

      • Ninjitsu
      • 4 years ago

      Yeah, there’s DDR3, GDDR3, GDDR5 and HBM at the moment.

      So I think GDDR3 will replace DDR3, GDDR5 will replace GDDR3, and GDDR5X will replace most of GDDR5’s current position.

      HBM2 should make its way into most of the Quadro/FirePro lineup, I would guess.

    • BaronMatrix
    • 4 years ago

    Perhaps this is why I haven’t seen any nVidia demos… They said they were skipping to GDDR5X so it’ll take until April to get ready for mass-production…

    I believe they also said they would jump to 32GB for high end and again that won’t be available in 8GB chips until June or July…

    AMD may get a huge lead on FinFET cards…

    • albundy
    • 4 years ago

    looks like a massage table…but will there be a happy ending?

      • Wirko
      • 4 years ago

      It looks like a pair of accordion keyboards. Or a computer keyboard in disguise by Das.

    • Mat3
    • 4 years ago

    So I don’t ever hear anyone discuss what the downside is. It seems Micron enabled this by doubling the prefetch, which doesn’t seem all that innovative or technically challenging. Does that mean every time the GPU requests data, there’s a good chance it gets a lot more than it needs or am I understanding it wrong? Whatever the case, there are always trade-offs. If it were a total win, why didn’t someone do GDDR5X a long time ago? AMD/Nvidia have been putting massive 384-512 bit memory interfaces on their existing cards for years now.

    I’m not dissing it, I’m just wondering if it’s as good as it sounds…

      • ImSpartacus
      • 4 years ago

      It’s not. That’s why it’s not called GDDR6 and that’s why HBM is a thing.

      It just exists to get 80% of the performance of early HBM implementations at lower price points and tolerable power consumption.

      • UberGerbil
      • 4 years ago

      “Does that mean every time the GPU requests data, there’s a good chance it gets a lot more than it needs or am I understanding it wrong?”

      It’s fetching 64 bytes instead of 32. When textures run into the MB range, and vertex arrays are hundreds of coordinates, a few extra bytes on the end when fetching something that happens to not be an even multiple of 64B isn’t really going to matter. We didn’t really notice a problem when CPUs switched from single-channel to dual-channel, and GPUs tend to be a lot more throughput-oriented.
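
As a rough illustration of the rounding-up overhead described above, here's a sketch of the bytes wasted when a single contiguous read isn't a multiple of the access size (request sizes are made up; as the replies below note, scattered access patterns are a separate question):

```python
# Bytes wasted when a read is rounded up to the memory's access granularity.
def wasted_bytes(request_bytes, granularity):
    fetched = -(-request_bytes // granularity) * granularity  # round up to whole accesses
    return fetched - request_bytes

for size in (1_000, 65_536, 4_194_304):   # hypothetical read sizes in bytes
    for gran in (32, 64):                 # GDDR5 vs. GDDR5X access granularity
        waste = wasted_bytes(size, gran)
        print(f"{size} B read, {gran} B accesses: {waste} B wasted ({waste / size:.2%})")
```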

        • MathMan
        • 4 years ago

        The total size of the object from which you’re fetching has nothing to do with it.

        It’s about fetching data that may or may not actually be used. If you prefetch more than you need and then don’t use it, you’ve wasted bandwidth that could have been spent better.

          • synthtel2
          • 4 years ago

          For graphics, that’s not an issue particularly often. If you need a piece of data for one pixel, chances are pretty good that the next pixel over needs the next piece of data. Differing data:pixel ratios are involved pretty often, but with decent caches that doesn’t tend to be a problem.

            • MathMan
            • 4 years ago

            I agree that there must be a lot of spatial coherence. But UberGerbil’s argument that it doesn’t matter because textures and buffers are in the megabyte range is still irrelevant. 🙂

      • MathMan
      • 4 years ago

      The technically challenging part is transmitting parallel data at 13Gbps per pin.

    • Srsly_Bro
    • 4 years ago

    Now I have to wait for this or get hbm2.

    • southrncomfortjm
    • 4 years ago

    How much memory speed could a GTX 980ti actually use if memory speeds weren’t the bottleneck? How much more performance would that actually net you?

    HBM and HBM2, while impressive, seem like they provide the equivalent of 50 Mb/s download speeds when all the person wants to do is stream Pandora, or maybe a YouTube video.

    I guess my question is, if GDDR5X basically doubles the speed of GDDR5, is that enough? Do we really need HBM or HBM2? Are video cards actually that limited by their memory speed?

      • orik
      • 4 years ago

      HBM has other benefits, but GDDR5X will probably be on mid-range cards this next gen.

      Long term we will want HBM because of the benefits of memory stacking, and the lower latency of having the chips so close together.

        • MathMan
        • 4 years ago

        HBM doesn’t have lower latency. Not if you count nanoseconds instead of clock cycles.

          • Anomymous Gerbil
          • 4 years ago

          Isn’t the latency lower simply by being closer to the chip, connected by the interposer rather than having to go off-chip to the discrete memory chips (i.e. nothing to do with the higher speed / more clocks trade-off, as you note)?

            • mesyn191
            • 4 years ago

            No. Memory cell access and write times are the big limiting factor by far. Hynix has some slides from a 2014 presentation showing HBM having about the same latency as main system RAM.

            Which is actually quite good for video card memory. All the GDDRx generations traded latency for bandwidth. That is a sensible trade-off, because modern GPUs are highly tolerant of high latency compared to CPUs, which need low latency due to branchy and unpredictable code.

            HBM’s high bandwidth comes from having a stupidly wide bus at a pretty respectable clock speed, which would be impractical to do with surface-mount memory.

            • MathMan
            • 4 years ago

            Are you sure GDDR5 has longer latency than DDR3?

            AFAIK it simply has a much faster data bus clock rate. There shouldn’t be a latency trade off. But I could be wrong…

            • mesyn191
            • 4 years ago

            I’m going by Hynix’s slides, which are public. If they’re wrong then yes I’d be wrong too.

            • MathMan
            • 4 years ago

            The ones from hot chips in 2014? I don’t see any mention of the RL timing parameter in those.
            Do you have a link to those slides?

            • mesyn191
            • 4 years ago

            No, but some of the slides got screenshotted and posted over at B3D.

            It’s well known that GDDR5 is slower than DDR3 latency-wise; what is making you think otherwise?

            • MathMan
            • 4 years ago

            I know that “it’s well known that GDDR5 is slower than DDR3 latency wise”, but this is the kind of internet truth I’ve never seen proven, and, more importantly, it just doesn’t make sense when you consider the architecture of DRAMs in general: at their core, they’re all just the same. Only the interfacing is different.

            There are two Hynix presentations:

            http://www.memcon.com/pdfs/proceedings2014/NET104.pdf
            http://www.hotchips.org/wp-content/uploads/hc_archives/hc26/HC26-11-day1-epub/HC26.11-3-Technology-epub/HC26.11.310-HBM-Bandwidth-Kim-Hynix-Hot%20Chips%20HBM%202014%20v7.pdf

            I’ve been going through all the timings in gory detail, and there is no indication at all that GDDR5 has higher latency than DDR3. tRC for GDDR5 is lower than DDR3’s and similar to DDR4’s (40ns). tRCD, one of the key latency numbers for DRAM accesses, is 12ns for GDDR5, lower than the 14ns of Samsung DDR4. And finally, CAS latency is on the order of 10ns for DDR3 and similar for GDDR5. So for all the major timings that can impact latency, they’re really similar.
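
To make the cycles-versus-nanoseconds comparison concrete, here's a small sketch converting CAS latency from cycles to time. The speed grades and CL values are illustrative round numbers in line with the figures quoted in this thread, assuming timings are counted against each memory's command clock (half the data rate for DDR3, a quarter of it for GDDR5):

```python
# CAS latency in nanoseconds = CAS cycles / command-clock frequency.
def cas_ns(cas_cycles, command_clock_mhz):
    return cas_cycles / command_clock_mhz * 1000.0

examples = {
    "DDR3-1600, CL9":   (9, 800),    # DDR3 command clock = data rate / 2
    "DDR3-1600, CL11":  (11, 800),
    "GDDR5-5500, CL15": (15, 1375),  # GDDR5 command clock = data rate / 4
    "GDDR5-7000, CL15": (15, 1750),
}
for name, (cl, clk_mhz) in examples.items():
    print(f"{name}: {cas_ns(cl, clk_mhz):.1f} ns")
```

All of these land in roughly the 9-14 ns range, which is the point being argued either way below.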

            • mesyn191
            • 4 years ago

            But it’s not just an internet truth; many developers will tell you GDDR5 is slower than DDR3.

            It’s common knowledge with the PS4, for instance, and even Sony will tell you it’s slower. They also say it’s not a big deal, though, I guess because it’s a console and is largely task-dedicated.

            • MathMan
            • 4 years ago

            http://www.redgamingtech.com/ps4-vs-xbox-one-gddr5-vs-ddr3-latency/

            Quote: “I’ll save you the trouble and say that it’s between 10 – 12 for both DDR3 and GDDR5, varying heavily on the clock speed of either the DDR3 or GDDR5 AND on the timings they’ve chosen.”

            Also, take into account what Mark Cerny has said about GDDR5 latency: “Latency in GDDR5 isn’t particularly higher than the latency in DDR3. On the GPU side… of course, GPUs are designed to be extraordinarily latency tolerant so I can’t imagine that being much of a factor.”

            Looks like the PS4 architect says just the opposite. And the data sheet numbers from Hynix (which have been taken offline?) confirm it.

            • mesyn191
            • 4 years ago

            But he isn’t. He is saying it isn’t “particularly higher,” which is a vague statement. The article is saying it’s a ~2-cycle difference; I’m not sure if that is right, since others have said the real-world branchy-code difference is more like hundreds of cycles, but whatever.

            Point is: how can you read that as “not higher”?

            EDIT: GDDR5 tends to fall flat on branchy code because its prefetch implementation can’t deal with it. It’s fine for very predictable data sets, like what you see in graphics workloads, though. GDDR5 is doing other stuff than just prefetching to improve bandwidth, of course, but that is the major method AFAIK, and it incurs the large latency penalty that is so often bandied about. Simple command processing times aren’t going to tell you this information.

            EDIT: You also have to consider the context of his statement: he is speaking as a console developer, not in a general sense.

            • MathMan
            • 4 years ago

            Let’s recap the article:
            DDR3: 7 to 9 cycles CAS latency.
            GDDR5: 15 cycles CAS latency.

            When corrected for clock speeds, they both end up at ~10 to 12 ns.

            The underlying DRAM architecture is identical.
            The data sheet numbers end up to be identical.
            The PS4 architect says that GDDR5 isn’t particularly higher.
            You say: GDDR5 is so much faster because it trades off latency for bandwidth.

            Which of the 4 statements above is most likely to be wrong?

            And please tell me more about that special GDDR5 prefetch implementation?

            I just downloaded the GDDR5 JEDEC specification. The word prefetch is mentioned exactly 3 times in exactly the same sentence: “GDDR5 uses an 8n prefetch architecture”. (FYI: just like DDR3.) Your branchy code argument is nonsense.

            Everything about GDDR5 is plain vanilla DRAM: you select a row, you select a column, the requested data comes back. The only difference is that it has an extra clock that’s twice as fast as the regular clock.

            Maybe it’s time to start adding some links to your claims? Because you really go out of your way to not back things up.

            • mesyn191
            • 4 years ago

            Uh, they’re not identical; your own numbers show this, but you seem to be downplaying their significance, and I don’t know what to say. Bandwidth numbers alone will show that, and that bandwidth isn’t coming for free.

            And yes, GDDR5’s prefetch is 8 bursts deep, but it works fundamentally differently from DDR3’s: they’re actually introducing latency delays on purpose to combine data and do large transfers at once. DDR3 won’t do that.

            You have to do more than a word search and an hour or two of googling to understand what is going on here. Unfortunately I don’t have any good papers to link for you that lay it all out nice n’ easy, but do ask yourself this question: if GDDR5 and DDR3 are essentially identical in nearly all ways, then where is GDDR5’s bandwidth/clockspeed coming from? If they are essentially identical, then there is no reason why they wouldn’t perform about the same.

            • MathMan
            • 4 years ago

            If GDDR5 and DDR3 are so different, it should be trivial to point out this difference in their respective JEDEC blueprints.

            Feel free to show me the pages where these differences are made clear.

            • mesyn191
            • 4 years ago

            I did. You’re either misreading my posts or flat out not understanding what I say.

            It’s also perfectly reasonable for the other person in a discussion to clearly state the basis for their disagreement in practical terms.

            JEDEC also doesn’t do blueprints; they do standards. Actual “blueprints” of the circuit layout are something no one outside the DRAM manufacturers will have, so I’m totally confused how you could even think to ask me for them. This is not some minor semantic quibble here.

            • MathMan
            • 4 years ago

            The JEDEC spec defines the exact way the memory controller needs to send commands etc.

            There is absolutely nothing in there, in terms of commands, that is different between DDR3 and GDDR5 and that would make reading data have different latencies. But feel free to point out where I’m wrong.

            You haven’t given me one concrete piece of evidence that would support your claims. Not one.

            Instead you start some inane argument about the exact meaning of a word. It’s very hard to not write some ad hominems…

            • mesyn191
            • 4 years ago

            Yes, and that is all a spec does; it doesn’t dictate manufacturing implementation details or circuit layout, which is what a “blueprint” (which they don’t use, and haven’t for years if not decades) would do.

            Commands? Now we’re talking about commands?! That is a whole other subject, dude, and even if it were relevant to the discussion it wouldn’t matter. Why? Because even if the commands are identical for DDR3 and GDDR5, and I have no idea whether they are or aren’t, they can be executed differently internally, which results in different latencies. You won’t be able to tell this at all just by looking at the commands.

            I’ve pointed out papers and comments by others in the industry; you’re just interpreting things how you like.

            • spugm1r3
            • 4 years ago

            To bring this back to about where it started:
            “Are you sure GDDR5 has longer latency than DDR3?”

            To quote your own reading:

            “DDR3: 7 to 9 cycles CAS latency. GDDR5: 15 cycles CAS latency.”

            I think you answered your own question. However, I think your analysis that “everything about GDDR5 is plain vanilla DRAM” lacks the necessary nuances. DRAM can’t simply be defined as two-dimensional arrays that store stuff. If that were the case, there would be one type of memory and the speed of access would be the only metric that mattered. The reality is, *how* that data is accessed differs dramatically between DRAM technologies. Clock rates, bus widths, interleaving, block restrictions, power, etc. all affect how fast and how accurately data moves in and out of memory.

            While it may or may not be “internet truth” that GDDR5 has higher latencies than DDR3, there are very specific limitations that keep DDR3 out of Tesla and FirePro GPUs, and GDDR5 off of memory modules.

            • MathMan
            • 4 years ago

            Yes, it answers my own question in the sense that the time in nanoseconds is constant. And that this is due to the way data is retrieved from the sense amplifiers.

            Compare the JEDEC specs between DDR3 and GDDR5. They are the blueprint about how these devices need to work.

            It’s a simple challenge: point me in those specs to a key difference in the way they work. There isn’t one.

      • chuckula
      • 4 years ago

      “How much memory speed could a GTX 980ti actually use if memory speeds weren’t the bottleneck? How much more performance would that actually net you?”

      Some performance gains are certainly possible, but it’s doubtful that any real-world gains would make you want to replace an existing GTX-980Ti with the equivalent HBM-ized part.

      • Srsly_Bro
      • 4 years ago

      Go look at the 970 and 980 ti for how memory speed affects performance

        • synthtel2
        • 4 years ago

        Cool story bro. 😉

        When I was doing some experimenting with my GTX 960 in Unigine Valley,[1] a 5% core clock boost would get me about a 3% better framerate and a 5% VRAM clock boost would get me about a 2% fps boost. That is, performance was roughly 60/40 dependent on core/VRAM. Yes, I repeated the runs a few times and they were very consistent.

        The GTX 960 has 8 SPs per VRAM interface bit, the same as a 980, and actually a touch more memory-bound than the 970 or 980 Ti. Yes, Valley isn’t the most bandwidth-bound thing out there, but this at least shows that bandwidth isn’t everything. If it were, NV/AMD wouldn’t be paying so much attention and power budget to shaders/TMUs/all that other stuff.

        [1] Ultra / 2560x1440 / 2x MSAA

        • travbrad
        • 4 years ago

        Those 2 cards have a lot more differences than just their memory bandwidth though. The 980ti also has about 70% more CUDA cores, texture units, and ROPs.

        780ti and 980ti have exactly the same memory bandwidth yet the 980ti has much better performance, and the 970 performs similarly to a 780ti despite the 780ti having much more memory bandwidth. Fury X has a lot more memory bandwidth than a 980ti also, but the 980ti is still slightly faster overall.

        Since GDDR5X and HMB/HBM2 will be used on the next generation of cards I think the importance of having more memory bandwidth will depend a lot on just how fast those cards end up being and what architecture choices they make. At the end of the day both Nvidia and AMD will try to have a “balanced” design for their cards.

        TL;DR: Memory bandwidth isn’t everything as long as you have “enough” of it.

          • Srsly_Bro
          • 4 years ago

          That’s my point. The two cards are very different and memory speed affects them differently. I was trying to get across that memory speed doesn’t solve everything when there is a limitation within the gpu preventing it from utilizing increased speeds effectively.

      • ImSpartacus
      • 4 years ago

      The problem is power.

      As we cram more pixels into our displays, we need larger and larger GPUs but those GPUs need more and more bandwidth.

      However, as that happens, memory eats more and more of the power budget if we don’t make the jump to something like HBM. In fact, the concentration on mobile implementations means we need to *decrease* power consumption, not just maintain it.

      GDDR5X buys more time in 2016, but it can feasibly get only to something like ~768GB/s (twice the bandwidth of a 290X/980 Ti) before it starts eating too much of the power budget, just like GDDR5. HBM, however, has legs. It’ll efficiently satisfy our thirst for bandwidth in 2017 and for a fair number of years after that.
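
For a sense of scale on that ~768GB/s figure, here's a quick sketch of the hypothetical bus-width and per-pin-rate combinations that would reach it (none of these are announced products):

```python
# Hypothetical bus width / per-pin rate combinations that land at 768 GB/s.
target_gb_per_s = 768
for width_bits in (256, 384, 512):
    per_pin_gbps = target_gb_per_s * 8 / width_bits
    print(f"{width_bits}-bit bus needs {per_pin_gbps:.0f} Gb/s per pin for {target_gb_per_s} GB/s")
```

A 384-bit bus would need 16 Gb/s per pin, beyond the 10 to 14 Gb/s range of the initial GDDR5X spec; a 512-bit bus would get there at 12 Gb/s.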

        • Ninjitsu
        • 4 years ago

        I agree with you, but a small pedantic correction: it should be 768 GBps (or GiBps if you want to be *really* pedantic).

      • BlackDove
      • 4 years ago

      They’re limited enough that Nvidia implemented their new color compression and released dual-GK210 K80s. GM200 is kind of a flop for anything but games, and even then it’s only a 50% improvement at best over GK110.

    • DancinJack
    • 4 years ago

    I am honestly kinda meh about this right now. My next graphics card will likely employ HBM2 and for whatever reason, despite the pretty decent improvements, I just can’t get excited about GDDR5X. :/

      • Airmantharp
      • 4 years ago

      I’m wondering where HBM2 will hit; would it wind up in the equivalent of my GTX970s/390X, or would it be relegated to halo products and possibly their slightly-less-crazily-priced counterparts, i.e. GTX980(Ti) class, as HBM1 has been for AMD?

      GDDR5X isn’t ‘sexy’, but they’re literally talking about doubling the bandwidth, and for your average upper mid-range card with a 256-bit memory controller, that will very likely be enough, alongside the inherent doubling of memory capacity (8GB becomes the minimum for a 256-bit controller).
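
A quick sketch of where that capacity-and-bandwidth math comes from, assuming the usual one 32-bit GDDR5X chip per 32 bits of bus and Micron's 13 Gb/s early-silicon rate (illustrative numbers, not an announced card):

```python
# Minimum capacity and bandwidth for a 256-bit GDDR5X card (illustrative numbers).
bus_width_bits = 256
chip_width_bits = 32      # each GDDR5X chip presents a 32-bit interface
chip_capacity_gbyte = 1   # 8Gb die = 1GB
per_pin_gbps = 13         # Micron's early-silicon figure

chips = bus_width_bits // chip_width_bits
print(f"{chips} chips -> {chips * chip_capacity_gbyte} GB minimum capacity")
print(f"bandwidth: {per_pin_gbps * bus_width_bits / 8:.0f} GB/s")
```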

        • Beahmont
        • 4 years ago

        x70 and x80 NVidia and x90 AMD class cards will probably come with dual memory controllers in the Pascal and Polaris generations, but will most likely use GDDR5X in initial products, with Ti, xx5, or yyyX models possibly getting HBM or HBM2 at some point, depending on parts, price, demand, and availability.

        But as far as I know, HBM, HBM2, and the associated interposers and logic dies are very expensive to even design and test, let alone turn into actual products, at this point. If the associated costs come down, and if NVidia and AMD behave reasonably from the point of view of their customers, then HBMx will likely be used with increasing frequency, as it really is an amazing memory tech for GPUs, and some of the HBM2 improvements make it easier to use for CPUs and APUs, though it’s still a long way from a true main-memory replacement because of access granularity, latency, and CPU memory-controller issues.

        GDDR5X is likely to be around for at least the next two generations of GPU µArchs in the x50 and x60 NVidia class and x70 and x80 AMD class cards at the very minimum, because of sheer cost and performance issues. An x60 NVidia class card and an x80 class AMD card simply don’t need, and can’t afford, the bandwidth of HBMx in their price and performance segments.

        This is all logical guesswork (strike that: supposition) from the armchair chipmakers section, of course, so take it for what you paid for it.

          • guardianl
          • 4 years ago

          It’s hard to know what costs AMD is incurring for the interposer on Fury, but in the long term the interposer should only be a few dollars because it’s built on practically ancient fab equipment.

          Some real info on costs: http://electroiq.com/blog/2012/12/lifting-the-veil-on-silicon-interposer-pricing/

          Whatever the learning cost is on the yield curve for stacking today, it will be a relatively short-term spike. And unlike the current planar logic nodes that are running up against crazy-difficult issues like quantum effects, die stacking is a pretty traditional semi-industry problem: just learn the process corners until the precision and consistency are found.

      • Srsly_Bro
      • 4 years ago

      Good thing micron didn’t ask you, bro.

      • mesyn191
      • 4 years ago

      HBM and HBM2 are going to be very expensive for a long time.

      So niche, so very low volume, high end cards only.

      GDDR5X will scale down nicely to low-end GPUs while offering a nice bump in bandwidth. So market-wise, in the short term, it’s probably more significant. It’s nothing to “meh” about.

    • eofpi
    • 4 years ago

    Any idea how this compares to HBM-type memory?

      • Jeff Kampman
      • 4 years ago

      https://techreport.com/news/29614/samsung-begins-mass-production-of-4gb-hbm2-memory-chips
      https://techreport.com/news/29591/jedec-updates-hbm-standard-with-bigger-stacks-and-faster-speeds

      • chuckula
      • 4 years ago

      Slower.
      But cheaper.

        • DPete27
        • 4 years ago

        Yeah, in comparison, the GDDR5 used in the vast majority of current GPUs runs at ~7 Gb/s.

      • ImSpartacus
      • 4 years ago

      Anandtech did a surprisingly thorough dive into gddr5x. In particular, check out the “memory math” table in the middle of the article. It has some very nice “tangible” comparisons between actual graphics cards and hypothetical gddr5x & hbm2 graphics cards. Simple, concise and it exactly answers your question.

      http://anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps

        • The Egg
        • 4 years ago

        I think this is as exciting as HBM2, if not more so. Being able to pull off 224GB/s on a 128-bit card like the GTX 960, with the same number of memory chips, could make solid performance even more affordable.
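
The arithmetic behind that 224GB/s figure, assuming a 128-bit bus with GDDR5X at its top specified 14 Gb/s rate versus the 7 Gb/s GDDR5 on a GTX 960-class card:

```python
# 128-bit bus: GDDR5 at 7 Gb/s vs. GDDR5X at its top specified 14 Gb/s.
bus_width_bits = 128
for name, per_pin_gbps in (("GDDR5, 7 Gb/s", 7), ("GDDR5X, 14 Gb/s", 14)):
    print(f"{name}: {per_pin_gbps * bus_width_bits / 8:.0f} GB/s")
```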

          • Airmantharp
          • 4 years ago

          Honestly, it’s not just exciting; it’s *necessary*, as the jump from 28nm to 14nm will allow them to shove more shader units into the same class of hardware (same die size), likely at higher clockspeeds.

          Of course, that means that consumers see the benefit, so I’m kind of glad that DRAM manufacturers pushed ahead with GDDR5X rather than just going all-in with HBM ;).

          • ImSpartacus
          • 4 years ago

          Yeah, GDDR5X will mean a lot for the mid-lower end just to save some power. I’m tickled pink to think of the better and better GPUs that we’ll be able to fit into laptops. I mean, imagine Polaris 10 with lower-clocked GDDR5X. That’s a killer for laptops as well as desktop gamers that don’t have any PCIe power connectors (a surprisingly large market).
