Rumor: Nvidia Pascal GP104 die shots leak

The smoke around Nvidia's next-generation graphics chips keeps getting thicker, and if Tuesday's purported GeForce GTX 1080 cooler shots didn't warm things up enough for you, maybe this will. A leaker from ChipHell.com has posted a fairly legit-looking shot of GP104-200's die, and the folks at Videocardz have picked it up:

Image: Videocardz.com

Videocardz figures the GP104-200 chip is bound for the GeForce GTX 1070, the replacement for the popular GTX 970. The site speculates that those graphics cards will arrive in mid-June. GP104-400, GP104-200's big brother, purportedly powers the GTX 1080. Videocardz thinks this GTX 980 replacement will debut at Computex in early June. That's consistent with Nvidia's thinly veiled hints about its plans for Computex on Twitter.

Image: Chiphell.com

An older GP104 die shot provided by ChipHell.com conveniently includes some memory chips, and Videocardz has decoded the markings on that RAM. The Samsung chips in the older shot bear the K4G80325FB-HC label: standard 8Gb GDDR5 parts, rather than chips built on the faster, newer GDDR5X standard. Videocardz thinks this leak indicates the GTX 1070 will still use GDDR5, though the site claims the GTX 1080 will feature the newer GDDR5X RAM. As always, we'll have to see how time bears out these guesses.
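For a rough sense of what that memory configuration implies, here is a quick back-of-the-envelope sketch. The 256-bit bus width and 8Gbps data rate are assumptions drawn from the GTX 970/980 and the rumors discussed below, not confirmed specs.

```python
# Back-of-the-envelope capacity and bandwidth math for a GDDR5 card built
# from 8Gb chips like the K4G80325FB. Bus width and data rate are assumed.

CHIP_DENSITY_GBIT = 8      # 8Gb per Samsung K4G80325FB chip
CHIP_IO_WIDTH     = 32     # each GDDR5 chip exposes a 32-bit interface
CARD_BUS_WIDTH    = 256    # assumed 256-bit bus, as on the GTX 970/980
DATA_RATE_GBPS    = 8      # assumed 8Gbps GDDR5 (the GTX 980 shipped with 7Gbps)

chips     = CARD_BUS_WIDTH // CHIP_IO_WIDTH        # 8 chips
capacity  = chips * CHIP_DENSITY_GBIT // 8         # 8 GB total
bandwidth = DATA_RATE_GBPS * CARD_BUS_WIDTH / 8    # 256 GB/s peak

print(f"{chips} chips, {capacity} GB, ~{bandwidth:.0f} GB/s peak bandwidth")
```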

Comments closed
    • Ninjitsu
    • 4 years ago

    Also of importance, and linked in the Thursday shortbread, is the Tesla P100 whitepaper, with some details on GP100 and HBM2.

    https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf

    • Chrispy_
    • 4 years ago

    Call me a cynic, but why is the font for the all-important text clearly photoshopped on?

    - It's a different font (Arial, I'd guess, not whatever font the laser package-etcher uses).
    - It's not aligned with the package anyway, off by a quarter degree or something.
    - It's the wrong colour.

    Am I missing something - like is the important text the laser-etched 1614A1 and not the shopped/painted GP104 bit?

      • ImSpartacus
      • 4 years ago

      Haha, I noticed the same thing, but I assumed that I was just going crazy, lol.

      • NoOne ButMe
      • 4 years ago

      That part is the important part. I believe it means 2016, work week 14, A1 silicon (I don't remember whether that means the first or second revision with Nvidia). Assuming the actual package is legit, this is a die size Nvidia doesn't currently have.

      The marking says it's a quality sample… There's a lot you can read into here, most of it assumptions based on who one thinks leaked it.

      If you use the 980 as a reference point, then this will launch towards the end of July based on the work week alone.
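      For what it's worth, here's a small sketch of that date-code arithmetic. The YY/WW reading of "1614A1" is the usual interpretation rather than anything Nvidia has confirmed, and the 15-week packaging-to-launch gap is purely an illustrative placeholder.

```python
from datetime import date, timedelta

def decode_date_code(code: str):
    """Decode a '1614A1'-style marking as YYWW plus a silicon revision.
    This reading (year / work week / revision) is the common guess,
    not a confirmed Nvidia convention."""
    year = 2000 + int(code[:2])
    week = int(code[2:4])
    rev = code[4:]
    monday = date.fromisocalendar(year, week, 1)  # Monday of that work week
    return monday, rev

packaged, revision = decode_date_code("1614A1")   # -> 2016-04-04, 'A1'

# Hypothetical packaging-to-retail gap; 15 weeks is only a placeholder that
# shows how a "work week 14" marking maps onto a mid-to-late-July launch.
launch_guess = packaged + timedelta(weeks=15)
print(packaged, revision, launch_guess)
```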

        • MathMan
        • 4 years ago

        Qual means qualification, not quality.
        It means the silicon is considered good enough to draw definite conclusions about how the full system will behave, and that it can be used for all measurements, but it hasn't been signed off as production silicon yet.

          • Chrispy_
          • 4 years ago

          Yeah, "qualification" vs. "quality" isn't the issue I'm worried about.

          The issue is that the QUAL text looks like it's photoshopped on. If it's not photoshopped on, it's painted on, which is just weird and I've NEVER seen that in 20+ years of following CPU/GPU development.

          I know, I guess Intel engineering samples are often drawn on with sharpie, but the important text is always "INTEL CONFIDENTIAL" and that's etched into the package during manufacture.

          I'm just questioning the authenticity of something that rings multiple alarm bells in the little bit of my lizard brain used to detect bull.

            • NoOne ButMe
            • 4 years ago

            Everyone should hope that whoever got this shot added that text on for some bizarre reason, without any clue of what's happening, or that it really does mean quality.

            Qualification, to me, would mean that at that time (work week 14) production of consumer parts hadn't started, which would probably push everything back at least two months, probably more, putting consumer availability in September at best.

            • MathMan
            • 4 years ago

            No.
            Once you’re past the early sample stage, all samples are qual samples until the final production sign off is granted.

            That means that samples produced a day before sign off are still called qual. The day after sign off, the samples become production samples.

            It doesn’t mean at all that there are still months before consumer availability.

            • NoOne ButMe
            • 4 years ago

            First, thank you for the information on how and why "qualification"/"qual" is explicitly used.

            Now, getting to consumers typically takes 5+ months from wafer start… If they sign off to start in WW14 (early-to-mid April), you won't see parts in the hands of consumers until early-to-mid September at best. Probably later, because the FinFET process takes quite a few more steps than previous processes.

            Let's all just hope that the marking was added afterward to make it seem more legit to leak sites that didn't stop and think, and that in reality this is production silicon from the first batch.

            • MathMan
            • 4 years ago

            First of all, standard times from order to delivery are typically on the order of 14 weeks. That's roughly 3 months, not 5.
            Second, that's when you order from a fabless company. In this case, Nvidia is ordering from itself. It can prioritize fab orders any way it wants, do some accelerated orders, etc.
            Third, and most important, you don't need to wait for qual sign off to start production orders. Qual samples are already supposed to be identical to production samples; it's just that all testing hasn't finished yet. That's why I said that production samples can roll out of the fab the day after sign off is given. It's very likely that they have wafers stalled in the fab until sign off is given. That means production samples could be ready today or whenever.
            Fourth, the time from production samples in hand to products for sale on the shelves can be very short, a couple of weeks. GPUs are small enough and high-value enough to ship thousands of them via air freight, just like Steve Jobs did to get the first iMacs in stores in time for Christmas (locking out everybody else in the process by buying all available capacity).

            • NoOne ButMe
            • 4 years ago

            I'm counting time from fab start to hitting the market, which is more like 5+ months, I believe. Also, is that 14 weeks accounting for the extra steps (about 1.5-2x, I think) for FinFETs?

            When you say accelerated orders, I take it you mean reordering which wafers will be for which dice? Remember, Nvidia isn't the big partner for TSMC anymore and hasn't been since 40nm. If you mean hot lots or something like that, it isn't going to happen with Apple using the fab too, unless Nvidia runs fewer wafers in order to get product out faster. And 16nm appears to be supply-limited by Apple, so that decision would personally strike me as very odd.

            As before, thanks for the additional information.

    • ronch
    • 4 years ago

    That is not a die shot. It’s just a photo of the C4 Flip Chip package.

      • NoOne ButMe
      • 4 years ago

      Well, look up “maxwell die shot” in google images. At least here there’s a real die 😀

    • Firestarter
    • 4 years ago

    ah yes, rumors and photos of upcoming shiny things, 2016 promises to be an exciting year in GPU tech!

    • DPete27
    • 4 years ago

    "GTX 1070 will still use GDDR5"

    DRAT!!

      • tipoo
      • 4 years ago

      DRAM actually

      • ImSpartacus
      • 4 years ago

      That’s all it needs. It’s roughly 980-level performance and the 980 performed beautifully with 7Gbps GDDR5 on a 256-bit bus.

      Crank that up to 8Gbps with new GDDR5 and you have enough of a boost to easily satisfy a 980-level 1070.

      Now the 1080? It needs 980 Ti-level bandwidth, so GDDR5X is necessary on its comparatively narrower bus.
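      As a quick sanity check on those numbers, a minimal bandwidth sketch (the 1070 and 1080 memory configurations are the rumored ones, not confirmed):

```python
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width over 8."""
    return data_rate_gbps * bus_width_bits / 8

configs = [
    ("GTX 980    (7Gbps GDDR5,   256-bit)", 7, 256),
    ("GTX 980 Ti (7Gbps GDDR5,   384-bit)", 7, 384),
    ("GTX 1070?  (8Gbps GDDR5,   256-bit)", 8, 256),    # rumored config
    ("GTX 1080?  (10Gbps GDDR5X, 256-bit)", 10, 256),   # rumored config
]
for name, rate, width in configs:
    print(f"{name}: {bandwidth_gbs(rate, width):.0f} GB/s")
# -> 224, 336, 256, and 320 GB/s respectively
```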

      • NovusBogus
      • 4 years ago

      There’s a strange noise coming from my computer, I think it’s my new GTX 960 laughing at the critics. Maybe sometime next year the Pascal with all the features everyone wanted to see in Pascal will show up.

        • NoOne ButMe
        • 4 years ago

        "There's a strange noise coming from my computer, I think it's my new GTX 960 laughing at the critics. Maybe sometime next year the ~~Pascal~~ Volta with all the features everyone wanted to see in Pascal will show up."

        ?

    • Srsly_Bro
    • 4 years ago

    AMD also came out today stating the product positioning of Polaris 10 and 11.

    http://videocardz.com/59248/amd-officially-confirms-polaris-10-and-polaris-11-market-positioning

      • chuckula
      • 4 years ago

      That jibes with what Lisa Su said during the Q1 teleconference.
      It’s not bad that AMD is improving in new market segments, but Polaris is clearly not intended to be a direct competitor with the higher-tiers of the GP104 chip.

      It's probably more of a direct competitor with the GP106 parts, with the higher-end Polaris parts competing against somewhat cut-down versions of GP104.

        • ImSpartacus
        • 4 years ago

        Yeah, Polaris 11 might have its way with the laptop market, but Polaris 10 will only compete with GP106 (and maybe certain GP104 parts, as mentioned). It’ll surely compete, just not at the high end.

      • nanoflower
      • 4 years ago

      Hmm, so that suggests Polaris 10 might compete at the 970 level but probably not at the 980/1080 level. If that's the case, then there really is an opening for Nvidia to introduce the 1080 line at a $100-200 premium over the equivalent 900 line (since they've supposedly stopped production of 970/980 GPUs).

        • Welch
        • 4 years ago

        Sure, but that is in fact what Vega is all about, Q1 2017.

        I'm thinking I'm going to go with a Polaris 10 card that's meant to be in the same class this generation as the 970/390 were in the past generation, at $250-$300.

        I've been running the $150-180 class of cards through all of my builds and always find myself wanting a bit more in the last few years of owning them, as I'm feeling now with my Twin Frozr 7850.

          • NovusBogus
          • 4 years ago

          I'd like to move into the $250 price bracket, but this generation was no good for 1920×1200 users. The only two cards that really interested me, the 4GB 960 and the 380X, were among the last things in the cycle and underwhelming at non-firesale prices.

        • mczak
        • 4 years ago

        The 1080 should definitely be quite a bit faster (probably around 980 Ti level), and the 1070 as well. Polaris 10 is also said to be a ~230mm^2 chip, whereas this one seems to measure a bit over 300mm^2, which should give some idea of what market segments these chips are aimed at.

          • faramir
          • 4 years ago

          The 14nm Samsung/GloFo process allows for ~10% more transistors in the same area as the 16nm process TSMC is using, so to put the above figures into perspective, a 232 mm^2 Polaris chip should roughly have the transistor count of a 250 mm^2 16nm chip.
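          Taking that ~10% density figure at face value, the arithmetic looks something like the sketch below (both die sizes are the unconfirmed estimates from this thread):

```python
# Rough density-adjusted die size comparison. The 10% 14nm-vs-16nm density
# edge and both die sizes are unconfirmed estimates, used only to show the
# arithmetic.

polaris10_14nm_mm2 = 232    # rumored Polaris 10 die size (Samsung/GloFo 14nm)
gp104_16nm_mm2     = 310    # "a bit over 300mm^2" estimate for GP104 (TSMC 16nm)
density_edge       = 1.10   # assumed ~10% more transistors per mm^2 on 14nm

equivalent_16nm = polaris10_14nm_mm2 * density_edge
print(f"232 mm^2 of 14nm ~= {equivalent_16nm:.0f} mm^2 of 16nm silicon")
print(f"GP104 would still be ~{gp104_16nm_mm2 - equivalent_16nm:.0f} mm^2 larger in those terms")
```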

      • ImSpartacus
      • 4 years ago

      This is nothing new.

      It’s a little more explicit, but it’s not new info.
      - We've known since AMD's official CES demo in January that Polaris 11 would perform like a 950 and have a frighteningly low power consumption (https://youtu.be/5g3eQejGJ_A?t=2m).
      - A couple weeks later in VRLA, AMD's Roy Taylor officially said that Polaris 10 would be a cost-effective way to meet (not necessarily exceed) the Vive/Rift VR Min Spec (290/970) (https://youtu.be/p010lp5uLQA?t=16m). The context of the presentation was that VR-ready PCs are rare and AMD needed a GPU that could expand that market for the lowest possible cost (to both AMD & consumers), so we knew it wasn't a high end GPU.
      - Then recently in the official Capsaicin event, AMD sealed the deal by showing Polaris 10 playing the new Hitman at 1440p60 (https://youtu.be/tcW9qRU4Qz0?t=25m53s). If Polaris 10 could game at 4K like a high end GPU, then you better believe AMD would've done that demo in 4K.

      Then you have the freaks that predicted AMD's entire 2016 lineup from a few sentences (https://forum.beyond3d.com/posts/1888336/) in Raja Koduri's January interview with Venturebeat, where he officially confirmed that we'd only get two Polaris GPUs in 2016 (http://venturebeat.com/2016/01/15/amds-graphics-guru-describes-the-march-to-full-graphics-immersion-with-16k-screens/).

      In hindsight, it makes sense. AMD has four active gaming-tier GPUs, but they don't have enough dosh to replace them all at once. So they pick Pitcairn and Hawaii. Pitcairn is five fucking years old and Hawaii is way too expensive to produce due to its 512-bit memory setup (and its cost is essential since it is currently AMD's VR Min Spec GPU).

      So now AMD is poised to make money with a couple of Apple-friendly dGPUs (http://www.macrumors.com/2016/04/19/amd-polaris-2016-macs/), any desktop gamers itching to get their rig VR-worthy, and any ultrabook owners that want console-tier GPU performance. They lose style points for keeping Fiji throughout 2016, but they (probably) win where it counts.

        • Demetri
        • 4 years ago

        Yep, sounds like Polaris 10 = Hawaii performance with a fraction of the power draw, and (hopefully) priced similarly to Tonga. Sounds like a pretty nice card, but it won't be pushing the envelope as far as performance goes. It's going to wreck the second-hand value of Hawaii cards though, so if you don't mind the power draw you should be able to grab an old 290/290X for dirt cheap.

        • NoOne ButMe
        • 4 years ago

        Someone at AnandTech who is apparently trustworthy is saying something slightly different: that P10 will fill the range from the 390X up to Fury* (not Fury X, my bad) performance at the top.

        http://forums.anandtech.com/showthread.php?p=38180442#post38180442

        If Polaris 10 can hit average Fury performance, which sounds possible, I think AMD will not make any more Fiji beyond selling stock with the Pro Duo, and will instead have a dual Polaris 10 card to fill the gap. Two Polaris 10 chips should be cheaper to make than one Fiji, I think.

          • ImSpartacus
          • 4 years ago

          That’s an interesting possibility.

          But I guess you have to ask yourself, if a Polaris GPU can perform like a Fury, then why didn’t AMD demo Polaris 10 at 4K rather than 1440p60? Fury can definitely do 4K gaming (especially in a demo setting where you can cherrypick the game/settings and you can ensure optimal cooling/performance). I really think the choice of 1440p60 was very telling.

          And if Polaris 10 replaces Fiji rather than Hawaii, then how does AMD use Polaris to significantly lower the cost of VR min spec GPUs? Roy Taylor was pretty explicit about that. A laptop-bound Polaris 11 doesn't come close to the required performance, and a giant Fiji-tier Polaris 10 has too much. The only option would be to heavily cut down expensive Polaris 10 GPUs, which isn't cost effective.

          It’s not impossible, but it would definitely conflict with AMD’s prior “official” statements/demos.

            • NoOne ButMe
            • 4 years ago

            Oh. Yeah. The more we learn directly and indirectly about Polaris the more confusing it gets.

            I think AMD can set Polaris 10's MSRP $100 lower than Fiji's and make the same amount of money with a higher gross margin. (To clarify: I mean that if Polaris 10 equals Fiji's performance, AMD's margins should be close to the same even if the Polaris card is $100 cheaper at retail.)

            I think AMD will launch at $300/$400, or maybe $275/$350/$450 if there are three bins, and leave the $400+ market to GP104.

            Most of the VR-ready cards are probably around the 390/970 price point. I think AMD can completely retreat from the higher end (the Fiji/GM200 level) until Vega, which will fight GP104 and GP102 if the latter exists. Better to hold the lower price points until GP106 arrives, to try to get a market share that is at least presentable.

            • tsk
            • 4 years ago

            I can confirm from GTC that GP102 does exist. There is no indication that GP100 will ever be sold to regular consumers since it is very compute focused. My prediction is GP102 will be the Titan card.

            • NoOne ButMe
            • 4 years ago

            Thanks. From what I've heard, you could take away the compute from GP100 and save over 100mm^2. Do you know if this GP102 has GDDR5X? I cannot imagine Nvidia killing margins on a non-Titan version. And it sounds like even Vega is going to aim to be relatively cheap.

            • tsk
            • 4 years ago

            There are some indications that it'll have GDDR5X, which makes it a bit more uncertain when it will be ready for launch, though.

            • NoOne ButMe
            • 4 years ago

            Cool, so a 384-bit GDDR5 version launching as soon as possible (well, whenever AMD has a faster part than GP104) and a GDDR5X version to follow, or as a Titan card?

            thanks again

            • nanoflower
            • 4 years ago

            One possible reason for not demonstrating 4k performance is not wanting to give away anything to the competition. A more likely possibility (assuming the card can do 4K) is that the early engineering samples weren’t stable when pushing 4k. Engineering samples aren’t always the best for doing demos.

            That said you have AMD saying that Polaris 10 is aimed at the mainstream which suggests it’s going to be replacing at most the 390(x) with their best Polaris 10 chips.

            • ImSpartacus
            • 4 years ago

            Do we know of a precedent of either Nvidia or AMD doing a live demo that significantly underplays the performance capabilities of a given GPU?

            I mean, picking 1440p vs 4K is a pretty big difference as far as performance requirements go.

            I get that there’s some competitive secrecy that goes on, but that level of deception seems almost counterproductive.

            • nanoflower
            • 4 years ago

            I agree that the idea that they hid performance for competitive reasons is unlikely, but I do think they didn't show us all that Polaris is capable of. For instance, the Polaris 11 demo was about decent performance at very low power usage, so the chip is likely capable of much greater performance, but at the cost of higher power usage. I still think the Polaris 10 demo didn't show us all that the chip is capable of, due to a combination of early drivers and engineering samples. That's always going to be an issue if you are demoing new GPUs six months before they actually hit the market (at least that seems like the likely span between the demos and when we will be able to purchase them).

            • ImSpartacus
            • 4 years ago

            Do you know of a similar situation where there was an appreciable performance boost in production cards compared to a demo?

            For me, that'd be the clincher here. We've had enough generations of GPUs that, if this kind of thing were the norm, we ought to have evidence of it. However, I can't recall any notable examples.

            I feel like if AMD had a performance target of 4K-worthy performance, and early Polaris 10 samples couldn't do it in time for the demo, then AMD wouldn't have told us the resolution. They would've fixated on how pretty the pixels were (which is another initiative) or something. You don't associate Polaris 10 with 1440p unless you really want to.

            But concerning Polaris 11, there's definitely performance headroom. The label on the CES benchmark said it was limited to an 800MHz core clock. That's to be expected, since the demo was very much contrived around the idea of power consumption at a given performance level (rather than the more free-form, "unrestrained" Polaris 10 demo).

            • nanoflower
            • 4 years ago

            I can't think of a specific case where there was a significant performance increase between the first demo and the release of new hardware, but I would point to just how much of a difference AMD has made with their drivers over time, significantly more so than you see with Nvidia, and likely due to the lack of people available to pull out all of the performance available from new GPUs at release time. So over time AMD GPUs see a nice boost in performance as the developers find ways to improve it, while Nvidia is able to pull out most of the available performance of their GPUs at release time.

      • Chrispy_
      • 4 years ago

      Double the performance/Watt sounds awesome, until you realise they're talking about a comparison to mobile Tonga, first and foremost.

      Mobile Tonga XT is giving us 75% of the clocks of desktop Tonga at 125W. So double that performance at 125W, or the same performance at 65W, sounds pretty good, huh?

      Well, double the performance, AKA 150% of a 380X, is a GTX 970, near as I can make out. And the GTX 980M is, for all intents and purposes, indistinguishable in benchmarks from a GTX 970M, except it uses 100W, not 125W.

      So, at the high end, Nvidia have provided more than double the performance/Watt of Tonga since October 2014, 18 successful months ago.

      If we look at the efficiency end of the market, AMD has nothing modern other than Tonga. Bonaire is the next best option, and it's providing older GCN 1.1 as well as woeful performance/Watt, somewhere in the region of half what the GM107 has been providing the market with for over two years.

      I really, really want Polaris to be amazing, but it has to be as good as Pascal, not "almost as good as Maxwell, which we've been enjoying for the last whole product cycle already".
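      To put rough numbers on that perf/W comparison, here's a minimal sketch that takes the estimates above at face value, reading the GTX 980M as delivering roughly that doubled performance level at ~100W. These are ballpark figures from the discussion, not benchmark results.

```python
# Relative perf/W using mobile Tonga XT as the 1.0x baseline. Performance
# and power figures are rough estimates from the comment above, not
# measured numbers.

def rel_perf_per_watt(perf, watts, base_perf=1.0, base_watts=125.0):
    return (perf / watts) / (base_perf / base_watts)

cases = [
    ("Mobile Tonga XT",           1.0, 125),   # baseline
    ("Polaris 2x perf/W claim",   2.0, 125),   # double performance, same power
    ("GTX 980M (rough estimate)", 2.0, 100),   # ~that performance at ~100W
]
for name, perf, watts in cases:
    print(f"{name}: {rel_perf_per_watt(perf, watts):.2f}x perf/W vs mobile Tonga XT")
# -> 1.00x, 2.00x, 2.50x
```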

    • Srsly_Bro
    • 4 years ago

    Some dork over at OCN did some calculations based on pixel size and came up with around 340 mm^2 for the die size of GP104.

    http://www.overclock.net/t/1598181/tpu-nvidia-gp104-pascal-asic-pictured/20#post_25097277
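    For anyone curious how that sort of estimate is made, here's a minimal sketch of the pixel-measurement method. Every number below is a made-up placeholder, not a measurement taken from the actual photo or the OCN post.

```python
# Estimate a die's area from a photo: measure a reference feature of known
# physical size in pixels, derive a mm-per-pixel scale, then apply it to the
# die's pixel dimensions. All values here are placeholders.

ref_length_mm = 40.0     # a package edge of (assumed) known length
ref_length_px = 800      # that same edge measured in pixels

die_width_px  = 370      # die width in pixels (placeholder)
die_height_px = 360      # die height in pixels (placeholder)

mm_per_px = ref_length_mm / ref_length_px
die_area  = (die_width_px * mm_per_px) * (die_height_px * mm_per_px)
print(f"~{die_area:.0f} mm^2")   # ~333 mm^2 with these made-up inputs
```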

      • albundy
      • 4 years ago

      I can't even begin to describe how much that matters.

        • Srsly_Bro
        • 4 years ago

        People attempt to estimate performance based upon the die size and the increased transistor density. Obviously architectures are different and direct comparisons can’t be made, but it’s a prediction.

          • Chrispy_
          • 4 years ago

          Die size is not about performance estimates, it’s about cost estimates.

            • NoOne ButMe
            • 4 years ago

            It can be for both 🙂

            Also, die size can be completely worthless for estimating cost, because how a chip yields is an architecture-by-architecture and chip-by-chip situation. Put a Kepler/Maxwell/GCN chip on 40nm at the same die size as a Fermi, and the Kepler would yield better in terms of sellable parts.

            • Srsly_Bro
            • 4 years ago

            People on forums don’t care so much about die cost as they do performance. You can make rough estimates based on larger die sizes. We know GP100 is ~610mm^2 and GP104 is rumored to be ~340mm^2.

        • iBend
        • 4 years ago

        It means someone has too much free time.

    • Neutronbeam
    • 4 years ago

    Can any of these chips be used in a laptop? A few big laptops can use desktop chips. More to the point, do we know when mobile versions will be generally available?

      • nanoflower
      • 4 years ago

      These won't be used for laptops unless they use one of those external adapters. Laptops will supposedly have to wait till the fall, when the 1060-and-under cards are supposed to come out, since something in that line is more appropriate for a laptop. (Though a cut-down 1060 GPU may get 1070M numbering, thanks to the way Nvidia handles mobile GPU numbering.)

        • Neutronbeam
        • 4 years ago

        Thank you sir!

        • brucethemoose
        • 4 years ago

        This isn't Godzilla Pascal. Nvidia's second-biggest chips, like the 980, 680, 560 Ti, etc., generally make their way into laptops.

        So I’m guessing a cut-down version of this will make it into laptops.

        • chuckula
        • 4 years ago

        In an ultrabook? Definitely no.

        In what used to pass for a "regular" notebook? Like you said, the GP106 parts will probably show up there.

        In a "gamer" notebook that you could use as impromptu body armor in a gunfight? Yeah, they'll probably cram GP104s in there. A cut-down GP104 might even be somewhat reasonable if they do it right.

          • nanoflower
          • 4 years ago

          I think of those things as more like the “luggables” of old. They certainly aren’t something you want to be lugging around every day.

            • NoOne ButMe
            • 4 years ago

            My 13.3″ laptop is a little under 5lbs and has over 100W of dissipation in it, I think (47W CPU, a 960M (50W?), and 10-20W for everything else). Dropping down to a higher-clocked dual-core instead of a quad-core should allow for a cut-down GP104 part like a 1070M, and would probably be a better fit for most games.

            Battery life is a bit suspect, though; might as well just carry around a NUC with a UPS.

      • the
      • 4 years ago

      I'd fathom yes, since Nvidia was able to put a fully enabled GM204 into a mobile package, and I'd expect similar laptops to be able to handle a full-sized GP104. I wouldn't expect a full-sized GP104 to hit mobile until well after the discrete desktop cards are out, though. Those chips are selectively binned for power consumption, and it takes a while to build up an inventory for an OEM launch.

        • Visigoth
        • 4 years ago

        Agreed. Plus this is “Little Pascal”, which they should definitely have a mobile version of, just like Maxwell. “Big Pascal” (GP100) most probably won’t have a mobile version, solely due to the sheer power & cooling requirements, not to mention its monstrous compute performance that isn’t really relevant for most laptop gaming users out there.

    • Leader952
    • 4 years ago

    "This GP104-200 variant is supposedly planned for GeForce GTX 1070"

    This is not true if the following is true:

    Nvidia will launch three cards in June
    http://www.fudzilla.com/news/graphics/40439-nvidia-will-launch-three-cards-in-june

    "The Geforce GTX 980 GTX Ti will be replaced with the GP 104-400 Pascal based chip. It will come as a reference and a custom AIB board. It should ship in early June, probably after Computex 2016.

    The Geforce GTX 980 GTX will be replaced with the GP 104-200 which is obviously a cut-down version of a Pascal based chip. It will come as a reference and a custom AIB board. It should ship in early June.

    The last but not the least is more affordable GP 104-150 Pascal chip. The GP stands for Geforce Pascal and the GTX 970 replacement will not launch as a reference card, it will start as an AIB designed card and should ship in mid-June."

    • chuckula
    • 4 years ago

    GDDR5X has a different physical pinout configuration than GDDR5, so obviously a GDDR5 memory controller isn't set up physically or electrically to work with GDDR5X.

    However, does anyone know if silicon that is set up for GDDR5X can fall back to operating with GDDR5? It seems logical, but I don't know enough about the specs to say that with certainty.

      • NoOne ButMe
      • 4 years ago

      A GDDR5X controller can do GDDR5.

      • MathMan
      • 4 years ago

      There is virtually no difference electrically between GDDR5X and GDDR5. The pin functions are the same, the terminations are the same.

      A GDDR5X chip has a different pinout, but that's mostly extra ground pins. And the DQ pins have an extra mode to do QDR in addition to DDR, of course.

      So other than needing a different PCB, any chip that can drive GDDR5X should be able to do GDDR5. It'd be negligent not to design it that way.
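      A quick illustration of the DDR-vs-QDR point, with illustrative WCK clocks (the exact clocks depend on the speed grade):

```python
# Per-pin data rate as a function of the WCK clock and the number of bits
# transferred per WCK cycle: GDDR5 is DDR on WCK (2 bits/cycle), while
# GDDR5X adds a QDR mode (4 bits/cycle). Clock figures are illustrative.

def per_pin_gbps(wck_ghz: float, bits_per_wck_cycle: int) -> float:
    return wck_ghz * bits_per_wck_cycle

print("GDDR5,  3.5 GHz WCK, DDR:", per_pin_gbps(3.5, 2), "Gbps/pin")   # 7 Gbps
print("GDDR5X, 2.5 GHz WCK, QDR:", per_pin_gbps(2.5, 4), "Gbps/pin")   # 10 Gbps
```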

        • chuckula
        • 4 years ago

        No, there are new PLLs for tracking the phases of the signals in GDDR5X too. It's not merely throwing in some ground pins and magically doubling the bandwidth; there are some subtle but important electrical differences involved.

          • MathMan
          • 4 years ago

          Yes, Captain Obvious, of course there are some design changes. You can’t magically make QDR work with a system that has been designed exclusively for DDR.

          But electrically (you know, impedance, etc.), GDDR5 and GDDR5X are very similar, and a memory controller designed for GDDR5X can easily work with GDDR5 as well. There are no major differences in terms of functional operation, just enhancements.

          So let me reiterate: if you have designed a memory controller for GDDR5X, it’d be downright stupid and negligent to not make it work for GDDR5 as well.

            • Tirk
            • 4 years ago

            I think I’d have to agree with Chuckula on this one.

            Yes, it might be similar, but that doesn't mean it's the same…

            Although I do agree that being compatible with both GDDR5 and GDDR5X is possible, it's definitely not a foregone conclusion. It would be smart to make the memory controller compatible with both, as GDDR5X is likely not going to be available until much later in the year, so the first GPUs coming out this summer will almost certainly have GDDR5. But as we all know, companies like to disappoint when we make assumptions without knowing all the details involved in making it happen.

            • MathMan
            • 4 years ago

            Micron has now stated that production GDDR5X chips will be available around the end of May. The 'much later in the year' claim is stale.

            And given that there are pictures of GP104 boards with GDDR5 and GDDR5X, I think it’s a foregone conclusion in this case.

      • Sahrin
      • 4 years ago

      >so obviously a GDDR5 memory controller isn't set up physically or electrically to work with GDDR5X.

      This isn't obvious at all. Memory controllers on GPUs have been designed to work with different kinds of memory for years.

        • MathMan
        • 4 years ago

        Yes, I have no idea what Chuck is thinking. GDDR5X is just your typical progression of an existing technology, and easy to make backward compatible.

        In fact, the GDDR5X spec even lists a separate mode where a GDDR5X DRAM behaves even more like a GDDR5 chip than usual (at the expense of some performance loss.)

          • nanoflower
          • 4 years ago

          Sounds like he's just saying the obvious thing. You can't throw GDDR5X on a board designed for GDDR5 and expect it to work. The board has to be designed for GDDR5X, but it could then also be designed to allow for GDDR5 support. So, while there is some work to do, it seems the biggest hurdle for boards with GDDR5X is going to be the supply of GDDR5X memory.

            • NoOne ButMe
            • 4 years ago

            It does sound like a respin of a chip could probably go from GDDR5 to supporting GDDR5X, though. Nothing too major.

        • chuckula
        • 4 years ago

        No, that's quite obvious due to the fact that a GDDR5 controller on a piece of silicon you can buy today literally doesn't have the electrical connections required to connect to a GDDR5X chip.
