Nvidia makes the GTX 1070 and GTX 1080 official

Nvidia made the GeForce GTX 1070 and GTX 1080 official tonight at the DreamHack festival in Austin, TX. As expected, these graphics cards use a Pascal GPU fabricated on TSMC's 16-nm FinFET process. The GTX 1080 uses GDDR5X RAM, while the GTX 1070 uses GDDR5. The cards themselves look almost exactly like the cooler shrouds that leaked in recent days.

The GTX 1080 promises 9TFLOPS of single-precision performance, and it comes with 8GB of GDDR5X RAM. That card will be available May 27 for a suggested price of $599, or $699 for the "Founders Edition" card. The Pascal GPU on this card includes 2560 stream processors clocked at a boost speed of 1733MHz.
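
That 9-TFLOPS figure follows from the usual peak-throughput math: stream processors times two FLOPs per clock (one fused multiply-add) times clock speed. A quick back-of-the-envelope check, assuming the quoted boost clock:

    # Peak FP32 throughput estimate: SPs * 2 FLOPs per clock (FMA) * boost clock
    sp_count = 2560            # GTX 1080 stream processors
    boost_clock_ghz = 1.733    # quoted boost clock
    peak_tflops = sp_count * 2 * boost_clock_ghz / 1000
    print(f"~{peak_tflops:.1f} TFLOPS")  # ~8.9 TFLOPS, in line with the 9-TFLOPS claim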

The GTX 1070 promises 6.5TFLOPS of single-precision performance. It comes with 8GB of GDDR5 RAM, and it'll carry a suggested price of $379 when it hits store shelves on June 10. Like the GTX 1080, the GTX 1070 will also come in a "Founders Edition" for $449.

Nvidia CEO Jen-Hsun Huang made some startling performance claims about the company's new graphics cards. Huang said the GTX 1080 will be twice as fast as a single Titan X, all while using a little more power than a single GTX 980. The GTX 1070 is purportedly "faster than a Titan X."

To demonstrate Pascal's overclocking prowess, the company ran a Paragon demo on a GTX 1080 running at a whopping 2116 MHz with an air cooler. The chip only hit 67°C or so under load.

Comments closed
    • HERETIC
    • 3 years ago

    My few cents' worth as a summary:
    Nvidia should own the $370-and-up price points for the rest of 2016,
    though DX12 remains a question mark…

    Polaris, if available in June, should give AMD control of the $200 to $350 segment
    until Nvidia releases its smaller dies later in the year. Then we can have a nice
    price war in that segment, if we're lucky…

    • tipoo
    • 3 years ago

    Just making sure: do all Pascal cards (or at least the two announced so far, including the 1070) have simultaneous multi-projection, or was it just the 1080? They talked about it when they only had the 1080 out of the bag, but it also sounded like a Pascal feature.

    It's really a killer feature for VR, imo. You can get there with brute force, but this will allow even lower-tier cards than the 1070 to provide better VR performance than equivalently priced cards without it.

    Wonder if Polaris will feature something similar.

      • Ninjitsu
      • 3 years ago

      Would be really weird if it wasn’t a Pascal feature…

        • tipoo
        • 3 years ago

        I mean, it surely is, but the question is if they’ll be doing any artificial segmentation for high/low end.

      • the
      • 3 years ago

      Considering that the GTX 1070 is just a binned GTX 1080 with fewer enabled units and lower clocks, it would be surprising if it were disabled. Then again, some companies will nickel-and-dime for every feature… *cough* Intel *cough*.

      The real question is whether this feature will make it into GP106 cards, and whether the big GP100 has it, as these are different chips.

    • Mat3
    • 3 years ago

    So AMD went with Glofo/Samsung and Nvidia TSMC? Did AMD bet on the wrong horse? When was the last time Nvidia beat AMD to a new process? Thirteen years ago?

      • BurntMyBacon
      • 3 years ago

      [quote<]When was the last time Nvidia beat AMD to a new process?[/quote<]

      My thoughts exactly. Also, it seems that for the dual-sourced Apple chips, the TSMC process actually worked marginally better. I believe AMD is using a newer, higher-performance 14nm process, though. Here's hoping it will at least have something to show for being a little late.

      • tipoo
      • 3 years ago

      Ship date isn't the *only* horse, though. This will be interesting, as direct architecture-to-architecture comparisons will partly be edged out by foundry silicon performance. I know TSMC's process did better in the iPhone, but this is reportedly a newer process than that.

      Besides, is it ship date, or just “launch” date?

      • Flapdrol
      • 3 years ago

      Polaris should also launch soon.

    • flip-mode
    • 3 years ago

    Love that picture of Jen-Hsun standing in front of that graph showing the projected growth of his Huang

      • Freon
      • 3 years ago

      Can’t unsee!

    • flip-mode
    • 3 years ago

    It’s great that there’s going to be a new king and all, but we need to see some of those dramatic improvements materialize at the “mainstream” segment. We need a 2016 version of the GTX 460. We need a doubling of performance at the $200 level. That segment has been suffering for YEARS.

      • Krogoth
      • 3 years ago

      There’s a good chance we will get that wish this round.

      • tipoo
      • 3 years ago

      We’ve been on 28nm since 2011 (!!!!), so it darn well better come soon! 14/16nmFF is a great chance for this.

      • NTMBK
      • 3 years ago

      That’s supposedly what Polaris 10 is meant to be. I’m sure Nvidia will have an equivalent part in a couple of months.

      • Airmantharp
      • 3 years ago

      Supposing the GTX 460 was mainstream, and it really wasn't. The part that went into the 460 is the same class that has gone into the 680, 770, 970, and now the 1080. That's the x*04 part, which has become a pure gaming solution that lacks the mix of compute usually found in the x*00 parts.

      Now, the space that was occupied by the x*04 part in the GTX 460's day is occupied by the x*06 parts that made up the 660, 750, 960, and will likely show up in future 1050/1060 parts.

      Also note that that doubling of performance happened at basically every level with the GTX 460's generation… so it may happen this generation with a x*06 part as it's happening with the x*04 part (980 -> 1080), and it may not.

        • Ninjitsu
        • 3 years ago

        Well, arguably the 460 [i<]was[/i<] mainstream, and so were the 680, 770, 970, 980, etc., except they could be priced as high-end parts.

        But it's crazy: GF114 to GM204 is almost a 3x perf increase.
        [url<]http://www.anandtech.com/bench/product/1661?vs=1595[/url<]

        GF104 to GP104 will probably be 4x. Imagine if the price bracket was still the same!

        • Krogoth
        • 3 years ago

        GF104 was a mid-tier GPU from the get-go. The reason Nvidia set the MSRP low by today's standards is the heavy competition from AMD at the time, plus Nvidia needing to recover from the GF100 debacle, a.k.a. the GTX 480.

      • Firestarter
      • 3 years ago

      Depends on the die size. I've seen 333mm[super<]2[/super<] being thrown around as a number, and if that's correct then the GTX 1080 or equivalent cards could eventually end up around the $200 mark, with the high-end 600mm[super<]2[/super<] cards around $500. That's roughly how it is with the current 28nm cards: the 300mm[super<]2[/super<]-class cards (actually 365 with the HD 7970) that were the first on 28nm were introduced at $550 and gradually came down to that $200 level.

      • ish718
      • 3 years ago

      You can get a R9 380 or GTX 960 for $200. They both offer decent performance at 1080p. Don’t expect ultra settings in every title but that’s mainstream for ya.

        • anotherengineer
        • 3 years ago

        The GTX 960 was not a doubling of performance over its predecessor, IIRC.

    • chuckula
    • 3 years ago

    Cash in your frequent flier miles since AMD is apparently launching Polaris in Macau at the end of the month: [url<]http://videocardz.com/59753/amd-polaris-launch-end-of-may[/url<]

    • deinabog
    • 3 years ago

    It looks like Nvidia's engineers were able to squeeze the performance of the Titan X into the GTX 1080 and 1070 at a lower price point, which is an achievement. I won't be swapping out my Titan X cards anytime soon, but I'm glad to see this nonetheless.

    • anotherengineer
    • 3 years ago

    Hmmmmm, looks like we won't have to wait too long for some comparisons.

    [url<]http://www.techpowerup.com/222347/amd-to-launch-first-polaris-graphics-cards-by-late-may[/url<]

      • ImSpartacus
      • 3 years ago

      Not surprising. Polaris 10 needs GDDR5X just like GP104, so they'll release near each other.

      Polaris 11 probably could've released earlier (remember, it was functional back in early January at CES), but it'd be weird to do a big release with only Polaris 11 (though Nvidia did it with GM107…).

    • kamikaziechameleon
    • 3 years ago

    A concise stats breakdown would be appreciated.

    Otherwise this is great news.

    • ish718
    • 3 years ago

    [url<]http://www.gamersnexus.net/news-pc/2427-difference-between-gtx-1080-founders-edition-and-reference[/url<]

    [quote<]Every single instance of "Founder's Edition" can be replaced with the word "Reference," using previous-gen nomenclature. There is not one difference in its market positioning. They are synonymous. NVidia has replaced its "Reference" name with "Founder's Edition."

    There are not two GTX 1080 models made by nVidia. Only the "Founder's Edition" exists; there is not a cheaper card made by nVidia than the $700 Founder's Edition, which ships first. Just to be clear: nVidia is making one official GTX 1080 and one official GTX 1070 model.

    The "Founder's Edition" is not specially binned. The "Founder's Edition" is not pre-overclocked. The "Founder's Edition" uses the new industrial design and cooler from nVidia. Historically, this is what we would call the "reference cooler." The cooler is more-or-less identical to the previous reference models. It's got vapor chamber cooling, a VRM blower fan, and a large alloy heatsink under the shroud. There is a backplate on the GTX 1080 Founder's Edition.

    This card is not "limited edition," despite its name that would indicate as much, and will run production through the life of the GTX 1080 product line.[/quote<]

      • puppetworx
      • 3 years ago

      Interesting. The rationale (which sounds like it came from Nvidia themselves) is that Nvidia wants to get out of the way of GPU manufacturers. A couple of other reasons also come to mind: 1) Nvidia wants to set a price anchor in consumers' minds to make the non-reference designs look like a better deal, and 2) non-reference cards may not feature the same quality of switching power supply, which Nvidia made a major fuss of and focus during the launch (or the vapour chamber cooler, for that matter).

      Edit: readability

      • yogibbear
      • 3 years ago

      WTF, Nvidia. Why use so much obfuscation? Like, I want to get my fanboy boner on, but all this confusing lack of information is silly. I mean, it'll all be cleared up on May 27th, but why does that need to be so?

      • ImSpartacus
      • 3 years ago

      Wow, so “Founder’s Edition” is just the reference version? Man, that’s harsh. Good link though.

        • ish718
        • 3 years ago

        A reference version with vapor cooling, fancy.

          • NTMBK
          • 3 years ago

          Hah, you can get vapor chamber coolers on HD 6450s. There’s nothing particularly fancy about them.

    • muxr
    • 3 years ago

    $700 MSRP (at release), for 25% better performance than 980ti. Ouch.

    • Legend
    • 3 years ago

    Interesting that nVidia decided to use 1080 in the marketing scheme as the part matches up with 4K panels and 3D headsets. You spin me right round baby right round…

      • Visigoth
      • 3 years ago

      Any moron can see that the next level up from a GeForce 980 would be a GeForce 1080… :-/

    • Fonbu
    • 3 years ago

    This card seems very much VR-focused. Getting on that bandwagon is an interesting bet for Nvidia that may or may not pay off in a huge way.

    Everyone will be comparing it to the 980 Ti and to regular 980s in SLI in reviews, which will probably arrive really soon.

    Some websites seem to have received their review samples already, with photos!

    The card itself could include a 2x 6-pin to 8-pin adapter so most people can fit it into their systems without buying a new power supply, given the ~500 W PSU requirement and 180 W TDP.

    In conclusion, it is exciting to see the new process node… a lot more cards to come, and Nvidia is first to officially announce…

    • chuckula
    • 3 years ago

    I got a lot of thumbs down when I made fun of people buying Fury X’s when Pascal was just around the corner.

    Enjoy your investment.

      • PrincipalSkinner
      • 3 years ago

      They got furious and their blood pressure went to 200 kilopascals.

    • f0d
    • 3 years ago

    Edit: I got suckered into a fake benchmark.
    Laugh away, people 🙂

    In case anyone wants to see it, here's the original link:
    [url<]https://youtu.be/TGmDdork6oA[/url<]

    • anotherengineer
    • 3 years ago

    Hmmmmm.
    [url<]http://www.geforce.com/hardware/10series/geforce-gtx-1080[/url<]

    So it shows the 1080 as about 1.75x faster than the 980 in Witcher 3. However, core clocks are about 1.43x faster, and the memory clocks are about 1.43x faster as well. So some definite gains, but I would be curious to see both at the exact same clocks to isolate the difference from the architectural changes.

      • ImSpartacus
      • 3 years ago

      It has 1.25x the compute resources (2560 SPs vs 2048 SPs), so when you look at the product of 1.25x & 1.43x, you have 1.78x, which is in the ballpark of the claimed 1.75x.

      I doubt there were significant architectural changes. This is a shrink. Major architectural changes come later.
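
      (For reference, that scaling estimate worked out in a couple of lines, assuming the GTX 980's reference boost clock of 1216 MHz:)

          # Naive scaling estimate: shader-count ratio * clock ratio
          gtx980_sps, gtx980_boost = 2048, 1216     # MHz (reference boost)
          gtx1080_sps, gtx1080_boost = 2560, 1733   # MHz (quoted boost)
          scaling = (gtx1080_sps / gtx980_sps) * (gtx1080_boost / gtx980_boost)
          print(f"~{scaling:.2f}x")  # ~1.78x, close to the claimed 1.75x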

        • the
        • 3 years ago

        There are a handful of architectural changes.

        They doubled half precision performance which is something we’ll need to be watching. nVidia could be substituting lower precision calculations that result in image degradation. With HDR, the differences will be harder to tell.

        Simultaneous multi view port is a nice new efficiency feature for VR and improves surround vision.

        While not an architectural feature, DP 1.4, HDMI 2.0b, and HDCP 2.2 support are all welcome additions, as they enable higher-resolution displays. 5K can now be done via a single cable, and a GTX 1080 can drive three of them simultaneously.
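
        (A rough check on the single-cable 5K claim, assuming DP 1.3/1.4's HBR3 link rate of 8.1 Gbps per lane over four lanes with 8b/10b encoding, and ignoring blanking overhead:)

            # Does 5K @ 60 Hz at 24 bits per pixel fit in one HBR3 link?
            effective_gbps = 4 * 8.1 * 8 / 10           # ~25.9 Gbps after 8b/10b coding
            payload_gbps = 5120 * 2880 * 60 * 24 / 1e9  # ~21.2 Gbps of pixel data
            print(f"{payload_gbps:.1f} of {effective_gbps:.1f} Gbps")  # fits, with headroom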

      • tipoo
      • 3 years ago

      It would be interesting, but largely academic as we just don’t have the 980 on 16nm FF to compare.

    • ClickClick5
    • 3 years ago

    I’ll hang onto my 980 (non TI). Once games start to either:
    a) chug endlessly
    b) no longer run

    Then I’ll upgrade.

    But nice kudos on the performance!

      • cygnus1
      • 3 years ago

      so let me paraphrase:

      “my high end GPU that was part of the latest generation until this announcement still works really well, so I’ll keep it”

      thanks for the update Captain Obvious.

    • Srsly_Bro
    • 3 years ago

    I'll buy a used 980 Ti for $200 once the $379 GTX 1070, much faster and with a warranty included, makes the 9xx series obsolete.

    • Rageypoo
    • 3 years ago

    I got a lot of thumbs down when I made fun of people buying 980’s when pascal was just around the corner.

    Enjoy your investment.

      • geekl33tgamer
      • 3 years ago

      The 980, even if bought today, isn't suddenly going to be obsolete, is it? It's still going to be just fine at up to 1440p and high detail for several more years.

        • ronch
        • 3 years ago

        But why not just wait a bit longer and get a product that's either (1) gonna suck less power while giving similar performance, or (2) gonna give more performance at the same TDP, for the years that one will own it?

          • Ifalna
          • 3 years ago

          You do realize that you would wait indefinitely with a mindset like that?
          There's always something cool just around the corner.

          I'll probably go for a 1070 as a replacement for my 7870, but I will wait for AMD's counterparts so they can duke it out in a price war.

            • sweatshopking
            • 3 years ago

            The long wait to go below 28nm was unusual. Skipping the 900s made sense unless you had to buy a GPU.

            • ronch
            • 3 years ago

            Except 16/14nm is such a long time coming. You wanna stay with 28nm for a few years more? Be my guest. 🙂

        • bfar
        • 3 years ago

        Yes, but in light of what’s arriving over the summer you’d be bonkers to have bought one recently

        • derFunkenstein
        • 3 years ago

        that $500 is better spent on a 1070 and a night out on the town.

        • muxr
        • 3 years ago

        That depends. I am backing up my 980 drivers, since Nvidia has shown that they will do whatever it takes to make the new cards benchmark better even if it means gimping old cards.

        Which is verifiable if you compare the benchmarks of 7xx cards at launch vs. benchmarks now.

      • anotherengineer
      • 3 years ago

      A gaming card isn't an investment; typically, after 4 years its value is about 10% of its original worth.

        • bfar
        • 3 years ago

        It's all about utility. If you game a lot, expensive cards aren't bad value. If you're like me, with small children and only a precious few hours a week for gaming, $700 GPUs have become a total luxury. That's not to say I won't indulge occasionally, but I know all too well how quickly fast GPUs become old hat.

          • anotherengineer
          • 3 years ago

          Yep, I have 2 little ones. My total gaming hours for the past 2 weeks according to steam is 2.4 hrs. And half of that was probably AFK lol.

          Now if you game a lot but have a low-res monitor and/or older games, an expensive card can still be bad value. It's always best to check the reviews and get whatever will serve you best/maximize your dollar value.

            • bfar
            • 3 years ago

            There's some truth to that. On the other hand, a flashy new GPU is a gateway to a cool new hi-res monitor, a VR headset, and a bunch of cool new games 🙂 It's a bit cart-before-the-horse, I know 😉

      • chuckula
      • 3 years ago

      GTX-980: released September 2014. What is so bad about it being replaced 20 months later? Where are all the calls for people who dropped money on a much more recent Fury X to regret their decision?

        • ImSpartacus
        • 3 years ago

        Yeah, a two-year full replacement is the pretty typical cadence. The 980 was great (if expensive) then and it's still formidable now.

        I mean, the 680/770 had the same treatment. It was replaced like 24-ish months after release, right?

        AMD is a little more sporadic, but stuff like Pitcairn and Hawaii lived for a while as well.

          • JustAnEngineer
          • 3 years ago

          In my evaluation, GeForce GTX980 never represented a good value. It was a [b<]lot[/b<] more expensive than Radeon R9-290X/390X, GeForce GTX970 and Radeon R9-290/390, without offering a lot more performance than those cards. If you were going to spend $550+ on a graphics card, you might as well have spent another hundred and gotten the [b<]much[/b<] more powerful GeForce GTX980Ti.

            • ImSpartacus
            • 3 years ago

            Yeah, the 680 wasn't a great value at $500 either, and honestly, the 1080 probably won't be a fantastic value at $600 if the $380 1070 turns out to be like its 970 & 670 forebears.

            I only meant to mention that we've seen a pretty consistent pace of G*#04 parts being replaced roughly every two years. 2012's 680/770 were replaced by 2014's 980, which was replaced by 2016's 1080.

            The actual value of the top-tier G*#04 doesn’t matter in this context, just the generational cadence. The original comment in question could’ve just as easily been “I got a lot of thumbs down when I made fun of people buying 670/760s when the 970 was around the corner.” or “I got a lot of thumbs down when I made fun of people buying 970s when the 1070 was around the corner.” They all sound equally silly to me.

            • BurntMyBacon
            • 3 years ago

            YOU MAD BRAH?

            [quote<]... we've seen a pretty consistent pace of G*#04 parts ...[/quote<]
            [quote<]... top-tier G*#04 doesn't matter ...[/quote<]

            No need to throw profanities around. What's that? ... You say you weren't throwing profanities? ... Those are generic mid-to-high-end nVidia silicon chip references? ... Is that what you kids are calling it these days?

        • Tirk
        • 3 years ago

        [quote="chuckula"<]$1000 for a halo product that had a run of about 14 months from the TR review on St. Patrick's Day of last year doesn't sound too bad for that type of product.[/quote<]

        You just wrote that in this exact comments section…

        And now you're saying a $650 halo product is not worth a 12-month run? Did AMD take away your favorite toy or something?

        People are commenting on what NVIDIA CHOSE to display on their slides, NOT WHAT YOU IMAGINED THEY PUT ON THEIR SLIDES.

          • chuckula
          • 3 years ago

          What 12-month run are we talking about? The Fury X debuted in July of last year with VERY limited availability for the first few months. Thank you, HBM. An overly optimistic 10 months "at the top" (not really, due to Nvidia having better parts out earlier) isn't exactly something I'd go around bragging about while accusing Nvidia of being evil because they had the gall to actually improve their products.

          Oh, and the only reason that the Fury X (interesting how “X” is in the name.. isn’t it) wasn’t a $1000 card just like the Titan X is that Nvidia had already launched the GTX-980Ti before the Fury-X ever showed up on the market. Thank you for real competition Nvidia. So optimistically we are looking at 10 months for AMD’s top dog product. Not due to AMD coming out with something better, BTW, but because AMD’s competitors upstaged it before it even launched and now it’s being made irrelevant.

          So while you might go on and on about how some “Vega” part at some undetermined date in 2017 will beat the GTX-1080 (and it damn well better given how expensive it will be to produce and since Nvidia is already producing big-silicon GPUs right now), it’s not like AMD had already launched a better product prior to the GTX-1080 debuting, because that sure didn’t happen.

            • Tirk
            • 3 years ago

            You talk of supply as if you already know about the supply of the 1080 before it's been released. Let's leave that debate until we see what supply actually materializes when it's released. BTW, the Fury X release was at the end of June, just as the 1080's is at the end of May, a 1-month difference, so both of us are wrong if you wish to pick at straws. And are you saying more people should have bought the Fury X if they had more supply?

            You are mad at AMD selling the Fury X for $650 instead of $1000 and thank Nvidia for doing that? Nvidia is the one that still kept selling its $1000 card when it had a $650 card clipping its wings 3 months later. By your own metric the Titan X was made irrelevant 3 months after its release BY ITS OWN COMPANY. Should you thank AMD for forcing Nvidia to release the 980ti early to show up Nvidia’s $1000 card 3 months later?

            I never mentioned Vega so I have no clue why you bring that up.

            Of course I want Nvidia to bring competition, do you not want AMD to bring competition? Does competition only matter when it favors your bias?

            • chuckula
            • 3 years ago

            I’ll be sure to make some forum posts when I get my 1080 at the end of the month and you can make long-winded fanboy diatribes about how it doesn’t count.

            You can also re-post the same crap we heard about how awesome the Fury-X was because it was not in stock anywhere while the GTX-980Ti was in plenty of stock — so that must prove that nobody wanted Nvidia's products. Until the quarterly reports were issued, at least.

            • Tirk
            • 3 years ago

            You putting words in other people's mouths is your diatribe, not theirs. I never said that you buying a 1080 as soon as it comes out doesn't count. In fact, go ahead and do that if it's what you want, but please leave your made-up fanboy conversation in your head. So someone buying a Fury X when it was released doesn't count? Make sure you're applying the same logic to both sides.

            Maybe you're getting low on your ketracel-white intake…

      • ImSpartacus
      • 3 years ago

      Maybe you got thumbed down because you were making fun of people? Sometimes the reasoning doesn’t really matter.

        • Tirk
        • 3 years ago

        Agreed. However, isn't it telling of the comment section's bias when Chuckula makes the same post and gets positive thumbs, and no one admonishes him like you did Rageypoo?

        I realize Chuckula's comment was probably mocking Rageypoo, but shouldn't it then be considered in the same category of making fun of people? Or did the reasoning matter for Chuckula because of comment-section bias?

          • ImSpartacus
          • 3 years ago

          Yes, it’s bias.

          • chuckula
          • 3 years ago

          That’s because I’m intelligently satirizing a stupid post and people who aren’t long established AMD fanboys can appreciate it.

          Judging by the comment spamming that you’ve been committing here along with the cliched “OMG WOODSCREWS” drivel that you plagiarized from semiaccurate, I’m going to say that your level of panic is as high as your level of butthurt right now.

            • Tirk
            • 3 years ago

            Comment spamming? This coming from you? Did you even read my comment? I recognized you were satirizing; please do not project your insecurities onto me. Is your ego so blown out of proportion that you can't even read correctly?

            I apologize for any of your feelings I may have hurt, but I will not excuse you bashing people left and right, spamming the comment section, and making it rude and distasteful at times. I don't know how many times I've seen you turn a respectful conversation into a rude diatribe. Your insightful comments pale next to what you've often turned the conversation into. Decency is a lost art, and someday I hope you can regain it.

    • elmopuddy
    • 3 years ago

    Just sold my GTX 960 2GB; now I'll game on my bootcamped iMac while I wait for the 1070 to arrive…

    Hopefully I don't get raped by the horrible US-CDN exchange rate!

    • NeelyCam
    • 3 years ago

    I [i<]knew[/i<] there was a reason why I was waiting for 14/16nm GPUs...

    • smilingcrow
    • 3 years ago

    Seems like a solid new architecture and the gains are what you would hope for considering the delay shifting from 28nm.
    I quite like his presentation style as at least he’s not some marketing drone.
    At least we know that The Fonz is Team Green judging by his choice of jacket.
    Maybe for his next presentation he will get on stage, hang out his thumbs Fonzie-style, say 'Buy nVidia, Hey!' and walk off stage.
    Now that would be cool.

    • the
    • 3 years ago

    The big surprise for me was DP1.4 support in the GPUs considering the spec was finalized only two months ago. Then again, DP 1.3 -> DP 1.4 was a minor update bringing in features that didn’t make the cut in time for DP 1.3 (Display Stream Compression).

    The simultaneous multi-viewport technology demo'd is neat, but it does seem [i<]familiar[/i<]. I want to say that 3Dlabs or Matrox were showing something similar off many years ago. My fuzzy memory says their implementations saved on raw calculations at the cost of more memory consumption. Or, in the case of my personal memory, less memory consumption. Regardless, it is going to be a key factor in the efficiency gains.

    And things get interesting when you combine those two things. First off is the obvious: how many external displays can this thing drive? I don't think we'll see more than four DP 1.4 outputs on the card, but each of those ports has ample bandwidth to drive several 1440p @ 60 Hz displays. A DP 1.3 MST hub would be able to divide up the bandwidth for four 1440p @ 60 Hz displays. (The resolution could go even higher with the display stream compression provided by DP 1.4, but I'm pessimistic about this approach.) That'd give a grand total of nearly 236 megapixels of screen real estate, which might be enough to display my ego. Maybe.

    The second thing these could do is perspective correction for curved displays. There is a slight distortion at the edges of the recent ultra-wide displays (3440x1440). The screen could be divided up into several viewports for correction and then output to a curved display. I'll make a prediction on this and say it'll appear in a future driver update.

    Lastly, simultaneous multi-viewport was only shown off using monitors and VR whose displays sit next to each other horizontally. This technology should be able to adapt in both the horizontal and vertical dimensions. For gamers, the only application I can see for the vertical adjustment is the flight simulator niche. However, this also plays a role in another piece of nVidia's slide deck: holographic displays. The idea of holographic displays hasn't made much noise beyond the tech demo scene in years. Either nVidia is jumping ahead of themselves, or they know of something revolutionary in the pipeline, perhaps? Outside of gaming, I do see this as being a miracle for astronomers who have a dome projection setup.

    One thing [b<]not[/b<] mentioned is nvLink support for SLI. GP100 has it, but not the discrete GP104 cards? I was hoping to hear of improved SLI scaling with this, but there was nothing in the presentation.

      • chuckula
      • 3 years ago

      They did mention a new SLI bridge. I do not know if that is some sort of cut-down version of nvlink or something else though.

        • smilingcrow
        • 3 years ago

        I think it doubles the bandwidth but if it had been another ‘ground breaking’ tech I’m sure we would have been ‘indoctrinated’ into that as well. 🙂

      • Ninjitsu
      • 3 years ago

      [quote<] Finally, not touched upon in NVIDIA’s initial presentation is that GP104 will use a newer version of NVIDIA’s SLI technology. Dubbed SLI HB (High Bandwidth), SLI HB will double the amount of bandwidth available between paired cards. At the same time this change will require a new SLI HB bridge, presumably to be compliant with the tighter signal integrity requirements a higher bandwidth link would require. NVIDIA is producing the new rigid bridge with their usual flair for industrial design, and so far I have not heard anything about the older non-rigid bridge making a return. In which case buyers will need bridges specifically sized for the slot arrangement of their board. [/quote<] - AT

    • anotherengineer
    • 3 years ago

    GTX1080 – May 27th & GTX1070 – June 10th

    Hmmmm wonder when Polaris will be available?

    • djayjp
    • 3 years ago

    Why is anandtech reporting 1080 is only 20-25% faster than titan x/980ti? If that’s the case then wake me up when the real one launches….

    *edit: in fact, this is precisely what Nvidia’s own chart shows above. So why is TR stating a make believe 2x increase in performance? If it’s *efficiency*, sure I’ll believe that.

      • Voldenuit
      • 3 years ago

      [quote<]*edit: in fact, this is precisely what Nvidia's own chart shows above. So why is TR stating a make believe 2x increase in performance? If it's *efficiency*, sure I'll believe that.[/quote<] It's 2X in VR, because the 1080 can do stereo projection in a single pass instead of two.

        • djayjp
        • 3 years ago

        Although anandtech also called into question the exact differences between this version and the reprojection found in the prior generation. But maybe that’s the difference. I’m just quite disappointed in the performance increase (20-25% over titan x/980ti normally, besides the above niche case) despite 2 full process node shrinks…. :/

          • the
          • 3 years ago

          One and a half. 28 nm -> 20 nm is a full node shrink, but 20 nm -> 16 nm FinFET is a half node. The 16 nm figure TSMC uses is mainly for marketing.
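
          (A quick way to see the distinction: on an idealized shrink, die area scales with the square of the linear feature size. A minimal sketch, treating the node names as if they were real dimensions, which they increasingly are not:)

              # Idealized area scaling between nodes: (new / old) squared
              for old, new in [(28, 20), (20, 16)]:
                  print(f"{old}nm -> {new}nm: ~{(new / old) ** 2:.2f}x area (ideal)")
              # 28->20 is ~0.51x; 20->16 looks like ~0.64x on paper, but 16FF largely
              # reuses 20nm-class metal pitches, so the real density gain is smaller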

            • djayjp
            • 3 years ago

            Ok right, I guess Samsung’s 14nm would be a full shrink from 20 then. I thought they’d dual source, but I guess not. Anyway, I guess production capability is all just speculation now, without hard figures as to its viability/yield.

      • ImSpartacus
      • 3 years ago

      They use some VR-specific tricks to get the ~20% figure up to 2x. For non-VR loads, it’s just the 20%.

      CEO Math at its finest.

        • Redocbew
        • 3 years ago

        Well yeah, but after seeing those slides did anyone really expect a 2x performance increase across the board just by going from one generation to the next?

        That being said, an increase of 20% within roughly the same thermal envelope as the 970 and 980 sounds pretty good to me.

    • f0d
    • 3 years ago

    What I think is a big clue to how efficient these are is the fact that the 1080 only uses a single 8-pin PCIe power connector.
    I'm guessing, but could the 1070 use a single 6-pin connector?
    When was the last time a high-end video card used a single power connector?
    Pretty amazing.

    edit: picture [url<]http://s6.postimg.org/nxvq8fi8x/8pin1080.jpg[/url<]

      • NTMBK
      • 3 years ago

      When the GTX 680 launched… Which makes sense, as that was also a midrange part pretending to be high end for a while.

      EDIT: Well I got that wrong! See below.

        • f0d
        • 3 years ago

        680 was 2X 6pin
        [url<]https://techreport.com/review/22653/nvidia-geforce-gtx-680-graphics-processor-reviewed[/url<]

          • NTMBK
          • 3 years ago

          Hah, good spot. I looked at a picture and saw one connector, didn’t see the second one in that weird vertical stack. I take it back 🙂

          Though doesn’t one 8 pin connector equal two 6 pin connectors in terms of power provided?

            • f0d
            • 3 years ago

                [quote<]Though doesn't one 8 pin connector equal two 6 pin connectors in terms of power provided?[/quote<]

                Theoretically yes, but I suspect it isn't quite that simple, as there are so many cards out there with dual 6-pin power. If they could have gone with a single 8-pin, I think it would have happened on previous cards already.
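
                (For context, the PCIe spec budgets roughly 75 W for the slot, 75 W for a 6-pin connector, and 150 W for an 8-pin, so on paper the two layouts top out the same. A minimal budget check against the 1080's stated 180 W TDP:)

                    # PCIe power budget check (spec limits: slot 75 W, 6-pin 75 W, 8-pin 150 W)
                    SLOT, SIX_PIN, EIGHT_PIN = 75, 75, 150
                    tdp = 180  # GTX 1080
                    print("slot + 8-pin:", SLOT + EIGHT_PIN, "W")       # 225 W
                    print("slot + 2x 6-pin:", SLOT + 2 * SIX_PIN, "W")  # also 225 W on paper
                    print("headroom over TDP:", SLOT + EIGHT_PIN - tdp, "W")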

    • Captain Ned
    • 3 years ago

    Doom. All the gibs. I need one.

      • anotherengineer
      • 3 years ago

      [url<]http://www.silvergames.com/doom[/url<]

    • Tirk
    • 3 years ago

    Looking at that graph makes me feel sorry for anyone who bought a Titan X; what a horrible price/performance run for that product.

    What do the numbers on the left of the graph represent? They seem so arbitrary. Did they explain at the event?

      • chuckula
      • 3 years ago

      $1000 for a halo product that had a run of about 14 months from the TR review on St. Patrick’s Day of last year doesn’t sound too bad for that type of product.

      I mean, it’s better than $1,500 for the Radeon Pro Duo that had a reign of about 3 weeks.

        • Tirk
        • 3 years ago

        Except I was commenting on the graph, and the Titan X is on there; I didn't see the Pro Duo on there, hmmmm.

        Let's also not forget that the 980 Ti came out just 3 months after the Titan X, largely performing the same as or better than the Titan X in games, and that's the same company making its own product obsolete. I'd be happy for anyone with the 980 Ti, but the Titan X was horrible. Of course, you could then argue that the Titan X was not for gamers, as some have, but then the same could be argued for the Pro Duo, which still has higher TFLOPS than these Pascals. The Pro Duo, after all, can install FirePro drivers……

        • JustAnEngineer
        • 3 years ago

        GeForce GTX Titan X had a [b<]two-month[/b<] run with limited availability before GeForce GTX980Ti appeared and obliterated it on a value basis.

        • BurntMyBacon
        • 3 years ago

        I wouldn’t feel sorry for anyone who purchased a Titan X.

        [quote<]$1000 for a halo product[/quote<]

        I suppose this is where my view differs from others. I don't consider the Titan line "halo" products. I consider them more appropriately entry-level products for a different category. Sure, they can function as a halo product for gaming at some points in time, but they waste a lot of silicon area on functions not necessary for gaming. In other markets, however, not only are those functions useful (less wasted area), but the price is much more palatable as well. In those markets the Titan doesn't yet have a newer iteration. Anyone who purchased the cards solely for gaming either has enough disposable income or enough disregard for value that the purchase probably still doesn't bother them. So, again, I wouldn't feel sorry for anyone who purchased a Titan X.

        [b<][i<]Edit:[/i<][/b<] I just remembered that they squashed much of the double-precision floating-point capability in the Titan X, making it less viable in some alternate markets. That makes the Titan X's positioning a little awkward in my opinion. My points about purchasing the card solely for gaming still stand, though.

      • yogibbear
      • 3 years ago

      The Titan X is not a gaming card.

      Honestly, Nvidia should have just shown a 980 Ti on these graphs instead, which would make the 1080 look worse (because the equivalent Titan X point would be shifted left and remain at the same Y-axis point) but still awesome and totally worth getting.

      All the reviews are going to compare the damn things to 980Ti’s… so Nvidia are basically just setting themselves up for “disappointed” consumers that can’t read.

        • ImSpartacus
        • 3 years ago

        Nvidia wants to show their new stuff in the best possible light. Sometimes that includes some misleading “CEO Math” and other deceptive tricks.

        But it’s not just Nvidia (though they are consistently caught in this shit ). All companies do this.

        • Tirk
        • 3 years ago

        Yes, I agree. Also of note: the 980 Ti has more memory bandwidth than the 1080, so it'll be interesting to see if that makes a difference in some games.

        The 1080 has 320 GB/s and the 980 Ti has 336 GB/s of bandwidth, if anyone was wondering what the numbers were.

        • JustAnEngineer
        • 3 years ago

        GeForce GTX Titan X most certainly [b<]is[/b<] a gaming card. Previous generations of Titans were aimed at computing, but to get maximum gaming performance out of GM200 within the transistor budget available at 28nm, NVidia had to cut out much of the double-precision computing capabilities that were in the previous generation of GPU.

        Titan X (GM200): 6.1 TFlops single precision, 0.19 TFlops double precision
        Titan Black (GK110): 5.1 TFlops single precision, [b<]1.7 TFlops double precision[/b<]

          • BurntMyBacon
          • 3 years ago

          [quote<]Titan X (GM200): 6.1 TFlops single precision, 0.19 TFlops double precision
          Titan Black (GK110): 5.1 TFlops single precision, 1.7 TFlops double precision[/quote<]

          I'll add the high-end gaming cards of these architectures for comparison:

          980Ti (GM200): 5.63 TFlops single precision, 0.176 TFlops double precision
          780Ti (GK110): 5.04 TFlops single precision, [b<]0.21 TFlops double precision[/b<]

          I do believe I see a (now 2-generation-old) 780Ti (gaming card) throwing more double-precision TFlops than the latest in-series Titan X (compute card).

          [Yoda Voice] Awkward this is! [/Yoda Voice]
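
          (The FP64:FP32 ratios behind those figures, computed straight from the numbers quoted above:)

              # FP64-to-FP32 ratios from the TFLOPS figures quoted above
              cards = {
                  "Titan X (GM200)":     (6.1, 0.19),
                  "Titan Black (GK110)": (5.1, 1.7),
                  "980 Ti (GM200)":      (5.63, 0.176),
                  "780 Ti (GK110)":      (5.04, 0.21),
              }
              for name, (fp32, fp64) in cards.items():
                  print(f"{name}: FP64 is ~1/{round(fp32 / fp64)} of FP32")
              # GM200 parts land near 1/32; Titan Black is 1/3 and the 780 Ti ~1/24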

      • ImSpartacus
      • 3 years ago

      The vertical axis was just relative gaming performance. I think it was in units of 960s. e.g. a 960 was exactly 1 and then they increase from there.

        • Tirk
        • 3 years ago

        It doesn't show the 960 at 1, or whoever made their graph is really, really bad, hehe.

        People jump on AMD for actual game benchmarks with different settings that help their cards, but at least AMD lists those settings as a note so anyone can read what the changes are. But nothing for an arbitrary performance graph from Nvidia that has no reference point and excludes the 980 Ti but includes the Titan X?

          • ImSpartacus
          • 3 years ago

          Honestly, the units don’t really matter. All that matters is that it’s scaled properly to measure relative gaming performance.

            • Tirk
            • 3 years ago

            But isn't that the question: is it scaled properly? What information are we given to know that it has been?

            • ImSpartacus
            • 3 years ago

            Just eyeballing it, it looks about right. But you could count pixels and see if it lines up with SP performance or something like that.

            Nvidia will do shady things, but they are more clever about it (e.g. the 1080’s VR-only perf & associated drama). They aren’t going to doctor the scaling of the graph.

            • Tirk
            • 3 years ago

            Why wouldn't they? They used wood screws in a mock-up of what they called an actual product. Why wouldn't they manipulate the scaling of the graph, and are some shady things more condoned than others?

            • lilbuddhaman
            • 3 years ago

            I don't think you understand the term "relative game performance". It's a perfectly legitimate measure of performance, relative to games.

            • Tirk
            • 3 years ago

            Maybe you don't get that "relative" is only valid when you are given additional information about what it is relative to.

            • chuckula
            • 3 years ago

            AMD posted completely farcical benchmarks for its own products on the day they launched. Remember how the Fury X can destroy Nvidia’s best products as long as you don’t use any anisotropic filtering?

            Lisa Su sat on stage and nodded like an idiot while her own employee talked about the “overclockers dream” Fury-X. That’s not even taking into account the “official” launch schedule of the product that would eventually turn into the Radeon Pro Duo that was “officially” scheduled to launch last fall.

            People whine about Jen-Hsun’s stage shows, but interestingly enough after the products actually launch Nvidia doesn’t seem to have a problem 1. Sending them to sites like TR for actual reviews, and 2. Selling them on the open market.

            • Tirk
            • 3 years ago

            AMD did note what settings they used; they played to their cards' strengths, but they hid nothing.

            If your whole point is that both companies do shady things, I'll go with that assertion. Sounds like you're the one acting like a fanboy. Cry wolf more?

            It's odd, though, that you have to bring up AMD time and time again on an Nvidia article to defend Nvidia. If you don't like how Nvidia is displaying its products, why bring up AMD?

            • Voldenuit
            • 3 years ago

            [quote<]They used wood screws in a mock-up of what they called an actual product.[/quote<]

            It's in the name: "mock-up". Next you'll be telling me Boeing planes don't actually fly, because they always unveil a new plane with a non-working model.

            • Tirk
            • 3 years ago

            Nope, please re-read what you quoted. They claimed the mock-up was working silicon until they were caught and had to admit it was a mock-up. That is very different from stating offhand that something is a mock-up.

            But if you don't want additional information on Nvidia's products that gives more detail on what they are measuring, that's on you.

            • sweatshopking
            • 3 years ago

            Technically, Tirk is right; they did screw up with that. Still doesn't change the fact that AMD sucks for CPUs and GPUs, and has for years. I own a 290, but would gladly trade it for a 970.

            • Tirk
            • 3 years ago

            Thanks for the support. If you've kept a 290 this long, it must have served you fairly well.

            I don't understand all the vitriol over asking for more information; it's as if I punched someone's baby for asking Nvidia to include more info on their graph. It's not as if it was cluttered or anything. And it's these same people, who blast others for not waiting for a TechReport review, who are lapping up these slides like their mother's milk.

            • sweatshopking
            • 3 years ago

            I'll keep it for years longer. Your point about graphs is valid; they should provide reasonable amounts of information. That being said, this chip is a large improvement and will almost certainly be a better buy long-term than Polaris. Still, if I can find one cheap enough, like my 290, I'd buy from AMD again.

            • DoomGuy64
            • 3 years ago

            Serious? Nvidia might have somewhat better drivers, but the 290 is far more future-proof, with more relevant features. FreeSync? Async? Do those features mean anything? I have a 390 + MG279, and I certainly wouldn't trade it for a 970. Not even a 980, because I'd lose FreeSync, 4GB of RAM, and any future DX12 performance. Maxwell is dead in the water, especially now that Pascal is out.

            Depending on which model you have, the 290s also OC pretty easily to 390 clocks. It's not a bad card. Just tweak it a bit.

            • sweatshopking
            • 3 years ago

            FreeSync isn't useful to me. Monitors are far more than I'm willing to pay, and will likely be for the next decade. Async so far doesn't mean a thing. I have a 290, so it's 4GB, and the 512MB doesn't mean jack. Sure, it'll run at 390 clocks, and mine is a solid performer. It's still hotter, louder, and less reliable driver-wise than Nvidia's.

            • JustAnEngineer
            • 3 years ago

            Newegg has the Nixeus NX-VUE24B 1920×1080 24″ 144 Hz TN LCD display with VESA standard adaptive sync (“FreeSync”) for US$235, delivered. Similar monitors with NVidia’s expensive proprietary G-Sync are $400+. Similar monitors without any variable refresh technology are $200.

            VESA standard adaptive sync (“FreeSync”) is now cheap enough that even [i<]Sweatshopking[/i<] can afford it. It's up to you to decide what you should do with the $150 to $200 you save by not getting NVidia's expensive proprietary G-Sync.

            • sweatshopking
            • 3 years ago

            I'm not planning on getting [i<]any[/i<] for a while. $235 USD is going to be damn near $400 CAD (a grand total of $445.03 CAD on newegg.ca for that monitor, and it's currently on sale) after the exchange rate and then price hikes because we're Canadian. There is also the fact that $400 for a TN panel isn't something I'm interested in. I'd rather stick with my 75Hz IPS (which I got for $200 CAD) than downgrade to a TN panel. I'd purchase when I can get an IPS, 1080p minimum, 144Hz for $200. Not until then, and we're not even CLOSE to those prices. So no, not cheap enough yet for Sweatshopking.

            Also, since I play strategy games, I'm not sure how useful FreeSync will be in Civ or TW. MAYBE HoN, but I don't play any twitch games these days, which reduces my interest in it. So in the end, I'm not saving anything.

            Do I think G-Sync is a dumb standard? Yes. Will it have anything to do with me for years? No. Like PhysX, CUDA, and GameWorks, it's a largely useless (at this stage) bullet point which I care nothing for, and I wish Nvidia would stop dividing standards over it. I'm happy with my 290, don't get me wrong, but frame times and power consumption clearly favour Nvidia right now. I've also had quite a few driver problems with Windows 10 and AMD, which is likely mirrored on the Nvidia side.

            • derFunkenstein
            • 3 years ago

            Ew, you’re recommending a TN display that costs 100% more than non-FreeSync displays of its caliber.

            • JustAnEngineer
            • 3 years ago

            144 Hz fixed refresh 1080p TN LCD = US$200
            144 Hz VESA standard adaptive sync (FreeSync) 1080p TN LCD = $235
            144 Hz Proprietary NVidia G-Sync 1080p TN LCD = $400+
            These are the least expensive 120+ Hz monitors available, likely targeted at gamers on a tight budget.

            If you’re a twitch gamer looking for minimum persistence, you probably hold your nose and get a TN LCD rather than an IPS LCD or VA LCD display. I recently researched these products for a forum gerbil who wanted lower persistence than his existing IPS LCD provided.

            • sweatshopking
            • 3 years ago

            Yeah, but they're all TN. I'm not saying FreeSync isn't cheaper; it is. It's just still more than I'm willing to spend for a looooong time yet.

            • derFunkenstein
            • 3 years ago

            Alright, so it’s not double the price of 144Hz displays, but it’s still a non-trivial amount more (>15%). Does that FreeSync panel do VRR at 144Hz?

            Meanwhile, the cheapest FreeSync display I'd actually consider (27″ 1440p IPS) is a $580 Asus model. That definitely makes me hug my 60Hz IPS Auria that cost me less than $300.

            And oh yeah, in tiny print:
            FreeSync™ technology supported (35Hz-90Hz)

            • JustAnEngineer
            • 3 years ago

            Review of the Nixeus monitor here:
            [url<]http://www.pcper.com/reviews/Displays/Nixeus-Vue-24-1080P-144Hz-TN-30-144Hz-FreeSync-Monitor-Review[/url<]

            • chuckula
            • 3 years ago

            An interesting conversation, but as a counter-point, I just ordered a G-sync enabled 2560×1440 144Hz Dell monitor (a S2716DG) for $515 on Amazon (Newegg had it for about $10 more and the price jumped a bit since the order).

            Using Newegg as a guide because of its awesome search functions, the absolute cheapest Acer monitor with the same resolution & refresh rate — and no adaptive sync of any kind — was $469.99. So it was about a $45 price premium for the so-called “g-sync tax” and buying from Dell instead of Acer…

            [url<]http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100160979%20600012694%20600255030%20600417886&IsNodeId=1&bop=And&Order=PRICE&PageSize=60[/url<]

      • Ifalna
      • 3 years ago

      Everyone knew that the Titans are horrible P/P value.
      Consumers that have the money to burn in order to buy the best of the best don’t care about P/P as much as the poorer folks.

      Imho all cards above the 400€ mark suffer from the “E-Peen tax” which lowers the P/P value considerably.

      • Nictron
      • 3 years ago

      Can I point out that the graph shows the performance-per-watt difference, not the actual performance difference?

      The proof is in the pudding; TR, I assume we'll see the review soon 🙂

        • Tirk
        • 3 years ago

        Yes, the TR review will most likely fill in the missing gaps. Has Nvidia sent one to TR for review yet? I heard some sites have already gotten one and are waiting for the embargo to lift.

          • rahulahl
          • 3 years ago

          Nope. TR will probably be purchasing their units with the rest of us.

      • Flapdrol
      • 3 years ago

      No way I’m going to feel sorry for anyone with that kind of disposable income.

      • bhappy
      • 3 years ago

      People that can afford to buy Titan Xs don't need you to feel sorry for them. It was always marketed as a halo product, so people who bought them were prepared to pay the premium and weren't buying them for value-for-money reasons. Even after the new Pascal GPUs are released, they will still be good performers for a while yet.

        • Waco
        • 3 years ago

        I have zero motivation to replace mine. Power use does not concern me, 4K gaming performance does.

    • Dudeface
    • 3 years ago

    I find the transistor count disparity between GP104 and GP100 interesting.

    With GM204 -> GM200, the relationship was almost linear.
    Cores – 2048 -> 3072 (50% increase)
    Transistors – 5.2B -> 8B (53.8% increase)

    However, GP104 -> GP100
    Cores – 2560 -> 3840 (50% increase)
    Transistors – 7.2B -> 15.3B (112% increase)

    A few theories – much wider memory controller in GP100? Much more cache perhaps?
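
    (Those percentages, reproduced from the quoted core and transistor counts:)

        # Scaling factors from the figures quoted above
        pairs = {
            "GM204 -> GM200 cores":       (2048, 3072),
            "GM204 -> GM200 transistors": (5.2, 8.0),    # billions
            "GP104 -> GP100 cores":       (2560, 3840),
            "GP104 -> GP100 transistors": (7.2, 15.3),   # billions
        }
        for name, (small, big) in pairs.items():
            print(f"{name}: +{(big / small - 1) * 100:.1f}%")
        # cores grow 50% in both cases, but GP100's transistor count grows ~112%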

      • bwcbiz
      • 3 years ago

      I think you’re also dealing with 8 GB vs. 4GB of GDDR5(+).

        • Dudeface
        • 3 years ago

        I’m fairly sure those numbers don’t include memory (GDDR or HBM)

      • f0d
      • 3 years ago

      The 15B figure included the transistors for the HBM2 memory.

      Edit: whoops, the 150-billion figure was for the GPU + HBM2. I got my numbers all mixed up.

      • Flapdrol
      • 3 years ago

      GP100 has a bunch of separate FP64 units, and its FP32 units can also do half-precision at twice the speed. That stuff has to add to the transistor count.

      • guardianl
      • 3 years ago

      It’s because GP100 has FP64 at 1/2 rate. GP104 will probably be 1/32 like Maxwell GM204.

    • ronch
    • 3 years ago

    Take a look at that photo with “A New King”.

    Anyone else here think Jensen should've stood a little bit more to the left or right? 😀

      • JustAnEngineer
      • 3 years ago

      +1

      Maybe he was just illustrating his feelings towards those of us who purchased Maxwell GPUs.

      • NeoForever
      • 3 years ago

      LOL I was about to post this. Well composed photo 😛
      Shows how proud he is of his.. uh.. “business”.

      Also, see that "Relative Gaming Performance" axis? Maybe the photographer is trying to symbolize where Jensen pulled that graph out of. 😛

      • Leader952
      • 3 years ago

      That was a zinger aimed at AMD for their “King of the hill” title for Fury.

    • Chrispy_
    • 3 years ago

    With current-gen hardware supporting 60Hz at up to 1440p, the performance improvements these bring raise the significance of the G-Sync/FreeSync debate.

    AMD are announcing Polaris in 3 weeks, apparently, and this being an Nvidia paper launch with no VESA AR support means anyone in their right mind will just be waiting.

    I might end up getting a 1080 anyway, but I’m not blindly jumping in without waiting for Polaris, and even then I’ll decide on my next monitor first and then buy a suitable GPU to drive it. Chances are my next monitor purchase will see me through at least three graphics cards.

    • anotherengineer
    • 3 years ago

    Hmmmm, both prices are MSRPs. If reviews are good, I wonder what e-tailers will actually price them at??

    • anotherengineer
    • 3 years ago

    Impressive. I guess TSMC's 16nm FinFET process/silicon is working pretty well and clocking very well!!

    I wonder how much of an architecture/driver battle this is going to be?

    I have a feeling it might come down to silicon vs. silicon (not sure if they will be using the same silicon and same doping??) and process vs. process.

    With AMD and Nvidia both being at the same fab before, it was easy to chalk up power draw to BIOS voltage settings/architecture/driver optimization; now we have a new, unknown process thrown in.

    Even though the 1070 on paper has the potential to be good performance for the dollar, I have a funny feeling it's going to be over $500 CAD. I, probably like a lot of 1080p midrange gamers, will be looking for a ~$200 US card. If AMD can provide this at a performance/$/power ratio like the 1070's, they will probably sell well.

    And yay for a new node!! Finally!!

      • ultima_trev
      • 3 years ago

      From what is rumored + confirmed:

      GTX 1080 = 1.7 GHz clock (at least), 2560 shaders, 160 TMUs, 64 ROPs, 320 GB/s memory bandwidth

      Polaris 10 = 1.1 GHz clock (at most), 2560 shaders, 160 TMUs, 32 ROPs, 192 GB/s memory bandwidth

      AMD will have no choice but to sell Polaris 10 at $200…
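
      (Taking those rumored numbers at face value, a naive peak-shader-throughput comparison, assuming two FLOPs per shader per clock, looks like this:)

          # Naive peak FP32 from the rumored figures above (2 FLOPs per shader per clock)
          gtx1080_tflops = 2560 * 1.7 * 2 / 1000    # ~8.7 TFLOPS
          polaris10_tflops = 2560 * 1.1 * 2 / 1000  # ~5.6 TFLOPS
          print(f"GTX 1080 ~{gtx1080_tflops:.1f} vs Polaris 10 ~{polaris10_tflops:.1f} TFLOPS")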

        • ImSpartacus
        • 3 years ago

        Polaris 10’s GPUs will probably be priced in the $200-$300 range with roughly the performance of Hawaii. Though with Hawaii parts routinely dipping into the $250-$350 ballpark (and even lower in extreme cases), I don’t see the surprise.

        Where are you getting the 192 Gbps (assumed GB/s?) figure? I thought Polaris 10 was rumored to use 10 Gbps GDDR5X & a 256-bit bus (just like the 1080), so it would have 320 GB/s of bandwidth. Hawaii needed ~320 GB/s of bandwidth, so I figure that Polaris 10 will need that as well.

        Also, I thought the consensus was that Polaris 10 had 2304 SPs. I can't find the forum discussions about it as they are buried in massive threads, but WCCFTech had a decent summary article (as they are wont to do):
        [url<]http://wccftech.com/amd-polaris-10-gpu-specs-leaked/[/url<]

        Personally, I don't think Polaris 10 will use the 490 name as WCCFTech mentioned. I think it'll be marketed as 480X and 480 (and Fiji will get rebranded to 490X and 490 until Vega). Polaris 11 will follow under Polaris 10 with the <$150 parts.
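
        (The bandwidth arithmetic behind that 320 GB/s figure, for anyone tripped up by the Gbps-vs-GB/s mixups in this thread:)

            # Bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte
            data_rate_gbps, bus_width_bits = 10, 256
            print(f"{data_rate_gbps * bus_width_bits / 8:.0f} GB/s")  # 320 GB/s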

      • Prestige Worldwide
      • 3 years ago

      379.99 USD is $490 CAD…. so yup, this is going to hurt.

      599 for the 1080 is going to come up to a whopping $775 CAD before taxes :*(

        • juzz86
        • 3 years ago

        It’ll still be $899 in Australia. So rejoice my friend 🙂

          • anotherengineer
          • 3 years ago

          It will be close to that here after you tack on the 13% sales tax.

      • coldpower27
      • 3 years ago

      Yeah, I am looking to finally upgrade my aging GTX 670 to this new baby of a GTX 1070; however, it's so much more money nowadays with our weaker dollar that I can totally see this going over $500 CDN after tax. Looking forward to keeping power the same and getting a boatload more performance, woohoo!! Performance/watt is where it's at these days.

        • anotherengineer
        • 3 years ago

        I will wait for Polaris. I don't need anything like the 1070 on my 1080p screen with my older games like CSGO and Borderlands.

        If AMD has a polaris model that is a replacement for the 7850/R7-370 for around the $200 US ($250 cnd range) that’s my target/budget. (or if Nvidia has a 950/960 replacement at the same time)

        Freesync/adaptive sync going forward is on the must have list. I have a feeling once Intel starts pushing it, there will be monitors everywhere with it. If I ever upgrade to a 2560×1440, then the free/adaptive sync would probably come in useful with a lower horsepower card.

      • _ppi
      • 3 years ago

      I find the decision to go with a relatively modest SP increase but a very significant clock-speed increase quite interesting. I wonder what route AMD will choose.

    • Kretschmer
    • 3 years ago

    This would be tempting if it supported adaptive sync. Sadly, my MG279Q precludes a proprietary G-Sync card right now.

      • ImSpartacus
      • 3 years ago

      Yeah, that’s one of the reasons why I don’t want to buy a variable refresh monitor yet. I don’t bind myself to any one GPU maker, so I don’t want to accidentally pick the “wrong” VRR tech.

        • f0d
        • 3 years ago

        imo VRR isnt that special – as long as your framerates are high anyways

        i have a freesync monitor and unless its really really low hz/fps then i cant really tell the difference between it on and off

        for me i only notice when its under something like 45 or 50fps/hz

          • travbrad
          • 3 years ago

          I pretty much echo what f0d said. I saw a lot more benefit going from 60hz to 144hz than I did from G-Sync. At the lower framerates (30-60FPS) G-sync does feel a bit better than without G-sync, but it still basically feels like you are just getting bad framerates and gameplay that isn’t smooth. It’s a lot better to just get higher framerates in the first place in my opinion. The lack of tearing without VSync input lag is nice but just having a higher refresh rate helps a lot with that too.

          G-sync “only” added about 20% to the price of my monitor compared to the non-gsync competitors and I feel like it was still worth it since I keep my monitors for a long time, but VRR isn’t really a “must have” feature IMO.

            • f0d
            • 3 years ago

            agree 100%
            high refresh/high fps makes way more of a difference than variable refresh did for me

            playing games at under 60fps is still a horrible experience even with freesync – it just doesnt feel right and motion doesnt seem as fluid

    • AJSB
    • 3 years ago

    Correct me if i’m wrong but AFAIK, the “…TWICE the Performance…” claim is about VR PERFORMANCE and even that, using the new (proprietary ?) tools.
    If so, NVIDIA is playing with numbers, but i guess we will soon know.

      • npore
      • 3 years ago

      Yeah that 2x performance claim was with VR.
      The graph above – general performance – looks more like 20-30% faster

        • AJSB
        • 3 years ago

        So they increased performance ~20% in regular games, but they also increased the TDP (compared with the 980), unless I got something wrong from the slide… AMD is still in the race, both in regular and VR games.

        What matters to me is regular games, especially FPS or RTS… I can't see myself playing (even if I wanted to, and I don't) FPS like BF, CoD, TF2, or CSGO in VR because of motion sickness, among other things…

      • ImSpartacus
      • 3 years ago

      They have a VR-only performance trick that gets a 1080 to 2x the performance of a Titan X.

      For non-VR performance, it’s only like 10-20% more.

    • ronch
    • 3 years ago

    Do Founders Edition cards come with Jensen’s picture on the fan? Should be a barrel of fun watching him go round and round.

    • sweatshopking
    • 3 years ago

    I’m more interested in mobile gpus. I wanna see the 1060m.

      • DeadOfKnight
      • 3 years ago

      Mobile is where 14/16nm FF will really shine. We’re probably looking at last generation desktop performance on a laptop.

        • JustAnEngineer
        • 3 years ago

        Or acceptable gaming performance in a laptop that doesn’t weigh 7+ lbs and sound like a dustbuster when you’re gaming.

          • DeadOfKnight
          • 3 years ago

          About damn time.

          • tipoo
          • 3 years ago

          Assuming manufacturers don’t do their usual thing and take any power saving as a chance to make them thinner, rather than quieter with larger batteries/cooling.

          At full bore, my 5-year-old Studio 1555 is quieter than my new 15″ rMBP. The TDP dissipated is similar, and the MBP has the asymmetric fan blades or whatever. I enjoy the thinness, sure, but if a few mm more would allow quieter full-load cooling, plus add a few mm to key travel while they were at it? That would be great. I know this isn't a gaming laptop, but it is a "Pro" laptop, with a lot of users whose use cases may load the CPU and GPU at once. This thing goes crazy on noise and reaches 99° C in those scenarios.

      • Neutronbeam
      • 3 years ago

      yep, in the market for a gaming laptop and on hold until mobile debuts.

        • sweatshopking
        • 3 years ago

        I’m personally done with desktop computers for the most part. After this tower is done, i’ll probably just stick to mobile going forward. Mobile GPU’s are getting functional, and i’m not gaming like i once was. A little TW and hon is all i basically play, and both will function just fine on a mobile GPU.

          • Milo Burke
          • 3 years ago

          No father of mine would completely eschew the desktop form-factor. You’re dead to me, dad. Dead to me.

            • anotherengineer
            • 3 years ago

            Always wondered how many illegitimate kids he has. lol

            • sweatshopking
            • 3 years ago

            ALL OF THEM

            • Milo Burke
            • 3 years ago

            I’ve always wondered how many TR readers he thinks are his kids. =]

            • sweatshopking
            • 3 years ago

            LISTEN, SON. DADDY HAS SO MANY FAMILIES TO GO TO AND BE THE DADDY TO SO MANY CHILDREN, IT’S DIFFICULT TO CARRY MY TOWER TO ALL THE HOUSES. IT’S NECESSARY.

    • EndlessWaves
    • 3 years ago

    Disappointing, but not all that surprising.

    I was hoping this time around we’d see a focus on power draw, with the top card being 150W or less. Instead we’ve got loads of extra performance at the cost of the card being even more power hungry than before. It’s not the most green of moves from the green team.

      • NTMBK
      • 3 years ago

      You can always underclock it 🙂

      I think “disappointing” is a little far, when this is the biggest jump in perf/W in years. Looking forward to seeing AMD’s riposte- what a wonderful time to be a PC gamer!

        • EndlessWaves
        • 3 years ago

        Only in the very short span of years since the last new design.

        Going by the graph, rough figures would be 4.5 performance at 185W for the GTX 1080 and 2.5 at 170W for the GTX 980. I make that around 65% higher performance per watt. The GTX 750 Ti was 60% more efficient than the slower GTX 650 Ti and 80% more efficient than the faster GTX 650 Ti Boost.
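
        A quick back-of-envelope sketch of that calculation, in Python, using the same eyeballed slide figures (the 4.5/2.5 relative-performance numbers and the wattages are read off the graph, not official specs):

        # rough perf/W comparison from the eyeballed slide figures above
        gtx1080_perf, gtx1080_watts = 4.5, 185
        gtx980_perf, gtx980_watts = 2.5, 170
        gain = (gtx1080_perf / gtx1080_watts) / (gtx980_perf / gtx980_watts) - 1
        print(f"~{gain:.0%} higher performance per watt")  # prints ~65%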

        It doesn’t appear to be anything special technology-wise, nVidia have just chosen to pour all of the gains (and more) into performance instead of reducing power consumption and heat output. This will no doubt appeal to the VR and 4K crowd but it’s a shame for the rest of us. Another few years of expensive, slow and noisy gaming laptops, and gaming desktops five times the size of a normal PC.

      • Ninjitsu
      • 3 years ago

      Intel GPUs are all below 150W, come with a free CPU and have really good performance per watt!

        • JustAnEngineer
        • 3 years ago

        I know that you meant to be facetious, but the fact is that the tremendous improvements in Intel’s integrated graphics have eliminated any logical reason for consumers or OEMs to buy low-end NVidia GPUs.

    • Klimax
    • 3 years ago

    Looks like it is no longer up to AMD whether or not to engage in price war. Definitely not good times ahead for them. (And Zen’s unlikely to really help either)

      • ronch
      • 3 years ago

      Downthumbs in 3… 2… 1…

      • AnotherReader
      • 3 years ago

      Let’s see Polaris’s numbers before we make any definitive judgements. However, this makes the $1500 Radeon Pro Duo and the over $500 980 Ti look ridiculously overpriced. On a sidenote, I suspect that most, maybe all, of the clock speed increase is due to the new process. If that’s the case, then Polaris 10 could surprise a lot of people. However, that would depend upon Global Foundries matching TSMC, and that is probably unlikely.

        • Klimax
        • 3 years ago

        That’s why all mid to high chips got discontinued.

      • NTMBK
      • 3 years ago

      How on earth is this a price war? They raised the price over the 680 by 20%.

        • Klimax
        • 3 years ago

        Because Nvidia left itself sufficient space for any outcome of the Polaris announcement/release. Nvidia can either introduce GP106, or an even more cut-down GP104, and eliminate AMD from that market, or it can reduce prices without a large hit to profit. There seems to be sufficient space between the 980 Ti and GP104 to counter almost anything.

        Also, AMD's position strongly depends on their ability to actually deliver HW and SW. DX12 and Vulkan will not help them, because most titles in the midrange market will use DX11 (reminder: it is still supported and developed with new features), as DX12/Vulkan is not for them (much more expensive, with uncertain benefits).

        If AMD gets stuck in the midrange sub-$300 market, their GPUs will follow the CPU division into irrelevance.

          • derFunkenstein
          • 3 years ago

          [quote<]DX12 and Vulkan will not help them, because most titles in the midrange market will use DX11[/quote<] This is pretty bad logic: if DX12 is meant to run things in parallel and improve overall performance, then DX12 will help lower-end cards more than it helps higher-end ones.

          • ImSpartacus
          • 3 years ago

          “Eliminate AMD from the market”? C'mon, Polaris will do fine.

          Yes, it has no high-end part in 2016, and that's pretty weird. It's going to be awkward, but AMD will do fine in the spaces where Polaris competes.

          Vega might not be earth-shattering, but it'll do fine. Nvidia doesn't have the supply dominance to drop a day-ruining $650 GP100-based part like it did with the 980 Ti.

          Now in 2017? That’s a different situation because that provides enough time for everyone to get a full blown lineup on the market.

          • NTMBK
          • 3 years ago

          NVidia can’t “eliminate” AMD from a market segment just by releasing a competing product. Look at 28nm launch for comparison. Over several months, AMD and NVidia rolled out competing products across pretty much every price point.

          Things will probably look much like they always have. Maybe AMD will claw back a little market share (as their GPU architecture finally gets a much belated overhaul), or maybe NVidia will take an even bigger chunk this time. Anyone predicting dramatic shifts is a fantasist.

    • Firestarter
    • 3 years ago

    I’m itching for that GTX 1080 and an adaptive sync monitor, but Nvidia hear me out: I don’t want your proprietary locked-in “only works with our products” soon to be obsolete crap when there’s a VESA standard that you should be using. I don’t want to buy a monitor at a significant price premium when I know that there’s a significant chance that I won’t be able to use it with the GPU I buy after that

    • Bensam123
    • 3 years ago

    How many stream processors on the 1070? Only 1080 is listed…

      • Klimax
      • 3 years ago

      Very close. Likely biggest change is playing with memory configuration.

        • NTMBK
        • 3 years ago

        Given that they’re on a new manufacturing technology, I would not be surprised if the 1070 is quite cut down.

    • rahulahl
    • 3 years ago

    I wonder if they will have water cooled editions like the Fury.
    I have a GTX 980, and I am really tempted to sell it on eBay and get a 1080 instead. Just not sure if it's worth waiting a while to see if any water-cooled GPUs show up, or whether to just buy whatever 1080 I can find.

    • Jigar
    • 3 years ago

    Well played, Nvidia. You brought the price war to AMD with a best-in-class performance offering, and AMD is now in a tough position. Nvidia was right in anticipating a price war this round, and they have played the exact move AMD was about to make. This complicates a lot of things for AMD. Hope Vega brings something good, because Nvidia has rained on Polaris 10's parade.

      • Voldenuit
      • 3 years ago

      I was always leery of AMD launching with Polaris 10, because a new high end (1080/1070) part will push down prices of previous gen, and that’s not a good place to be if you’re launching a new midrange part (Polaris 10).

      This is going to cut down on AMD’s pricing flexibility. Probably not a bad time for consumers, though.

      EDIT: Not to mention, it makes AMD’s entire high end (current gen) worthless.

        • ronch
        • 3 years ago

        AMD needs a halo product right now, not just mainstream parts. Having the most powerful product can do wonders for your entire lineup. Yes I know the meat of the market is in the mainstream segment but AMD’s image has really sunk to the bottom of the ocean and they really need to project an image of a resurgent company, not a maker of cheaper parts. And you can price high and maintain it only if people feel they are getting a product from a brand that has the best products.

          • ImSpartacus
          • 3 years ago

          It’s on the books for early 2017 with Vega, but I agree that it’s awkwardly “late”.

            • the
            • 3 years ago

            The problem with Vega in 2017 is that GP100 could hit consumers in that time frame too. Also in late 2017, we could maybe even see nVidia’s Volta architecture launch using 14 nm FinFET.

            The next two years in graphics are going to be very aggressive.

            • ImSpartacus
            • 3 years ago

            There’s no way that GP100 hits consumers in any meaningful way until well into 2017. Maybe we get an expensive (i.e. >$1000) GP100-based Titan in early 2017 or late 2016, but it would just be as a symbolic halo product.

            GP100 is just too big. With the “Founder’s Edition” price manipulation, most 1080s will be $700 at launch. That’s a medium-sized chip on this new process. GP100 is at the fucking reticle limit AND it has HBM as well. There’s simply no way that Nvidia can competitively price a GP100-based consumer product in 2016 or early 2017.

            Think about it another way – there’s probably a good reason why AMD started with the tiny Polaris 11 and small-medium Polaris 10 for its 2016 lineup. This process can’t be cheap.

            Now, I will contend that we could see a hypothetical GDDR5X-packing GP102 in competitively priced consumer products before GP100. It would presumably be smaller than GP100, but it might lose that die size to pro-minded stuff like DP units. If Nvidia somehow gets GP102 on the market by the time Vega hits in early 2017 (which would cut the 1080’s reign undesirably short), then Vega could be in trouble. However, that’s not GP100 and you mentioned GP100.

            • JustAnEngineer
            • 3 years ago

            A small new GPU using the latest manufacturing process could generate a lot of design wins for gaming laptops.

            • ImSpartacus
            • 3 years ago

            No doubt. AMD will make a killing with Polaris 11. I trust Nvidia will follow with something similar.

            • the
            • 3 years ago

            And? Launching GP100 as a Titan would suck the wind out of AMD's claims of having the fastest single-GPU card on the market. That is all nVidia needs to keep AMD in the corner. (I still contend the probability of a GP100 consumer card rests with yields and volume, not a counter-move to AMD's market positioning.)

            The reason why AMD started with Polaris 11 and 10 is due to OEM design wins. They’re the core business that pulls in revenue for AMD right now, not the gaming market that gets most of the press.

            • Ninjitsu
            • 3 years ago

            [s<]Founder's Editions are only going to be sold by Nvidia, so I don't think "most" of them will be $700 for that reason.[/s<] I had wrong info

          • JustAnEngineer
          • 3 years ago

          [quote=”ronch”<] AMD's image has really sunk to the bottom of the ocean... [/quote<] I believe that you over-estimate the effectiveness of your persistent anti-AMD campaign in the comments and forums. AMD's GPU products have offered excellent performance and value for consumers for a very long time. Your posting negative comments at every possible opportunity does not make the products less good.

            • ronch
            • 3 years ago

            Fanbois will be fanbois no matter what, I guess.

            • Jigar
            • 3 years ago

            No matter how many times you post you are an AMD fanboy, everyone at TR knows its quiet the opposite.

            • ronch
            • 3 years ago

            And no matter how true a criticism of AMD is, fanbois will not accept it and go right ahead and hit the downthumb button. 🙂

            But you know what, Jiggy, it doesn't matter either if people believe it when I say AMD is still one of my favorite tech companies. I just go right ahead and criticize their bloopers (believe it or not, those two things are not mutually exclusive) because I choose not to drink the AMD Kool-Aid, and because, as I've said, I know they simply can't take it when someone says something against AMD.

            Oh, and it’s ‘quite’, not ‘quiet’. 🙂

            • Ninjitsu
            • 3 years ago

            Oh, and it’s ‘Jigar’ and not ‘Jiggy’. 🙂

            • ronch
            • 3 years ago

            Oh, and it’s ‘Ninjutsu’, not ‘Ninjitsu’. 🙂

            • lilbuddhaman
            • 3 years ago

            [quote<]I believe that you over-estimate the effectiveness of your persistent anti-AMD campaign in the comments and forums.[/quote<] Is he wrong? AMD looks awful as a company. Poor CPUs, poor financials, constant turnover of high-ranking personnel… you can't think that isn't affecting the image of the GPUs. Like you said, they've crept back down to "value" brand status.

            • ronch
            • 3 years ago

            The AMD Kool-Aid is selling very well, though. 😉

        • _ppi
        • 3 years ago

        The 1080/1070 are around 300mm2, while Polaris 10 is rumored to be around 232mm2. AMD's (public) target for Polaris 10 was to make VR-ready performance available at a substantially lower price point. They had to count on selling their cards for $200-250. The GTX 970 is around $300 currently?

        And with that strategy they will completely invalidate nVidia's offerings below the 1070, including notebooks. At least until nVidia introduces smaller parts.

        The weakness of this strategy is that they won't have the halo of a top product. But if the word on the street is that, when shopping for a <$350 card, AMD is the way to go, that would be good enough for them. And in AMD's best years, it was.

      • bfar
      • 3 years ago

      I understood Polaris to be positioned significantly cheaper than this. I dunno, I think Nvidia is looking to capitalise on being first to market.

        • Jigar
        • 3 years ago

        I hope you are correct, and I hope AMD repeats HD 4870’s history.

          • ImSpartacus
          • 3 years ago

          It’s funny. They are kinda on track for that.

          The Fury X was their 2900 XT: the only GPU to use that kind of VRAM, and generally considered a failure.

          Polaris will be the 3000 series. Just midrange stuff that regresses back to the old VRAM, but it's at least competitive.

          Then Vega could be AMD's 4000 series. High-end parts that are the first to use the successor to that failing one-off memory tech. But now it works and it's great.

            • the
            • 3 years ago

            Fury X wasn’t as bad as the 2900XT. The Fury X was only a few months late and generally competitive with nVidia’s best.

            The 4000 series fixed many of the flaws of the initial 2000/3000 series. The internal ring bus for the memory controller was dropped in favor of a crossbar. This improved bandwidth and latency inside the chip. The cache architecture was radically improved by going from a unified, shared L1 cache for the entire die to discrete L1 caches per shader cluster. These are critical flaws in the initial 2900XT design that should have been avoided from the start: ring buses are known to be power hogs, and a VLIW architecture loves lots of cache to function well.

            This raises the question of what are the fundamental flaws of the GCN architecture that AMD could improve upon for history to repeat itself as you’ve outlined?

            Thus far AMD has indicated that they're renovating the front end, and while that could improve performance per clock per compute unit, it won't be the same level of improvement AMD achieved going from the 2000/3000 series to the 4000 series back in the day.

            • ImSpartacus
            • 3 years ago

            Honestly, I was only half serious in the post you replied to. I realize that fiji isn’t quite as dire as R600 and therefore, there’s less obvious “headroom” to improve upon.

            However, Vega will have to have SOME kind of architectural improvements just to meet their perf/W roadmap. Polaris should easily meet its goals with the shrink. HBM will help Vega a little bit, but Vega has to improve roughly as much as Polaris while getting no shrink. I honestly hadn't even heard the rumors of front-end improvements or any other planned tweaks.

            • AnotherReader
            • 3 years ago

            Techreport, along with other review sites, covered the architectural improvements in Polaris [url=https://techreport.com/review/29514/amd-sets-a-new-course-for-radeons-with-its-polaris-architecture<]at the start of the year[/url<].

            • Ninjitsu
            • 3 years ago

            Well, they need more ROPs to use the bandwidth, that much was clear.

    • ultima_trev
    • 3 years ago

    Looking at the compute FLOPS figures, I’m guessing:

    GTX 1080 = 2560 shaders at 1.6-1.8 GHz, 160 TMUs, 64 ROPs

    GTX 1070 = 2048 shaders at 1.5-1.6 GHz, 128 TMUs, 64 ROPs
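
    For what it's worth, here's the arithmetic behind that guess, assuming the usual 2 FP32 ops per shader per clock; the 2048-shader figure for the 1070 is purely my assumption, nothing confirmed:

    # FP32 TFLOPS = 2 ops per shader per clock * shaders * clock (GHz) / 1000
    def tflops(shaders, clock_ghz):
        return 2 * shaders * clock_ghz / 1000

    print(tflops(2560, 1.733))  # GTX 1080 at its announced boost clock -> ~8.9 ("9 TFLOPS")
    print(6500 / (2 * 2048))    # GTX 1070: 6.5 TFLOPS with a guessed 2048 shaders -> ~1.59 GHz clock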

    • CheetoPet
    • 3 years ago

    “DisplayPort 1.2 Certified, DisplayPort 1.3/1.4 Ready.”

    120Hz 4k HDR capable. Neat. Assuming it can actually push that many frames with any detail.

      • Airmantharp
      • 3 years ago

      Three of them might….

      😉

        • Krogoth
        • 3 years ago

        With older content (pre-DX11 era), they can pretty much handle it. For current and future content (DX11-DX12), that's not quite the case. You're going to have to wait for big Pascal or Volta.

          • DeadOfKnight
          • 3 years ago

          Which is exactly why I’m waiting for Big Pascal or Volta.

      • jts888
      • 3 years ago

      It’s only UHD@120Hz SDR or UHD@60Hz HDR I’m afraid. DP 1.4 is only effectively new color profiles, not expanded bandwidth from 1.3.

        • the
        • 3 years ago

        DP 1.4 doesn’t increase the raw bandwidth but it does add display stream compression for more effective bandwidth. That could raise the resolution, refresh rate or move to HDR.

        Though I’m still curious how well DSC works in reality.

    • Raymond Page
    • 3 years ago

    Seems it's official: the GTX 980 and GTX 980 Ti are replaced by the GTX 1080 in the [url=http://www.geforce.com/hardware<]GeForce Product Lineup[/url<] with this announcement.

    • USAFTW
    • 3 years ago

    I think the early benchmark we saw was the 1070 if we are to take Jen-Hsun at his word.
    Overall, I’m really surprised. Here’s some CEO math:
    1080 twice as fast as a 980 Ti.
    1080 is 599. 980 Ti should drop to 299.
    970?
    AMD? Your move but I’m not expecting much.

      • beck2448
      • 3 years ago

      980 series is out of production

    • puppetworx
    • 3 years ago

    Why is he claiming twice the performance of a Titan X and standing in front of a graph showing less than half of that gain? Did they intentionally use a scale (“Relative Gaming Performance”) which makes it look worse?

    Edit 1: [s<]I get it, twice the "performance per watt" not twice the performance. This needs an edit.[/s<] Edit2: I just watched the video he claims 2x performance gain and 3x efficiency gain. Either that graph or that script is heinous.

      • meerkt
      • 3 years ago

      Yeah, was wondering about that.

      • npore
      • 3 years ago

      That claim is for VR – and had a different graph when he was talking about it. The one seen here was general performance.

        • puppetworx
        • 3 years ago

        Thanks, I missed that. When he summarizes at the end he says 'twice the performance and three times the energy efficiency'; he says it twice but doesn't mention VR. Either he forgot the script or the marketing division decided to drop the specifics for hype value.

        The three-times energy efficiency claim must be based on VR or real-world testing as well. Using the 'general performance' graph, dividing peak performance by peak power gives less than a 2x gain.

        It’s not that I expect real claims by manufacturers. I just didn’t expect to be confused by them, thanks marketing!

          • yogibbear
          • 3 years ago

          Read the axis on graphs next time. It was very clear what the difference was between the two scenarios.

            • puppetworx
            • 3 years ago

            [quote<]very clear[/quote<] Yeah, that's why so many tech sites (including this one) reported that detail incompletely.

            • yogibbear
            • 3 years ago

            yeah i forget that the majority of people are stupid.

          • nanoflower
          • 3 years ago

          Not the first time that has happened in a presentation and not the first time people pick on the performance numbers and miss the qualification. I’ve already seen a number of people thinking the 1080 has twice the performance of Titan X without any qualifications not realizing it’s only in VR thanks to the work Nvidia has done to speed up VR. At least there will be time for reviews to get out so that people can see what the true performance of the 1070/1080 is before they can start buying them.

            • chuckula
            • 3 years ago

            Realistically I’d put it in the 20 – 30% faster range. Which is still impressive but clearly not double the performance.

            • puppetworx
            • 3 years ago

            That looks about right. The graph they showed in the presentation suggests a (roughly) 50-70% performance gain over the GTX 980 and 20-30% over the Titan X.

            The [url=http://www.geforce.com/hardware/10series/geforce-gtx-1080<]official GTX 1080 page[/url<] actually has some more performance benchmarks for specific games. I did some pixel counts and the claims are (GTX 1080 vs GTX 980):

            - Virtual Reality (Barbarian Benchmark) - 2.8x
            - Rise of the Tomb Raider (DX12) - 1.8x
            - The Witcher 3: Wild Hunt - 1.7x

            A Titan X is somewhere around 30% faster than a GTX 980 on average, so that gives a more generous 30-40% gain against the Titan X (non-VR). Of course all results they gave are rounded and probably taken from best case scenarios. 30% over Titan X (and 980 Ti) and 60% over the GTX 980 would be my guess though.
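
            Putting rough numbers on that conversion (the ~1.3x Titan-X-over-980 factor is my own estimate, not an official figure):

            # convert the 980-relative gains above into Titan X-relative gains,
            # assuming a Titan X is ~1.3x a GTX 980 (my estimate, not an official figure)
            titan_x_vs_980 = 1.3
            gains_vs_980 = {"Rise of the Tomb Raider (DX12)": 1.8, "The Witcher 3": 1.7}
            for game, gain in gains_vs_980.items():
                print(game, round(gain / titan_x_vs_980, 2))  # ~1.38 and ~1.31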

    • Tower
    • 3 years ago

    The 2x performance claim was for VR, so for a single monitor I don't think the gap will be that big; benchmarks will tell the true story.

    The multiple-viewpoint rendering looks very promising for VR, if it is well implemented. But unfortunately for me, it's going to be a few years before I get a VR unit.

    1070 looks to be a nice upgrade to my 670 and will pair up with my gsync monitor nicely.

    • BIF
    • 3 years ago

    This is precisely what I've been waiting for. But neither this article nor Nvidia's website mentions the CUDA performance of either of these cards. Unless I missed those tidbits.

    I’m excited to see improvements in F@H performance, too.

    • End User
    • 3 years ago

    The thing I am most tickled pink by with the announcement of the 1080 is that, at my gaming resolution of 2560×1440, I won’t be stressing the card at all (fingers crossed). Thank you 4K!

      • CheetoPet
      • 3 years ago

      Curiously enough, there was exactly no mention of resolution anywhere in that presentation. All settings maxed at 4K? 2K? 640×480?

        • nexxcat
        • 3 years ago

        Clearly, Nvidia went old-school. 320×240.

          • shaurz
          • 3 years ago

          Real old school is 320×200

            • Firestarter
            • 3 years ago

            single page video RAM addressing FTW

        • End User
        • 3 years ago

        I went back through TR GPU reviews to get a sense of how a 980 performs compared to my current 770. It's not an exact science but, if the 1080 truly is double the 980 in performance, I stand by my original post.

        • jts888
        • 3 years ago

        The thing that I can't resolve yet is how the massive performance gains can be reconciled with having ~5% less memory bandwidth than the 980 Ti/Titan X.

        Presentation benchmarks are always cherry-picked, but there are probably implications about which resolutions/types of games/etc. will really have more substantial improvements.

      • Krogoth
      • 3 years ago

      I wouldn't bet on that being the case with VR and the future content that is coming down the road. You might want to wait until big Pascal comes out if you intend to go the 4K route.

        • End User
        • 3 years ago

        I have no plans on transitioning my gaming rig to 4K in the near future. If I get the 1080 my next upgrade is a 2560×1440 G-SYNC display.

        As far as future content is concerned I can always throw in another 1080 or buy a better GPU. The great thing about gaming on the PC is that you are never tied down to your current hardware.

        VR is another motherboard upgrade away for me.

    • DPete27
    • 3 years ago

    Still no word on VESA Adaptive Sync support?

      • Jeff Kampman
      • 3 years ago

      Nope.

        • derFunkenstein
        • 3 years ago

        If it’s not there that’s a monstrous failure. Doesn’t matter how fast it is—DSR ensures we can make VRR necessary. Heh.

          • Airmantharp
          • 3 years ago

          I’d bet that the cards are capable of it, and that a driver update (or driver update that includes a firmware update) might become available if the lack of VESA AS becomes a perceived competitive disadvantage.

            • Firestarter
            • 3 years ago

            AFAIK they already use it with laptop displays

            • Airmantharp
            • 3 years ago

            Absolutely, but my bet is that the tech is going to be available in these cards 😉

            • Firestarter
            • 3 years ago

            If so, then why aren’t they telling us now? They didn’t need to tell us that the 1080 is going to be faster than the 980, we knew that already. If they wanted to tell us something new that was going to make some waves, they should have told us about this support for VESA adaptive sync that you’re betting on. THAT would have been news, and combined with their claimed performance of this GPU that might have made it a day 1 purchase for me, like the HD7950 was before it. As long as I don’t hear any of that coming out of Jen-Hsun’s mouth I’ll be waiting for the AMD cards to land and to see how they stack up, and Nvidia will be fighting that battle with a disadvantage

            • Airmantharp
            • 3 years ago

            Up in my first response, my reasoning was (and is) that Nvidia wouldn’t enable VESA AS unless not having that feature available was a competitive disadvantage.

            Not to say that it isn’t one now- but Nvidia doesn’t see it as a disadvantage yet, though they no doubt know that the day will come.

          • anotherengineer
          • 3 years ago

          Didn't stop Maxwell from selling like hotcakes, and if there is zero support in Pascal, I'm sure most people will still buy them like crazy.

            • beck2448
            • 3 years ago

            These will rule. Waiting for the custom boards which shouldn’t be long.

            • odizzido
            • 3 years ago

            it’s true…even I got a 970. However as time went along it annoyed me more and more and it’s at the point where it will play a significant role in my next purchase. I think when the 970 came out variable refresh was still pretty new so that may have played a role as well.

            • derFunkenstein
            • 3 years ago

            well, I’m never going to spend G-Sync prices for a G-Sync IPS display, so it’s a loss to me. :p

          • 223 Fan
          • 3 years ago

          No FreeSync == No Sale

          • Ninjitsu
          • 3 years ago

          I dunno, is it? How many people have VRR monitors? I know I don’t…

      • odizzido
      • 3 years ago

      Yeah this is probably the thing I am most concerned about. The longer Nvidia refuses to support open standards the more…..displeased….I am with them. It’s a giant middle finger aimed right at their customers.

        • ImSpartacus
        • 3 years ago

        Indeed, very disappointing. I’m not even thinking of a VRR monitor until this stuff gets sorted out.

        If I was “committed” to one GPU maker, then it would be easier, but I’m a filthy hooker that sleeps around.

          • Firestarter
          • 3 years ago

          I want a VRR monitor so I’m not really thinking of a GPU until this is resolved. Or rather, I am thinking of a GPU but it’ll be a different one than if Nvidia would just stop being a bunch of ####s

    • Kougar
    • 3 years ago

    So I take it 5508Mhz is the physical speed, and that the VRAM is clocked at 11Ghz effective?

      • RAGEPRO
      • 3 years ago

      That was on the overclocked part. The memory clock listed on NVIDIA’s page is 10GT/sec.

    • End User
    • 3 years ago

    DX12 supports a mix of GPUs to offload work from the primary GPU. Does anyone know if that includes Physx?

      • Zkal
      • 3 years ago

      You’ve been able to do that for a while from the driver control panel, even before DX12.

        • End User
        • 3 years ago

        Of course! How silly of me to forget.

    • DPete27
    • 3 years ago

    Erm, what's a "Founders Edition" card, and why do they cost more?

      • tsk
      • 3 years ago

      They are binned and factory overclocked versions according to guru3D.

      • Flapdrol
      • 3 years ago

      From what I understand the normal cards come later.

      Nvidia makes more money and the cards won’t be sold out for the first few weeks, like with some previous gpu launches.

      I’d wait for a normal one.

    • End User
    • 3 years ago

    I can’t stop drooling.

    • derFunkenstein
    • 3 years ago

    Wow that’s pretty crazy. I had intended to just stick with my 970 but man. If supplies are good and prices aren’t jacked up I might have to get one.

      • chuckula
      • 3 years ago

      I may grab one to replace my 3 year old GTX-770 and justify the purchase by saying that I won’t buy a big Pascal for more money next year.

      See, it’s a cost-saving move! Yeah, sure.

      • NovusBogus
      • 3 years ago

      I’m willing to toss my shiny 960 out the window if the 1070 or purported 1060Ti turns out to be the purple squirrel I’ve been wanting since 2013. But I withhold judgement until reviews are in and they’ve been in the wild for 3-4 months, because the 970 also happens to be a perfect example of little unexpected surprises that weren’t in the initial hypefest (coil whine, 3.5gig mess).

        • derFunkenstein
        • 3 years ago

        I bought my 970 after the flaws were revealed, and I’d do it again. It’s been a fast card until now.

          • nanoflower
          • 3 years ago

          Damn… I’m sorry to hear that your 970 has suddenly stopped performing well. Time for an RMA though I’m not sure the manufacturer knows how to handle GPUs that have been shown up by the new hotness.

            • derFunkenstein
            • 3 years ago

            Lol exactly

            • _ppi
            • 3 years ago

            Joking aside, as a 970 owner, I am genuinely afraid of the quality of nVidia's drivers post-Pascal launch. See Kepler.

        • anotherengineer
        • 3 years ago

        I’ve been waiting for the purple squirrel since the fall of 2010 when I got my HD6850.

        Have to wait for a true HD7850/R7-370 replacement and a 950/960 pascal before pulling the trigger.

          • Mr Bill
          • 3 years ago

          Ditto for the purple squirrel but HD7870.

      • Krogoth
      • 3 years ago

      Depending on your gaming resolution and whether you want to jump on the VR bandwagon, it may be wiser to hold out until Volta.

        • DeadOfKnight
        • 3 years ago

        Why?

          • Krogoth
          • 3 years ago

          If you don't care for VR and/or still game at 2-4 megapixels, the 970 and 980 can hold out until Volta comes around.

          Volta will improve upon Pascal and offer newer feature sets in addition to performance gains.

            • DeadOfKnight
            • 3 years ago

            With that logic you should just wait until whatever comes after whatever comes after Volta…

            If you have a 980Ti and don’t really care about VR right now, I agree you’re not getting a whole lot extra for that $600. If you have your heart set on 4K you might want to wait for big pascal.

            If you want to upgrade but you’re afraid you might want to upgrade again when Volta comes, just get the 1070. If you want to splurge now and hang onto it for many years, get the 1080.

            If you have deep pockets and just want the best because it might make you feel like you’re pro gamer fatal1ty and you can brag to all your internet friends while real friends roll their eyes and laugh at you, you should go for 2x 1080 now, 2x Big Pascal next year, and 2x Volta in two years.

            • ImSpartacus
            • 3 years ago

            No, that logic doesn’t extend that way. He’s just saying that Maxwell stuff is still really good for 1080p60 & 1440p60 gaming and it didn’t magically stop being awesome in that space just because GP104 improved the 4K & VR gaming experience.

            If you just upgraded to a 4K monitor or a high-refresh monitor, then your performance requirements jump and you probably need to upgrade. If not, then I think you have to ask yourself, “What’s wrong with my gaming experience today that is requiring me to upgrade?” I just don’t see people asking themselves that question in this thread.

            • DeadOfKnight
            • 3 years ago

            I’d say that logic is completely backwards. I haven’t upgraded my monitor yet because the benchmarks have shown me how disappointed I would be trying to get everything out of the display that cards can’t really give yet. GTX 1080 is a bit faster than GTX 980 Ti, it’s more an efficiency upgrade than a performance upgrade. I think the only real reason to wait is if you’re waiting for something to become possible that isn’t. If you want 4K, maybe Volta will be good.

    • tsk
    • 3 years ago

    So I'm seeing the 1080 has a memory bandwidth of 320 GB/s, which is lower than the 336 GB/s of the 980 Ti, but allows them to use a 256-bit bus instead of 384-bit.
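
    That falls straight out of the bus width and per-pin data rate (10 GT/s GDDR5X on the 1080 vs. 7 GT/s GDDR5 on the 980 Ti, going by the published specs):

    # memory bandwidth (GB/s) = bus width in bytes * per-pin data rate (GT/s)
    def bandwidth_gbs(bus_bits, gt_per_s):
        return bus_bits / 8 * gt_per_s

    print(bandwidth_gbs(256, 10))  # GTX 1080: 256-bit GDDR5X at 10 GT/s -> 320 GB/s
    print(bandwidth_gbs(384, 7))   # GTX 980 Ti: 384-bit GDDR5 at 7 GT/s -> 336 GB/s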

    • LocalCitizen
    • 3 years ago

    So the $599 GTX 1080 is faster than the $1500 Pro Duo. Where does that leave AMD's lineup?

    edit: misread. The GTX 1080 is faster than 980 SLI or one Titan, not two Titans. Still a heck of a price for such a powerful card. What's the point of the Pro Duo?

      • chuckula
      • 3 years ago

      Don’t worry, I have a bunch of downthumbs from an entire squadron of AMD operatives claiming that the $1500 Radeon Pro Duo isn’t intended for anybody who visits TR.

      I’m just wondering if the list of products that TR shouldn’t review from AMD is about to get longer.

        • LocalCitizen
        • 3 years ago

        haha. gotcha.
        just that i got some amd stocks due to the recent positive news. but i’m … a bit concerned now.

        480 is supposed to be $350-ish and very fast too. still have some hope on that.

        • albundy
        • 3 years ago

        the comments are all irrelevant…there’s no point in making anything of it. it’s an epeen extravaganza to see who passes the 1 inch mark. the only thing that really matters are benchmarks and price/performance ratio imho. i cant think of anything else that would seriously drive the sales.

        • _ppi
        • 3 years ago

        It probably could not have escaped you that no other site received a sample either (google "AMD Radeon Pro Duo Review" and you will see just two sites + one YouTube video).

        And neither PCPer nor Hothardware got samples, they just managed to get the card otherwise.

      • ImSpartacus
      • 3 years ago

      The pro duo is not a gaming card. Yes, it’s weird, but it’s for vr only. More of a vr dev card that allows amd to claim top single card perf.

        • the
        • 3 years ago

        The thing is that there are some neat tricks with GP104 that could make it far more efficient than two Fiji chips in Crossfire. Fiji certainly has more raw compute but if GP104 doesn’t need to duplicate as much work, it could come out on top.

        The variable here is how much duplicate work is being performed, and that could very likely differ by title or by scene within a game.

          • ImSpartacus
          • 3 years ago

          That’s a good point. I’m glad you brought that up.

          If Pascal can efficiently display multiple viewpoints (I think I heard the limit was 16), then the Radeon Pro Duo stops making sense for the resource-sharing reasons that you just mentioned.

          And the Radeon Pro Duo is effectively only two Nanos, not two Fury Xs. I’m betting a GP104 could easily trade blows with that.

            • the
            • 3 years ago

            Yeah, this will require a bit of testing, as the answer is certainly not clear-cut from the specs on paper. The demo nVidia gave on stage showed a ~50% increase in speed, but there's no knowing whether that is an optimal scenario just for demos or whether real VR games will display similar gains.

            We’ll likely know the answer when the reviews start to appear at the end of this month.

        • travbrad
        • 3 years ago

        Because inconsistent frame times are great for VR?

          • BurntMyBacon
          • 3 years ago

          As I understand it, the GPUs are each rendering their own eye exclusively. Two different perspectives, far less interdependency between GPUs. If this is indeed the case, I don't think a Crossfire test is remotely accurate in gauging the effectiveness of this card in VR.

          As for claiming the performance crown for single cards, I tend to think of dual GPU cards as two cards with less hassle. Whether they are one GPU or multiple GPUs to a card, the maximum supported number of GPUs for SLI or Crossfire remains the same. So by my reckoning nVidia never really lost the top spot for DX11 games (verdict is still out on DX12). Also, I don’t care for multi-card configurations from either vendor and avoid them until it is no longer plausible to use single cards to increase performance. Frame-time inconsistency is only one (granted very large) problem to contend with.

      • muxr
      • 3 years ago

      gtx 1080 is 9Tflops according to the released info. Pro Duo is 16Tflops.

        • pranav0091
        • 3 years ago

        Teraflops don't correspond 1:1 to game performance, even more so when you compare cards from two completely different architectures. That'd be like using HP numbers to decide which car is going to do a lap around a racetrack faster - not much can be inferred.

        <I work at Nvidia, but my opinions are only personal>

          • muxr
          • 3 years ago

          Tflops is what we got, until we see the benchmarks.

        • chuckula
        • 3 years ago

        So what:
        1. According to you, the Pro Duo isn't intended for anybody who is on TR. The GTX-1080Ti, however [b<][i<]is[/i<][/b<] intended for the TR crowd. So as far as I'm concerned, the effective Flop-speed of a product that this site should never review and that nobody here should ever buy is exactly zero. 9 Tflops sounds a little bit higher than zero to me.

        2. An SLI'd GTX-1080Ti is 18 Tflops, $300 less expensive, and uses less power.

          • muxr
          • 3 years ago

          1.) Pro Duo is a developer card. I am a developer (OpenCL/CUDA) so to me Tflops actually mean something.

          2.) 1080 Ti? Surely you mean 1080. You're comparing two cards vs. one card. Also, the TDP for the 1080 is 180W, which means SLI'd it's more than the Pro Duo's 350W. I would actually hope it's more power efficient since it's on FinFET, but I guess it isn't. Also, it's only $100 cheaper, seeing how the Founders Edition is the only card you'll be able to get for months.

          I am sure 3x CrossFire 480s will wreck both solutions in terms of efficiency, price, and Tflops.

            • bhappy
            • 3 years ago

            Just because the card has a stated TDP doesn't mean that it is accurate under testing conditions. So no, it doesn't necessarily mean that 1080s in SLI would use more power than a Radeon Pro Duo. If you look at PCPer's benchmarks, you'll clearly see that the Radeon Pro Duo uses more power than 980 Tis in SLI, which have a stated TDP of 250W each.

            [url<]http://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-Pro-Duo-Review/Clock-Speeds-Power-Consumption-Pricing-and-Conclusi[/url<]

            • pranav0091
            • 3 years ago

            [quote<]1.) Pro Duo is a developer card. I am a developer (OpenCL/CUDA) so to me Tflops actually mean something.[/quote<]

            Do you have numbers for how efficiently you are able to hit peak throughput on these cards? I'm positive that on any real workload you aren't going to be able to sustain anything close to 100% of the peak teraflops. Efficiency of shader utilisation matters - A LOT. Here is a refresher - a lower teraflop number on the GM204 didn't prevent it from beating Hawaii - [url<]https://techreport.com/review/27067/nvidia-geforce-gtx-980-and-970-graphics-cards-reviewed/3[/url<] - or here is an even better example - a [i<]measly[/i<] 6.1-teraflop 980Ti beating an 8.6-teraflop Fury X with relative ease - [url<]https://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4[/url<]

            [quote<]1080 Ti? Surely you mean 1080. You're comparing two cards vs. one card. Also, the TDP for the 1080 is 180W, which means SLI'd it's more than the Pro Duo's 350W.[/quote<]

            10W (~2.8%) more TDP for 2 teraflops more performance (~6.2%). Even by the flawed metric of *peak flops*, 2x 1080 is a better deal. Do we need to talk about the frame times too now?

            [quote<]Also, it's only $100 cheaper, seeing how the Founders Edition is the only card you'll be able to get for months.[/quote<]

            Citation needed. Or maybe actually wait till the cards' launch date.

            <I work at Nvidia, but my opinions are purely personal>

            • NTMBK
            • 3 years ago

            In a graphics workload, things like polygon setup, ROPs, texture sampler throughput, tesselator performance, etc all contribute to performance. OpenCL workloads are more likely to just hammer the FP units, with some use of texture units.

            • ImSpartacus
            • 3 years ago

            If peak theoretical flops is flawed, is there a better metric?

            • pranav0091
            • 3 years ago

            Yes. Instead of peak Teraflops one could use
            (Peak Theoretical Flops * Average Shader Occupancy).

            Average occupancy is something that's tied closely to the driver and the architecture, and more importantly to the workload at hand. You can tune your code to increase the average occupancy, but within limits. For any processor with different fixed-function hardware, something like min(peak performance of unit * average occupancy * #units) is the limiting factor. This will vary across games, and even across parts of the same game. The key is balance.

            You may always be able to design a workload that hits 100% shader occupancy, but the question to be asked is if the workload you care about (the code that represents a few successive frames of the game, or the code of the major loop of your compute program) can be tuned to hit 100% shader occupancy – the answer is almost always a no. Just adding more shader (or any other) resource and expecting performance to go up linearly is foolhardy.
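
            A toy sketch of that min() idea, with every number invented purely for illustration (this isn't any real card's data):

            # frame rate ends up capped by the worst-fed unit, not by peak shader FLOPS alone
            # (every figure below is invented purely for illustration)
            units = {
                # name: (peak rate, average occupancy, work one frame demands from that unit)
                "shaders":  (9.0e12, 0.70, 40e9),   # FLOP/s, utilisation, FLOPs per frame
                "rops":     (100e9,  0.95, 600e6),  # pixels/s, utilisation, pixels per frame
                "geometry": (6.0e9,  0.60, 30e6),   # tris/s, utilisation, triangles per frame
            }
            fps_limit = {name: peak * occ / need for name, (peak, occ, need) in units.items()}
            bottleneck = min(fps_limit, key=fps_limit.get)
            print(bottleneck, round(fps_limit[bottleneck]))  # the unit that caps the frame rate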

            What NTMBK said is very true – compute programs are more shader heavy than games, and that makes their performance more sensitive to peak teraflops than games. But there is another important factor to be considered – programmability.

            Machine time is generally cheaper than human time - you buy a GPU once, but pay the employees monthly. It's a factor that's conveniently forgotten in the spec war. A 6-teraflop GPU that can be coded to hit 85% occupancy in 10 hours is often more desirable than an 8-teraflop GPU that needs 25 hours to hit an equivalent number. (The numbers here are made up to illustrate my point.)

            Of course, there is a market for every niche - but are there enough people within that niche to justify a card specifically for them? Only the companies know for sure.

            <I work at Nvidia, but my opinions are only personal>

            • NTMBK
            • 3 years ago

            Great point, FPU utilisation can certainly be very architecture dependent. For instance some of our code had to be reworked for Kepler, as you need instruction level parallelism to saturate all 192 cores in an SMX (TLP alone won’t get you there).

            • Ninjitsu
            • 3 years ago

            Didn't Nvidia restructure Kepler's SMs while going to Maxwell to increase utilisation? I see a similar trend in Pascal.

            • NTMBK
            • 3 years ago

            Yup, Maxwell made it easier, and GP100 doubles the register file (making it easier again).

            • muxr
            • 3 years ago

            > or here is an even better example - a measly 6.1-teraflop 980Ti beating an 8.6-teraflop Fury X with relative ease - [url<]https://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4[/url<]

            Where? ALU throughput looks far in favor of the Fury X. The R9 290X does almost as well as the 980 Ti in that benchmark you linked. The only benchmarks I see the 980 winning are graphics-related ones like poly and tessellation. In my work I don't care about those.

            Listen, I realize you're a paid Nvidia shill (based on your signature), but for all the work I've done on GPGPU (SHA hashing, for instance), AMD offers far more bang per buck. If I get around to it I will benchmark my 980 vs. my AMD cards and make a blog post about it based on real OpenCL workloads I've been running; it's not even close. But do I really have to? Cryptominers have been preferring AMD cards for years for a reason. [url<]http://www.extremetech.com/computing/153467-amd-destroys-nvidia-bitcoin-mining[/url<]

            Also, Founder's Edition has been confirmed to mean just a reference card. So wait and see, I guess.

            • stefem
            • 3 years ago

            The Beyond3D ALU throughput test does not represent real world performance, it’s just a synthetic test to measure performance under a best case scenario in a specific case.

            I think pranav0091 was trying to point out the difference between synthetic specific tests and real world performance and he is right on this, the fact that he work for NVIDIA doesn’t change that and I actually find honest that it states this in his signature.
            Just an example, which GPU maker (and why) have been preferred by @home folders? The level of performance you get from a processor is dependant from the kind of workload and the implementation, you should know this.

            Also, why would you look at an ALU benchmark that tests float and vec4 performance if you are interested in hashing, which mostly relies on integer operations?

            • muxr
            • 3 years ago

            Integer performance is real-world performance to me. pranav0091 linked those benchmarks; I was just commenting on them. He was spreading FUD about AMD somehow not having the throughput to best Nvidia, when his own link shows that not to be the case. Keep up with the thread!

            Folding@home depends on the workload: [url<]http://images.anandtech.com/graphs/graph9306/74800.png[/url<]

            • pranav0091
            • 3 years ago

            I linked you for the throughput table on those pages. The games are reviewed on the following pages. You'll see, if you read them, how having more flops (or FB bandwidth) wasn't helping the Fury X.

            I'm not sure why you say I'm spreading FUD; I'm only linking to established reviews, from this very site, that show the 980 Ti handily beating the Fury X in games despite having lower flops. The very work I do is along the domains of performance, and so I have [i<]some[/i<] idea of what I'm talking about. I have no intention to indulge in name-calling. To each his own. I thought you wished to indulge in a discussion; it's apparent that you aren't. Use the GPU that best suits your needs - simple as that. Have a good day, sir. 🙂 <I work at Nvidia, but my opinions are only personal>

            • MathMan
            • 3 years ago

            Dude, you’re linking to FP64 results, comparing a GTX 980 Ti with 176 GFLOPS against, say a 7970 which has 1000 GFLOPS. Of course, the latter will win.

            (Though it’s interesting to know that even there a 980 Ti is only 35% behind the 7970. Which basically shows that Folding@Home isn’t a representative test for FP64 FLOPS comparison.)

            I’m surprised that you’re so much up in arms about the simple observation that Nvidia GPUs have historically been able to extract more gaming performance of out a lower amount of pure FLOPS. I thought this was general knowledge.

            Based on launch benchmark results at TechPowerUp:
            A GTX 580, 1581 GFLOPS, was faster than a 5870, 2720 GFLOPS.
            A GTX 680, 3090 GFLOPS, was faster than a 7970, 4301 GFLOPS.
            A GTX 980, 4981 GFLOPS, was faster than an R9 290, 5632 GFLOPS.

            That doesn’t mean that AMD has a worse architecture, it’s just different architectural choices: it can be that it’s less area to put down a lot of ALUs that aren’t always optimally fed, than putting less ALUs that are kept busy all the time.
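
            Just to quantify that observation with the peak-GFLOPS figures listed above: the faster NVIDIA card in each pair got there with a sizeable paper-FLOPS deficit.

            # paper-FLOPS deficit of the faster (NVIDIA) card in each launch-era pair above
            pairs = {
                "GTX 580 vs HD 5870": (1581, 2720),
                "GTX 680 vs HD 7970": (3090, 4301),
                "GTX 980 vs R9 290":  (4981, 5632),
            }
            for name, (nv, amd) in pairs.items():
                print(f"{name}: {1 - nv / amd:.0%} fewer peak GFLOPS, yet faster in games")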

            • muxr
            • 3 years ago

            If you paid any attention, that’s exactly what I have been saying.

            • bhappy
            • 3 years ago

            Your posts are obviously pretty messed up if you feel the need to constantly edit them; this one is up to 7 edits already lol

            • Val Paladin
            • 3 years ago

            “seeing how Founders is the only card you’ll be able to get for months.”

            So how do you account for AIBs talking about June for OC'd custom cards?
            [url<]http://cdn.videocardz.com/1/2016/05/GALAXY-GeForce-GTX-1080-XTREME-GAMING.jpg[/url<]

            • muxr
            • 3 years ago

            Classic deceptive marketing 101. Sell some cards at the lower price while selling most of them for $100 more. What's their incentive to sell for less when they can charge more?

            [url<]https://www.youtube.com/watch?v=nb_R1VZqLcE[/url<] I guarantee the non-founders will be out of stock every time you look.

            • MathMan
            • 3 years ago

            > I guarantee the non-founders will be out of stock every time you look.

            Definitely: they’ll sell like hotcakes!

            • muxr
            • 3 years ago

            Absolutely. I wouldn't dare underestimate the gullibility of Nvidia buyers. If we've learned anything from the 970, it's that people don't care if they are being deceived.

            • chuckula
            • 3 years ago

            Have you considered renaming your account to be Buzz Schillington?

            Incidentally: I don’t believe a word that you say when you claim to be some sort of “openCL/CUDA” developer.

            • muxr
            • 3 years ago

            That’s rich coming from you.

            • Airmantharp
            • 3 years ago

            Wait, so 970 buyers caring about real-world gaming performance means that they don’t care if they’re being deceived?

      • beck2448
      • 3 years ago

      A marketing gimmick. No practical point at all.

    • Forge
    • 3 years ago

    Nice. I’ll take a Founders 1080 and an order of fries, to go.

      • Krogoth
      • 3 years ago

      I'd rather save the $100 and do the overclocking myself.

        • ImSpartacus
        • 3 years ago

        Something tells me you won’t have that choice until months after the release.

      • chuckula
      • 3 years ago

      The extra $100 for a cadre of Jem’Hadar bodyguards sounds reasonable.

        • AnotherReader
        • 3 years ago

        Now that would be impressive.

          • chuckula
          • 3 years ago

          Nice to see that somebody got that reference.

            • BIF
            • 3 years ago

            See, we need more of this.

            • cygnus1
            • 3 years ago

            Concur

        • Krogoth
        • 3 years ago

        Pfft, I'd rather have a tailor who may or may not be a former super spy. 😉

          • AnotherReader
          • 3 years ago

          That would be a card masquerading as a mid-range card which overclocks to 150% of its nominal clock speed.

        • BIF
        • 3 years ago

        It looks like the Amazonian Storm Trooper in the latest SW. Or the Cylons from the first BSG.

        • Renko
        • 3 years ago

        The question is how much nvidia will charge you for the ketracel-white to keep your cadre alive. 😉

          • AnotherReader
          • 3 years ago

          The founders wouldn’t take kindly to unauthorized Jem’Hadar replicas.

      • ronch
      • 3 years ago

      Now that’s one Happy Meal™ you’ll Love.

    • Freon
    • 3 years ago

    I don't think the viewport stuff should be undersold. It's a huge deal. Their demos, while fumbled in presentation, did a good job showing how bad multi-monitor projection has always been, as it really only looks right if you leave your monitors on a single plane. Of course, in practice people actually bend them in, which seriously distorts the image.

    It’ll be interesting to see this work on one of the new generation curved displays like an X34.

      • mczak
      • 3 years ago

      So far I fail to see what’s new with the viewport stuff though. Maxwell 2 could already output primitives to multiple viewports (and layers) simultaneously (there’s a gl extension for it, NV_viewport_array2).
      If it’s referring to something else I have no idea what exactly…

        • cygnus1
        • 3 years ago

        If it can double Titan X performance in "VR" apps (presumably because it can fully render both eyes' views of the scene in a single pass instead of two), I'm figuring there's more to it.

    • torquer
    • 3 years ago

    Man that Jen-Hsun Huang. What a showman he isn’t.

    Seriously, Nvidia's presentations are always so god-awfully painful. Love their products, but this is like watching LARPers.

      • yogibbear
      • 3 years ago

      But it's 1000 times better when the CEO of the company presents, rather than an over-trained paid actor/shill. I'd always choose an awkward, weird style from the actual, accountable CEO over some endlessly rehearsed, polished presentation that's lying to me.

        • torquer
        • 3 years ago

        I get your point, but I don't know what's worse – Jen with Tim or Jen with Tom. The interactions were just so painfully bad.

          • CheetoPet
          • 3 years ago

          I've done enough tech demos in front of live audiences to appreciate the situation. Can't imagine Jen is the sort to allocate much time to dry runs.

            • the
            • 3 years ago

            Jen flat out said that they didn't rehearse this beforehand.

            • torquer
            • 3 years ago

            That doesn’t excuse it being painful to watch.

        • the
        • 3 years ago

        This is just a trend Steve Jobs started way back when. Previously it was mainly people from marketing, the folks whose job it is to run a good show and make sales. I'd like to see it shift back, or at least have the CEO just play host and introduce each of the real presenters.

        • sweatshopking
        • 3 years ago

        I’d rather just see some graphs and no talking.

      • djayjp
      • 3 years ago

      Pretty crazy, though, that he never uses notes or a teleprompter, especially considering some of his talks can last over an hour. (I didn't watch this one, but I watched the GP100 one and thought he did a pretty great job, considering.)

    • tsk
    • 3 years ago

    Are the founders edition the reference design cards?

      • Krogoth
      • 3 years ago

      They are Nvidia's in-house "factory overclocked" SKUs. Nvidia has been selling gaming cards directly to the market for some time now (they started with second-gen Kepler). The OEM for the cards is PNY.

        • ronch
        • 3 years ago

        Being factory-overclocked, shouldn’t they be called ‘Foundry Edition?’ I bet someone made a typo and they just decided to go with it. 😀

      • USAFTW
      • 3 years ago

      As the founding fathers intended.

      • NTMBK
      • 3 years ago

      They’re the Early Access cards.

    • Demetri
    • 3 years ago

    1070 sounds pretty damn good. $380 and faster than TitanX. Your move, AMD.

      • f0d
      • 3 years ago

      I think the 1070 is a real bargain for that price and performance.
      It's like the 970 all over again.

        • ChangWang
        • 3 years ago

        Yeah, hopefully they did the mem controller justice this go around

          • Krogoth
          • 3 years ago

          The problems with binned Maxwell chips weren't due to the memory controllers. They were simply due to how the architecture allocated its resources: disabling parts of the chip resulted in a partitioned memory space. This can cause micro-stuttering when the chip is forced to use the second, smaller memory partition.

          I suspect Pascal will not suffer from the same problems in its binned forms.
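
          For a rough feel of why touching that second partition hurts, here's a back-of-the-envelope Python sketch. The ~196 GB/s and ~28 GB/s segment bandwidths are the widely reported GTX 970 figures; the per-frame traffic splits are made up purely for illustration:

          [code<]
# Back-of-the-envelope: effective GTX 970 memory bandwidth when a fraction of a
# frame's traffic lands in the slow 0.5GB segment. Segment bandwidths are the
# widely reported figures (~196 GB/s fast, ~28 GB/s slow); the traffic splits
# below are made-up illustrative values, not measurements.
FAST_GBPS = 196.0  # 3.5GB partition (7 of 8 memory channels)
SLOW_GBPS = 28.0   # 0.5GB partition (1 channel)

def effective_bandwidth(slow_fraction):
    """Time-weighted (harmonic) average bandwidth for a given traffic split."""
    time_per_byte = (1 - slow_fraction) / FAST_GBPS + slow_fraction / SLOW_GBPS
    return 1.0 / time_per_byte

for frac in (0.00, 0.05, 0.10, 0.25):
    print(f"{frac:4.0%} in slow segment -> ~{effective_bandwidth(frac):5.1f} GB/s effective")
          [/code<]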

            • ImSpartacus
            • 3 years ago

            Do you have a source for the micro-stuttering?

            I know about the 970's quirks and its partially disabled ROP/L2 segment. But I don't remember seeing any performance impact as a result.

            But ultimately, I agree. Nvidia will likely not use that ability in future binning even if it’s still built into Pascal, crossbar and all.

            • f0d
            • 3 years ago

            Using more than 3.5GB, and the stuttering it caused, depended on what game you played – some games used more than 3.5GB and some didn't, and some games stuttered while others didn't.
            This test from PC Perspective is a good one:
            [url<]http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Looking-GTX-970-Memory-Performance[/url<] They had to go to some pretty extreme lengths to use more than 3.5GB of memory, and when they did, the framerate was unplayable (BF4 was around 15fps and COD: Advanced Warfare was around 20fps).

          • f0d
          • 3 years ago

          Honestly, that whole issue was exaggerated. Pretty much anything that used more than 3.5GB of VRAM was running at too low a framerate anyway. Heck, I have a 4GB 290 and I'm struggling to find a way to use over 2GB of VRAM in my games even when they're maxed out with the highest AA and settings.

          Even if there were a similar issue here, it would be 7GB of full-speed memory instead of 8GB, which is still WAY more than would be usable.

            • Krogoth
            • 3 years ago

            Enthusiasts were more pissed off about how Nvidia's marketing handled it once the cat got out of the bag.

            Nvidia's engineers knew about the issue from the get-go; marketing dropped the ball. If they had disclosed the potential issue from the start, nobody would have given a crap about it, save for some die-hard AMD fanboys.

            • f0d
            • 3 years ago

            Yeah, I agree they handled it pretty badly, and it's something that might never be forgotten.

            • travbrad
            • 3 years ago

            And once that memory configuration and Nvidia's horrible lies came to light, the 970 still outsold the AMD competition by a wide margin. Apparently it wasn't that important.

            I agree it would have been nice to know from the start, but it just wasn't an issue performance-wise, because that class of card will never get good framerates at the resolutions/settings that could actually push usage above 3.5GB.

            • NovusBogus
            • 3 years ago

            I’ve heard reports that it’s an issue in Arena Commander because the game’s not optimized at all and expects to gobble up 100.0% of available memory just because it can. It’s one of the things that ultimately drove me toward a 4GB 960. I agree otherwise though.

      • Dudeface
      • 3 years ago

      Seems like a bigger performance difference from the 1070 to the 1080 than the 970 to the 980 was.

      GFLOPS alone is a +38.5% advantage for the 1080 over the 1070, whereas the 980 was +32%, based on numbers from wiki and today's presentation.

      Memory bandwidth will be at least a ~43% advantage for the 1080 over the 1070, depending on what they've done with the 1070's memory controller (based on the assumption of 7GHz GDDR5 on the 1070).

      edit: Price-wise, the 1080's MSRP is +58% over the 1070's. Based on MSRP, the 980 was +67% over the 970. Or, put more simply, the price difference is the same dollar-wise, but the performance gap looks to be larger. Based on that, the 1080's value equation seems to have improved compared to the 980's.
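
      A quick Python sketch of where those percentages come from, using the announced TFLOPS figures and MSRPs. The 1070's memory setup hasn't been detailed, so 7Gbps GDDR5 on a 256-bit bus is an assumption:

      [code<]
# Announced figures plus one assumption: the GTX 1070's memory is assumed to be
# 7 Gbps GDDR5 on a 256-bit bus (not confirmed at announcement time).
gtx1080 = {"tflops": 9.0, "msrp": 599, "mem_gbps": 10.0, "bus_bits": 256}  # GDDR5X
gtx1070 = {"tflops": 6.5, "msrp": 379, "mem_gbps": 7.0,  "bus_bits": 256}  # assumed GDDR5

def bandwidth_gbs(card):
    # GB/s = data rate (Gbps per pin) x bus width (bits) / 8 bits per byte
    return card["mem_gbps"] * card["bus_bits"] / 8

def pct_gap(a, b):
    return (a / b - 1) * 100

print(f"Compute  : 1080 is +{pct_gap(gtx1080['tflops'], gtx1070['tflops']):.1f}% over the 1070")
print(f"Bandwidth: 1080 is +{pct_gap(bandwidth_gbs(gtx1080), bandwidth_gbs(gtx1070)):.0f}% over the 1070")
print(f"Price    : 1080 is +{pct_gap(gtx1080['msrp'], gtx1070['msrp']):.0f}% over the 1070")
print(f"Reference: the 980 over the 970 at launch MSRP was +{pct_gap(549, 329):.0f}%")
      [/code<]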

        • cygnus1
        • 3 years ago

        I’ve never spent that much on a video card, but I’m strongly considering the 1080

        edit: spelling

          • floodo1
          • 3 years ago

          I had been considering a 980 Ti, so I kinda wonder how much the 1080 Ti will cost and how long it will be before it comes out… Considering it looks like the timeframe will be very long, I'm probably going to get a 1080!! I'm like you and have never spent that much on a single card, but this thing looks pretty compelling for me right now!

      • beck2448
      • 3 years ago

      Power usage is amazing. Over 2GHz. On air???!!
      Can't wait to see the non-reference monsters.

    • chuckula
    • 3 years ago

    Three times the energy efficiency of the Titan-X according to Jen-Hsun.
    AMD was promising that Polaris would be 2.5X better than some prior generation of GCN.

    Oh: Official Launch date: May 27.

      • Krogoth
      • 3 years ago

      Supplies hinge on 16nm yields. If yields are poor with the 16nm process, then we may have a repeat of the HD 5850 and HD 5870 launch on our hands.

        • chuckula
        • 3 years ago

        It’s already in mass production and unlike GloFo’s “14nm” process that literally has not appeared on the market in any product, TSMC has been turning out 16nm parts for quite some time and in very large quantities.

          • the
          • 3 years ago

          But only for smaller chips, and certainly not anything as complex as a GPU. Though having shipping products in volume on the market does say something about the process technology.

          The variable for yield now rests in the actual design. nVidia has produced designs that don't yield well (*cough* GTX 480 *cough*) before, independent of the manufacturing process.

            • chuckula
            • 3 years ago

            [quote<]certainly not anything as complex as a GPU. [/quote<] Don't let Blastdoor hear you claim that the A9X is less complex than a GPU!

            • cygnus1
            • 3 years ago

            lol, this is just a wee bit bigger than the 147 mm[super<]2[/super<] A9X

          • Krogoth
          • 3 years ago

          Mass production != no yield issues.

          Just because you can produce tons of potential dies doesn't mean all of them are going to work at the desired power envelope and clockspeed (assuming they don't have any defects). Nvidia is pushing mid-Pascal pretty hard.

          There are a large number of eager buyers who are going to snatch up every card on the market once they officially launch.

      • ronch
      • 3 years ago

      Also, I think whenever Nvidia or Intel put out numbers they’re ‘typical’ numbers, whereas AMD’s numbers are always ‘up to’.

      OK let the downthumbs roll.

    • chuckula
    • 3 years ago

    Incidentally, that 2.1GHz number while quite impressive is an overclock. The official boost clock is 1733 MHz.

    Having said that, if 2.1GHz on air actually works for a large sample of chips, there will be some impressive factory overclocks coming.

      • Krogoth
      • 3 years ago

      I suspect that mid-Pascal will become memory-bandwidth starved (stuck on GDDR5X) when you overclock it, which is why Nvidia is waiting for HBM2 supply to improve before launching a consumer version of big Pascal (not the current GP100 on the market right now).

        • the
        • 3 years ago

        GP100 is likely next year’s Titan model when they’re finally able to produce 610 mm^2 chips in volume with HBM2. Everything this year is going to be massively supply constrained. Thankfully the expected demand in terms of raw volume for systems over $100k is relatively small.

          • djayjp
          • 3 years ago

          Finally able? Or finally willing? Seems they realize their competition is going to be mild and went for some fat margins this year :/

            • the
            • 3 years ago

            HBM2 is just entering manufacturing, and GP100 has a 610 mm^2 die size. Neither of these can be produced today in significant volumes for a consumer launch.

            • djayjp
            • 3 years ago

            “Neither of these can be produced today in significant volumes for a consumer launch.”

            Based on what, exactly?

            • the
            • 3 years ago

            The massive 610 mm^2 die size, plus the fact that Samsung [i<]just started[/i<] manufacturing HBM2. Simply put, the necessary parts aren't being made fast enough for a consumer launch right now. A year from now, when they have had time to ramp up volumes and improve yields, it will be more feasible.

            • evilpaul
            • 3 years ago

            You don't just need the HBM2 chips and a good yield on a very large chip; you also need the big interposer silicon to connect the two sets of components together.

            I've got a GTX 970 and would be interested to learn more about the 1070. It's not the flagship, though, so it isn't getting the same attention in the presentation or as much analysis. :/

            • djayjp
            • 3 years ago

            But do you know the current production output of Samsung's HBM2 manufacturing? Many large GPUs have been made in the past (besides, gaming dies don't need FP64).

            • the
            • 3 years ago

            Low, since they just started fabbing it for sale on January 20th. It takes several months for a wafer to go into a fab and get to the point where it can be sold. Right now would be about the time nVidia is actually getting its first batch of HBM2 stacks from Samsung.

            Gaming dies certainly don't need to be FP64-heavy, but reducing that would mean manufacturing a different die from GP100. In all likelihood, nVidia will artificially cripple FP64 support on consumer GPUs even though they're capable of full FP64 throughput like Quadro/Tesla cards. This is nothing new for nVidia.

            GP100 is the largest GPU die to date at 610 mm^2. The previous record holder was GM200 at 601 mm^2, followed by Fiji at 596 mm^2. Before 2015, the record holder was GT200 at 576 mm^2 back in 2008. The only reason for the recent explosion of these large dies has been the delays in getting new process nodes online. Yields at these sizes, especially on a new node like 16 nm FinFET, are going to be low.

            I would also predict an end to big-die designs. Interposers can be used to scale up the number of raw units in a design while keeping individual die sizes sane for yield reasons.
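
            To put some illustrative numbers on the yield point, here's a rough Python sketch using the standard dies-per-wafer approximation and a simple Poisson yield model. The defect densities below are made-up example values, not TSMC data:

            [code<]
import math

# Illustrative only: gross dies per 300mm wafer (standard approximation) and a
# simple Poisson yield model, Y = exp(-die_area * D0). The D0 values below are
# made-up examples, not actual TSMC 16nm FinFET defect densities.
WAFER_DIAMETER_MM = 300.0
DIE_AREA_MM2 = 610.0  # GP100-class die

def gross_dies_per_wafer(area_mm2, wafer_d_mm=WAFER_DIAMETER_MM):
    r = wafer_d_mm / 2
    return math.pi * r * r / area_mm2 - math.pi * wafer_d_mm / math.sqrt(2 * area_mm2)

def poisson_yield(area_mm2, d0_per_cm2):
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

gross = gross_dies_per_wafer(DIE_AREA_MM2)
for d0 in (0.1, 0.2, 0.3):  # assumed defects per cm^2
    y = poisson_yield(DIE_AREA_MM2, d0)
    print(f"D0={d0:.1f}/cm^2: ~{gross:.0f} die candidates/wafer, ~{gross * y:.0f} good ({y:.0%} yield)")
            [/code<]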

            • djayjp
            • 3 years ago

            That's a very informative response! Thx :). Though they have made cut-down versions of their FP64 GPUs for consumer purposes in the past, so I personally still suspect that lack of competition is the cause (really, this strategy emerged with the GTX 680/Kepler generation). They made huge GPUs from the get-go with Fermi (GTX 480), after all.

          • jihadjoe
          • 3 years ago

          If they really use GP100, with its 1/2-rate DP, then this will be the most workstation-y Titan yet.

      • nanoflower
      • 3 years ago

      It's already been suggested that 2.1GHz is from a Founders Edition card. So: limited-edition, cherry-picked GPUs to get that stable clock.

      • willmore
      • 3 years ago

      67C at idle is nothing to brag about.

        • bhtooefr
        • 3 years ago

        That wasn’t at idle, it was actively rendering a scene.

        Now, whether it was at 100% load doing that…

          • willmore
          • 3 years ago

          It’s almost like people don’t understand jokes anymore. Or maybe it’s just the rabid nVidia fanboys.

        • chuckula
        • 3 years ago

        Not melting down into liquid slag while running on an air cooler is something to brag about.

          • auxy
          • 3 years ago

          This comment doesn’t really make sense unless you’re making an indirect dig at Fiji… which runs fine and even overclocks on an air cooler? Swing and a miss, Chuckie. (*´﹃`*)

            • f0d
            • 3 years ago

            Maybe he wasn't having a dig at Fiji – maybe it was the 290/290X?
            [url<]https://techreport.com/review/25602/amd-radeon-r9-290-graphics-card-reviewed/8[/url<] [url<]https://techreport.com/r.x/radeon-r9-290/gpu-temps.gif[/url<]

            • DoomGuy64
            • 3 years ago

            I think Fermi is more relevant, and speaking from experience Hawaii isn’t that hot. The problem with the 290 was mostly horrible reference coolers, and certain vendors using bad thermal paste. Replacing the thermal paste will lower temps around 10C on a lot of 290’s. Plus there’s the whole undervolting/underclocking thing.

            Reference coolers and QC is one of the things that ticks me off with AMD. Nvidia pulls it off 99.9% of the time, aside from the dustbuster. What is so hard about making a good reference cooler, and properly applying thermal paste?

      • beck2448
      • 3 years ago

      Exactly. Waiting for non reference.

      • NovusBogus
      • 3 years ago

      Could we be looking at the i7-2600K of the GPU world?

        • DeadOfKnight
        • 3 years ago

        It probably will be the last jump in performance this big, if that’s what you mean. They’ve been stuck on 28nm for so long and this affects GPUs way more than CPUs because more transistors means more stream processors. It can mean more CPU cores as well, but most users have very little use for more than 4 cores. It’s kind of a repeat of the 680 though. We got a “performance” chip occupying the “enthusiast” category at the top and the price hasn’t gone down that much.

          • the
          • 3 years ago

          I wouldn't say that. 14 nm and 10 nm will happen; the question is simply when at this point (recall that TSMC's 16 nm FinFET process is based upon their 20 nm node). That is two more process generations.

          As for the shaders themselves, there is still room for improvement. Recall that shaders are basically the same sort of logic units used in CPUs, just designed for density so that manufacturers can put a significant number of them onto a chip. This means that each shader running a thread is relatively inefficient; SMT hides many of the inefficiencies, since parallelism can be extracted at the thread level on the chip. It would take up more die space, but I would predict a shift toward a focus on single-thread performance. Similarly, I see a larger focus on caching within the GPU. Both of these will have to be balanced against adding more functional units, which directly increase performance, versus efficiency gains for the die space consumed.

      • pranav0091
      • 3 years ago

      Do you worry about the clocks now, chuckula? 😉

      <I work at Nvidia, but my comments are personal opinions>

        • chuckula
        • 3 years ago

        I don’t mind being proven wrong!

      • AJSB
      • 3 years ago

      I also wonder, when NVIDIA says the 1080 has twice the VR performance, whether that was with the card OC'd to 2.1GHz.

    • JumpingJack
    • 3 years ago

    *Scratches head*

    This can’t be right, where are the wood screws?

    • the
    • 3 years ago

    I feel bad for Tom.

      • RAGEPRO
      • 3 years ago

      As do we all.

      • tsk
      • 3 years ago

      It’s ok, I hear AMD is hiring.

      • drfish
      • 3 years ago

      So awkward…

      • Freon
      • 3 years ago

      He already updated his LinkedIn to “looking for next opportunity.”

        • chuckula
        • 3 years ago

        Experience: Major malfunctions in live product introductions.

      • ImSpartacus
      • 3 years ago

      That felt almost bad enough to be staged. But then, I just don't think it would be staged.

        • the
        • 3 years ago

        I do think Jen-Hsun was honest when he said they didn't rehearse beforehand. Many of the mistakes in the presentation stem from that.

        I've worked in an environment dealing with live productions like that, and you [i<]always[/i<] do a rehearsal for anything being sent outside of your company. It catches glitches as well as sets expectations for the presenter and the content driver.

        The moment that sticks out to me was when Jen-Hsun said he wanted Tom to explain something and then cut Tom off, saying he had been sarcastic. I don't think Tom had a camera feed, or was focused enough on driving the content, to pick up on that. Tom was doing just what his boss told him to do. 🙁

      • Ryhadar
      • 3 years ago

      I didn’t actually know what you meant at first. And then I found this…

      [url<]http://youtu.be/DEq2e1NA2tY[/url<]

        • Ninjitsu
        • 3 years ago

        Wow was he messing around with the CEO? 😮

        • EzioAs
        • 3 years ago

        Wow. It’s like a real-life version of The Office…

    • Krogoth
    • 3 years ago

    It's the 680 or the 980 all over again.

    I don't care about the stupid marketing points. It isn't really that shocking that the 1080 would be faster than a 980 SLI setup (not a 980 Ti one).

      • ImSpartacus
      • 3 years ago

      Yeah, but you have to admit that Nvidia is good at pulling some shit. They even got TR to claim 2x perf over a 980 Ti. It's hilarious. It'll be fixed/caveated shortly, I'm sure.

        • torquer
        • 3 years ago

        Yeah, I’m sure they “got” TR to post something erroneous that they didn’t report in their own presentation. They clearly stated its 2x 980 performance, not 980 Ti.

          • ImSpartacus
          • 3 years ago

          Yes, they started by saying that a 1080 was superior to SLI 980s at around [url=http://anandtech.com/show/10305/the-nvidia-geforce-2016-liveblog<]40 minutes into the presentation[/url<], but that's not quite the same as saying it's 2x the performance of a 980, as you know.

          Furthermore, they definitely made the claim that a 1080 is twice the performance of a Titan X (roughly equivalent to a 980 Ti for our purposes). However, this came much later in the presentation, and it relied on some Pascal-specific performance "cheats" that apply to VR only (in real-world non-VR gaming, it's more like 10-20% above a 980 Ti, as [url=http://images.anandtech.com/doci/10305/SSP_163.JPG<]stated earlier in their presentation[/url<]).

          This was in an early iteration of TR's article above (and it still sorta is, in the phrase "Huang said the GTX 1080 will be twice as fast as a single Titan X—all while using a little more power than a single GTX 980."), but they've since removed it, as I suggested they would in the comment you replied to (the article no longer has "developing" at the bottom).

          TR can certainly report what the presentation says, but I would hope that a respected institution would add a certain amount of commentary and analysis to rein Nvidia's marketing department back in. After all, Jen-Hsun famously says that he uses "CEO math," and it's evident in this presentation.

            • torquer
            • 3 years ago

            I think your point could be made without your original implication that something unseemly was afoot.

      • ImSpartacus
      • 3 years ago

      Whew, looks like you’re sharing unpopular facts again.

      The folks that waited for their Pascal savior don’t want to acknowledge that Nvidia has real honest engineering & supply challenges and they won’t give away performance for free.

        • Krogoth
        • 3 years ago

        I don’t understand it either.

        The 680 and 980 shook up the GPU market with their debuts. They were pretty good deals and held up pretty well over their lifetimes. I don't fall for silly gimmicks that are practically useless for 90%+ of the PC gaming market.

        The days of massive performance and feature boosts between GPU generations have been over since Fermi and Cypress.

          • ImSpartacus
          • 3 years ago

          Yeah, GP104 will be fine. It seems to be hitting all of the performance targets that a realistic person would set.

          Even if it didn’t, since there’s no Polaris equivalent at that performance level, Nvidia didn’t really have to hit this one out of the park for the 2016 lineup. Though I’m sure they want to at least have something ready for Vega, which should bump the dial at least a little bit for AMD (though I honestly haven’t kept up with Vega).

      • DeadOfKnight
      • 3 years ago

      I agree it's not that impressive; I expected Pascal to be exactly what it is. I am impressed they aren't charging more for it than they are, though; I expected them to charge the Founders Edition prices as the base prices. I am just hoping to see some big gains before the inevitable plateau that is coming in 4-6 years. Moore's law is coming to an end. We hear it all the time, but I think it hasn't really sunk in. This is a pretty decent leap and hopefully not the last of its kind.

      • General
      • 3 years ago

      It's SLI Titan X vs. a single 1080, not SLI 980s.
