Rumor: Nvidia’s GeForce GTX 1080 shows its face in 3DMark

We've got more to report today in our recent streak of Pascal rumors. VideoCardz.com has released what it claims are benchmarks of Nvidia's presumed next-gen flagship, the GeForce GTX 1080. The benchmarks include runs of 3DMark 11's Performance preset and 3DMark's Fire Strike Extreme preset.

Image: VideoCardz.com

The purported benchmarks show a card equipped with 8GB of graphics memory. VideoCardz guesses the GTX 1080 uses GDDR5X running at a base speed of 2500 MT/s. One of the benchmarks shows the card's core clock running as high as a whopping 1860MHz. If that frequency proves accurate for a reference GTX 1080, it's roughly 73% higher than the 1075MHz boost clock of the reference GeForce GTX 980 Ti.

Running 3DMark 11's Performance preset on the card in question yields a score of 27,683 points. That score is a fair bit above the 23,000-to-25,000 range VideoCardz cites for the GeForce GTX Titan X. The Fire Strike Extreme score lines up with expectations for the GTX 1080, too. The card scores 8,959 in that test, significantly higher than factory-massaged GeForce GTX 980 Tis, which tend to land in the mid-7,000s.
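
Working out the deltas those figures imply is straightforward. Here's a quick sanity check in Python; the only number below that doesn't come from the leak or the ranges above is the reference GTX 980 Ti's 1075MHz boost clock.

```python
# Sanity-check the percentage deltas implied by the leaked numbers.

def pct_gain(new, old):
    """Percent increase of `new` over `old`."""
    return (new / old - 1) * 100

# Core clock: leaked 1860MHz peak vs. the reference 980 Ti's 1075MHz boost
print(f"Clock:       +{pct_gain(1860, 1075):.0f}%")   # ~+73%

# 3DMark 11 Performance: leak vs. the Titan X range VideoCardz cites
print(f"3DMark 11:   +{pct_gain(27683, 25000):.0f}% to +{pct_gain(27683, 23000):.0f}%")  # ~+11% to +20%

# Fire Strike Extreme: leak vs. a mid-7,000s factory-overclocked 980 Ti
print(f"Fire Strike: +{pct_gain(8959, 7500):.0f}%")   # ~+19%
```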

Comments closed
    • TwoEars
    • 3 years ago

    My EVGA 980 Ti with a 100MHz OC has a Fire Strike Extreme score of 8614 (9244 graphics), so if these 1080 scores are OC numbers as well, there really isn't that much in it.

    • f0d
    • 3 years ago

    Looks like 16nm FF clocks really well.
    2GHz with overclocking might be doable 🙂

    • End User
    • 3 years ago

    This will be a nice upgrade from my 770.

      • Dizik
      • 3 years ago

      I have a GTX 460. 😐

        • Airmantharp
        • 3 years ago

        This looks like I could replace my 970s with one card… which is what usually happens, except there's usually a generation bump in between!

        • anotherengineer
        • 3 years ago

        Rock on!! Radeon HD 6850 here!!

          • ronch
          • 3 years ago

          HD 7770. Poor us. /self pity

            • tipoo
            • 3 years ago

            Iris Pro baby

            • anotherengineer
            • 3 years ago

            I’ve got no qualms with this card; for the money ($135) it's the best I've ever bought.

            No freesync screen yet, and the few older games I play still cap out at 60Hz refresh rate.

            Looking forward to 14/16nm though and DP1.3, and other new features. If my 6850 lasted this long, the next card should easily do the same.

          • digitalnut
          • 3 years ago

          HD 5850 — been waiting for the past year for an upgrade!

            • USAFTW
            • 3 years ago

            Same here! Been waiting ages to replace this tired 5870.

      • vargis14
      • 3 years ago

      Nice upgrade for my two 770 4GB cards.

    • anotherengineer
    • 3 years ago

    Meh

    Probably way out of my $250 budget.

      • Deanjo
      • 3 years ago

      Pretty hard to buy a card for $40 USD. 😛

        • anotherengineer
        • 3 years ago

        You’re going to use your business NCIX discount rate for me 😉

    • Meadows
    • 3 years ago

    If that score is correct, it means it’s only faster by 10-20%. In layman’s terms, you’ll get 68 fps instead of 60 fps in a modern game. Big whoop.

    I'm not sure why, but I'd need to see a delta of at least 30% before I get excited about any of this stuff.

    Edit: I just realised the card tested was not nearly the best Pascal version, so there’s opportunity to get excited later on, after all.

      • chΒ΅ck
      • 3 years ago

      What if you're currently getting 30fps with a GTX 660, and you upgrade to the 1080 and get 68fps?
      A 127% performance increase :).
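
      The arithmetic, for anyone who wants to check (using Meadows' 60fps baseline for the first line):

      ```python
      # fps math from this thread
      print(60 * 1.10, 60 * 1.20)   # +10-20% on 60fps -> 66.0 to 72.0 fps
      print((68 / 30 - 1) * 100)    # 30fps -> 68fps: ~126.7% increase
      ```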

        • Meadows
        • 3 years ago

        Sure, but if I’m going to throw out that much money then I might as well just settle for the 980 Ti instead.

      • tipoo
      • 3 years ago

      10-20% faster than a Titan X that costs over 1,000 dollars (I see 1500CAD). If it's cheaper, that's still progress. Then imagine what the card actually priced like the Titan X will do.

        • nanoflower
        • 3 years ago

        Which is why people looking at that level of performance will wait for the 10xx-family Titan version. I can't say I blame them, since we know AMD has its big chip coming in early 2017, and Nvidia will want theirs out to compete, so it's at most a six-month wait for Nvidia's new Titan.

      • Vaughn
      • 3 years ago

      For me, when it comes to video cards, the delta has to be a 50% performance improvement; anything less is a waste of money.

        • Meadows
        • 3 years ago

        Talking about the previous high-end to latest high-end delta here, rather than what I’d consider for an upgrade. When it comes to upgrades, I don’t tend to settle for less than +50% either.

        • demani
        • 3 years ago

        Thing is, some of these seem to be able to maintain their frame rates at higher resolutions. This might be the first real 4K gaming card.

      • cygnus1
      • 3 years ago

      Have we even seen a more than 10% difference between the last couple generations at similar price points though?

      The thing that makes it a big whoop to me: if the peak clock is so much higher (as much as ~73% higher), have they really improved clock-for-clock performance at all?
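
      Back-of-the-envelope, using the article's numbers (and assuming a ~7,500-point, 1075MHz-boost 980 Ti as the baseline, which is a guess rather than a measurement):

      ```python
      # Rough clock-for-clock comparison from the leaked numbers alone
      score_ratio = 8959 / 7500         # ~1.19x the Fire Strike Extreme score
      clock_ratio = 1860 / 1075         # ~1.73x the clock speed
      print(score_ratio / clock_ratio)  # ~0.69 -> noticeably less work per clock
      ```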

        • nanoflower
        • 3 years ago

        That's why it's usually best to skip a generation, so you aren't paying for that small boost in performance from generation to generation. You don't need to wait as long with GPUs as with CPUs to see a big jump, but it still takes time for GPU performance to improve by a significant amount at a given price point.

        I guess we will shortly see whether the predictions come true that prices will be higher for a given class of GPU this time around, due to the increased costs of a new process node versus a mature one.

          • cygnus1
          • 3 years ago

          And that's what I've been doing. I've had a 760 for quite a while, and this time I want to go up to the x80 level. It should be a monumental leap.

          • bfar
          • 3 years ago

          There was a time when they used to almost double performance each year, but it’s long gone.

        • Ninjitsu
        • 3 years ago

        You’re comparing little Pascal to big Maxwell.

          • cygnus1
          • 3 years ago

          I was under the impression there wasn't going to be a "big" Pascal for consumer GPUs, and that it would be kept for HPC use in Tesla cards.

            • bfar
            • 3 years ago

            Well, they said that about big Kepler too, but as yields improve it becomes a realistic proposition.

            • ImSpartacus
            • 3 years ago

            You're correct for 2016. We'll likely see a big Pascal in or around 2017. It might not be based on the gigantic GP100, but it'll likely have similar performance.

            The part underpinning the rumored 1080 is only GP104. That's still enough to edge out the 980 Ti (in a much smaller power envelope), but it's not blowing anything out of the water. There's no free lunch.

            • cygnus1
            • 3 years ago

            It’ll blow my 760 out of the water 🙂

            • ImSpartacus
            • 3 years ago

            But so would a 980 Ti. And the 1080 probably won't be much cheaper than the discounted 980 Ti cards that we've seen throughout the past few months ([url=http://slickdeals.net/f/8712799-zotac-980-ti-amp-omega-549]as low as $549[/url]).

            I fully trust Nvidia to continue the same steady performance increases (e.g. the newest GX**4 part slightly beats the previous GX**0 part), but we won't see the ludicrous performance "jumps" that some people are pining for.

            If you want those jaw-dropping "jumps", GP100 has shown itself to be that GPU, but then Nvidia will effectively charge you thousands per GPU (and that's in bulk orders). So there's no free lunch for people playing the waiting game.

            • PixelArmy
            • 3 years ago

            And so would a 970! :rolleyes:

            “the 1080 probably won’t be much cheaper than the discounted 980 Ti cards”
            This phrasing suggests you expect the price to be somewhere in the same ballpark, or maybe even [i]slightly[/i] (but not much) cheaper. Given that, I'm not sure why you're suggesting what will in effect be an older, slower, possibly more expensive card...

            • ImSpartacus
            • 3 years ago

            I’m just turning the hype dial down a notch or two.

            The poor chap seems to be eager for something that’ll “blow his 760 out of the water.”

            But if you’re waiting for a 1080, then you probably should’ve just gotten a 980 Ti in the past couple months.

            The performance difference will not be massive. The 1080 will be competing against Fiji until at least early 2017. It just needs to beat the 980 Ti, which already (generally) beats a Fury X.

            And because Nvidia will be alone at the top until 2017, don't expect a massive price difference from today's parts. I wouldn't be surprised if the 1080 costs $550-600: just something to force Fiji to at or under $500, with no need to go lower. But oh wait, you can already get a 980 Ti for $550-600 during various discounts.

            The only appreciable difference will be power usage. The 1080 will probably have a <180W TDP while the 980 Ti is up at 250W.

            Now that we're literally hours away from (probably) hearing about 2016's Pascal parts, it would obviously be wise to wait until then. But I wouldn't have this tunnel vision where you're 100% sold on 2016's Pascal. Nvidia isn't in the business of giving away free lunches.

            • cygnus1
            • 3 years ago

            I totally agree. I don't expect the 1080 to be dramatically faster than the 980 Ti, and I'm betting they'll be close in pricing when the 1080 is released. I don't buy a new GPU very often, and I'd like it to last at least a couple of years. The 760 is about at the three-year mark and still runs most everything OK at 1080p, which is fine with the monitor I have now.

            I think everyone knew last year that Nvidia would finally be moving to a smaller process and releasing a new architecture this summer. Between that and the monitors released last year all failing to meet my personal feature want-list, there was no point for me in buying a 980 or 980 Ti.

            What's going to blow my GTX 760 and cheap 1080p 27" out of the water is a GTX 1080 plus a 1440p or 4K monitor, possibly at 21:9, all without increasing power usage by much. All THAT together qualifies as "blowing it out of the water".

            • Ninjitsu
            • 3 years ago

            That doesn't matter – you're still comparing two different classes of things. Of course the new i3 isn't going to be faster overall than last year's i5.

            However, as others have said, GP100 or a derivative will likely come to the consumer market eventually, as has been tradition.

      • Ninjitsu
      • 3 years ago

      It's well within the range of expectation, i.e. 0-30%, with 10-20% being the most realistic prediction from a lot of us.

      This is GP104 vs GM200. Compared against the same price point (GM204), this is 60-70% over. Big whoop indeed.

      Also, 10-20% means 66-72 FPS.

      Finally, it means GTX 980 Ti-level performance from the GTX 1070, at $350. This isn't for GM200 owners; it's for everyone else.

      I don’t understand the disappointment to be honest.

        • Krogoth
        • 3 years ago

        It's just that some people are still stuck with the absurd expectation that every new generation must bring a 50-100% gain. Physics and architectural limitations be damned.

          • ImSpartacus
          • 3 years ago

          And then it doesn't happen, yet they continue to get hyped generation after generation.

          • Ninjitsu
          • 3 years ago

          But then that’s the thing – there likely WILL be a 50-100% gain over the last generation, if you compare the same class of product.

          GP104 is probably going to be 60-70% faster than GM204, and GP100 IS ALREADY 54% faster than GM200 (not to mention 25x the FP64).

    • ronch
    • 3 years ago

    Have you guys checked out AMD’s website today? Seems both camps are really burning to start the next GPU wars even before actual cards hit the shelves.

      • nanoflower
      • 3 years ago

      I didn't see anything new there. The Polaris information has been out for a bit, so what did you see from AMD that was new? I do hope AMD plans to respond soon to Nvidia's announcement. If the predictions are right, this will be a sort of pre-launch, with the actual launch and availability of cards starting at the end of the month during Computex. That will give Nvidia a lot of free press throughout the month.

      • bfar
      • 3 years ago

      It's a sign they're aware of decent competition. Nvidia seems particularly keen to get out of the traps first. I reckon there will be a real GPU war between the 1070 and something using Polaris 10, which is great for us.

      From all the rumors and leaks, I get the impression the 1080 is up there on its own, which unfortunately means it will be more expensive than it could be.

    • chuckula
    • 3 years ago

    Assuming any of this is even half true and isn't just a Photoshop contest entry, is it possible there's some sort of double-pumping going on, similar to Fermi, where the "true" clock speed isn't quite as insane as that number suggests? (A Fermi-style doubled clock would put the base domain at a far more pedestrian 930MHz.)

      • pranav0091
      • 3 years ago

      Would the clocks really matter if those scores are true?

      <I work for Nvidia, but my comments are purely personal>

        • chΒ΅ck
        • 3 years ago

        Maybe just for power consumption and heat concerns.
        Will this be 22nm or 14nm?

          • tipoo
          • 3 years ago

          I thought it was billed to be 16nm FinFET

          • pranav0091
          • 3 years ago

          Don’t you worry on those fronts 😉
          16FF.

          <I work at Nvidia, but my opinions are purely personal>

        • nanoflower
        • 3 years ago

        Yes, the clocks matter. One possibility is that this is an already-overclocked GPU, in which case there isn't much difference between it and an OC'ed 980 Ti. There's also the question, if the card isn't pre-OC'ed, of how much OC headroom it has. Those are some of the questions people will really want answered before purchasing (at least enthusiasts who want to push their GPUs).

          • pranav0091
          • 3 years ago

          Fair enough. Personally, I’m not into overclocking, so I often forget people care about it.

          <I work for Nvidia, but my opinions are personal>

            • Prestige Worldwide
            • 3 years ago

            Most Maxwell cards have good OC headroom; you'd be doing yourself a disservice not to overclock if you have a 980 or 970.

          • ColeLT1
          • 3 years ago

          Nvidia has been dialing in their boost clocks to near the edge since the 6 series. Maybe it was just me, but unless I volted them, +50MHz is where my 660, 660 Ti, 670 SLI, and 970 all started crashing, while the memory typically handled +400 to +600 in Afterburner. My 970 factory-boosts to 1405MHz, but 1455MHz is not stable at all.

            • f0d
            • 3 years ago

            That's almost the exact opposite of my experience when I had an Nvidia GPU. My 670 got to around 1200MHz with no extra voltage at all, and in the brief time I had a Gigabyte G1 Gaming 970 I was able to get to 1470 with no extra voltage either. It all depends on your silicon lottery luck, I guess.

            My memory was similar, in that it could overclock like crazy.

            • ColeLT1
            • 3 years ago

            All my cards were OC models; the 970 I have is EVGA's highest-clocked model (I really should have mentioned that). So I guess it's more that the board vendors did the squeezing.

            • f0d
            • 3 years ago

            Same with me. My 670 was a Windforce 3 OC, and the 970 I briefly used was a Gigabyte G1 Gaming.

            It's all down to your luck in the silicon lottery, more than "this GPU from this brand/vendor always overclocks the same amount."

            • pranav0091
            • 3 years ago

            GPU clocking is an art as much as it is a science. You have to consider thermals and power, and the fine art of how the ratio of GPU-to-memory clocks contributes to performance differently on different chips, even within the same architecture, and for that matter across different games on the exact same GPU. You are battling a thermal dissipation wall with different optimal clocks per game, per card, per die.
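
            As a toy illustration of why the optimum moves around (every number below is invented on the spot, nothing to do with any real chip):

            ```python
            # Toy model: pick the best core/memory clock split under a power cap.
            # Per-game sensitivity: fraction of frame time bound by core vs. memory.
            games = {"shooter": (0.7, 0.3), "open_world": (0.5, 0.5)}

            def perf(core, mem, w_core, w_mem):
                """Harmonic blend: each clock domain gates its share of the frame."""
                return 1 / (w_core / core + w_mem / mem)

            def power(core, mem):
                """Power rises superlinearly with clock (crude approximation)."""
                return (core / 1000) ** 2.5 + 0.5 * (mem / 3500) ** 2.5

            POWER_CAP = 2.0  # arbitrary budget in made-up units

            for game, (wc, wm) in games.items():
                feasible = [(c, m) for c in range(1000, 2001, 50)
                                   for m in range(3000, 4001, 100)
                                   if power(c, m) <= POWER_CAP]
                best = max(feasible, key=lambda cm: perf(*cm, wc, wm))
                print(game, best)  # the best feasible split shifts with the workload
            ```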

            I don't have the specific knowledge (or the authority to issue an official comment, obviously), but as a layman it sounds unlikely that Nvidia has been edging the clocks closer to the limits, going by the reports I've read of how well the GTX 9## series overclocked on plain-jane air coolers. I've heard regular reports of clocks going up to 1.4GHz+, and sometimes as high as 1.5GHz, on the GTX 9xx on air coolers (not stock, though).

            If you don't leave room for OC, the AIBs won't be able to differentiate their cards as much as they'd like, and you stand to annoy the hardcore users too. I don't think any company would happily do that, unless of course they were desperate for that last ounce of performance. That's not the feeling I get from how easy it was for the 980 Ti (a floor-swept chip) to beat the HBM-equipped Fury X (a full die).

            You might not agree with the above, particularly coming from an Nvidia employee, but that's just my two cents.

            PS: How do you guys keep track of replies to your posts? I don't see any options…

            <I work at Nvidia, but my opinions are purely personal>

            • ColeLT1
            • 3 years ago

            If you are silver or gold subscriber you get email notifications:
            [url<]https://techreport.com/subscriptions.x[/url<] I can get good clocks out of all the past cards I have had, it just took a volt modded bios to do it. I don't flash cards I sell, unless requested, so I only did it to a handful. Out of about thirty 6, 7 and 9 series cards, I have had only one, an ASUS TOP 780TI that was a golden goose, +700 on memory and +100 on core with no voltage needed.

            • Prestige Worldwide
            • 3 years ago

            My ASUS 970 Strix was stable at 1500MHz at +37mV with +300 on memory.

            My friend’s Gigabyte G1 970 was stable at 1600 MHz.

        • Ninjitsu
        • 3 years ago

        From an academic and architectural standpoint, yes. Sometimes we’d like to go beyond what marketing wants to tell us today – it is why we read this website, after all.

    • ImSpartacus
    • 3 years ago

    Those clocks are frightening, but not all that surprising.

      • tipoo
      • 3 years ago

      Maybe they're returning to a split core clock/shader clock like Fermi? The shader clock was always much higher (double) on that. Then add in Boost.

        • Leader952
        • 3 years ago

        No

        • Prestige Worldwide
        • 3 years ago

        Many Maxwell cards could easily overclock to 1600MHz. Going from 28nm to 16nm FF, a 1.86GHz clock is quite understandable. We might even see overclocks over 2GHz!
