Nvidia’s GeForce GTX 1080 Ti graphics card reviewed

Sometimes at TR, we do funny things for major product launches. We’ve asked chips to review themselves. We’ve had the NSA monitor our testing labs. With the advent of the GeForce GTX 1080 Ti, Nvidia’s fastest consumer graphics card ever, I felt like it was time to continue the tradition.

Once I got this card on the bench, though, I dropped that idea, because the GTX 1080 Ti is no joke. I’m also not that funny.

If you’re not already familiar, Nvidia tends to launch its biggest graphics chip of each generation aboard a Titan-branded card that sticks around for a while, after which it sends some folks in with the world’s tiniest chainsaws to produce a slightly cut-down yet similar-performing product it can sell for less money. When the Titan X Pascal launched in July of last year, we knew that a GTX 1080 Ti would likely follow. It just took a while.

In the case of the $700 GTX 1080 Ti, the tiny-chainsaw-wielders didn’t have much work to do with the GP102 graphics chip that’s shared by the Titan X Pascal. The GTX 1080 Ti comes with the same 3584 stream processors enabled as its $1200 forebear. To justify the $500 price difference, Nvidia buzzed off eight ROPs and narrowed the memory bus width to 352 bits, resulting in an unusual 11GB pool of GDDR5X RAM compared to the Titan X Pascal’s 12GB on a 384-bit bus. At the same time, the green team bumped up the boost clock 51 MHz and filled out that 11GB of RAM with new 11 GT/s memory. That means in some regards, the GTX 1080 Ti is actually faster than the Titan. Maybe some people will really want that black cooler.
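
For the curious, the memory math behind that trade-off is simple enough to check on the back of an envelope. Here’s a minimal sketch of the arithmetic (ours, not Nvidia’s):

```python
# Peak memory bandwidth = bus width (bits) / 8 * transfer rate (GT/s).
# A quick sanity check of the narrower-but-faster bus, not part of our test tooling.
def bandwidth_gbs(bus_width_bits, transfer_rate_gts):
    return bus_width_bits / 8 * transfer_rate_gts

print(bandwidth_gbs(352, 11))  # GTX 1080 Ti: 484.0 GB/s
print(bandwidth_gbs(384, 10))  # Titan X Pascal: 480.0 GB/s
```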

| | GPU base clock (MHz) | GPU boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Shader processors | Memory path (bits) | GDDR5(X) transfer rate | Memory size | Peak power draw | E-tail price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GTX 970 | 1050 | 1178 | 56 | 104 | 1664 | 224+32 | 7 GT/s | 3.5+0.5GB | 145W | $329.99 |
| GTX 980 | 1126 | 1216 | 64 | 128 | 2048 | 256 | 7 GT/s | 4 GB | 165W | $499.99 |
| GTX 980 Ti | 1002 | 1075 | 96 | 176 | 2816 | 384 | 7 GT/s | 6 GB | 250W | $649.99 |
| Titan X | 1002 | 1075 | 96 | 192 | 3072 | 384 | 7 GT/s | 12 GB | 250W | $999.99 |
| GTX 1080 | 1607 | 1733 | 64 | 160 | 2560 | 256 | 10 GT/s | 8GB | 180W | $499.99 |
| GTX 1080 Ti | 1480 | 1582 | 88 | 224 | 3584 | 352 | 11 GT/s | 11GB | 250W | $699.99 |
| Titan X Pascal | 1417 | 1531 | 96 | 224 | 3584 | 384 | 10 GT/s | 12GB | 250W | $1200.00 |

Even folks who prefer black may want to go with the GTX 1080 Ti anyway, because this Founders Edition card has an improved design compared to the first FE coolers.

Most importantly, the DVI output is no more, so the 1080 Ti’s blower has more vent area to exhaust the heat the cooler wicks away from the 471 mm² GPU underneath. Those who still need a DVI connector will find a DisplayPort-to-DVI adapter in the 1080 Ti FE’s box. Even if you have to burn a DisplayPort this way, the 1080 Ti FE still offers two more DisplayPort 1.3 outs and an HDMI connector.

The cooler keeps the same 63-mm blower fan and a vapor-chamber heatsink similar to the ones we know from earlier Nvidia reference designs. We didn’t have time to strip down the 1080 Ti FE for forensic purposes, but the new card weighs 25g more than the GTX 1080 FE. We’re guessing not that much is different underneath the shroud.

The GTX 1080 Ti reference PCB. Source: Nvidia

At a board level, though, there are most definitely differences. With a 250W TDP, the Founders Edition card needs both six-pin and eight-pin PCIe power connectors to operate. Nvidia also says it’s beefed up the power-delivery subsystem of the 1080 Ti with a seven-phase “dual FET” setup capable of delivering 250 A of current to the GPU. This setup purportedly delivers cleaner power with less waste heat than the five-phase design of the GTX 1080 FE. We’ll see how this setup translates to overclocking prowess a little later on.

 

Our testing methods

Most of the numbers you’ll see on the following pages were captured with PresentMon, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. Namely, we’re interested in the time between Present() calls, which corresponds to the frame times we used to collect with Fraps.
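
For those who want to follow along at home, boiling a PresentMon capture down to the numbers we report takes only a few lines. Here’s a minimal sketch; the MsBetweenPresents column name matches the PresentMon builds we’ve used, so check your own capture’s header before trusting it:

```python
import csv
import statistics

def load_frame_times(csv_path, column="MsBetweenPresents"):
    """Read per-frame render times (in ms) from a PresentMon CSV capture."""
    with open(csv_path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f)]

def summarize(frame_times_ms):
    """Average FPS and 99th-percentile frame time, the two headline numbers we chart."""
    avg_fps = 1000 / statistics.mean(frame_times_ms)
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th-percentile frame time
    return avg_fps, p99_ms
```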

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor Core i7-7700K
Motherboard Gigabyte Aorus GA-Z270X-Gaming 8
Chipset Intel Z270
Memory size 16GB (2 DIMMs)
Memory type G.Skill Trident Z DDR4-3866
Memory timings 18-19-19-39 2T
Hard drives 2x Kingston HyperX 480GB, Corsair Neutron XT 480GB
Power supply Corsair RMx 850W
OS Windows 10 Pro

 

| | Driver revision | GPU base clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| Radeon R9 Fury X | Radeon Software 17.2.1 | 1050 | — | 1000 | 4096 |
| EVGA GeForce GTX 1070 SC2 | GeForce 378.78 | 1594 | 1784 | 2002 | 8192 |
| GeForce GTX 1080 Founders Edition | GeForce 378.78 | 1607 | 1733 | 2500 | 8192 |
| GeForce GTX 1080 Ti Founders Edition | GeForce 378.78 | 1480 | 1582 | 1753 | 11264 |

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and EVGA supplied the graphics cards for testing, as well.

Some game settings (especially texture sizes) were not friendly to the Radeon R9 Fury X’s 4GB of RAM. Where necessary, we reduced these settings to prevent crippling performance issues. All other in-game settings remained the same between the Fury X and the Nvidia cards on the bench.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 3840×2160 and 60 Hz, unless otherwise noted.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Sizing ’em up

Take some clock speed information and some other numbers about per-clock capacity from the latest crop of high-end graphics cards, and you get this neat table:

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak shader arithmetic rate (tflops) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- |
| Asus R9 290X | 67 | 185/92 | 4.2 | 5.9 | 346 |
| Radeon R9 295 X2 | 130 | 358/179 | 8.1 | 11.3 | 640 |
| Radeon R9 Fury X | 67 | 269/134 | 4.2 | 8.6 | 512 |
| GeForce GTX 780 Ti | 37 | 223/223 | 4.6 | 5.3 | 336 |
| Gigabyte GTX 980 | 85 | 170/170 | 5.3 | 5.4 | 224 |
| GeForce GTX 980 Ti | 95 | 189/189 | 6.5 | 6.1 | 336 |
| GeForce Titan X | 103 | 206/206 | 6.5 | 6.6 | 336 |
| GeForce GTX 1080 | 111 | 277/277 | 6.9 | 8.9 | 320 |
| GeForce GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 | 484 |
| GeForce Titan X (Pascal) | 147 | 343/343 | 9.2 | 11.0 | 480 |

We won’t be testing every card in the table above, but we think it’s useful to see how far we’ve come from some popular graphics cards of the past. The GTX 1080 Ti and the Titan X Pascal both offer more of, well, everything than any other card in this table save memory bandwidth. As we’ll soon see, however, theoretical peaks alone don’t tell the whole story.
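
If you’d like to rebuild the table above yourself, each peak falls straight out of the spec sheet. Here’s a rough sketch of the arithmetic using the GTX 1080 Ti’s numbers from the first page (our own derivation, and GPU Boost can push shipping cards past these figures):

```python
# Theoretical peaks derived from unit counts and the rated boost clock.
def peak_rates(boost_mhz, rops, tex_units, shaders, bus_bits, mem_gts):
    clock_ghz = boost_mhz / 1000
    return {
        "pixel_fill_gpixels_s": rops * clock_ghz,         # ROPs x clock
        "texel_rate_gtexels_s": tex_units * clock_ghz,    # texture units x clock
        "shader_tflops": shaders * 2 * clock_ghz / 1000,  # one FMA = two FLOPs per SP per clock
        "bandwidth_gb_s": bus_bits / 8 * mem_gts,         # bus width x transfer rate
    }

# GTX 1080 Ti: 1582 MHz boost, 88 ROPs, 224 texture units, 3584 SPs, 352-bit bus, 11 GT/s
print(peak_rates(1582, 88, 224, 3584, 352, 11))
# -> roughly 139 Gpixels/s, 354 Gtexels/s, 11.3 tflops, and 484 GB/s
```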

To see how these theoretical numbers play out, we turn to our trusty Beyond3D test suite.

In the first of our synthetic Beyond3D tests, the GTX 1080 Ti delivers what will soon become a familiar sight throughout this review. That 130-GPixels-per-second figure falls a bit short of the card’s theoretical peak, most likely because of the complex way Nvidia’s cuts to GP102’s resources affect its real-world throughput.

Thanks to Pascal’s delta-color-compression mojo and the fact that GP102 is just a whole lotta chip, the GTX 1080 Ti excels at moving a whole lotta data with a 100% compressible black texture. Its incompressible-texture performance approaches the Fury X’s compressed performance. Good grief.

Likely thanks to GPU Boost doing its thing, the 1080 Ti happily beats its theoretical peak texture-filtering rates. It also happily beats up on every other card here.

Here’s a benchmark that’s posed a long-running mystery for us: how do Nvidia’s Maxwell-and-newer chips outstrip their theoretical polygon throughput? Well, we can finally explain what’s going on. TR has long surmised that Nvidia’s Maxwell cards have used a form of tiled rendering to achieve this counterintuitive result, and friend-of-the-site David Kanter proved it a while back. Nvidia admits as much now, and the technique explains the Pascal cards’ performance here.

The GTX 1080 Ti continues to excel in these tests of pure number-crunching ability. It’s hard to say too much about these numbers with my jaw on the floor. Let’s see how these startling benchmarks translate into real-world performance for the GTX 1080 Ti now.

 

Doom (OpenGL)

id Software’s 2016 Doom revival is a blast to play, and it’s also plenty capable of putting the hurt on today’s graphics cards. We selected the game’s Ultra preset with 16X anisotropic filtering and 8X TSSAA and dialed up the resolution to 3840×2160 to see what the GeForce GTX 1080 Ti is capable of.


With Doom‘s OpenGL rendering path, the GTX 1080 Ti achieves average framerates unlike anything we’ve ever seen at 4K with ultra settings, and it delivers 99% of its frames within a picture-perfect 16.7-ms window. A+.


Our “time-spent-beyond-X” graphs can be a bit tricky to interpret, so bear with us for just a moment before you go rocketing off to the conclusion. We set a number of crucial thresholds in our data-processing tools—50 ms, 33.3 ms, 16.7 ms, and 8.3 ms—and determine how long the graphics card spent on frames that took longer than those values to render. Those values correspond to instantaneous frame rates of 20 FPS, 30 FPS, 60 FPS, and 120 FPS.

If even a handful of milliseconds start pouring into our 50-ms bucket, we know that the system is struggling to run a game smoothly, and it’s likely that the end user will notice severe roughness in their gameplay experience if time starts building up there. Too much time spent on frames that take more than 33.3 ms to render means that a system running with traditional vsync on will start running into equally ugly hitches and stutters. Ideally, we want to see a system spend as little time as possible past 16.7 ms rendering frames, and too much time spent past 8.3 ms is starting to become an important consideration for gamers with high-refresh-rate monitors and powerful graphics cards.
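
In code form, the accounting is as simple as it sounds. Here’s a minimal sketch of the tally as we compute it: only the portion of each slow frame past the cutoff counts against the card (the frame times below are toy data, not our measurements):

```python
# "Time spent beyond X": accumulate the excess over the threshold for every
# frame that took longer than the threshold to render.
def time_spent_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Thresholds map to instantaneous frame rates:
# 50 ms = 20 FPS, 33.3 ms = 30 FPS, 16.7 ms = 60 FPS, 8.3 ms = 120 FPS.
sample_frames_ms = [14.2, 16.9, 35.0, 12.8]  # toy data for illustration only
for cutoff in (50.0, 33.3, 16.7, 8.3):
    print(f"beyond {cutoff} ms: {time_spent_beyond(sample_frames_ms, cutoff):.1f} ms")
```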

By those measures, the GTX 1080 Ti is practically perfect for gaming at 4K and 60 FPS with Doom. It spends just a sliver of time on frames that take more than 16.7 ms to render, and it spends 10 seconds less of our one-minute test run past 8.3 ms compared to the GTX 1080. That means a smooth and fluid gameplay experience throughout.

 

Doom (Vulkan)

Aside from the API change, we didn’t change any settings to test Doom‘s Vulkan renderer. Here’s what happened.


Although the GTX 1080 Ti’s average frame rate doesn’t get any better with Vulkan on, its 99th-percentile frame time gets a smidge better. The Radeon R9 Fury X is the real beneficiary of this change, slicing its 99th-percentile figure nearly in half. It still can’t quite catch the GTX 1070, however.


Fury X aside, our “time-spent-beyond” graphs are practically identical to our OpenGL results. The GTX 1080 Ti maintains its considerable lead at 8.3 ms, and it delivers sterling results at the 16.7-ms mark.

 

Crysis 3
Crysis 3 hasn’t failed as a test of the mettle of any graphics card we’ve thrown it at, and the GTX 1080 Ti is no different. We dialed in a 3840×2160 resolution and Very High settings all around, save for SMAA antialiasing instead of more demanding methods.


Crysis 3 still has teeth, and the GTX 1080 Ti doesn’t quite have enough power to defang it. Still, we’ve never seen better performance from this game with all the eye candy cranked at 4K. A bit of judicious dialing-back (or a bit of overclocking) could produce the magic 60-FPS average we want to see.


Our time-spent-beyond graphs back up that close-but-no-cigar feeling. The 1080 Ti spends just over six seconds of our one-minute test run on frames that take more than 16.7 ms to render. None of the other cards here can even come close.

 

Deus Ex: Mankind Divided

We couldn’t quite get Deus Ex: Mankind Divided running well at 4K with any graphics card in our suite, so we switched things up and ran it at 2560×1440 instead with a blend of High and Very High settings and no AA.


Nvidia says the GTX 1080 Ti is about 30% faster than a GTX 1080 on average, and these 2560×1440 results bear out that claim. If you have one of the increasingly-common high-refresh-rate G-Sync panels out there, the Ti is a perfect pairing.


As we should expect, the GTX 1080 Ti spends an imperceptible amount of time past 16.7 ms on difficult frames, but the gap between it and everything else isn’t quite as pronounced in Deus Ex as it is at higher resolutions. Flip over to 8.3 ms, though, and the 1080 Ti asserts its unquestionable superiority once more.

 

Gears of War 4
Gears of War 4 is one of Microsoft’s first-party DirectX 12-only efforts, and it has plenty of PC-friendly graphics settings to make even the most powerful graphics card sweat. We dialed everything up to Ultra at 3840×2160, save for High shadows and screen-space reflections, before doing our usual test run in the early stages of the game.


Here’s another title where the 1080 Ti could likely be made to run a little faster with a couple judicious settings tweaks. Still, the overall experience from the Ti is fast and smooth.


I hate to sound like a broken record, but the 1080 Ti is the best thing going for minimizing the time spent beyond 16.7 ms on tough frames at 4K. Nothing else comes close.

 

Grand Theft Auto V

At 4K with maximum settings, Grand Theft Auto V is a worthy opponent for high-end graphics cards. We expected great things from the GTX 1080 Ti here, so we fired up our usual test run and got to it.


High FPS average? Check. Low (albeit not perfect) 99th-percentile frame time? Check. If you want to dial in stupid settings for GTA V at 4K, the 1080 Ti is your best way there.


Yep, still amazing.

 

Hitman (DirectX 12)

You know the drill by now. We maxed out Hitman at 4K and put our graphics cards to the test.


I’d love to say something interesting about these results, but the Ti defies me with another record-breaking performance. Perhaps it’s more interesting that the once-closely-matched GTX 1070 and R9 Fury X now diverge quite a bit with Hitman‘s DX12 renderer at 4K.


As usual, the 1080 Ti barely puts any time in the 16.7 ms bucket, and all our other cards trail it by wide margins. This is what Tiger Woods must have felt like at the 1997 Masters.

 

Rise of the Tomb Raider (DirectX 11)


Rise of the Tomb Raider is a tougher cookie than most games, and even the GTX 1080 Ti can’t quite manage a 60-FPS average with it at 4K. Still, we’re setting new bars for performance here: just look at everything else.


Although the 1080 Ti spends enough time past 16.7 ms that you’ll likely notice, all the other cards will make you notice a lot more.

 

Tom Clancy’s The Division (DirectX 11)
The Division has a reputation for being a toughie, so we set it up with a blend of high and ultra settings and stepped into the disaster-stricken landscape of New York to see whether it lives up to that billing.


The Division‘s open-world New York setting is a formidable challenge for all of these cards to render, and the 1080 Ti’s 99th-percentile frame time indicates that it has to work harder in spots than its commendable average frame rate would first suggest. We wouldn’t put too much stock in the Fury X’s numbers here—it seems our settings overtaxed the Radeon’s 4GB of RAM.


Even with some tough spots and a bit of spikiness, the GTX 1080 Ti once again spends just a handful of seconds past 16.7 ms in The Division. Nothing else is nearly as smooth.

 

The Witcher 3
The Witcher 3 remains as graphically-challenging a game as ever when the resolution climbs, so we set it up with Ultra settings and HairWorks off at 4K to see how that played in White Orchard.


If you want to enjoy the world of The Witcher in smooth and glorious 4K, the 1080 Ti is your ticket. We’re not sure what happened to the GTX 1070 here, although the result was repeated in all three test runs. We probably need to re-run those numbers for a more accurate picture of performance ASAP.


Ayup. That sure is an impressively small amount of time spent past 16.7 ms. You won’t find a smoother Witcher 3 experience at 4K from any other graphics card.

 

Watch Dogs 2

Here’s a new addition to our GPU-testing suite. We know from our recent Ryzen review that Watch Dogs 2 can challenge every part of a system, so we turned up the eye candy and walked through the forested paths around the game’s Coit Tower landmark to get our graphics cards sweating. Ignore the 1920×1080 resolution in the settings below.


Like The Division, Watch Dogs 2 is set in a huge open world, but in foggy San Francisco rather than post-apocalyptic New York. Those sweeping vistas full of fine detail put the hurt on any graphics card, and even the 1080 Ti can’t quite get this dog to heel.


Still, the 1080 Ti spends about a third as much time on tough frames that take longer than 16.7 ms to render as the GP104-powered cards do. If you need visuals that match the potential of a 4K screen, the 1080 Ti is the only way to go for smooth gameplay.

 

A quick turn at overclocking

Nvidia boasts that GTX 1080 Ti cards can reach 2000 MHz or so with some tweaking, so we decided to turn up the clocks in MSI Afterburner to see whether there was any performance left on the table in our Founders Edition card.

To start off, we ran the Unigine Heaven benchmark for 10 minutes to allow the card to get nice and toasty. After that period, the card settled on a 1759 MHz boost clock, and temperatures hovered around 84° C. As is usual for Pascal cards, Nvidia is quite modest about the boost clock range on offer with the GTX 1080 Ti.

With that baseline established, we maxed out the card’s power and temperature limits and began adding a boost clock offset to the card’s stock figure until the Heaven benchmark crashed or we observed other instability. At the end of that process, we achieved a stable 1974 MHz boost clock, and GPU temperatures hovered around 85° C (albeit with much higher fan speeds and more noise than at stock clocks). We didn’t observe any throttling or other limits kicking in at that speed.

Once we had a core clock dialed in, we began increasing memory speeds while playing Doom with its highest-quality textures enabled and its virtual texturing page file at its maximum size. We progressively increased memory clocks and looked for artifacts or other signs of instability. After many iterations of this process, we settled on a 6048 MHz Afterburner memory clock for an effective transfer rate of 12.1 GT/s. While the card was stable at that eye-popping speed, we saw clocks begin dipping into the 1898 MHz-1936 MHz range under load. GPU-Z indicated that the card was hitting a power limit when this happened.
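
For context on what that memory overclock means for bandwidth, here’s the quick arithmetic (our sketch; Afterburner’s reported GDDR5X clock doubles to the effective transfer rate on this card):

```python
# 6048 MHz in Afterburner works out to roughly 12.1 GT/s effective, which over a
# 352-bit bus lifts peak bandwidth well past the stock 484 GB/s figure.
afterburner_mhz = 6048
effective_gts = afterburner_mhz * 2 / 1000     # ~12.1 GT/s
print(effective_gts, 352 / 8 * effective_gts)  # ~12.1 GT/s, ~532 GB/s
```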

A 12% increase in core clocks and a roughly 10% boost in memory clock speeds is an impressive result, although we would have preferred to have some extra voltage or board power to work with so that the card could sustain both its full memory and boost clock speeds at once. We expect Nvidia’s board partners will take care of this balancing act with their custom card designs.

For an idea of what that performance boost bought us, we ran the built-in benchmark for Rise of the Tomb Raider at 4K using the same test settings we used earlier in the review. At stock clocks, the GTX 1080 Ti ran the benchmark at 67 FPS on average, while our overclock pushed that figure to 73 FPS. 9% more performance at the cost of higher power draw and fan noise is nothing to sniff at.

Be ready for that extra power draw, though, because a pushed GP102 chip sucks down a lot of extra watts. We observed a total system power draw of 414W with our overclocked card running all-out, compared to 365W for stock settings. That’s nothing new with overclocking, though.

 

Conclusions

We’ll kick off our conclusions with our famous value scatter plots. The best values cluster toward the upper left of the chart, where performance is highest and prices are lowest. We’ve presented this data two ways: one as average FPS per dollar, and the other as 99th-percentile FPS per dollar (converted from frame times so that our higher-is-better approach makes sense).
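
For transparency’s sake, here’s a minimal sketch of how each point on those plots is computed; the numbers plugged in below are placeholders, not our measured results:

```python
# One point per card: average FPS per dollar on one plot, and 99th-percentile
# frame time inverted into FPS per dollar on the other, so higher is always better.
def value_point(avg_fps, p99_frame_time_ms, price_usd):
    return {
        "avg_fps_per_dollar": avg_fps / price_usd,
        "p99_fps_per_dollar": (1000 / p99_frame_time_ms) / price_usd,
    }

print(value_point(avg_fps=60.0, p99_frame_time_ms=20.0, price_usd=700))  # placeholder inputs
```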


So there you have it. Whether in performance potential (as measured by average FPS per dollar) or delivered smoothness (as measured by 99th-percentile FPS per dollar), the GeForce GTX 1080 Ti is the finest single-GPU graphics card that we’ve ever tested by a wide margin. It makes smooth, fluid gaming at 4K with ultra settings an accessible prospect, and it’ll scream at similar settings and lower resolutions. In an age where incremental advances are becoming the norm, the GTX 1080 Ti sets a new bar for what’s possible from a gaming PC. Don’t play games on a GTX 1080 Ti unless you want to go home with one. It’s that good.

Though $700 is a lot to pay for a graphics card, I think the GTX 1080 Ti is fairly priced, too. Its 99th-percentile-FPS-per-dollar result is 33% or so better than the GTX 1080’s. With Nvidia’s new $500 suggested price tag for those cards, the 1080 Ti is about 40% more expensive in its Founders Edition form than the former king. Considering that we’re getting performance similar to what the $1200 Titan X Pascal offered until now, the GTX 1080 Ti seems like a great value. Of course, a $1200 graphics card makes almost anything seem like a great value.

The GTX 1080 Ti Founders Edition card is a fine piece of hardware in its stock form, allowing the GP102 GPU to boost well above Nvidia’s modest 1582 MHz boost clock for sustained periods with low enough noise levels. Overclockers attempting to wring the most from GP102 may want to wait for custom cards with more board power to spare, though, because overclocking both the memory and the GPU core clock on our example required the GPU to run well below its peak overclocked boost speed from time to time thanks to power limits.

Even so, we got our 1080 Ti’s GDDR5X memory to a jaw-dropping 12.1 GT/s stable speed and its GPU to a 1974 MHz boost clock without much drama. Those tweaks were good for an extra 10% or so of performance in our brief tests, and that could make the difference between near-60-FPS and 60 FPS on average in some of the titles we tested at 4K. It’s definitely worth spending some time in MSI Afterburner with the GTX 1080 Ti if you don’t mind more noise.

Whither AMD in these results? We decided not to include the R9 Fury X in our final tally, since not all of our games ran acceptably on it and retail stock of that card seems to be drying up ahead of the Radeon RX Vega launch. Still, some back-of-the-napkin math suggests Nvidia is extracting about twice the performance that Fiji offered from the same power budget. That’s an impressive achievement.

Nvidia GeForce GTX 1080 Ti

March 2017

AMD may have a high-end fighter soon in its upcoming Radeon RX Vega, but the GTX 1080 Ti’s arrival now likely isn’t happenstance. Our seat time with pre-production Vega hardware suggests that at least one of those cards will provide GTX 1080-class performance rather than GTX 1080 Ti-slaying power. If so, cutting GTX 1080 prices to $499 is a rather aggressive play by the green team, since it could set a ceiling on what AMD can charge for some members of its latest and greatest. Still, we expect that RX Vega will be competitive with higher-end Pascal for the first time in a while, and if there’s one thing we welcome in the PC hardware market, it’s more competition.

Back in the present, we have no real complaints about the GTX 1080 Ti. It’s as close to perfect as a high-end graphics card can be right now, and its killer combo of delivered performance, high efficiency, and a reasonable price tag make it a shoo-in for a TR Editor’s Choice award.

Comments closed
    • VincentHanna
    • 3 years ago

    So, Question: About temps.

    These things have a 250-300 Watt power draw, and under load get up to 84C+

    Are these things going to be putting out the same kind of heat comparable to Fermi GPUs? or are those days behind us? How hot will this thing make a 10×10 room after 2-5 hrs?

    To put this in perspective, the GTX 580 had the same 250w-300w power draw. This seems like an especially pressing question because Nvidia felt the need to cut S video to improve airflow.

      • GrimDanfango
      • 3 years ago

      It certainly kicks out some heat. I’m glad I went for the reference model that exhausts out the back – you’d need a well ventilated case otherwise!

      Under full load, and overclocked to suck up the full 300W, it does warm my 9×12 office pleasantly. Possibly less pleasantly when summer arrives – I’ll need to open the windows for sure 🙂

      I can easily keep it to a maximum of 84 degrees; a bit of a tweak to the fan profile and reining in the overclock a little keeps temperatures under control.
      If you exclusively use headphones and don’t mind running the fan on leaf-blower mode, you can keep it around 70 pretty easily.

        • VincentHanna
        • 3 years ago

        When I was in college, I left my windows open and heated my room with just GPU power.
        I seriously considered running special duct work for the exhaust.

          • Usacomp2k3
          • 3 years ago

          I actually did that. I had the 120mm output from the top of the case going through 4″ flexible duct to a 4″ booster fan and then exhausting out the window. It didn’t work that great, to be honest. It did work great for the winter, though, bringing cold air in as the intake. I had the R120, which was among the first AIO water coolers. It kept my XP-M 2500 at a chilly +10 over ambient.

        • slate0
        • 3 years ago

        Haha, my tiny apartment needs AC all year round due to these cards (I’m sure the plasma tv doesn’t help)

      • Waco
      • 3 years ago

      The load temperatures mean nothing. All that really matters (for the sake of the topic) is the power usage.

      Yes, things are more efficient at cooler temperatures, but that’s a very small effect compared to the amount of power being used.

      In general, power in == heat out. A 500 watt rig, running at whatever “high” temperature, will produce just as much waste heat as one running at extremely cool temperatures.

        • VincentHanna
        • 3 years ago

        Well, no, because there is a certain amount of power being consumed to do actual work. My GTX 580s have about 1/3 the number of transistors, so applying the same 300W power should theoretically yield better efficiency across that component. Heat is the result of power being lost from the system. The less heat waste produced, the more efficient the system is, and therefore the cooler my room stays while gaming.

        Also, the amount of heat that collects at the sensor is a function of airflow, cooler design, and the amount of waste heat that needs to be dissipated. Since the other factors can be assumed to stay more or less stable across various levels of GPUs, it’s not unreasonable to consider heat at the sensor as analogous (proportional) to total heat output.

          • Waco
          • 3 years ago

          That’s not how it works, though. Power in, heat out.

          The temperature at the sensor means nothing – you can tune the fan up and down, and in general, the power usage will stay fairly constant (assuming no throttling and no runaway conditions in VRMs). It doesn’t matter if the GPU is running at 30 C or 90 C, if it’s using 300 watts, you’re going to be heating the room up with a 300 watt load.

            • VincentHanna
            • 3 years ago

            [quote<]Power in, heat out. [/quote<]

            Actually, THAT'S not how it works. Power is a measure of work. Heat energy is one type of work. There are others. In a 100% efficient system, 0% of the power applied to the system would be lost as heat energy.

            [quote<]The temperature at the sensor means nothing - you can tune the fan up and down[/quote<]

            You can tune the fan up, sure, but doing so would not mean anything because you would be changing the parameters of the measurement. You could also switch the dial on the thermometer to Fahrenheit and declare that "zomg there is so much more heat now." Either way, you'd be wrong. I already stated that the assumption was that the other factors remained stagnant. The corollary to that statement is that if you arbitrarily mess with the fan speed, or the unit of measurement, or any other fool thing, you compensate for it.

            • Waco
            • 3 years ago

            Computers turn power into heat. They are nearly 100% efficient at that. I’d love to see anything otherwise, since I have entire machine rooms full of computers that I would love to be able to cool better thanks to snarkiness and ignorance…

            Changing fan speeds doesn’t change heat output (measurably, and can increase it). Changing temperature of the local chip doesn’t change heat output. There’s no magic here. Power in, heat out.

            • VincentHanna
            • 3 years ago

            The only person being either snarky, [b<]or ignorant[/b<] is you. The only person arguing the point that messing with the fan speed alters the heat output of the chip, or that there are "meaningful relative efficiency gains at cooler temperatures" is you. The only person arguing that computers are magical, and somehow exempt from the laws of physics, is you. None of that, literally, none of it was relevant to my original topic in any way. I just followed you down your own rabbit hole.

            • Waco
            • 3 years ago

            Sigh. Okay. I’ll go back to pretending that [i<]this is my job[/i<] and leave you in bliss.

            • VincentHanna
            • 3 years ago

            Precisely what is your job?

            I mean if you want to whip out the ol’ argument from authority, you may as well actually state those credentials, right?

            • Waco
            • 3 years ago

            [quote<]Precisely what is your job? Measuring the heat output/reletive efficiency of GPUs at various levels of power draw (even though, according to you they are all 0% efficient)? Or standing in a hot room?[/quote<]

            I quoted your original response, since it's far more snarky and useless. 🙂 Go ahead and downvote me all you like. I work in HPC (many tens of megawatt datacenters), and like I said, I'd love to see any methods of increasing "efficiency" in regards to heat output versus power input.

            If you'd like a reference that's more GPU oriented, look no further: [url<]https://www.pugetsystems.com/labs/articles/Gaming-PC-vs-Space-Heater-Efficiency-511/[/url<]

            • exilon
            • 3 years ago

            All electricity used by a semiconductor based integrated circuit ends up as heat within seconds. All of it, except for a few nanojoules used up in bumping atoms around the silicon lattice. This is because the transistors are using power to pump electrons up and down a voltage gradient. All of the circuitry is small enough that they’re leaky as well so any charge above ground level quickly releases its energy as heat.

            This means heat output is equal to electrical power input. Thus Fermi at 250W heats the room at the same rate as a Pascal at 250W.

    • Goloith
    • 3 years ago

    [quote<]While the card was stable at that eye-popping speed, we saw clocks begin dipping into the 1898 MHz-1936 MHz range under load. GPU-Z indicated that the card was hitting a power limit when this happened.

    A 12% increase in core clocks and a roughly 10% boost in memory clock speeds is an impressive result, although we would have preferred to have some extra voltage or board power to work with so that the card could sustain both its full memory and boost clock speeds at once. We expect Nvidia's board partners will take care of this balancing act with their custom card designs.[/quote<]

    So you're running into a power limit even with 120% set on the power limit?

      • GrimDanfango
      • 3 years ago

      I think that’s by design… whatever power limit you set, if you’re running enough of an overclock to let it, it’ll scale up automatically until it hits that limit, and then regulate the boost clock to fit as close as it can… unless you’re hitting your monitor’s refresh rate first, and only partially loading the card.

    • Jeff Kampman
    • 3 years ago

    Subscribers will now find some hi-res card images in this article’s gallery. Enjoy!

    • GrimDanfango
    • 3 years ago

    Well, I received mine. I can thus confirm that yes, it is indeed fast.

    Also with my lack of overclocking expertise, it seemed relatively straightforward to crank the thing up to a-hair-short-of 2000Mhz typical boost core and 11750Mhz ram… didn’t need to fiddle with voltages (beyond the default boost scaling it uses). So far it’s seemed stable in everything I’ve thrown at it.

    Did need to set a custom fan profile, even for non-overclocked use, as it was set by default to a very quiet setting. It would run right up to its default 84 degree thermal limit under load, and promptly limit the boost clocks. With a custom fan curve, it can keep the card at around 77-78 degrees under load, overclocked, and only running up to around 75% fan speed (which is still fairly audible… 100% would probably require closed-back headphones :-))

    • green
    • 3 years ago

    i’m aware that this is a high-end graphics card review
    but just checking to see if putting the 1050ti through it’s paces is on the todo list:
    [url<]https://techreport.com/news/30867/in-the-lab-evga-geforce-gtx-1050-ti-superclocked-graphics-card[/url<]

    • michael_d
    • 3 years ago

    Judging by Doom (Vulkan) results Radeon Vega should be very close + or- 10% – 15%.

      • Airmantharp
      • 3 years ago

      Doom in Vulkan is unfortunately the exception, not the rule.

      Few developers (if any!) have the time and patience to do the optimization that is id’s hallmark, and few care about vendor-neutral APIs as much as id.

      The silver lining is that Bethesda is rumored to be pushing id’s tech for all of their developers- the catch is that it’s still Bethesda.

    • Kougar
    • 3 years ago

    [quote<] Those who still need a DVI connector will find an HDMI-to-DVI adapter in the 1080 Ti FE's box.[/quote<] Everyone is reporting this is a DisplayPort to DVI single-link adapter, not HDMI.

      • Jeff Kampman
      • 3 years ago

      You’re correct. Not sure where I got the idea it was an HDMI connector.

    • Phishy714
    • 3 years ago

    I have had GTX TITAN 6GB (original) cards in SLI since the day those cards came out. I have been holding out for a single card to beat that setup by a good margin before finally making the upgrade. Is this the card to finally give a sizeable performance increase over SLI Titans? Or did I miss that boat a while ago?

      • Freon
      • 3 years ago

      First gen Titans are between a GTX 970 and GTX 980 or so.

      [url<]http://gpuboss.com/gpus/GeForce-GTX-TITAN-vs-GeForce-GTX-970[/url<]
      [url<]http://gpuboss.com/gpus/GeForce-GTX-TITAN-vs-GeForce-GTX-980[/url<]

      I went from 970 SLI to a 1070 and it was on average an improvement. I think GTA5 was the only one where perhaps there was a *tiny* loss, but that's a game that just seems exceptional with SLI. There are other games where it was a massive gain. SLI just... doesn't run that great half the time. Most developers just don't care to spend the time to do any optimization work for SLI.

      I think a GTX 1080 would smear your Titan SLI in tons of games. Even in the best games for SLI the 1080 will win. This 1080 Ti is just in another plane from what you have.

        • Redocbew
        • 3 years ago

        A single 1080 Ti would probably provide better frame times also.

      • Waco
      • 3 years ago

      I went from a pair of GTX 780s (reasonably well clocked) to a Maxwell Titan X…it was a massive upgrade. A 1080 Ti will only further that lead by a significant amount.

    • Jigar
    • 3 years ago

    Best Graphic card for FULL HD Gaming for atleast 5 years.

      • southrncomfortjm
      • 3 years ago

      The name says it all.

    • deruberhanyok
    • 3 years ago

    edit- aha, I missed some of the press release info, apparently the plan is they will be available on the morning of March 10.

    (yes, I’m thinking about it)

    • Rza79
    • 3 years ago

    Still kinda surprising how close the R9 Fury X is to the GTX 1070. I hope that AMD can add around 50% extra performance to this card with Vega so we get some real competition.

    • Laykun
    • 3 years ago

    And people tried telling me that the R9 Fury’s 4GB of VRAM wouldn’t be limiting :\.

    • NoOne ButMe
    • 3 years ago

    10-15% faster than stock top RX Vega I think.

    Top Volta beats this card 25-35% if G5X. 40-60% if HBM.

    • Rectal Prolapse
    • 3 years ago

    You sure there is an HDMI->DVI adapter in your review sample? Strange – AnandTech says this about the DVI adapter included:

    “With that said, as a consolation item of sorts for the remaining DVI users, NVIDIA is including a DisplayPort-to-SL-DVI adapter with the Founder’s Edition card.”

    • Ifalna
    • 3 years ago

    My poor 1070 is getting self esteem issues here.
    NVidia, what have you done?

      • derFunkenstein
      • 3 years ago

      I looked at it and said “hey my 1070 could do 4K if I dial back the settings” and then remembered that I play at 1440p and it’s going to be just fine. 😀

        • EzioAs
        • 3 years ago

        Even my 1060 could probably do 4K if I dial the settings down and target 30 FPS for not super-demanding games. Too bad I don’t have a 4K display to try 😛

          • Klimax
          • 3 years ago

          It won’t. Brutal lack of ROPs and memory bandwidth will be ultimate killer. It won’t be at that point any different from original Titan at 4k.

            • Waco
            • 3 years ago

            I gamed at 4K on my HTPC with a 750 Ti and I still game on it with a vanilla 780. Just don’t expect miracles and it’s no problem…

        • Ifalna
        • 3 years ago

        I still play at 1080p, so I doubt that I’ll have to worry anytime soon either. ^^

      • albundy
      • 3 years ago

      i’d love to see a 1070 ti come out.

    • Ninjitsu
    • 3 years ago

    So no-compromise and more affordable 4K and mid range 1440p with the next generation?

    Impressive, but not unexpected, seeing that 4K hype started back in 2013 or something. Didn’t expect it to be feasible till 2017-2018.

    EDIT: Cool thing is, going by previous generations, the GTX 1180 should be what, 25% faster than the 1080 Ti? That’ll be quite some performance!

    • Meadows
    • 3 years ago

    [quote<]"Even folks who prefer black may want to go with the GTX 1080 Ti anyway, because this Founders Edition card has an improved design compared to the first FE coolers."[/quote<] Black cards matter

    • K-L-Waster
    • 3 years ago

    My inner gamer is making yuuge plans….

    But my credit card seems to be trying to get on the midnight train to anywhere…

    • synthtel2
    • 3 years ago

    I’m still curious how you choose settings for this stuff. For instance, dropping SSR and shadows from ultra to high in GoW4 looks like it should have only a very tiny performance impact (according to the GFE settings guide plus some personal knowledge) – why not just ultra across the board? I don’t think there’s anything untoward going on here, to be clear (the settings weirdness in aggregate doesn’t look biased at all), but I do think it would look better if that process were a bit more transparent.

      • VincentHanna
      • 3 years ago

      If you can run ultra, then run ultra

    • Dysthymia
    • 3 years ago

    Please execute properly on Vega, AMD… and could you hurry too?

    • drfish
    • 3 years ago

    Hey guys, here’s a [url=https://goo.gl/OMQq6o<]bit of extra data[/url<] that didn't make it into the review.

      • chuckula
      • 3 years ago

      How many times do we need to tell you… it’s the frame times that count!

      No go back and run those detailed benchmarks again! :-p

      • Mr Bill
      • 3 years ago

      Now if only I can get the unintentional Sir Mix-a-Lot reference in the title out of my head.

        • Jeff Kampman
        • 3 years ago

        Who said it was unintentional? :V

          • Anovoca
          • 3 years ago

          I think you need to change that to “I like big chips with an hot clocked die”

        • Redocbew
        • 3 years ago

        You like big…. bars?

      • Anovoca
      • 3 years ago

      Hey now, no stealing my Arkkam City joke :O

        • drfish
        • 3 years ago

        I’m sorry, I couldn’t help myself. o_0

    • Firestarter
    • 3 years ago

    I just love that graphics processing is such an embarrassingly parallel problem, there’s no problem they can’t solve by just throwing more logic gates at it!

    • tipoo
    • 3 years ago

    Since we worked out what Ryzen is in Via Nanos, pop quiz, what is the 1080TI as represented by a number of S3 Graphics Chrome 540s?

      • chuckula
      • 3 years ago

      Easy, 2 x 540 = 1080.

      DONE!

        • tipoo
        • 3 years ago

        How could I be so blind!

        It’s even a 540GTX, lol

    • jihadjoe
    • 3 years ago

    Sweet! So basically the perfect 4k/60Hz GPU?

      • JustAnEngineer
      • 3 years ago

      … Except that it doesn’t support VESA standard adaptive sync (variable refresh).

      If GeForce GTX1080Ti supported FreeSync, I would have pre-ordered it.

        • Klimax
        • 3 years ago

        It doesn’t look like it is hurting NVidia at all.

          • Firestarter
          • 3 years ago

          they haven’t gotten my money yet so that’s minus 1 GTX1080Ti gross revenue

        • Kretschmer
        • 3 years ago

        There are trade-offs. Make a choice.

        • tipoo
        • 3 years ago

        Yep. Gsync was cool and all but Freesync 2 offers near everything it had over Freesync 1, for a significantly lower total package cost, looking at comparable monitors.

        My monitor choice might drive my GPU choice more than the other way around if Nvidia doens’t add the open standard.

    • ozzuneoj
    • 3 years ago

    I remember when my GTX 970 was awesome. Now its not even a blip on the radar compared to these monsters. The strange thing though, is that I paid $250 for it about 2 years ago… it was a very good price. But what do we get in that price range now? The 1060 6GB, which is barely even an improvement.

    I’m hoping that nvidia’s announcement that higher memory speeds would be coming to lower end cards means that there will be a bit more performance in this range.

    I certainly don’t need any extra GPU power at the moment, its just the idea that a ~$250-$300 card in 2015 is only marginally slower than a $225-$250 card in 2017, where as the 1070 offers twice the performance for $320-$400 depending on sales.

    And then further down the line it just gets worse, with the exception of the 1050 at around $99 being a very capable value card. Considering how well received the RX 4xx series has been, I’d hope for a bit more competition in pricing in that range. Entry level Maxwell or Pascal GPUs wouldn’t hurt either… they’re still pushing Kepler and in some cases even Fermi in current products.

      • MrDweezil
      • 3 years ago

      Pascal has brought us more performance, but for more money. Its the big reason that we need some competition in the $250+ space.

      • PixelArmy
      • 3 years ago

      Your comparison price is way off (though I agree there are currently some neglected price points).

      2 years ago = March 2015 (roughly 6 months after launch), a $250 GTX 970 would have been an insane deal. [url=https://techreport.com/news/28506/deals-of-the-week-an-ultrawide-ips-display-a-cheap-gtx-970-and-more<]June 2015, TR thought a $310 model was "the cheapest GTX 970 around right now"[/url<]

        • ozzuneoj
        • 3 years ago

        It was a great deal, yes. It was a super basic PNY model with a blower cooler and the price included all of the cash back and discounts I could get on it in March of 2015. I bought it on Wal-Mart.com of all places. Also, TR is the best tech site online but they aren’t penny pinching bargain hunters like some of their readers are, or the people at slickdeals.

        [url<]https://slickdeals.net/e/7796537-pny-gtx-970-4g-gddr-video-card-296-99-ac-free-game-back-in-stock?src=SiteSearchV2Algo1[/url<]
        [url<]https://slickdeals.net/newsearch.php?page=5&firstonly=1&q=gtx+970[/url<]

        My price was not way off, as that was what I spent, and it was not much different than other posted deals. If we're going to split hairs, then fine, call it a $300 card in 2015 (I did say $250-$300). What do you get brand new for $300 now? How about for $250 on an insanely awesome deal? Nothing from nvidia, aside from the 1060 which is a great card but is not much better than a 970.

          • PixelArmy
          • 3 years ago

          My point is your deal skews your relative price/performance ratio. The 1060 started out cheaper and faster than the 970 and its price has dropped quicker.

          Let’s call 970 performance = 1.0, 1060 ~= 1.12
          Let’s compute the perf per $ at your deal price: 1.0 / $250 = 0.004 perf/$
          and at the $300: 1.0/ $300 = 0.0033 perf/$
          The 1060: 1.12 / $250 = 0.00448 perf/$

          0.00448 vs 0.004 is obviously just the relative speed diff (12%) since your insane deal price was the same as the 1060 msrp. However, 0.00448 vs 0.0033 represents a 35% increase.

          Now, there are price gaps in nvidia’s card line up, but it’s not because price/performance hasn’t gotten better.

      • EzioAs
      • 3 years ago

      Not really a fair comparison. The GTX 970 has a higher launch MSRP ($329) than the GTX 1060 ($249). Plus, the 1060 was never targeted as an upgrade for people using a 970 (or above), it was meant for the GTX 960 or to be even more precise, mid-range Keplers (GTX 660, 660 Ti, 760)

        • ozzuneoj
        • 3 years ago

        How is it not fair? The fact that after two years there isn’t anything substantially better than a 970 available under $300 is a sign of stagnation. It’s starting to feel like the 9xx series is the new 8×00 and 9×00 series. Those were relevant in the mid to high end range for a very long time.

        The only real competition seems to be in the $150 range, which is only marginally behind the $200-$250 range.

        We need AMD to stir the pot a bit, that’s for sure. I’m not sure that I’m totally prepared to go for an AMD card myself at this point with some early signs of gsync+backlight strobing being possible finally (a big deal that no one seems to be talking about) , but the market could certainly use a kick in the pants, particularly in the $200-$300 range.

          • MOSFET
          • 3 years ago

          I see your point – there’s a hole in the lineup after the general 10xx price increase.

    • djayjp
    • 3 years ago

    Really wish there were more cards included in the value graph as that’s one of the main reasons I look forward to TR reviews. It really helps to put everything into context when trying to decide at what point one gets diminishing returns.

      • Jeff Kampman
      • 3 years ago

      I wish there were more high-end graphics cards worth including in the chart.

        • Voldenuit
        • 3 years ago

        Any thoughts of comparing SLI/XF setups to 1080Ti? And including data on dropped/runt frames, in such a comparison?

        EDIT: Ooh, and histograms!

          • Beahmont
          • 3 years ago

          I figure that we’ll have to wait for Jeff to get cloned and we’d need to buy him another lab or two.

            • Voldenuit
            • 3 years ago

            Didn’t Damage have a baby? Put him to work in the salt and fps mines of TR!

        • chuckula
        • 3 years ago

        WE TOLD YOU TO WAIT FOR VEGA! 😉

        • Meadows
        • 3 years ago

        Ah, so it’s called “high-end chart” now?

          • Jeff Kampman
          • 3 years ago

          You do understand that commingling different data sets doesn’t work?

            • NeelyCam
            • 3 years ago

            Maybe the problem then is that the data sets are different.

            • Meadows
            • 3 years ago

            Most likely. There’s literally no reason not to show the Radeon 480 or the GTX 1060 for reference, if nothing else.

            • derFunkenstein
            • 3 years ago

            Here’s a reference: spend more on a graphics card if you want to play any of these titles at 4K. An RX480 and a GTX 1060 are not going to do it. The 1070 barely does it. Arguably, the Fury X can’t.

            Most, if not all, of TR’s benchmarks are manual. If you’re playing at 15fps, good luck repeating the demo.

            • Jeff Kampman
            • 3 years ago

            At what settings? We’ve never tested those cards for 4K gaming and maxing out the sliders for them at that resolution like we did here has no real-world relevance.

            • K-L-Waster
            • 3 years ago

            I can just imagine the complaints if you tried…

            • Klimax
            • 3 years ago

            More importantly it would be painful for reviewer. Years ago I was stuck playing in those FPS ranges. (1-20FPS max on few years old games curtsey SiS VGA…)

            Not best experience.

            • Ninjitsu
            • 3 years ago

            I’m usually also jumping up and down for comparisons at lower resolutions or for comparisons with older stuff, but I can’t see what value adding mid range cards brings to that chart. They’re not relevant at 1440p or 4K.

            • anotherengineer
            • 3 years ago

            Ya, be nice, but most sites are now grouping same/similar $$ class cards together.

            [url<]https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_Ti/6.html[/url<] edit - chart I like [url<]https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080_Ti/32.html[/url<]

            • ImSpartacus
            • 3 years ago

            I don’t understand.

            Are you saying that “midrange” stuff like the 480 or 1060 can’t be tested alongside “high end” stuff?

            I feel like I’m missing something.

            • Jeff Kampman
            • 3 years ago

            That’s not what I’m saying at all. I’m saying that you can’t take two different sets of test data obtained using different games, resolutions, and settings and mash them up on a chart that assumes all of the data was collected under identical conditions.

            It’s not worth our limited time to quantify that an RX 480 can only produce 15 FPS or whatever at 4K and ultra settings in many titles, because that’s simply not the type of work it’s meant to do and nobody is purchasing the product for that job. Nobody would ever look at an RX 480 set up that way and claim that it was delivering “playable” performance.

            If we were asking a different question, like “how do today’s midrange and high-end graphics cards perform at 2560×1440?”, we could change our test setup to be more accommodating of cards like the RX 480 and the GTX 1080 Ti in the same review. But for a test that needs to prove the mettle of the GTX 1080 Ti, there is no way to accommodate midrange graphics cards in a way that provides a useful account of both products’ performance.

            • ImSpartacus
            • 3 years ago

            Haha, I think you could’ve just said, “We only had time to test the essential GPUs,” and people would’ve instantly understood.

            • Jeff Kampman
            • 3 years ago

            Still not my point. Please run a Radeon RX 480 at 4K and ultra settings with any of these games and tell me how it goes.

            • Meadows
            • 3 years ago

            I remember the days when TR tested almost every card, [url=https://techreport.com/review/11211/nvidia-geforce-8800-graphics-processor/11<]even if it produced 10 fps[/url<] in some cases, or [url=https://techreport.com/review/14168/nvidia-geforce-9600-gt-graphics-processor/8<]an average fps under 20[/url<] in other cases. Back then it had actually helped me make a few highly informed decisions. Now we're down to so few points on the value plot that you might as well remove the plot altogether.

            • Redocbew
            • 3 years ago

            Wouldn’t a frame time graph of that range defeat the purpose of doing a scatter plot in the first place? Assuming there was a reasonable way to run these tests, I wonder what the effect would be on the GPUs we might actually care to compare with such an outlier in the mix.

        • slate0
        • 3 years ago

        Well it makes it hard to figure out what I’ll gain for trading up my 980 Ti if I have to do some sort of graphics-card-chart-centipede by chaining together results from different generations of reviews 😛 🙂

          • Jeff Kampman
          • 3 years ago

          The GTX 1070 is an almost perfect substitute for your GTX 980 Ti.

            • slate0
            • 3 years ago

            Thanks Jeff! Apologies if I came off as entitled and rude, I really appreciate the work you did on this review.

            • Jeff Kampman
            • 3 years ago

            Didn’t take it that way at all. Thanks!

          • PixelArmy
          • 3 years ago

          Techpowerup GPU DB:
          [url<]https://www.techpowerup.com/gpudb/2724/geforce-gtx-980-ti[/url<] or conversely [url<]https://www.techpowerup.com/gpudb/2877/geforce-gtx-1080-ti[/url<]

          Though this is FPS-based rather than frame-time based, if you care. Edit: and 1080p.

            • slate0
            • 3 years ago

            Thank you very much sir. Super cool of you.

        • ptsant
        • 3 years ago

        Besides people who want to go from one generation of a high-end GPU to another, there are also people who may want to know if it's worth going from a mid-tier to a high-end card (for example because they changed screens, or won the lottery, or got a job, or upgraded their CPU and are wondering if it's worth changing).

        You have convincingly shown that the 1080Ti is the best high-end card. Now, we are wondering how much better it is compared with everyday cards. Would be nice to add an RX480 and a 1060 if you have the time…

          • Voldenuit
          • 3 years ago

          I think you have a good point there. Many people upgrade by skipping generations, so it’s worth showing how much a 780Ti or 290X owner would get by upgrading to a current card.

      • DoomGuy64
      • 3 years ago

      Wish there were more dx12 benchmarks. Deus Ex in dx12 nets AMD a hefty performance boost.

      Also GOW4 is a gameworks title that had massive problems on release. I’m being reminded of PCars all over again. Not that AMD can really compete in 4K with a 4GB framebuffer, so this was pretty much a Ti showcase of 4K, and nothing else.

      We now know that 4k gaming is possible with a Ti. Thats all I took away from this review.

        • Waco
        • 3 years ago

        We’re so sorry that you’re disappointed in the consistent quality of TR reviews. Perhaps a more biased site would be to your liking?

        • VincentHanna
        • 3 years ago

        Well, if standard 4k gaming is possible, then so is 3k, 2k, 1080p and any resolution in-between.

        What precisely would you have LIKED to take away from this review, aside from the fact that it is good enough for almost any use-case and, at the same time priced competitively?

      • D@ Br@b($)!
      • 3 years ago

      Bit late to the party but here you go…

      [url<]http://www.guru3d.com/articles-pages/geforce-gtx-1080-ti-review,1.html[/url<] From GTX1050/RX460 to TitanX/GTX1080Ti, including Fury(X), GTX980(Ti), R9390, GTX780Ti, R9280X. 1080p to 4K, DX12 and DX11. You are welcome..... edit: typo.

    • USAFTW
    • 3 years ago

    Great card, as one would expect from Nvidia, the first truly 4K 60fps card out there.
    AMD, your move. And you better not be taking too long about it.

    • Cannonaire
    • 3 years ago

    What a card – and the best review presentation out there, as usual. Great work, Jeff.

    I wasn’t expecting to see [i<]any[/i<] 1080Ti reviews today!

    • AnotherReader
    • 3 years ago

    Great review, Jeff! Any idea why the 1070 is so close to the 1080 in Watch Dogs 2? It doesn’t get that close in any other benchmark.

    Unlike most high-end cards, the price is commensurate with its performance.

    • Visigoth
    • 3 years ago

    Jesus…an absolute slaughter of all current GPU’s, even by NVIDIA themselves! Damn. I still can’t get over it. Great job, as usual, NVIDIA!!!

    • I.S.T.
    • 3 years ago

    So, what was the issue with Mankind Divided and getting it to work on 4K? Was it just too slow to be worthwhile, or was it some issue with the tools like with the Ryzen review?

    • Freon
    • 3 years ago

    Wow, very impressed. Some of those games are 35-40% faster in average FPS, which is very good scaling for a card whose specs (bandwidth, FLOPS, fill and texture rates, etc.) aren’t greater by much more than that percentage.

    4K has finally arrived?

    • HERETIC
    • 3 years ago

    It’s almost mind-boggling how far we have come in the last year. What a card.
    Jeff, you have a great job.

    On a side note, I’d wager there are several hundred people here who would love to see the 1080 Ti come out to play with a Ryzen 1800X, perhaps at 4GHz on 8 native cores only.

    • Krogoth
    • 3 years ago

    It is the 980 Ti launch redux again. It just makes the $699 FE 1080 at launch look bad.

      • chuckula
      • 3 years ago

      THE KROGOTH HAS DELIVERED THE OFFICIAL MEH!

        • Krogoth
        • 3 years ago

        It is just a nice bump over the 1080 and provides Titan XP-tier performance at ~1/2 of the launch MSRP.

        Exactly what the 980Ti did back at its own debute. The 1080Ti doesn’t make sense for current 1080 users, who should wait for Volta. Personally, I’m waiting to see how RX Vega will shape up the market.

          • End User
          • 3 years ago

          I’d argue that it makes total sense for single card 4K gamers to upgrade from a 1080 to a 1080 Ti.

            • RAGEPRO
            • 3 years ago

            I wouldn’t, unless you’re really really into Rise of the Tomb Raider. Look again at the 1080 4K numbers. >60 FPS in 4K with maxed-out settings in GTA V is pretty darn decent.

            • End User
            • 3 years ago

            The TR GTA V 4K results show the 1080 average FPS @ <50.

          • RAGEPRO
          • 3 years ago

          “Debut”, Mr. President.

            • slate0
            • 3 years ago

            Don’t make him tapp your lines…

            • Wonders
            • 3 years ago

            PRetty sure it’s De-Butt
            !!!!
            lol

          • I.S.T.
          • 3 years ago

          One exception: if you need the boost for 4K. Some games showed a dramatic increase.

      • derFunkenstein
      • 3 years ago

      A year later, I guess, but don’t forget that those people have had a FE 1080 for nearly a year.

        • Krogoth
        • 3 years ago

        The 1080 FE actually launched back in June 2016. The FE had a “premium” on top of that to get “early access”. They could have waited just another nine months for the same price point to get something that is significantly faster at 4K gaming, i.e. the 1080Ti.

          • derFunkenstein
          • 3 years ago

          Oh, sorry, 9 months. Still, if you bought a 1080 FE, you’ve had a card for 9 months.

          • pranav0091
          • 3 years ago

          All the car purchasers of 2014 could have waited to buy a car today, and saved some cash or gotten more features… Amirite?

          <Employed at Nvidia, but I speak only for myself>

            • Froz
            • 3 years ago

            Of course they should, if car from today would be 2 times faster for the same price…

            • pranav0091
            • 3 years ago

            [quote<]Of course they should, if car from today would be 2 times faster for the same price...[/quote<]

            That'd very nearly be all the folks that bought a 980Ti... Now find me that car, will you? 😉

            <I work at Nvidia, but my opinions are only personal>

            • Shobai
            • 3 years ago

            (you may not have noticed, but he pointed out the flaw in your equivocation)

            • pranav0091
            • 3 years ago

            I still haven’t noticed…

      • SomeOtherGeek
      • 3 years ago

      Yep, what I thought as well. But people don’t like to hear that.

    • MrJP
    • 3 years ago

    Hi Jeff. I think there might be something off in the calculation of the “Time beyond…” plots.

    In a lot of these very taxing benchmarks, the slower cards are always well above 16.7ms for the whole time in the frame time plots. However in the “Time beyond…” charts, the time beyond 8.3ms is much larger than the time beyond 16.7ms. Shouldn’t both of these charts show the same time if the card is above 16.7ms the whole time?

      • pranav0091
      • 3 years ago

      Wouldn’t the time beyond 8.3 ms always include all of the time beyond 16.7 ms and then an [b<]additional[/b<] 8.4ms per frame? 🙂

        • morphine
        • 3 years ago

        What he said.

        • MrJP
        • 3 years ago

        Apologies to Jeff, as it looks like I was mistaken: I thought the intention was to show the total time spent during the benchmark on frames that took longer than the threshold.

        Here’s the wording from the review:

        [quote<]We set a number of crucial thresholds in our data-processing tools—50 ms, 33.3 ms, 16.7 ms, and 8.3 ms—and determine how long the graphics card spent on frames that took longer than those values to render. [/quote<]

        Hence I thought this measure was showing the total benchmark time spent on frames of this length, so it would show up as the same total at 8.3 ms and 16.7 ms if all the frames were above 16.7 ms. However, looking back at the [url=https://techreport.com/review/22151/nvidia-geforce-gtx-560-ti-448-graphics-card/2<]original article[/url<] where "Time beyond..." replaced "Frames beyond...", I can see that it is actually the total incremental time that is being plotted.

        So thanks for the correction, though now I'm trying to decide which way of looking at it makes more sense...
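
        Edit: for anyone else who tripped over the same thing, here's a rough sketch of the two interpretations. The frame times are made up (in milliseconds, all above 16.7 ms); this is just my own illustration, not TR's actual tooling.

        [code<]
# Made-up frame times, in milliseconds; every frame here is slower than 16.7 ms.
frame_times = [20.0, 18.5, 25.0, 30.0, 22.0]

def total_time_on_slow_frames(times, threshold_ms):
    # What I originally assumed: sum the whole duration of each slow frame.
    return sum(t for t in times if t > threshold_ms)

def time_beyond(times, threshold_ms):
    # What the charts actually plot: only the portion of each frame past the threshold.
    return sum(t - threshold_ms for t in times if t > threshold_ms)

for thr in (8.3, 16.7):
    print(thr, total_time_on_slow_frames(frame_times, thr), time_beyond(frame_times, thr))
# The first measure comes out identical at both thresholds (115.5 ms), while the
# second gains an extra 8.4 ms per frame at 8.3 ms (74.0 ms vs. 32.0 ms here).
        [/code<]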

    • torquer
    • 3 years ago

    Nvidia charged my CC overnight for my 1080Ti FE, so hopefully it ships today.

    • Froz
    • 3 years ago

    Something looks odd in the summary graphs. Is the 1070-to-1080 price difference really that small in the US now? If so, is that an effect of the 1080 getting that much cheaper?

      • chuckula
      • 3 years ago

      The GTX 1070 has technically received a price cut down to $350 or so. Retailers may not have actually implemented the price cut just yet.

      • derFunkenstein
      • 3 years ago

      That $450 point wasn’t true a week or so ago. I got a GTX 1070 on Amazon for $369, and some cards are already down at $350, without any sales or gimmicks.

      Zotac for $349: [url<]https://www.newegg.com/Product/Product.aspx?Item=N82E16814500409[/url<]

      Gigabyte for $369: [url<]https://www.amazon.com/Gigabyte-GeForce-GDDR5-Windforce-GV-N1070WF2OC-8GD/dp/B01HHCA1IO/[/url<]

      Gigabyte "mini" card for $369: [url<]https://www.amazon.com/Gigabyte-GeForce-GDDR5-Graphics-GV-N1070IXOC-8GD/dp/B01JD2OSX0[/url<]

        • Froz
        • 3 years ago

        Interesting. I checked the prices here and the GTX 1070 has mostly stayed at the same price point since January (it got cheaper then). However, I see crazy prices for the GTX 1080. Some models can be had right now for just 20% more than the cheapest 1070; two weeks ago that was around 50%.

          • derFunkenstein
          • 3 years ago

          Ah, if the prices hadn’t gone down here I think we’d be in the same boat.

    • ImSpartacus
    • 3 years ago

    [quote<]Be ready for that extra power draw, though, because a pushed GP102 chip sucks down a lot of extra watts. We observed a total system power draw of 414W with our overclocked card running all-out, compared to 365W for stock settings. That's nothing new with overclocking, though.[/quote<]

    Damn, ~2000 MHz and 12 Gbps and you only needed 50 extra watts?!

    That really discredits the folks that believe you need some kind of 800+W PSU so you can run a 250W GPU and <100W CPU. Even with another 100-ish W from aggressive CPU & GPU overclocking, I feel like it'd be tough to eclipse 600W on a single-GPU system.

      • chuckula
      • 3 years ago

      Some of the issues with power supplies are that the boilerplate label doesn’t tell you the whole story. For example, a 600W power supply that can output 600W [i<]in aggregate[/i<] is clearly powerful enough to meet the overall power consumption levels of the large majority of PCs out there, even under heavy load.

      However, sometimes the individual components in the system can exceed the power-delivery capabilities of individual rails within the power supply. So even though your overall power consumption is within spec, the ability of the PSU to deliver power to individual components may be compromised. That's not even getting into the finer points of ripple and voltage droop.

      This is usually not an issue with higher-quality PSUs, but some of the cheap ones with big wattage ratings can suffer from these issues.

        • ImSpartacus
        • 3 years ago

        Hasn’t it always been a common heuristic that cheap PSUs are to be avoided?

        I know aggregate wattage figures oversimplify things, but I feel like PSU makers have abused consumers by marketing inflated PSUs to gamers who simply don’t need them.

        And I don’t want to come across as being on some kind of high horse. I was horrified to find a 250W PSU in one of my first general-use OEM machines many years ago. Of course you needed a 500W PSU at minimum! That’s what everyone says! But it’s just not true.

          • RAGEPRO
          • 3 years ago

          Well, there are a few issues at play. Most modern power supplies hit peak efficiency in the 40-80% load range. Ideally you want to be around 60% loaded for maximum efficiency (although this does vary across units and vendors). This means that for a 414W machine, you will indeed want to have around a 600W power supply.

          There’s also degradation to take into account. After five years, even a high-quality power supply will have its maximum capacity reduced. This is conservative, but I usually use a baseline of 5% per year. That means after 5 years, that 600W power supply is behaving more like a 450W power supply.

          Taking those two things into account, those big 1kW power boxes aren’t as silly as they might seem, especially for folks doing things like running a BWE CPU and a pair of Fury X cards. I ran into power issues myself not all that long ago with a pair of GTX 580s, a desktop Intel CPU, and a high-quality 850W power supply. In theory, it should’ve handled it just fine. In practice, the box was too old and the Fermis were too thirsty. 🙂
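
          Edit: here’s a back-of-the-envelope version of that sizing logic, if it helps. The 60% load target and the 5%-per-year derating are just my rules of thumb, not a spec, and the numbers are only illustrative.

          [code<]
# Rules of thumb from above, treated as illustrative assumptions rather than hard spec:
# aim for roughly 60% load at peak draw, and derate capacity ~5% per year of service.
def recommended_psu_watts(peak_draw_w, target_load=0.60, derate_per_year=0.05, years=5):
    rated = peak_draw_w / target_load                 # sweet-spot sizing
    aged = rated * (1 - derate_per_year * years)      # rough capacity after N years
    if aged < peak_draw_w:                            # size up if the aged unit falls short
        rated = peak_draw_w / (1 - derate_per_year * years)
    return rated

print(round(recommended_psu_watts(414)))  # ~690 W at a strict 60% target for TR's 414 W system;
                                          # a 600 W unit still lands at ~69% load, inside 40-80%
print(round(600 * (1 - 0.05 * 5)))        # that 600 W unit behaving like ~450 W after five years
          [/code<]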

        • Waco
        • 3 years ago

        Power supplies haven’t had that issue in a long time though – almost anything built in the last half decade is 90+% dedicated to the 12v rail(s) and not too many of them.

        Even the crappy PSUs that have multiple smaller rails should be able to feed a 300-watt GPU via a pair of rails and attentive wiring. I haven’t seen a 10-amp 12V rail in a while; most of them seem to be 20 amps+ each.
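
        Edit: quick sanity check of that math, with a hypothetical pair of 20 A rails feeding a 300 W card:

        [code<]
# Hypothetical multi-rail PSU: two 20 A rails at 12 V feeding one 300 W graphics card.
rail_volts, rail_amps, rails = 12.0, 20.0, 2

per_rail_w = rail_volts * rail_amps          # 240 W available per rail
total_w = per_rail_w * rails                 # 480 W across the pair
print(per_rail_w, total_w, total_w >= 300)   # 240.0 480.0 True -> plenty of headroom
        [/code<]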

          • Bauxite
          • 3 years ago

          Literally every PSU I’ve bought in the last 10 years (almost more than I want to admit) can use 99% of rated power on 12V. As long as you are not a bargain-bin cheapskate, it is a total non-issue.

      • I.S.T.
      • 3 years ago

      Yeah, a 550W unit is all you need for about 96% (Bull**** made-up figure that is still probably accurate) of single-GPU configs. Hell, 450W is enough for the vast majority.

      • Freon
      • 3 years ago

      I think the higher power numbers are a holdover from years ago. With Bloomfields at 125W, higher-end video cards starting off in the 250W range, overclocking, SLI, and some overhead so you’re running your PSU in the sweet spot (50-80% load), you could need an 800W supply.

      Today that’s largely not needed unless you are trying to run Prime95 or Linpack and Furmark at the same time.

      I was able to get just over 700W peak total draw at the wall (according to my UPS) with my previous i7-950, overclocked to 4GHz, with 970 SLI, running Prime95 and Furmark at the same time. With a Platinum-rated PSU, that wasn’t much less in output to the PC. I have an 860W-rated supply, which probably means ~950W of wall draw to max out its output? Today, with a 7700K and 1070, it’s more like 450W peak with the same sort of test. Same PSU, as it’s a holdover.
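
      Edit: the relationship I’m assuming between wall draw and PSU output is just output = wall draw × efficiency, with ~90% being my guess for a Platinum unit at high load:

      [code<]
# Assumed ~90% efficiency for a Platinum-class PSU under heavy load (illustrative guess).
def wall_draw_watts(dc_output_w, efficiency=0.90):
    return dc_output_w / efficiency

print(round(wall_draw_watts(860)))  # ~956 W at the wall to max out an 860 W-rated supply
      [/code<]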

      • JosiahBradley
      • 3 years ago

      My 1050W PSU laughs at your mediocre power requirements! Hah. But yeah, modern systems are light on power; my rig is not, however, so there are still reasons for the big-gun PSUs.

      • Whispre
      • 3 years ago

      I’m building an open-air wall-mount PC… I’m putting in a 1200-watt power supply for one reason…

      I want the power supply fan to stay off 99% of the time, even during gaming.

        • nutjob2
        • 3 years ago

        That makes no sense, and won’t work. Just get a fanless PSU at around the 460W mark if you want silence.

        • Voldenuit
        • 3 years ago

          My Seasonic X-650’s fan stays off even when gaming with a GTX 1070. If you’re open air, a good zero-RPM-idle PSU would likely run fanless most of the time (read reviews for the temperature and power trigger points).

          • Whispre
          • 3 years ago

          I will be open air, running custom cooling (2 loops), as well as powering some extra devices… my estimated max power draw is around 700w.

      • ptsant
      • 3 years ago

      If you can pull 500W over long gaming sessions, you really should get at least an 800W PSU. For the system mentioned in the review, I would get a 750W Seasonic Prime or similar.

      Generally, power supplies are most efficient at 20-80% of max load. For optimal performance and longevity, I wouldn’t use a 550W PSU for a 500W system. And I’m only talking about reputable manufacturers here. A low-tier 550W PSU running at 500W over a long time is a guaranteed failure.

    • deruberhanyok
    • 3 years ago

    The performance is INSANE. I have an EVGA 1070 SC in a system - same clocks as the SC2 you used here - and I've never once thought it was a slow card. But here we are, looking at something [i<]that isn't a Titan[/i<] pushing anywhere from 50-75% additional performance less than a year after the Pascal launch.

    Really makes me wonder what Vega and Volta are going to look like. Recently with NVIDIA it has seemed like the performance has trickled down with each generation - 780 Ti matched by 970, then 970 by 1060, for instance - so does that mean we could see 1080 Ti levels of performance in sub-$500 cards next generation? That would be pretty sweet for 4K gaming.

    Now, where are the 4K 120Hz HDR monitors? 🙂

      • ImSpartacus
      • 3 years ago

      As long as process tech keeps up, there’s no reason to expect any slowdowns. GPU makers get to “cheat” and make increasingly wide designs (a luxury not offered to CPU makers). So as long as they can cram more transistors on a die, then

      For example, Vega 20 rumors show it to be a glorified shrink of Vega 10 on 7nm with 4 stacks of HBM2 (probably for bandwidth-hungry pro stuff). Imagine Vega 10 performance in something that only needs a single 6-pin power connector. Just blows my mind.

      [url<]http://wccftech.com/amd-vega-10-vega-20-gpu-details-dual-gpu-graphics-card/[/url<]

        • southrncomfortjm
        • 3 years ago

        Then…. then…. WHAT?

      • Airmantharp
      • 3 years ago

      Also waiting for those 4k 120Hz HDR monitors, in 32″+ form, please.

    • Longsdivision
    • 3 years ago

    Great review, glad to see Nvidia trust TR enough to send a card before the review release this time.

    Side note: why do I feel so unbalanced looking at the HDMI port mixed in with the DisplayPorts? =\

    • Kretschmer
    • 3 years ago

    I’m frantically refreshing EVGA’s Step-Up list to no avail. Can’t wait to upgrade to a 100Hz 35″ screen and a 1080 Ti in the next month!

    • Anovoca
    • 3 years ago

    In before : [url<]https://memegenerator.net/instance/75968146[/url<]

      • SomeOtherGeek
      • 3 years ago

      LOL!

    • superjawes
    • 3 years ago

    What? No histograms?

    😉

      • Cannonaire
      • 3 years ago

      I think that would be a good first step in figuring out just what’s happening in The Division.
      [url<]https://techreport.com/review/31562/nvidia-geforce-gtx-1080-ti-graphics-card-reviewed/12[/url<]

      I could have sworn Jeff said more about the bumps in the 'Frame Times By Percentile' graphs the first time I read it, but I could be imagining it.

    • derFunkenstein
    • 3 years ago

    If you’re playing at 4K, the 1080Ti is kind of a surprising value champion on top of being the absolute fastest thing out there. 75% more speed than a 1070 at 75% more money (the $450-ish price on the graphs is wrong these days; the price is much closer to $400 and some dip down below that). That’s right on the nose right there.

    • Jeff Kampman
    • 3 years ago

    Folks, as you’re probably used to by now, certain things didn’t get in under the wire for this one. We’re working on noise and power-consumption graphs, and we’ll also have more complete game settings later on. Sorry for the incompleteness.

      • chuckula
      • 3 years ago

      It’s fine. Great review as always on (yet another) short deadline.

      • Glix
      • 3 years ago

      Not so bothered about noise.
      Temps, power and fan utilisation are the important ones.
      No one cares what the microphone thinks it can pick up. Your subjective opinion on what you hear and think in comparison to the competition would be great.

        • chuckula
        • 3 years ago

        If the noise on the Founders Edition versions isn’t too terrible, then it’s probably a good sign, since blower-style cards usually aren’t the quietest ones out there.

          • psuedonymous
          • 3 years ago

          Nvidia’s blowers tend to be pretty damn good unless you go slapping the fan up to 80%-100% (stock settings will usually stay under that). OEM blowers, however, are generally garbage.

          • Glix
          • 3 years ago

          According to reviews, this one runs at 70°C under load, so it’s safe to assume it’s hitting its max boost and the fan will probably be running up to full tilt (80%-ish). I don’t think that will be quiet in a small space. 😀

          Edit: Guru3D shows it to be the same as the 980 Ti, and it throttles at 84°C.

      • tipoo
      • 3 years ago

      Sweet launch day review dude.

      • Topinio
      • 3 years ago

      It’s great, thank you. The /only/ other thing I would in any way have liked to have seen in there would have been how much it stomps the RX 480 8GB.

        • ImSpartacus
        • 3 years ago

        I feel like a “part 2” is in order with some of the missing pieces and a wider look at the competition.

        With how late Vega has been, there might be time to fit it in before that release.

      • DPete27
      • 3 years ago

      I’m feeling more and more that it’s probably better for business for a review site to just get the review out. There are small things that can be done to give yourself preferential readership over other sites when it comes to launch day reviews, and I feel like TR is near the top of the list in that regard.

      That said, there are many times when a follow-up article is promised or necessitated and it never actually happens (even before [Jeff] took the reins). Those instances are unfortunate because that’s the window to greatness.

      Obviously there are only so many man-hours to go around, and there are times of frenzied product launches (now being one of them), not to mention the assumed backlog of products awaiting their initial review. It’s all a balancing act, I/we know. If there’s one thing I’ve learned from my career, it’s to make time to improve. This usually requires stepping back from the daily monotony from time to time. It’s easy to get wrapped up in the same-ol’-same-ol’.

      All that aside, thanks for this prompt review.

      • SomeOtherGeek
      • 3 years ago

      But you did! You said that you could hear it and all. That is probably good enough for 99% of people. The rest of us geeks like the numbers, of course, but in due time.

      Great review anyway. Keep up the good work!

      • Redocbew
      • 3 years ago

      No worries dude. You’re doing fine, and I personally don’t mind the coverage being done in installments.

      • NarwhaleAu
      • 3 years ago

      Great review! I’d rather have this now, and your conclusions, than a complete review in a day or two’s time. Besides, that gives us an excuse to come back and re-read it! 😀

      • Meadows
      • 3 years ago

      I, and most people, understand the reason you’re making these tradeoffs versus TR’s old mantra of “when it’s done”.

      • Freon
      • 3 years ago

      No worries, great job on what you were able to get out for the NDA lift.

      • deruberhanyok
      • 3 years ago

      <3 TR!

      • slate0
      • 3 years ago

      edit: I’m dumb

      • Pholostan
      • 3 years ago

      It will be done when it is done. Quality before… posts seconds after NDA? A couple of days is not a long time in my book.

      Maybe it is time for two review systems again. As in one AMD and one Intel. It was a practice back in the day, I remember.

      • rechicero
      • 3 years ago

      No, don’t be sorry: Thanks for being able to mix quickness and completeness!

    • chuckula
    • 3 years ago

    Blam!

    Not a huge surprise considering the Pascal Titan X was already out there, but this is a formidable card and Vega better be ready to go out of the gate when it launches.

      • daneracer
      • 3 years ago

      Looks like Vega needs a 100% improvement over Fury X to match the 1080 Ti. That should be double right?

        • Shobai
        • 3 years ago

        Heh, 100% extra would indeed be ‘double’. Did your ‘do-able’ get auto-corrected, by any chance?
