Nvidia’s GeForce GTX 1070 Ti graphics card reviewed

Now that both Nvidia and AMD have made a beachhead on next-generation process technology with next-generation architectures throughout their product stacks, the fight for high-end graphics card supremacy seems ready to settle down into guerrilla skirmishes. Witness the GeForce GTX 1070 Ti that Nvidia just launched. Although the chips aren’t quite as big and the stakes aren’t quite as high as they were among the Radeon R9 290X, the GeForce GTX 780, and the GeForce GTX 780 Ti during the 28-nm era, history echoes. AMD’s Radeon RX Vega 56 basically tied a hot-clocked GeForce GTX 1070 in our initial review of AMD’s latest, and had I tested those cards reference-for-reference, it’s quite likely the GTX 1070 would have fallen behind. For a company that’s on top of its GPU game like Nvidia is right now, that kind of rebelliousness won’t stand. Time to Ti it up.

As we discussed prior to today’s launch, Nvidia is taking one simple step to power up the GTX 1070 non-Ti: bring the world’s tiniest chainsaw to bear on one of the GP104 GPU’s 20 SMs while holding over the GTX 1070’s 8 GT/s GDDR5 memory subsystem. Nvidia clocks the GTX 1070 Ti at the same 1607 MHz base clock as the GTX 1080 Founders Edition, but it throttles this card’s boost clock back slightly, to 1683 MHz versus 1733 MHz on the GTX 1080. (The green team’s GPU Boost 3.0 dynamic frequency mojo remains in effect, though, so that 50-MHz haircut is unlikely to matter much in practice.)

|  | GPU base clock (MHz) | GPU boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Shader processors | Memory path (bits) | Memory bandwidth | Memory size | Peak power draw |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RX 580 | 1257 | 1340 | 32 | 144 | 2304 | 256 | 256 GB/s | 8 GB | 185 W |
| GTX 1060 6GB | 1506 | 1708 | 48 | 80 | 1280 | 192 | 192 GB/s | 6 GB | 120 W |
| GTX 1070 | 1506 | 1683 | 64 | 120 | 1920 | 256 | 256 GB/s | 8 GB | 150 W |
| RX Vega 56 | 1156 | 1471 | 64 | 224 | 3584 | 2048 | 410 GB/s | 8 GB | 210 W |
| GTX 1070 Ti | 1607 | 1683 | 64 | 152 | 2432 | 256 | 256 GB/s | 8 GB | 180 W |
| RX Vega 64 | 1274 | 1546 | 64 | 256 | 4096 | 2048 | 484 GB/s | 8 GB | 295 W |
| GTX 1080 | 1607 | 1733 | 64 | 160 | 2560 | 256 | 320 GB/s | 8 GB | 180 W |
| GTX 1080 Ti | 1480 | 1582 | 88 | 224 | 3584 | 352 | 484 GB/s | 11 GB | 250 W |
| Titan Xp | 1405 | 1585 | 96 | 240 | 3840 | 384 | 547 GB/s | 12 GB | 250 W |

Going by Nvidia’s official numbers, this dial-a-yield strategy gives us the following theoretical peak measures of graphical throughput at the card’s official boost clock:

|  | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering, int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic (tflops) |
| --- | --- | --- | --- | --- |
| Radeon RX 580 | 43 | 193/96 | 5.4 | 6.2 |
| GeForce GTX 1060 6GB | 82 | 137/137 | 3.4 | 4.4 |
| GeForce GTX 1070 | 108 | 202/202 | 5.0 | 6.5 |
| Radeon RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| GeForce GTX 1070 Ti | 108 | 256/256 | 6.7 | 8.2 |
| Radeon RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| GeForce GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| GeForce GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| Nvidia Titan Xp | 152 | 380/380 | 9.5 | 12.2 |

Since both the GTX 1070 and GTX 1080 already enjoyed the full complement of 64 ROPs from the GP104 GPU, the GTX 1070 Ti doesn’t gain anything in raw pixel fill rate over its forebear. What it does get is 54 GTex/s more texturing horsepower, a lot more theoretical rasterization potential, and a teraflop and change more compute capacity, at least going by Nvidia’s stated boost clock. Those bolstered specs get us most of the way to a fully-enabled GTX 1080, and they should give the GTX 1070 Ti more than a fighting chance against the Radeon rebellion at its $450 suggested price.
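To make the arithmetic behind these peak figures concrete, here’s a quick sketch that reproduces them from unit counts and the rated boost clock. The method is the standard one (units × clock), but the helper function is our own illustration, not anything from Nvidia:

```python
# Derive theoretical peak throughput from unit counts and the rated boost
# clock: one pixel per ROP per clock, one bilinear texel per texture unit
# per clock, and one fused multiply-add (two flops) per shader per clock.

def peak_rates(rops, texture_units, shaders, boost_mhz):
    ghz = boost_mhz / 1000
    return {
        "pixel_fill_gpixels_s": rops * ghz,
        "texel_rate_gtexels_s": texture_units * ghz,
        "fp32_tflops": shaders * 2 * ghz / 1000,
    }

# GTX 1070 Ti: 64 ROPs, 152 texture units, 2432 shaders at a 1683-MHz boost.
for name, value in peak_rates(64, 152, 2432, 1683).items():
    print(f"{name}: {value:.1f}")
```

Pixel fill lands at about 108 Gpixels/s, texturing at about 256 Gtexels/s, and FP32 throughput at about 8.2 tflops, matching the table above. Actual in-game clocks will wander above and below the official boost figure thanks to GPU Boost 3.0, so treat these as ballpark numbers.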

In another tip-off to the fact that this card is closer to a GTX 1080 than not, Nvidia suited up the GTX 1070 Ti with the fancy heatsink and circuit board from the fully-enabled GP104 card. That move means a five-phase power-delivery subsystem feeds the GPU instead of a four-phase design, and a vapor-chamber heatsink sits atop the GPU instead of the copper-and-aluminum deal that cooled the GTX 1070 FE. If you’d like to know more, you can check out our dismantling of the GTX 1080 in our original review of that card.

A wide array of custom GTX 1070 Tis will be available today from Nvidia’s board partners with even fancier coolers on board, but the Founders Edition card no longer carries a premium compared to third-party options. If you only want to spend $450 on a GTX 1070 Ti, the Founders Edition will probably be nicer than similarly-priced partner cards. Its all-aluminum shroud and verdantly-illuminated GeForce logo remain just as classy as when they bore the GTX 1080 name, and the blower-style cooler will push all of this card’s waste heat out of the case.

I could tire your eyes with more words about the GTX 1070 Ti, but GP104-powered graphics cards are a well-known quantity at this point. Let’s see whether this card’s performance is as straightforward as my spitballing would suggest.

 

Our testing methods

Most of the numbers you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows API to tell us when critical events happen in the graphics pipeline. We run each test at least three times and take the median of those runs where applicable to arrive at a final result.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor Core i7-8700K
Motherboard Gigabyte Z370 Aorus Gaming 7
Chipset Intel Z370
Memory size 16GB (2 DIMMs)
Memory type G.Skill Trident Z

DDR4-3600

Memory timings 16-16-16-36 2T
Hard drive Samsung 960 Pro 500GB

Kingston HyperX 480GB

2x Corsair Neutron XT 480GB

Power supply Seasonic Prime Platinum 1000W
OS Windows 10 Pro with Fall Creators Update

 

|  | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| Radeon RX 580 | Radeon Software 17.10.3 |  | 1411 | 2000 | 8192 |
| Radeon RX Vega 56 |  | 1156 | 1471 | 1600 | 8192 |
| Radeon RX Vega 64 |  | 1274 | 1546 | 1890 | 8192 |
| EVGA GeForce GTX 1070 SC2 | GeForce 388.13 | 1594 | 1784 | 2002 | 8192 |
| GeForce GTX 1080 Founders Edition |  | 1607 | 1733 | 2500 | 8192 |
| GeForce GTX 1080 Ti Founders Edition |  | 1481 | 1582 | 2750 | 11264 |
| GeForce GTX 1070 Ti Founders Edition |  | 1607 | 1683 | 2000 | 8192 |
| EVGA GeForce GTX 1060 6GB SC |  | 1607 | 1835 | 2000 | 6144 |

Thanks to Intel, Corsair, G.Skill, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and EVGA supplied the graphics cards for testing, as well. Behold our fine Gigabyte Z370 Aorus Gaming 7 motherboard before it got buried beneath seven graphics cards and a CPU cooler:

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 2560×1440 and 144 Hz, unless otherwise noted.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Forza Motorsport 7

Let’s kick things off with a new addition to our test suite. Forza Motorsport 7 is the latest big-budget racer from Microsoft and Turn 10 Studios, and it offers gorgeous renderings of a variety of exotic auto fauna. Even better, the Xbox Play Anywhere-ready Forza uses the DirectX 12 API to do its thing, so it gives us a look at cutting-edge API performance. To that end, I cued up max settings at a 4K internal resolution.


Forza generally runs swiftly on our test subjects, though all of the GP104-powered cards in this bunch exhibit some minor hitching. Folks hoping for some console magic to transfer from the Xbox One to our Radeons are left wanting, though, as both RX Vega cards finish midpack (albeit in a tight field).


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. For powerful graphics cards like the GTX 1080 Ti, it’s useful to look at the 8.3-ms threshold. That corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor.
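As a concrete sketch of that accounting (our own illustration, not OCAT’s actual code), here’s how time past a threshold accumulates from a run’s frame times, and how a frame-rate floor converts to its frame-time threshold:

```python
# Accumulate "badness": for every frame that takes longer than the
# threshold, count the portion of its render time past that threshold.

def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds spent past threshold_ms over a test run."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

def fps_to_threshold_ms(fps):
    """Convert a frame-rate floor to its frame-time threshold."""
    return 1000 / fps  # e.g. 60 FPS -> 16.7 ms, 120 FPS -> 8.3 ms

# A made-up run: a hundred 7-ms frames plus one 40-ms hitch.
frames = [7.0] * 100 + [40.0]
for fps in (30, 60, 120):
    threshold = fps_to_threshold_ms(fps)
    print(f"beyond {threshold:.1f} ms: {time_spent_beyond(frames, threshold):.1f} ms")
```

The single hitch contributes 6.7 ms past the 33.3-ms mark, 23.3 ms past 16.7 ms, and 31.7 ms past 8.3 ms, while the 7-ms frames never register. That’s why one big hitch can land a card in the 50-ms bucket even when its overall run is smooth.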

Thanks to one big hitch near the beginning of our test run, the GTX 1080 makes a brief appearance in our 50-ms and 33-ms buckets. Even so, the spikiness exhibited by GP104 cards doesn’t translate to more than a second spent past 16.7 ms for any of this bunch. The RX 580, on the other hand, runs into enough trouble to spend eight seconds of our one-minute test run working on tough frames that drop its instantaneous frame rate below 60 FPS. We have to click over to the 8.3-ms mark to really tease out any differences between our high-end contenders. There, the GTX 1070 Ti narrowly leads the RX Vega 64 and RX Vega 56, while the GTX 1080 turns in the best performance of anything save its GP102-powered cousin.

 

Wolfenstein II: The New Colossus

Here’s another game that’s hot off the presses. Wolfenstein II uses the Vulkan API as its sole rendering path, and it boasts support for RX Vega cards’ Rapid Packed Math instructions, so it promises to be a showcase for the capabilities of AMD’s latest. I used the “Mein Leben!” preset across all of the cards and left async compute enabled wherever it was available. The only AMD-specific tweak I had to make was to turn on GPU culling on Radeons. GeForces apparently run better with that feature off (as is the game’s default), so I followed the game’s guidance and left it that way for the green team’s cards.

(Ignore that 1920×1080 resolution; I had to use it to get this screenshot)


Although it’s not really a surprise given the amount of Radeon-friendly tech baked in, Wolfenstein proves a big win for the red team’s cards across the board. The RX Vega duo beats out the GTX 1070 Ti and the GTX 1080. Going from 56 to 64 compute units has basically no effect on performance for AMD’s latest, though.


The high-end graphics cards in this bunch spend no perceptible time past 16.7 ms at all. Flip over to the 8.3-ms mark, and it’s clear how the RX Vega cards earn their victories. Both Vegas spend about five fewer seconds below an instantaneous 120 FPS over the course of our test run compared to the GTX 1070 Ti and GTX 1080. None of these cards are providing unsatisfying gameplay experiences, to be clear, but the Vegas are simply in a class of their own. The GTX 1080 Ti remains the one card beyond their reach, though.

 

Gears of War 4
Gears of War 4 is another DirectX 12 title that’s become a staple for modern GPU testing. Thanks to the guiding hand of Microsoft and The Coalition, this title offers a GPU-vendor-neutral DirectX 12 environment from which to draw performance results. To judge its performance, I took a one-minute stroll through a convenient section of “The Raid” at the beginning of the game. I used the Ultra preset at 2560×1440 to make our cards sweat.


For all our talk of next-gen APIs, Gears of War 4 generally likes GeForces best. That remains the case today. The GTX 1070 Ti handily outperforms the RX Vega 56 and shadows the RX Vega 64 in both performance potential (as measured by average FPS) and in delivered smoothness (as measured by 99th-percentile frame times). Perhaps thanks to its faster memory subsystem, however, the GTX 1080 still holds a decent lead over the GTX 1070 Ti here.


As is becoming a trend for this article, it’s most informative to start our analysis of these high-end graphics cards at the 8.3-ms mark in our time-spent-beyond-X data. The GTX 1070 Ti spends five fewer seconds of our one-minute test run chewing on tough frames that drop its instantaneous rate of delivery below 120 FPS compared to the RX Vega 56, and it only trails the RX Vega 64 by a little under two seconds of accumulated time. The GTX 1080 cuts almost another three seconds off the RX Vega 64’s result here to claim its overall superiority, though.

 

Deus Ex: Mankind Divided
Deus Ex: Mankind Divided remains one of the more geometry- and lighting-rich games out there. I fired it up with a blend of very high and ultra settings to see how these cards handle it.


DXMD puts the GTX 1070 Ti neck-and-neck with the RX Vega 56 in our average-FPS measure of performance potential, and the RX Vega 64 just noses out the GTX 1080. Our 99th-percentile frame-time metric shows that the GTX 1070 Ti still has a slight edge in smoothness versus the RX Vega 56, though. Really, you’d be hard-pressed to take issue with any of our high-end contenders’ frame rates or smoothness in this title. These are minor differences, pound-for-pound.


Diving deeper into our test data, the RX Vega 56 and GTX 1070 Ti land within a third of a second of time spent past 16.7 ms, and an even slimmer (and even more imperceptible) margin separates the RX Vega 64 and GTX 1080. Flip over to the 8.3-ms threshold, and the GTX 1070 Ti and RX Vega 56 come out dead even. The RX Vega 64 spends about a second and a half less time working on tough frames past this threshold compared to the GTX 1080. Still, fine performances for our high-end graphics cards all around, as exhibited by the generally flat and uniform percentile curves in the graph above.

 

Hitman
Hitman‘s DirectX 12 renderer can stress every part of a system, so we cranked the game’s graphics settings at 2560×1440 and got to testing.


Hitman is another title that tends to showcase the virtues of Radeons, though comparable Nvidia cards remain in the mix at every head-to-head comparison we can make in this bunch. The RX Vega 56 and the GTX 1070 Ti finish in a dead heat in both performance potential and delivered smoothness, while the RX Vega 64 ekes out a slight edge over the GTX 1080. Given the gap in performance between the GTX 1070 Ti and the GTX 1080 in Hitman, I have to wonder whether this is another title that benefits from the full-fat card’s GDDR5X memory subsystem.


As we should expect by now, the GTX 1070’s relatively high 99th-percentile frame time is tempered by the fact that it spends barely a fifth of a second past 16.7 ms on tough frames in our one-minute run. Clicking over to the 8.3-ms mark reveals a virtual dead heat for the GTX 1070 Ti and RX Vega 56, while the RX Vega 64 spends a whole three seconds less time past 8.3 ms than the GTX 1080 does. That’s a nice little boost from our initial testing, where the Vega 64 and GTX 1080 were dead-even.

 

Watch Dogs 2
WD2‘s sprawling open world can stress every part of a system, so we cranked the game’s graphics settings at 1920×1080 and got to testing.


In a change of pace, the geometry-rich environments of Watch Dogs 2 tend to favor GeForce cards, though AMD’s recent driver updates have closed the large performance gaps that plagued the RX Vega duo in our initial review somewhat. Still, the green team has the advantage in Watch Dogs 2‘s San Francisco, and the GTX 1070 Ti lands about dead-even with the RX Vega 64. The plain old GTX 1070 slightly outperforms the RX Vega 56, as well.


At the time-spent-beyond-16.7-ms mark, Nvidia’s advantage is most in evidence in the case of the RX Vega 56 and the GTX 1070. The Vega 56 spends twice as much time past this mark as the GTX 1070 does. Unfortunately, the time-spent-beyond-8.3-ms bucket is too aggressive a threshold for this title, since it shows every card save the GTX 1080 Ti getting winded pretty hard. No biggie, though: just make an 11.1-ms chart to correspond to time spent past 90 FPS. Using this one-off chart, we see that the RX Vega 64 and GTX 1070 Ti are dead-even, while the GTX 1080 spends almost three seconds less time past the mark compared to its ostensible competitor. The RX Vega 56 trails the GTX 1070 Ti by almost five seconds of our one-minute run here, even if the match is pretty close between the Vega 56 and the GTX 1070. Hopefully AMD’s driver team can find yet more performance from the Vega duo in future updates.

 

Rise of the Tomb Raider (DirectX 12)
Rise of the Tomb Raider remains a gorgeous game today, and its DirectX 12 renderer means it remains useful for assessing our graphics cards’ performance, too.


Rise of the Tomb Raider‘s DX12 mode produces some nice, clean head-to-heads for every pair of competitors in this bunch. Both the RX Vega 56 and GTX 1070 Ti end up in a dead heat in our average-FPS measure of performance potential, and the RX Vega 64 and GTX 1080 tie, as well. Our 99th-percentile frame-time metric gives a slight edge to the GTX 1070 Ti and GTX 1080 in delivered smoothness, though.


Our time-spent-beyond-X charts confirm this basic impression. Although the GTX 1080 does hold a slight edge on the RX Vega 64, both cards spend less than a second of our one-minute run on tough frames that drop the instantaneous frame rate under 60 FPS. Although it’s not quite on the same level as those cards, the GTX 1070 Ti spends about a second less past 16.7 ms than the RX Vega 56 does. Overall, this is an exceptionally close match-up, and both Radeons and GeForces deliver smooth gameplay in this title with little fuss.

 

Grand Theft Auto V


Grand Theft Auto V tends not to play well with RX Vega cards, and even AMD’s most recent driver update doesn’t much change that picture. In fact, both Vegas are dead-even, suggesting some kind of bottleneck is limiting performance for both of these cards. Weird. In any case, the GTX 1070 Ti speeds well past the AMD competition, but once again, the superior memory bandwidth of the GTX 1080 seems to give it an edge.


Our frame-time plots show low, smooth curves, so it’s no shock that none of our high-end contenders post meaningful time past 16.7 ms in this analysis. Flip over to the 8.3-ms mark, however, and the GTX 1070 Ti shows just how much further ahead of the Vega cards it can run. It spends just about half the time the RX Vegas do past 8.3 ms, and that simply translates to a smoother and more consistent experience overall. Hopefully AMD’s driver team can break whatever bottleneck seems to be choking Vega performance in this title.

 

Doom

For our second-to-last round of performance tests, it’s time to go to Hell. Doom‘s Vulkan renderer is one of the fastest-running FPS experiences available, and we’ll put it to good use here to tease out the performance of our high-end contenders.


Although I wouldn’t have expected as much given RX Vega cards’ already-high performance in Doom, it seems that AMD found even more oomph in reserve somewhere. The RX Vega 64 runs 16 FPS faster on average than it did at launch, and the RX Vega 56 enjoys a smaller 8-FPS gain on average, as well. That’s enough for the RX Vega 56 to handily outpace the GTX 1070 Ti, while the RX Vega 64 rockets past the GTX 1080. Our 99th-percentile measure of delivered smoothness favors the Radeon cards, as well.


Only the GTX 1060 6GB even begins to register any perceptible time spent past 16.7 ms in Doom. Click over to the 8.3-ms and 6.94-ms marks, though, and it becomes clear just how scaldingly fast the Radeon RX Vega duo is compared to the Pascal competition in this title.

 

The Witcher 3

I hadn’t initially planned to test The Witcher 3 given the wealth of fresh titles available to me for benching, but the performance boosts I observed in some games with the Radeon RX Vega duo led me to fire it up. As usual, we’re using ultra settings at 2560×1440 with Nvidia’s HairWorks turned off.


Unfortunately, The Witcher 3 doesn’t seem to have been an optimization target in AMD’s most recent rounds of drivers. The game’s performance on Vega cards is about the same as it was at launch, so the GTX 1070 Ti butts heads with the RX Vega 56. The Radeon manages a slight edge in delivered smoothness, however.  Meanwhile, the GTX 1080 delivers a smoother and more fluid experience than the RX Vega 64.


Like we did for Watch Dogs 2, it’s helpful to add an 11.1-ms graph to our time-spent-beyond-X mix in order to tease out performance differences between these cards. Joke’s on us, though, because even with this graph, the GTX 1070 Ti and the RX Vega 56 end up about dead-even. These graphs do emphasize just how slight the difference in performance between the Vega 56 and GTX 1070 Ti is, though. The really interesting result is that the RX Vega 64 spends twice as long past 11.1 ms as the GTX 1080 does. Given the tight race between the Vega 56 and GTX 1070 Ti, I expected better from the Vega 64. With luck, this could be another polish job for AMD’s driver team.

 

System power consumption

To test system power consumption, I stood in the entry hall of the chateau in Hitman‘s Paris level.

 

Under this load, the GTX 1070 Ti only needs moderately more power to run than the custom EVGA GTX 1070 we tapped to represent that card, and it uses a whopping 70W less than the RX Vega 56. Meanwhile, the RX Vega 64 still requires 148W more power than the GTX 1080 to offer roughly equivalent performance. Let’s see how those extra watts translate into dBAs on my noise meter.

Noise levels

At idle, AMD’s blower cooler and the semi-passive coolers on everything but the three Founders Edition cards are indistinguishable from the noise floor in my testing environment. The GTX 1070 Ti, GTX 1080, and GTX 1080 Ti all spin their fans at idle, but their noise levels are still hardly noticeable.

Load noise levels put the Founders Edition trio solidly in the middle of this grouping. Surprisingly, the GTX 1080 Ti FE doesn’t get all that much louder than the GTX 1080 and GTX 1070 Ti FEs despite being asked to dissipate about 100W more heat. Despite its relatively high noise levels compared to the large dual-fan custom coolers on the GTX 1070 and RX 580 cards I tested, the Founders Edition cooler is hardly unpleasant to listen to as blowers go. The noise it produces is a moderately high-pitched but broad-spectrum hiss.

Meanwhile, AMD’s RX Vega blower cooler lands at the back of the pack in absolute noise levels. Like the Founders Edition blower, though, the RX Vegas’ fan is a high-quality one, and it produces a fairly broad-spectrum noise. I was honestly surprised that both cards registered as high as they did on my sound meter. Still, there’s no denying that the RX Vega 56 will be slightly louder than even the Founders Edition cards in use, and the RX Vega 64 will unquestionably make itself known without a closed and well-damped case.

 

Conclusions


It feels like it’s been much longer, but Nvidia’s Pascal architecture has been with us for just about a year and a half. In that time, Pascal has been implemented in everything from tiny chips for thin-and-light notebooks to 250W, bar-raising beasts and everything in between. Nvidia really has nothing left to prove for this generation of GPUs; the breadth and success of its execution with the Pascal architecture may be the best of any chipmaker’s in recent memory.

The GTX 1070 Ti is an excellent victory lap. It usually holds a lead over the RX Vega 56 in our tests, and it even kicks sand in the RX Vega 64’s face from time to time. It comes within 10% of a GTX 1080 Founders Edition in our average-FPS measure of performance potential for 10% less money, at least going by suggested prices. The GTX 1070 Ti’s 99th-percentile frame-time gap versus the GTX 1080 is even smaller—just 7% or so. I’m all in favor of linear or better-than-linear price-to-performance improvements like that. Although I didn’t have time to try it, overclocking the GTX 1070 Ti could close the gap even further (and yes, you can do it).

Despite its best efforts, the GTX 1070 Ti doesn’t completely shut out the RX Vega 56. AMD’s recent driver updates for its Vega duo have delivered a solid performance boost over those cards’ launch numbers. There’s work yet to be done, but in both performance potential and in delivered smoothness, the RX Vega 56 is now superior to a hot-clocked GTX 1070. It only trails the GTX 1070 Ti by about 3% in our average-FPS index, too. That same polish has helped the RX Vega 64 close the smoothness gap that troubled it at launch, and it now delivers performance potential within 4% of the GTX 1080’s. The Vega 64’s 99th-percentile frame times are now about 4% behind those of the GTX 1080, as well. That’s a much-needed step forward for the red team, even if it doesn’t cure the full-fat Vega’s eyebrow-raising power draw and noise output.

Given today’s e-tail pricing trends, Nvidia may have sliced the high-end graphics-card pie just a bit too thin at $449 and up for a GTX 1070 Ti. Custom GTX 1080s have been readily available on sale for around $500 in recent months, and partner GTX 1070 Ti cards have a median price of $470 at the moment on Newegg. Depending on the way the discount winds blow this holiday season, GTX 1070 buyers may find it sensible to make the step up to the fully-fledged GP104 card. If those same zephyrs go the other way, though, the GTX 1070 Ti could get affordable enough to be a no-brainer.

Retailers might get itchy trigger fingers on those coupon codes regardless, because the GTX 1070 Ti makes uber-fancy custom GTX 1080s seem like a hard sell. Going by that same median-price metric, the midpoint for custom GTX 1080s is $560 on Newegg right now. Paying 20% more money for roughly 10% higher performance potential doesn’t seem ideal. All that is to say nothing of higher-end custom GTX 1070s, whose reason for being now seems perilous for anyone outside of cryptocurrency prospecting. A flurry of price adjustments seems likely, in any case.

If AMD can keep burnishing the 99th-percentile frame times of its star players, an RX Vega 56 for $400 seems poised to become an excellent value in entry-level high-end graphics cards in its own right. At least two RX Vega 56 cards are now available for that $400 suggested price as I write, and some Vega 64 cards are available at or near their $500 suggested price now. If Vega prices continue to fall, that means another stumbling block for AMD’s high-end Radeons is crumbling.

Back in Nvidia’s corner, the GTX 1070 Ti continues to embody everything that we enjoy about Pascal cards. It brings most of a GTX 1080 to a lower price point, and its power efficiency, quiet manners, and delivered smoothness remain enviable. Hard to argue with any of that. Much as I hate to crack this old reviewer’s chestnut, though, the newly-vital RX Vega 56 makes calling the race between these two cards excruciatingly hard.

Buyers will need to weigh the GTX 1070 Ti’s energy efficiency, wide range of custom-cooled options, and overclocking potential against the Vega 56’s appealing FreeSync support and potentially lower price. I suspect most builders are well aware of the tradeoffs between AMD and Nvidia’s offerings by now. Follow your heart. If the RX Vega 56’s sudden fondness for its suggested price at e-tail goes away, though, the GTX 1070 Ti will stand unchallenged as the finest $450 graphics card around. For the moment, though, the freshly-competitive high-end graphics-card market means that builders really can’t lose either way.

Comments closed
    • UnknownZA
    • 2 years ago

    Can you do an update on the card with the new 388.31 drivers?

      • Voldenuit
      • 2 years ago

      Man, the 388.31 drivers broke video playback for me /hard/ on my 1080 Ti. I was already running 382.53 because anything later caused intermittent blocking in videos, but with the 388.31, a lot of my videos showed nothing but blocks.

      It wasn't a big problem on MPC-HC, where I could set LAVFilter to software processing, but it made browser-based playback and native Win10 playback a real hassle. Finally made a virtual display hooked to my Intel 4600 IGP, and coaxed Windows into prioritizing QuickSync as the default playback filter that way. Buh.

      EDIT: Des2ny framerates are smoother with the 388.31s, though. I was already getting 120 fps @1440p fairly consistently before, but the 388.31s feel smoother, with fewer dips and steadier performance even when framerates do dip (such as being in town with a heavy player load).

    • ermo
    • 2 years ago

    @Jeff Kampman: Lovely review. I noticed that the "Time spent beyond 16.7 ms" graphs have weird-looking y-axis labels on the GoW4 and GTAV pages?

    Got my Sapphire RX Vega 64 for GTX 1080 money a couple of months ago, and the boost in speed compared to launch drivers (plus some fiddling with underclocking and undervolting) means that it still feels like a good deal.

    • BorgOvermind
    • 2 years ago

    So we got a partially crippled 1080.
    Well….good addition, I guess something had to fill in the gap between 1070 and 1080.

    • ronch
    • 2 years ago

    It’s nice to see Vega generally holding up well against the 1070 and 1080 (and the 580 against the 1060) but the thing that will make me go with Nvidia is the power consumption, despite being someone who prefers AMD graphics over Nvidia and sticking with AMD graphics since ’04. Now one might argue that power consumption isn’t such a big deal but if you’re paying about the same price for about the same kind of performance from either company, why wouldn’t you want to choose the more power efficient product over the other? Why would you want to put up with higher heat output and energy usage?

      • Freon
      • 2 years ago

      The 56’s power consumption is at least not completely out of line, and it’s probably just a few bucks a year of electricity even if you play games several hours a day 365 days a year. Power consumption and the related noise is definitely is a thumb on the scale, though, all else being equal. If you can get a good deal on the 56 vs. a 1070 I don’t think it’s a horrible choice.

      The 64 looks much worse to me. It’s running on the hairy edge right out of the box. It’s enough to make you rethink your PSU purchase.

      All the modern cards do very well for desktop, idle, and sleep.

    • End User
    • 2 years ago

    If someone has waited this long for a Pascal based card they can wait a tad longer for Volta.

    • Mr Bill
    • 2 years ago

    Thanks for a good review Jeff. Its nice to have something to read first thing Friday morning and its nice to see how all the cards in the review have changed in performance since they were released.

    • derFunkenstein
    • 2 years ago

    "Given today's e-tail pricing trends, Nvidia may have sliced the high-end graphics-card pie just a bit too thin at $449 and up for a GTX 1070 Ti."

    This sentence and the whole accompanying paragraph are spot-on. There's so little pricing room between the 1070 Ti and the 1080 that I can't see who's going to buy this thing. The only real solution is to drop the vanilla 1070 further (maybe around $320, which would probably put AIB partner cards with custom coolers around $350 and make Vega 56 a tougher sell) and then drop the 1070 Ti into the ~$400-430 bracket. That won't happen until after Christmas, surely, but I don't see this field being this crowded for long.

    • Jeff Kampman
    • 2 years ago

    FYI for readers who are just arriving to this: some retesting this afternoon with the GTX 1070 card I initially used against the MSI GTX 1070 Gaming Z we have from a while back revealed a performance problem with our initial data set in some titles, most notably Doom, Forza Motorsport 7, and Gears of War 4. I haven't been able to track down the source of this problem, although it seems to be an interaction between our initial GTX 1070 of choice and our Coffee Lake testbed.

    In the interest of fairness, I re-ran our test suite on the MSI GTX 1070 Gaming Z, and I've used results from the MSI card wherever our initial GTX 1070 seemed to exhibit this performance issue. My revised numbers don't materially change my initial conclusions, but they do lead to improved performance for the GTX 1070 overall, and therefore a tighter 99th-percentile frame-time result between that card and the RX Vega 56. I regret the error and any misleading conclusions our initial data may have caused.

      • chuckula
      • 2 years ago

      This can mean only one thing: [b<]WE HAVE PROOF THAT INTEL [s<]CONSPIRED[/s<] [u<]MAKE THAT COLLUDED[/u<] WITH AMD TO GIMP NVIDIA PERFORMANCE ON COFFEE LAKE![/b<]

        • RAGEPRO
        • 2 years ago

        Give it a rest, man.

          • chuckula
          • 2 years ago

          Given that RAJACHIP just got announced this week maybe I was predicting the future.

      • Klimax
      • 2 years ago

      There are not that many variables these days. Maybe some odd clock variance on PCI-E causing some sort of trouble?

    • freebird
    • 2 years ago

    Someone was looking for 1070 Ti mining rates back in the quick look at the 1070 Ti; they can look them up here if they want:
    [url<]http://www.legitreviews.com/nvidia-geforce-gtx-1070-ti-ethereum-mining-performance_199622[/url<] Suffice it to say, a 1070 Ti with the same memory speed as a regular 1070 won't mine ETH any faster. It is memory-bandwidth dependent. ETH loves overclocked memory, especially HBM2.

    • Coran Fixx
    • 2 years ago

    Look at that scatterplot, now look at your budget
    Look at your framerate, now look at your budget
    Look at how old these cards are, now look at your budget

    Dang it, looks like another year for my gtx 970

      • ClickClick5
      • 2 years ago

      This all over.

      I stayed with my Radeon 6970 for 5 years trying to push it along. Once I upgraded my screen from 1080 to 1440, the card started to struggle. Then I bought Far Cry 4, and that was practically like putting a gun to the back of its head.

      So I bought a 980. So far…it has been able to do everything I have asked of it. And Vulkan has helped as well.

      So another year.

      EDIT: Looking at the launch price of the 6970 in 2010: $369, according to Wikipedia. I think I paid $320 for mine then. Now a top-end card is $800+ 🙁

    • moose17145
    • 2 years ago

    Wolfenstein II: The New Colossus page. Time spent beyond 33.3ms.

    The RX Vega 64 graph is green instead of red. (Technically not an error, I suppose, but given the color coding of the rest of the article, I had assumed I was looking at an Nvidia card at first.)

    GTA V page. Time spent beyond 16.7 ms graph.

    Has GTX 1070 Ti on there three times. Appears to be missing GTX 1080 and Vega 56.

    • mudcore
    • 2 years ago

    Would it make sense to replace either Wolfenstein II or Doom with say Prey? Since those two games share similar tech and similar performance profiles I’d think it’d be worth swapping in Prey. Which would also serve as the modern representative of CryEngine and/or Lumberyard should more than Star Citizen use it.

    I’d keep Doom. I strongly suspect it’ll have a much longer playerbase tail than Wolf2, despite it being the notably older release.

    • PopcornMachine
    • 2 years ago

    For me, the decider between this card (GTX 1070 Ti) and Vega 56 is the fact that there are no 3rd party versions of the Vega.

    I would not mind getting a 1070 Ti and manually OC'ing it a bit, if that gets me a quiet and effective cooler instead of a leaf blower.

      • JustAnEngineer
      • 2 years ago

      Custom Radeon RX Vega:
      [url<]https://techreport.com/news/32786/xfx-takes-to-reddit-to-tease-a-custom-radeon-rx-vega-card[/url<] Good blower cooling designs work better in small cases like the [url=http://www.silverstonetek.com/product.php?pid=771&area=en<]Fortress FTZ01-E[/url<] because they exhaust heat out the back instead of dumping it inside the case.

        • freebird
        • 2 years ago

        OR big cases when using multiple GPUs… I found my RX Vega 56 couldn't mine faster than 39-40 MH/s of ETH with a Gigabyte 1070 G1 Gaming installed below it… I thought I just had an underperforming Vega, but I replaced the 1070 with another Vega 56 and both were hitting 43 MH/s on ETH with the same settings I used for the 1070 + Vega 56 combo. The 1070 was leaving too much heat in the case, which was getting sucked into the Vega's blower and hurting its cooling. The 1070 has been banished to another miner in the basement. It was definitely quieter, though. 🙂

    • Krogoth
    • 2 years ago

    This is really a stealth replacement for the 1080. Nvidia is allocating its GDDR5X chips toward upcoming Volta SKUs. I wouldn't be too shocked if the 1080 gets discontinued in early Q1 2018.

      • NTMBK
      • 2 years ago

      Or it’s a stealth replacement for the 1070, now that yields have improved. Quietly phase out the 1070, and effectively push up average prices.

        • chuckula
        • 2 years ago

        OR IT’S A STEALTH REPLACEMENT FOR THE GTX-1060Ti!

        WHICH NEVER EXISTED!

          • NTMBK
          • 2 years ago

          We’re through the looking glass here, people!

        • Krogoth
        • 2 years ago

        I don't think so, and I'm willing to bet that the majority of 1070 Tis use good GP104 chips that are artificially binned to keep the 1070 Ti from matching the 1080 rather than trailing it by a small margin.

        I wouldn't be too shocked if Nvidia quietly releases a "1060 Ti" or "1070 SE" in Q1 2018 to move more defective GP104 stock while making room for Volta.

          • James296
          • 2 years ago

          [quote<]I wouldn't be too shocked if Nvidia quietly releases a "1060 Ti" or "1070 SE" in Q1 2018 to move more defective GP104 stock while making room for Volta[/quote<] Wouldn't be the first time Nvidia's pulled that move.

          • jihadjoe
          • 2 years ago

          Even if the 1070 Ti were fully enabled, it'd never match the 1080 because of GDDR5 vs. GDDR5X.

            • Krogoth
            • 2 years ago

            Actually, it would match it in situations where the 1070 Ti isn't memory-bandwidth limited (1080p gaming, and 1440p without AA/AF, for most titles). GDDR5X only helps when you throw in AA/AF or do general compute.

            Notice how the 1070 Ti is at ~95% of a 1080 in some of the benches/resolutions? (FYI, the 1070 Ti has 95% of the GP104 silicon enabled.) You only see a larger gap when memory bandwidth becomes a concern.

            • K-L-Waster
            • 2 years ago

            Ok… but seriously, how many people are going to spend this much money on a graphics card to play at 1080P gaming or 1440P w/o AA/AF? You can do those easily with a 1060 and save a bundle. Anyone who is stretching out their budget to 1070 TI is going to be playing at more ambitious settings than those.

            • Krogoth
            • 2 years ago

            FPS junkies who care about frame rate and frame times above all else. It's a surprisingly large crowd, given the sudden attention to frame times.

            • psuedonymous
            • 2 years ago

            I care about frame times because of VR, but running VR without a bare minimum of 4x MSAA (and ideally 2x or above supersampling) is Doing It Wrong. AA for VR isn’t merely nice to have, it’s mandatory due to aliasing during constant head movement.

      • HERETIC
      • 2 years ago

      Oh to be a fly on the wall-
      It’s also quite possible 1070Ti with GDDR5 is more profitable than 1080 with GDDR5X.

        • Krogoth
        • 2 years ago

        Yep, it is somewhat more profitable per unit, since there's a flood of GDDR5 stock while GDDR5X remains limited in supply and costlier by comparison. HBM2 is even more expensive and tighter in supply, which is the main reason AMD hasn't released an "RX Vega 48" yet to fill the gap between the RX 480/580 and the RX Vega 56. The lack of an "RX Vega 48" is also the main reason Nvidia didn't rush a 1060 Ti/1070 SE to market.

        • Freon
        • 2 years ago

        edit: nevermind

      • BorgOvermind
      • 2 years ago

      It all depends on the yield of the 1080s.

    • darryl
    • 2 years ago

    In looking to upgrade from my little GTX 770, I am (still) hoping to see the 1070 Ti in an 11 GB version. Any chance of that?

      • AnotherReader
      • 2 years ago

      That won't happen. The 11 GB of the 1080 Ti is a consequence of its 352-bit-wide memory bus. There have been oddballs in the past like the [url=http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2<]660 Ti and the 550 Ti[/url<] with memory capacities that aren't a power of two times the bus width, but that comes with the tradeoff of not being able to access the entire memory at full speed.
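      The arithmetic behind that is simple, assuming (as on these cards) one 8 Gb (1 GB) GDDR5/5X chip per 32-bit channel; a quick sketch:

```python
# Capacity follows bus width: one memory chip hangs off each 32-bit
# channel, so a card's VRAM is (bus width / 32) chips' worth.
# Assumes 8 Gb (1 GB) chips, as on the Pascal cards discussed here.
def capacity_gb(bus_width_bits, gb_per_chip=1):
    channels = bus_width_bits // 32
    return channels * gb_per_chip

print(capacity_gb(352))  # GTX 1080 Ti: 11 channels -> 11 GB
print(capacity_gb(256))  # GTX 1070/1070 Ti/1080: 8 channels -> 8 GB
```

      An 11 GB 1070 Ti would therefore need a 352-bit bus that the GP104 die simply doesn't have.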

    • TravelMug
    • 2 years ago

    What's up with those awful spikes in the GTX 1070 frame-time graphs? Forza 7, W2, GoW4, Doom, and a bit in DX? Neither the GTX 1070 Ti nor the Vega 56 exhibits them, and the 1060/RX 580 are both smooth compared to those 1070 graphs.

      • Jeff Kampman
      • 2 years ago

      I analyzed this in the review. The short answer is that it looks worse than it is.

        • TravelMug
        • 2 years ago

        I read that in the text; what I was getting at is that it's interesting that it's there at all, regardless of the impact. The faster 1070 Ti and 1080 not showing those spikes would make sense, but the 1060 didn't show them either, so it's a pretty huge outlier.

          • Jeff Kampman
          • 2 years ago

          I’m gonna run some tests on another GTX 1070 we have here and see if I can’t get to the bottom of this.

            • DragonDaddyBear
            • 2 years ago

            I commend you on your willingness to take feedback. Just one of many reasons TR is my #1 tech site (for what they cover)

            • Jeff Kampman
            • 2 years ago

            This is starting to look like a weird hardware interaction. I’ll be updating the article shortly to account for it.

            • TravelMug
            • 2 years ago

            Thanks for the follow-up, Jeff, you da man! 🙂

      • Krogoth
      • 2 years ago

      Possibly a software-and-hardware issue due to how the 1070 is gimped.

        • Krogoth
        • 2 years ago

        Fanboys are going to be fanboys. It is possible that there's a driver optimization in the version used in the test that breaks on the 1070 because of how it is gimped, or that it exposes an underlying issue with the hardware. The infamous "3.5 GiB + 512 MiB bug" on the 970 was exposed in this manner.

        It is certainly worth checking out.

          • K-L-Waster
          • 2 years ago

          OR it’s possible there was an odd anomaly with one of the cards.

          (Y’know, like the one Jeff just mentioned repeatedly…)

    • DPete27
    • 2 years ago

    Jeff, you mentioned that GPU Boost 3.0 was likely to drive the clocks higher than advertised boost on the 1070Ti, yet I didn’t catch that you mentioned what the actual operating clocks were. Please advise.

      • DPete27
      • 2 years ago

      And this from TomsHardware:
      [quote<]Did we get a bad sample from MSI or a great one from Nvidia? A comparison using boards from Zotac, Gigabyte, Colorful, and Gainward suggests that we really hit the jackpot with our Founders Edition card, [b<]similar to previous launches.[/b<][/quote<] (emphasis mine) ....apparently that didn't raise any red flags at Tom's....

        • nanoflower
        • 2 years ago

        JayzTwoCents raised a similar issue with his review, suggesting that Nvidia is binning chips for their founders cards so you will always win the silicon lottery with a Founders Edition card over an AIB partner card.

      • AnotherReader
      • 2 years ago

      Boost clocks seem to be [url=http://www.anandtech.com/show/11987/the-nvidia-geforce-gtx-1070-ti-founders-edition-review/15<]higher than the founder editions of the 1070 and 1080[/url<]: 1826 to 1860 MHz versus 1770 MHz or so for the 1070 and 1080.

      • Jeff Kampman
      • 2 years ago

      In [i<]Hitman[/i<], which really seems to punish the GPU, the card starts thermal-throttling at 82° C and seems to settle on a boost speed of around 1771 MHz.

        • DPete27
        • 2 years ago

        This might be good info to keep track of in each review, actually. It would certainly help convey any discrepancies in the results that stem from GPU clocks. That's pretty straightforward for AMD cards, because you're already doing it in the GPU summary table in your testing methods, but with Nvidia's GPU Boost, your results are at the mercy of the chip lottery and the cooling solution in how far past the advertised boost clock the GPU pushes itself.

        Your particular sample OC'd itself 5% past the advertised boost clock; other reviewers' samples could be above or below that amount.

    • PixelArmy
    • 2 years ago

    Tangential… but the blurbs above comments sections used to link back to the main article… can we get those back or something similar? I know it’s in the “Latest Stuff” bar, but that’s awkward.

    Edit: Thanks!

      • JustAnEngineer
      • 2 years ago

      The photo links to the article, but that’s only there for those of us who know the secret handshake.

        • DPete27
        • 2 years ago

        (mind blown)
        Thanks btw.

        • UberGerbil
        • 2 years ago

        I remember making this suggestion a couple of site-redesigns ago, and then eventually deciding that the secret “click on the photo” trick was more fun. That secret handshake is well over a decade old at this point.

      • Mr Bill
      • 2 years ago

      It would also be nice if the 'review page menu' were at the bottom of the conclusions, so one did not have to use the secret handshake to get back to the review menu.

    • Chrispy_
    • 2 years ago

    Nice review Jeff.

    The TR scatter plot actually illustrates the problem with the GPU market quite nicely: it's starting to look a little crowded between about $400 and $500, but there's still that huge gulf between ~$250 and the $400 options.

    A 1060 Ti is what the market needed, not another entry in a segment that's already served by the three options you mentioned (hot-clocked 1070s, base-model 1080s on discount, and the Vega 56).

      • Pville_Piper
      • 2 years ago

      Considering how big the performance gap is between the 1060 and 1070 I agree.

        • Chrispy_
        • 2 years ago

        Same problem as the performance gap between the RX 580 and Vega56.

        When you consider that 2560×1440 is gaining traction as the sweet-spot gaming resolution, both the RX 580 and the 1060 struggle to hit a consistent 60 fps there without dialling back details from [i<]ultra[/i<] to [i<]high[/i<], yet the 1070 and Vega 56 comfortably run at 100+ fps once you drop the settings a notch from [i<]ultra[/i<]. A 1060 Ti or Vega 48 would probably be the 1440p sweet spot for this generation, but it's conspicuously absent from the market.

      • DPete27
      • 2 years ago

      I couldn't agree more. Especially given that the price jump from a 1070 to a 1080 is 25% while the jump from a 1060 to a 1070 is about 60%, it seems Nvidia could've just made a 5% price cut to the 1080 and released a 1060 Ti to give itself sole occupancy of the ~$325 market.

      • mnemonick
      • 2 years ago

      Let's not forget that the GTX 1070 was [i<]supposed[/i<] to be sitting in that $250 - $400 gap after the MSRP cut to $350. It chaps my ass that crypto-miners are still blowing up the cards' prices.

    • JustAnEngineer
    • 2 years ago

    That’s another great review, Jeff.

    What strikes me is how adept Nvidia and AMD have become at matching price/performance ratios. Years ago, there was usually a "sweet spot" where one card offered exceptionally good performance for its price. Today, we see the graphics cards from both GPU makers lined up in a nearly straight line on your value charts.

    • NarwhaleAu
    • 2 years ago

    It’s hard to know what to do. I stepped all the way up from a Radeon 6950 to a Geforce 960 (almost doubling my graphics horsepixelpower) and saw NO improvement in Dota 2.

    Mayhaps I need to step up to that 1080 Ti. I'm pretty sure that is what is holding me back.

      • chuckula
      • 2 years ago

      Have you run FRAPs on minesweeper too?

        • Pville_Piper
        • 2 years ago

        Don’t bother him… He’s busy doing his third benchmark pass on MS Solitaire…

          • K-L-Waster
          • 2 years ago

          I want to see the 99th percentiles on Pong.

          • UberGerbil
          • 2 years ago

          You joke, but remember that “screen filling with bouncing cards” animation you got when you won? Years ago I had an office mate who played that game and I could tell when she won without turning around because I could hear the fan in her machine crank up when that animation was taxing the system.

            • derFunkenstein
            • 2 years ago

            Her system was new enough that fans would spin up on a temperature-controlled basis but it was slow enough that Solitaire was taxing. Must have been a Willamette Pentium 4.

    • AnotherReader
    • 2 years ago

    Great review! I think that giving it 9 Gbps GDDR5 would have closed the performance gap with the 1080. The 1070 Ti [url=http://www.anandtech.com/show/11987/the-nvidia-geforce-gtx-1070-ti-founders-edition-review/15<]boosts higher[/url<] than the [url=http://www.hardocp.com/article/2016/05/17/nvidia_geforce_gtx_1080_founders_edition_review/5<]1080 FE[/url<]. Addendum: I think the Vega 56 has more overclocking potential than is commonly thought. An undervolted Vega 64 can overclock by 15% and the Vega 56's overclocking is only [url=http://www.gamersnexus.net/guides/3040-amd-vega-56-hybrid-results-1742mhz-400w-power/page-2<]limited by the lower board power limit.[/url<]
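    The back-of-the-envelope math backs that up (a sketch using the published per-pin rates; peak bandwidth is just bus width times transfer rate):

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8 bits per byte)
# multiplied by the per-pin transfer rate in GT/s.
def bandwidth_gb_s(bus_width_bits, transfer_rate_gt_s):
    return bus_width_bits // 8 * transfer_rate_gt_s

print(bandwidth_gb_s(256, 8))   # GTX 1070 Ti, 8 GT/s GDDR5:  256 GB/s
print(bandwidth_gb_s(256, 9))   # hypothetical 9 GT/s GDDR5:  288 GB/s
print(bandwidth_gb_s(256, 10))  # GTX 1080, 10 GT/s GDDR5X:   320 GB/s
```

    9 Gbps chips would recover about half of the 1080's bandwidth advantage.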

    • chuckula
    • 2 years ago

    Hey Jeff, one technical comment about the review: I noticed you have moved the platform to our pal Covfefe Lake. Not a complaint, BTW, it’s a good platform and you have to stay current.

    My only question is, do you think the change in platform had any effect on the relative performance levels of the GPUs in this review compared to the old 7700K? I’m expecting there to be some performance improvement, but did it favor AMD or Nvidia more heavily or was it a wash?

      • Jeff Kampman
      • 2 years ago

      None of these cards should be CPU-bound at 2560×1440 or 4K save the GTX 1080 Ti. If there were performance improvements from the CPU change alone, they were likely a couple percent at best.

    • mynameispepe
    • 2 years ago

    I know this article is not about the GTX 1080 Ti, but goddamn that thing is powerful.

      • chuckula
      • 2 years ago

      It’s almost like they should have left it out of the review since its numbers tend to skew the graphs and compress the differences between all the other cards.

      [Edit: Downthumbed by AMD fanboys who [b<]WANT THE GTX-1080Ti to be present in TR's reviews[/b<]. Once again proving that being able to exercise rudimentary logic is a disqualifier to be let into the club.]

      • Chrispy_
      • 2 years ago

      Well yes it is, but only because it’s huge, hungry and expensive.

      It's less technically impressive than the vanilla 1080, since the Ti throws ~40% more GPU at the problem and only provides ~20% more performance in the games tested.

      I wouldn’t turn down a 1080Ti if someone gave me one, but at the same time, the 1080 is just a more impressive piece of kit once you start to look at metrics like noise, power, performance/Watt, performance/$, performance/die-size, and profit per sale for Nvidia.

        • Jeff Kampman
        • 2 years ago

        The 1080 Ti really shines at 4K; it’s just here to show where the top end of the market is more than anything.

        • derFunkenstein
        • 2 years ago

        Not as hungry as a Vega 64, which is also huge. Not as expensive and not as fast.

        Don’t forget that the top end just has a law of diminishing returns. When you absolutely need the best 3D performance money can buy (for your 4K rig) that extra 20% is more often than not:

        a.) more than 20%
        b.) worth it, whatever “it” is.

          • Firestarter
          • 2 years ago

          The returns only diminish because of Nvidia's pricing strategy: they have the fastest damn card on the market, and they charge us out of the proverbial behind for it [b<]because they can[/b<]. The margin on the 1080 Ti must be mind-boggling compared to the Vega 64's. If AMD's GPUs were anywhere near as efficient with their transistor/power budget and memory bandwidth as Pascal, the 1080 Ti would be significantly cheaper, and the returns would therefore not be nearly as diminishing as they are right now.

            • derFunkenstein
            • 2 years ago

            It’s not just true in graphics, though. Look at CPU lineups, high-end motherboard pricing vs. what you get (and what the extra pieces cost), and super-OC’d memory.

            About the only place that doesn’t happen is in solid-state storage, and that’s because even the slowest SSDs are so much faster than spinning alternatives.

            • Firestarter
            • 2 years ago

            My point is that the 1080 Ti is not even the bleeding edge of performance: its die is not as large as previous high-end GPUs' were, and its power consumption is just fine rather than borderline too high. Nvidia could have made a much faster GPU than the GTX 1080 Ti if they felt they had to, and they could have [i<]sold it for the same price as the GTX 1080 Ti is now[/i<] and still made money. Consequently, the 1080 Ti would have been cheaper and so fast that we would be discussing whether the hypothetical faster GPU was worth it, being just 25% faster for a 50% higher price.

            In that sense, the GTX 1080 Ti is sort of like the i7-7700K. We knew Intel could easily sell us more cores at the same price, but they didn't have to until AMD got competitive again. Vega isn't as competitive as Ryzen is, and it shows in the behavior of AMD's competitors.

            • chuckula
            • 2 years ago

            [quote<]my point is that the 1080 Ti is not even the bleeding edge of performance,[/quote<] Oh, it's got an edge that makes me bleed all right! -- Raj

            • Voldenuit
            • 2 years ago

            Large die on a new process?

            Nvidia learned not to do that the hard way with the 5700/5800 series.

            • chuckula
            • 2 years ago

            I think he is forgetting history.

            Just because the very last-gasp 28-nm GPUs like the 980 Ti and the Fury X sat on huge dies, owing to the fact that 14/16 nm was massively delayed, does [i<]not[/i<] mean that Nvidia/AMD were regularly producing reticle-limit-sized GPUs for the consumer market within 12-18 months of 28 nm first coming onto the market. It's not a particularly good line of reasoning.

            • derFunkenstein
            • 2 years ago

            It’s not even that – the GP102 is all the gaming goodness of a GP100 but with all the [s<]high-dollar-compute[/s<] HPC stuff cut out and the extra-expensive HBM replaced by more reasonably-priced GDDR5X.

            • tsk
            • 2 years ago

            Maybe I’ll [url=https://techreport.com/news/29999/rumor-nvidia-kills-some-maxwell-chips-ahead-of-june-pascal-launch?post=974557#974557/<]finally[/url<] be proven wrong. :')

            • derFunkenstein
            • 2 years ago

            You got some minuses for your trouble, but so far so good. HBM too costly to even cripple GP100’s high precision math capabilities

            • Firestarter
            • 2 years ago

            You're right, the 1080 Ti is pretty much normal-sized for its class, and the last 28-nm high-end cards were the outliers. They were still being sold for the same price the GTX 1080 Ti commands now, though, even when they were *probably* a lot more expensive to make due to their giant dies. However, without actually knowing the production cost of these cards, it's just speculation.

            • Voldenuit
            • 2 years ago

            Or maybe if Vega were faster, AMD would charge more for it.

            • chuckula
            • 2 years ago

            NO NO NO NO NO!

            Yes.

            A bit a bit.

            • Mr Bill
            • 2 years ago

            [url=https://www.youtube.com/watch?v=iRlqmTKyQx0<]+3 for 'The Vicar of Dibley' reference[/url<]

            • Freon
            • 2 years ago

            Right, I imagine their margins at $399 and $499 for the 56/64 are pretty awful, but those are the price points they have to be at to have any chance.

        • jihadjoe
        • 2 years ago

        IMO it’s super impressive, considering the vanilla 1080 launched at the same $700 price. Imagine what Nvidia’s margins were on those 1080FE cards!

      • Airmantharp
      • 2 years ago

      Picked up a Corsair Hydro closed-loop for US$709 (MSI-branded)- and it can maintain ~2GHz core clocks at a whisper.

      So even higher on that graph- and it drives 1440p165 nicely :D.

      • ptsant
      • 2 years ago

      There was a fire sale in a nearby store where they sold 20 1080 Tis for $500. I couldn’t afford that but I seriously thought about buying it. Still think about it every time I read a GPU review…

      • Krogoth
      • 2 years ago

      The bloody thing has 80 ROPs, 224 TMUs, and 3584 shaders at its command, operating at ~1.4-1.5 GHz; of course it is going to be a behemoth.

        • ColeLT1
        • 2 years ago

          More like 1.9-2.0 GHz

          • Krogoth
          • 2 years ago

            You are confusing the GP104 with the GP102. The boost and stock speeds on GP102 are lower due to the thermal limitations of the stock HSF. The silicon can certainly handle 1.9-2.0 GHz with sufficient cooling.

            • ColeLT1
            • 2 years ago

            I have five 1080 Tis across three different brands. They run 1900+ MHz under full boost (unless you have crappy cooling). At a +25 MHz overclock in Afterburner, they typically hold 1974-2000 MHz.

            • Krogoth
            • 2 years ago

            Those are the speeds that Nvidia posts on its official portal. That doesn't stop ODMs from tweaking them up.

            Turbo boost speeds, like overclocks, have always been YMMV.

            • ColeLT1
            • 2 years ago

            I would be concerned (high case temps or a bad card) if I put in a 1080ti and it would not hold at least 1900mhz boost.

            Mining rig: three 1080 Tis (EVGA Strix three-fan, Zotac two-fan, Gigabyte Aorus three-fan) running at 1911, 1936, and 1987 MHz (the top card runs warmest/slowest):
            [url<]https://imgur.com/pfYrn2Q[/url<]

            My home gaming PC and home server each have a 1080 Ti that holds 1987 MHz gaming or mining (both are Gigabyte Aorus three-fan cards):
            [url<]https://imgur.com/XLF6AIK[/url<]

            Afterburner on all of them: +120 power, +25 MHz core, +600 memory (except the EVGA card, which is +300 on memory).

            • Krogoth
            • 2 years ago

            YMMV applies here. You can easily come across a "dud" that barely budges its boost clock no matter how many volts you throw at it.

            Turbo boost speed is not guaranteed.

        • AnotherReader
        • 2 years ago

        According to AnandTech, the founder edition [url=http://www.anandtech.com/show/11180/the-nvidia-geforce-gtx-1080-ti-review/16<]averages from 1620 to 1746 MHz[/url<]. Factory overclocked cards are [url=http://www.hardocp.com/article/2017/10/25/msi_geforce_gtx_1080_ti_gaming_x_trio_review/3<]even faster[/url<].

          • Jeff Kampman
          • 2 years ago

          According to The Tech Report, [url=https://techreport.com/review/31562/nvidia-geforce-gtx-1080-ti-graphics-card-reviewed/15<]the Founders Edition card manages 1759 MHz boost at 84° C[/url<]. [url=https://techreport.com/review/31763/aorus-geforce-gtx-1080-ti-xtreme-edition-11g-graphics-card-reviewed<]Factory-overclocked[/url<] [url=https://techreport.com/review/32159/corsair-hydro-gfx-geforce-gtx-1080-ti-graphics-card-reviewed<]cards[/url<] are even faster.

            • AnotherReader
            • 2 years ago

            Sorry. I didn’t realize that you had calculated the average speeds for one of the Pascal cards. I would have used your review if I had known.

      • Kretschmer
      • 2 years ago

      My 1080Ti is the bee’s knees. Perfect for 3440×1440.

    • crystall
    • 2 years ago

    Thanks for including the GTX 1060 and RX 580 numbers. The performance of both cards seems to have evolved quite a bit over time; I didn't expect the RX 580 to come out on top as often as it does now.

    Earlier this year I begrudgingly picked up a hot-clocked RX 480 over a 1060 mostly because I’m a heavy Linux user and can’t stand nVidia closed drivers. I would have preferred the 1060 because it was both faster and more efficient. Knowing my card is probably faster now – in spite of its high power consumption – is pleasant news.

    • Anovoca
    • 2 years ago

    The most impressive takeaway here is how precisely Nvidia tuned this card to nestle almost exactly halfway between the 1070 and 1080 in nearly all benchmarks. A lot of times a Ti comes out and makes an old SKU obsolete, but this baby split the uprights perfectly.

      • jihadjoe
      • 2 years ago

      Small chance of that unless they gave it GDDR5X, which Nvidia obviously wouldn’t want to do.

      But that said, the 1070 is probably the most gimped x70 card in Nvidia's history, with a full 25% of its shaders disabled. The 1070 Ti having 95% of the 1080's shaders and being gimped mostly by memory is a positive step toward fixing that, though it would've been nice if it were priced closer to $400.

        • Krogoth
        • 2 years ago

        Actually, the most gimped x70 card would have been the 970, due to the memory-partition issue.

          • jihadjoe
          • 2 years ago

          Good call, totally forgot about that and was focused on the 20 vs 25% shader gap.

          • psuedonymous
          • 2 years ago

          On paper, yes. In practice, the performance delta between the 970 and 980 was small (and with the price difference, there was not all that much reason to buy a 980 over a 970 bar bragging rights). The memory partition was of such minimal real-world performance impact that it wasn’t even noticed for several months, and even with knowledge of it present testers were only able to actually produce measurable performance deltas above the noise floor in tests with hilariously unplayable settings (it doesn’t matter if a card performs 2fps slower when it’s only doing 8fps in the first place because you’re rendering at UHD with 4x supersampling).

            • Krogoth
            • 2 years ago

            The memory-partition issue has a noticeable impact on frame times for content using more than 3 GiB of VRAM; that is how it was discovered in the first place. It wasn't an issue during the 970's heyday, but it is becoming more of one with current and future titles that easily consume 3 GiB of VRAM or more.

    • chuckula
    • 2 years ago

    Reading this review just reminds me of one thing.

    If it hadn’t been for Vega, we would have had to wait until 2017 for Nvidia to finally get around to launching Volta!

    THANK YOU AMD!

      • smilingcrow
      • 2 years ago

      It’s a bit early in the day but what the heck, I’ll have what he’s been drinking as it is a Friday. Cheers.

        • morphine
        • 2 years ago

        Can you share?

    • juzz86
    • 2 years ago

    Top review Jeff, as always mate.

    For me, the existence of this card does two things:

    1) Makes me wonder why it exists a little; and
    2) Makes me appreciate the absurdly fine-tuned NVIDIA market knowledge to allow it to exist.

    When your 1070 review hit, the performance difference between it and the 1080 seemed average and ‘about right’ for NVIDIA, give or take. I’d definitely not anticipated an intermediate SKU, nor really thought there was room for one. The gulf between the 1080 and 1080Ti reinforced that.

    And here, this card sits almost perfectly – to the frame per second (and often enough, the frame time) – between the two. And it actually seems to accentuate the difference and justify its existence.

    I still see it as a somewhat pointless SKU considering the price (although your US prices are a bit skew-whiff at the moment anyway), but on performance it is [i<]exactly[/i<] where it should be. Credit where credit is due - that's 'knowing your market'.

      • DeadOfKnight
      • 2 years ago

      Note that with this card more than any other, the “founders edition” version is in direct competition with their OEMs. You call it marketing brilliance, I call it corporate greed. In other words, business as usual over at Nvidia. Not that other players are innocent in that regard, but Nvidia never misses an opportunity to make a buck. This is why prices are double what they used to be for the same class of chip 7 years ago.

        • chuckula
        • 2 years ago

        [quote<]Not that other players are innocent in that regard, but Nvidia never misses an opportunity to make a buck.[/quote<] WE SURE DON'T HAVE THAT PROBLEM! -- AMD

        • K-L-Waster
        • 2 years ago

        Sooooo NVidia shouldn’t try to sell graphics cards? Kinda puts a crimp in the old business plan, doesn’t it?

        Not sure what your argument is here…. maybe “don’t sell cards and the prices of cards would be lower”… sorry, not following how that would work.

          • nanoflower
          • 2 years ago

          More like Nvidia probably shouldn’t be in competition with their AIB partners, which is essentially what they are doing with the 1070 Ti, since Nvidia doesn’t want any AIB to provide a factory overclock.

        • juzz86
        • 2 years ago

        Upvoted you mate, because you’re not wrong – often enough, marketing brilliance and corporate greed are one-and-the-same.

        That’s an interesting point about the more direct OEM competition and something I didn’t consider.

        I’d also like to make it clear that while the post may look it, I’m not pro-NVIDIA here by any means, nor do I actually endorse the practice (aside from $$, I still don’t see the point).

        Despite dropping the name twice, the post was more directed at the phenomenon than the manufacturer.

    • DeadOfKnight
    • 2 years ago

    Thanks for the review, Jeff. Too bad it’s a product that seems hardly worth the effort.

    • PrincipalSkinner
    • 2 years ago

    Last graph on GoW4 page has GTX 1080 shown twice.

    • derFunkenstein
    • 2 years ago

    Top of the GTAV page just says “words”. Is that intentional? lol

      • NTMBK
      • 2 years ago

      The best words

      • Jeff Kampman
      • 2 years ago

      Nope!

        • derFunkenstein
        • 2 years ago

        I thought you were making a joke about how nobody reads text in these things and just fights about graphs in the comments.

          • UberGerbil
          • 2 years ago

          [quote<]just fights about graphs in the comments.[/quote<] I know scatter is the crowd favorite but for me it's pie charts to the death!

            • JustAnEngineer
            • 2 years ago

            [url<]http://dilbert.com/strip/2009-03-07[/url<] Pie charts are mostly useless. That sort of data should be plotted as a [url=https://en.wikipedia.org/wiki/Pareto_chart<]Pareto chart[/url<].

            • UberGerbil
            • 2 years ago

            [url<]https://i.imgur.com/U7Ghu2s.gif[/url<]

            • derFunkenstein
            • 2 years ago

            [url<]https://imgur.com/MiPIl[/url<] (this thread is now about GIFs)

            • Mr Bill
            • 2 years ago

            GIF me a home, where the Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo; roam.
