Nvidia’s GeForce GTX 1060 graphics card reviewed

Nvidia is on quite the roll this year. The GeForce GTX 1070 and GeForce GTX 1080 remain the uncontested performance champions of the high-end graphics card market, thanks in part to AMD’s more mainstream ambitions for its Polaris-powered graphics cards. If that dominance wasn’t enough, Nvidia did itself one better and advanced its single-GPU performance lead with the GP102 chip in the Pascal Titan X. Y’know, just because.

Of course, the green team didn’t ignore the average Joe while it was busy pushing the limits of graphics performance. Back in July, Nvidia introduced the GeForce GTX 1060, its response to the Radeon RX 480 8GB. The $250 GTX 1060 was the first card to play host to a more wallet-friendly Pascal GPU: GP106. The GTX 1060 6GB, as we now know it, immediately went to work against the hard-to-get RX 480 with a slew of readily available aftermarket cards that stickered near Nvidia’s $250 suggested price tag. The mainstream onslaught didn’t stop there, however. A couple of weeks later, Team Green took the wraps off a GTX 1060 with 3GB of RAM that rang in at $200.

The GP106 GPU.

While the GTX 1060 3GB’s name might imply a simple halving of its RAM versus its bigger brother, there’s more going on under the hood of that card than its innocuous name might suggest. The GTX 1060 3GB sustained some cuts to its graphics-processing resources to hit its price target. Nvidia disabled one of the card’s streaming multiprocessor (SM) blocks, dropping the GTX 1060 3GB’s resource allocation to 1152 stream processors and 72 texture units. Contrast that approach with AMD’s Radeon RX 480 4GB, whose only difference from its 8GB cousin is that 4GB of missing RAM. Here’s how the two “GTX 1060s” compare on paper, in convenient tabular form:

Card | Base clock (MHz) | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock (int8/fp16) | SP TFLOPS | Stream processors | Memory path (bits) | Memory transfer rate (Gbps) | Memory bandwidth (GB/s) | Peak power draw
RX 470 | 926 | 1206 | 32 | 128/64 | 4.9 | 2048 | 256 | 6.6 | 211 | 120W
RX 480 | 1120 | 1266 | 32 | 144/72 | 5.8 | 2304 | 256 | 7 | 224 | 150W
GTX 960 | 1126 | 1178 | 32 | 64/64 | 2.4 | 1024 | 128 | 7.01 | 112 | 120W
GTX 970 | 1050 | 1178 | 56 | 104/104 | 3.9 | 2048 | 256 | 7.0 | 224 | 145W
GTX 1060 3GB | 1506 | 1708 | 48 | 72/72 | 3.9 | 1152 | 192 | 8.0 | 192 | 120W
GTX 1060 6GB | 1506 | 1708 | 48 | 80/80 | 4.4 | 1280 | 192 | 8.0 | 192 | 120W
GTX 1070 | 1506 | 1683 | 64 | 120/120 | 7.0 | 1920 | 256 | 8.1 | 259 | 150W
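As a quick sanity check on those shader and texture-unit counts, here’s a minimal sketch of how they fall out of the SM count, assuming the usual consumer-Pascal layout of 128 CUDA cores and 8 texture units per SM:

```python
# Sketch: deriving GP106's shader and texture-unit counts from its SM count.
# Assumes the consumer-Pascal layout of 128 CUDA cores and 8 texture units
# per SM; treat this as an illustration, not Nvidia's documentation.

CORES_PER_SM = 128
TEX_UNITS_PER_SM = 8

def gp106_resources(active_sms):
    return active_sms * CORES_PER_SM, active_sms * TEX_UNITS_PER_SM

print(gp106_resources(10))  # full GP106 (GTX 1060 6GB): (1280, 80)
print(gp106_resources(9))   # one SM fused off (GTX 1060 3GB): (1152, 72)
```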

A block diagram of the GP106 GPU. Source: Nvidia

Nvidia has played this kind of name game with its cards before. Recall that the GeForce GTX 460 came in 768MB and 1GB flavors. Despite the identical name on the box, the lesser GTX 460 was down eight ROPs and had a narrower path to memory than its better-endowed counterpart. We complained about that false equivalency then, and we’re complaining about it now. AMD isn’t ashamed of putting a smaller number on its cut-down Polaris 10 card, the Radeon RX 470, and we don’t think calling the GTX 1060 3GB… well, anything other than a GTX 1060 would have hurt its perception in the marketplace that much.

Thanks to that questionable naming scheme, the uninformed builder picking one of these cards off the shelf probably won’t notice that more is missing from the GTX 1060 3GB than 3GB of RAM—assuming those specs are clearly spelled out on the box at all. In the case of the EVGA cards we have on hand, we found no mention of stream processor or texturing unit counts on the cards’ packaging. We think that Nvidia’s board partners should be more upfront about what buyers are getting if there’s as substantial a difference between cards as there is between these two, even if that information is a little arcane.

Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic rate (TFLOPS) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s)
GeForce GTX 1060 3GB | 82 | 123/123 | 3.9 | 3.4 | 192
GeForce GTX 1060 6GB | 82 | 137/137 | 4.4 | 3.4 | 192
GeForce GTX 960 | 38 | 75/75 | 2.4 | 2.5 | 112
GeForce GTX 970 | 61 | 130/130 | 3.9 | 4.7 | 224
GeForce GTX 980 | 78 | 156/156 | 5.3 | 5.0 | 224
GeForce GTX 1070 | 108 | 202/202 | 7.0 | 5.0 | 259

Some quick math shows that a full-fat GP106 chip has slightly more raw pixel throughput and slightly less texturing muscle than the GTX 980. It’s also slightly less capable than GM204 in sheer number-crunching power and memory bandwidth, although the Pascal architecture’s improved delta-color-compression facility might help make up some of that gap. Of course, both GTX 1060s utterly wipe the floor with the GM206 chip that powered the GTX 960 and the GTX 950. It’s a testament to the power of Pascal that we’re comparing the $250 GTX 1060 6GB to cards that used to cost $350 to $500-ish.
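For the curious, the peak-rate figures in the table above can be reproduced from boost clocks and unit counts. Here’s a minimal sketch of that arithmetic; the inputs are the published specs, and the two-FLOPs-per-clock factor assumes fused multiply-add:

```python
# Sketch: reproducing the peak-rate figures above from boost clock and unit
# counts. Inputs are published specs; the FP32 rate assumes 2 FLOPs (one FMA)
# per stream processor per clock.

def peak_rates(boost_mhz, rops, texture_units, stream_processors):
    ghz = boost_mhz / 1000.0
    return {
        "pixel_fill_gpix_s": rops * ghz,
        "texel_rate_gtex_s": texture_units * ghz,
        "fp32_tflops": stream_processors * 2 * ghz / 1000.0,
    }

# GTX 1060 6GB: 1708 MHz boost, 48 ROPs, 80 texture units, 1280 SPs
print(peak_rates(1708, 48, 80, 1280))
# -> roughly 82 Gpixels/s, 137 Gtexels/s, and 4.4 TFLOPS, matching the table
```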

In another potentially controversial move, Nvidia removed SLI support from both GTX 1060s. At least one set of SLI fingers has been available on every GeForce card in recent memory except for the GTX 750 Ti and below, so this move marks a new era for the spec sheets of budget-friendly GeForces. Some DirectX 12 multi-adapter modes might let gamers harness multiple GTX 1060s in the future, but the option is no longer available in DirectX 11 titles, full stop.

In recent years, we’ve suggested that gamers get the best single graphics card they can afford for the most consistent and smoothest possible performance in games, so we’re not bothered much by this move. Folks willing to tolerate SLI’s inconsistent performance scaling and potential frame-pacing issues will be disappointed by this omission, however, especially considering that the price for a pair of GTX 1060 6GB cards ends up somewhere in between a GTX 1070 and a GTX 1080. If the GTX 1060 6GB delivers on Nvidia’s promise of GTX 980-class performance, a pair of those cards might have approached a GTX 1080 in raw speed, so it’s not hard to imagine why the green team made this choice. Those brave folks willing to pair multiple budget graphics cards for a potential performance boost will need to stick with Radeons for now.

Now that we have the lay of the land for the GTX 1060, let’s see how EVGA has chosen to put the chip to work on a pair of its graphics cards.

 

EVGA’s diminutive GTX 1060 duo

The GTX 1060’s modest TDP means that monster dual-fan coolers aren’t needed to keep GP106 in check. Nvidia’s board partners have released a number of compact, single-fan versions of the GTX 1060 alongside the usual barrage of dual- and triple-fan beasts. EVGA kindly sent over one of its $260 GTX 1060 6GB SC Gaming cards when we began shaking the trees, but we weren’t as lucky securing a 3GB card for this review. Eventually, we threw in the towel and picked up the logical counterpart to our 6GB test subject from retail: the $210 EVGA GTX 1060 3GB SC Gaming.

Outwardly, these cards seem identical. They use the same cooler, the same 6.8″-long PCB, and the same six-pin power connector. You’d have a hard time telling them apart without squinting at their labels. Just because these cards are tiny doesn’t mean they’re cheaply made, though. A look under the understated plastic shroud of each card reveals a dense aluminum fin array and plenty of copper making contact with the GP106 chip itself. That’s reassuring given EVGA’s factory clock speed boosts over Nvidia’s reference numbers. Here’s a full rundown of each card’s specs compared to the GTX 1060 Founders Edition:

Card | GPU base clock (MHz) | GPU boost clock (MHz) | Memory config | Memory transfer speed | PCIe aux power | Peak power draw | E-tail price
EVGA GeForce GTX 1060 3GB SC Gaming | 1607 | 1835 | 3GB GDDR5 | 8 GT/s | 1x 6-pin | 120W | $209.99
EVGA GeForce GTX 1060 6GB SC Gaming | 1607 | 1835 | 6GB GDDR5 | 8 GT/s | 1x 6-pin | 120W | $259.99
GeForce GTX 1060 Founders Edition | 1506 | 1683 | 6GB GDDR5 | 8 GT/s | 1x 6-pin | 120W | $299.99

Four screws are mercifully all that stands in the way of removing these cards’ heatsinks. Once those screws are out and the single four-pin fan connector is unplugged, the heatsink flips off to reveal a neat application of thermal paste on a copper contact plate. Two beefy copper heatpipes run over this plate and into the aluminum fin array above. Simple, clean, and effective. EVGA’s engineers didn’t include a contact plate for cooling either card’s voltage regulators or memory chips, but the blow-down fan should keep enough air moving over those critical components to make sure that design choice isn’t an issue.

Not everything about these cards is the same, though. Once we started testing this duo, we noticed that our GTX 1060 6GB card ran considerably quieter than the 3GB version under load. It seems our 6GB card came flashed with EVGA’s “silent” firmware, while the 3GB card we grabbed off the shelf wasn’t so lucky. EVGA used to offer this special firmware to owners of its cards on a case-by-case basis, but no longer. Seeing as how one can set custom fan curves for either card in EVGA’s PrecisionX OC software, that’s probably not a big deal. We didn’t perform any such tweaking before testing either card, however, so the noise and thermal results you see in this review represent straight-from-the-factory performance.

Our testing methods

As always, we did our best to deliver clean benchmarking runs. We ran each of our test cycles three times on each graphics card tested, and our final numbers incorporate the median of those results. Aside from each vendor’s graphics drivers, our test system remained in the same configuration throughout the entire test.

Processor | Intel Core i7-6700K
Motherboard | ASRock Z170 Extreme7+
Chipset | Intel Z170
Memory size | 16GB (2 DIMMs)
Memory type | 16GB (2x8GB) G.Skill DDR4-3000
Memory timings | 16-18-18-36
Chipset drivers | Intel Management Engine 11.0.0.1155, Intel Rapid Storage Technology V 14.5.0.1081
Audio | Integrated Z170/Realtek ALC1150 with Realtek 6.0.1.7525 drivers
Storage | Two Kingston HyperX 480GB SSDs
Power supply | SeaSonic SS-660XP2
OS | Windows 10 Pro with Anniversary Update

Our thanks to ASRock, G.Skill, Kingston, and Intel for their contributions to our test system, and to EVGA, MSI, AMD, and XFX for contributing the graphics cards we’re reviewing today.

Card | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB)
XFX Radeon RX 470 RS 4GB | Radeon Software 16.10.1 | | 1256 | 1750 | 4096
Radeon RX 480 8GB | Radeon Software 16.10.1 | 1120 | 1266 | 2000 | 8192
Asus Strix Radeon R9 Fury | Radeon Software 16.10.1 | | 1000 | 500 | 4096
AMD Radeon R9 Fury X | Radeon Software 16.10.1 | | 1050 | 500 | 4096
MSI GeForce GTX 970 Gaming 4G | GeForce 373.06 | 1114 | 1253 | 1753 | 4096
MSI GeForce GTX 980 Gaming 4G | GeForce 373.06 | 1190 | 1291 | 1753 | 4096
MSI GeForce GTX 1070 Gaming Z 8G | GeForce 373.06 | 1632 | 1835 | 2027 | 8192
EVGA GeForce GTX 1060 3GB SC Gaming | GeForce 373.06 | 1607 | 1835 | 2000 | 3072
EVGA GeForce GTX 1060 6GB SC Gaming | GeForce 373.06 | 1607 | 1835 | 2000 | 6144

For our “Inside the Second” benchmarking techniques, we now use a software utility called PresentMon to collect frame-time data from DirectX 11, DirectX 12, OpenGL, and Vulkan games alike. We sometimes use a more advanced tool called FCAT to capture exactly when frames arrive at the display, but our testing has shown that it’s not usually necessary to use this tool in order to generate good results for single-GPU setups.
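For readers who want to reproduce our basic reductions, here’s a minimal sketch of how a run’s per-frame times become an average-FPS figure and a 99th-percentile frame time. It assumes the frame times have already been parsed out of PresentMon’s CSV output; the exact column layout varies by PresentMon version.

```python
# Sketch: reducing a run's per-frame times (ms) to average FPS and a
# 99th-percentile frame time. Assumes the times were already parsed from
# PresentMon's CSV output. Requires Python 3.8+ for statistics.quantiles.

import statistics

def summarize(frame_times_ms):
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th percentile
    return avg_fps, p99_ms

times = [16.2, 15.9, 17.1, 33.4, 16.0, 16.4]  # toy data, one entry per frame
avg_fps, p99_ms = summarize(times)
print(f"average: {avg_fps:.1f} FPS, 99th percentile: {p99_ms:.1f} ms")
```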

You’ll note that aside from the Radeon RX 480 and Radeon R9 Fury X, our test card stable is made up of non-reference designs with boosted clock speeds and beefy coolers. Many readers have called us out on this practice in the past for some reason, so we want to be upfront about it here. We bench non-reference cards because we feel they provide the best real-world representation of performance for the graphics card in question. They’re the type of cards we recommend in our System Guides, and we think they provide the most relatable performance numbers for our reader base. When we mention a “GTX 1060” or “Radeon RX 470” in our review, for example, just be sure to remember that we’re referring to the custom cards in the table above.

With that exposition out of the way, let’s talk results.

 

Doom (OpenGL)

id Software’s 2016 Doom revival is a blast to play, and it’s also plenty capable of putting the hurt on today’s graphics cards. We selected the game’s Ultra preset with a couple of tweaks and dialed up the resolution to 2560×1440 to try and figure out whether any of the graphics cards on the bench had made a deal with the devil.


Both GTX 1060s are off to a solid start under Doom‘s OpenGL renderer. The GTX 1060 6GB is just a hair off the GTX 980 in average FPS, and its 99th-percentile frame time is only a bit higher than the fully-enabled GM204 card’s. Meanwhile, the GTX 1060 3GB turns in results indistinguishable from the GeForce GTX 970. As we’ve come to expect, however, the Radeons fare poorly with Doom‘s OpenGL mode—the R9 Fury can’t even best the GeForce GTX 1060 3GB, and the R9 Fury X likewise can’t get past the GTX 1060 6GB.

And no, the GTX 1070’s performance above is no mistake. It’s just that much swifter than everything else here. We had to double-check, too.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. And since it matters for Doom, 8.3 ms works out to 120 FPS, a figure that folks with high-refresh-rate monitors will want to be hitting more often than not.
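As an illustration of the idea, here’s a minimal sketch of one way to tally time spent beyond a threshold from the same per-frame times. It counts only the portion of each long frame past the cutoff, and is meant as an illustration rather than a description of our exact tooling.

```python
# Sketch: tallying "time spent beyond X" from a run's frame times (ms).
# Only the portion of each frame past the threshold is accumulated; this
# illustrates the idea rather than documenting our exact tooling.

def time_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

times = [16.2, 15.9, 17.1, 33.4, 16.0, 16.4]  # toy data
for threshold in (50.0, 33.3, 16.7, 8.3):
    print(f"time beyond {threshold} ms: {time_beyond(times, threshold):.1f} ms")
```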

None of the cards in our test spend any time beyond 50 ms or 33 ms, so the most meaningful result is to consider how much time they spend working on tough frames that would drop frame rates below 60 FPS. By this measure, only the Radeon R9 Fury, the RX 480, and the RX 470 spend noticeable amounts of time under 60 FPS. The GeForces all turn in fine performances here, even at 2560×1440 with settings cranked to ultra. The GTX 1070, overachiever that it is, only spends eight seconds of our one-minute test run past 8.3 ms, too. It’s clear from our results that OpenGL is not the ideal API for Radeons, however, so let’s turn the tables and see how our competitors perform with Doom‘s Vulkan implementation.

 

Doom (Vulkan)




As we’ve come to expect, Doom‘s Vulkan renderer benefits Radeons and hobbles GeForces a bit. The R9 Fury X rockets into second place in both our average-FPS and 99th-percentile frame-time measures, followed closely by the R9 Fury. Both GTX 1060s fall toward the back of the pack, but the GTX 1060 3GB card takes the switch especially hard. Its average frame rate drops behind the GTX 970’s, and its 99th-percentile frame time is also significantly worse than the Maxwell card’s.

The 3GB card also happens to have the smallest amount of RAM here. We can’t conclusively say that the size of the GTX 1060’s 3GB memory pool is the cause of this larger-than-expected performance drop, but it’s the only explanation that seems to make sense. That said, the GTX 1060 3GB’s frame-time plot doesn’t exhibit any large or repeated frame-time spikes, so its performance at least degrades in a way that doesn’t harm the user experience much. Of course, the real answer here is to stick with OpenGL if you’re a GeForce owner.


Another sign that the GTX 1060 3GB might be struggling because of the amount of RAM onboard is the relatively large amount of time it spends past 16.7 ms in our measures of “badness.” The GTX 1060 6GB has no such issues—it spends under a second working on similarly-challenging frames. Even the GTX 970 and its unusual memory configuration fare better. Regardless, the numbers above tell a simple story: GeForce owners shouldn’t bother with Vulkan if they want maximum Doom performance, but the Radeon faithful should absolutely enable it.

 

Rise of the Tomb Raider (DirectX 11)


All of the cards we tested end up delivering relatively similar performance in Rise of the Tomb Raider‘s DX11 mode, save the GeForce GTX 1070 at the front of the pack and the RX 470 at the rear. Strangely, the Radeon R9 Fury and R9 Fury X deliver practically the same average frame rate and 99th-percentile frame times, despite the Fury X’s resource advantage. Still, none of the cards’ frame-time plots offer any reason for concern. Rise of the Tomb Raider just happens to be a demanding game, and both average frame rates and the 99th-percentile frame times we collected attest to that fact.


Our time-spent-beyond-X graphs offer a bit more insight into the tightly-packed results above. The GTX 1070 almost never drops below 60 FPS, of course. Both Radeon R9 Furies turn in respectable results, as well.  The GTX 1060 6GB offers a smoother gaming experience than the RX 480, and the GTX 1060 3GB spends about five fewer seconds than the RX 470 on difficult scenes that would drop frame rates below the 60-FPS threshold. Let’s see if a switch to RoTR‘s DirectX 12 renderer puts some space between these cards.

 

Rise of the Tomb Raider (DirectX 12)




The move to DirectX 12 with this version of Rise of the Tomb Raider and the latest drivers from AMD and Nvidia actually helps some cards for the first time we can recall. The R9 Fury and R9 Fury X both get small FPS boosts from the API switch, though the other Radeons aren’t as fortunate. Meanwhile, the GeForce cards either regress slightly or see minor improvements, depending on the card. The differences are quite small either way, though. 99th-percentile frame times barely change between APIs, and whether a given card improves or worsens by this measure largely seems to be a crapshoot.


Our “time-spent-beyond-X” graphs do suggest definite improvements in smoothness for the Radeon R9 Fury and R9 Fury X. Both of those cards spend less time working on difficult frames in DX12 mode than they do in DX11. As we’d expect from the results above, the other Radeons exhibit either no change in performance or a slight worsening. The GeForce cards, including the GeForce GTX 1060s, generally perform slightly worse than they do in DX11. Once again, switching DX12 on probably isn’t a good idea for owners of the green team’s cards.

 

Hitman (DirectX 11)

Among the games we tested for this review, Hitman (along with Doom) will lock out certain graphics settings if a card doesn’t have enough memory. We threw caution to the wind and disabled Hitman‘s safeguards, since they prevented us from testing the game with our chosen settings on the GTX 1060 3GB. We’ll be scrutinizing the GTX 1060 3GB’s performance in this title to see whether performance degrades if one disobeys the game’s advice.


Going by the results above, flipping Hitman‘s safeguards off produces performance results similar to those we saw with Doom‘s Vulkan renderer. The GTX 1060 3GB doesn’t exhibit any untoward spikiness in its frame-time plot, and nothing felt amiss with the card during our test run, but its performance does trail that of the cards with 4GB or more of memory. The GTX 1060 6GB’s extra muscle and extra memory seem to give it a substantial performance advantage in both our average-FPS and 99th-percentile frame-time measures. At least these results add credence to the idea that when the GTX 1060 3GB does run into its memory limits, the resulting performance degradation isn’t catastrophic—things just run slower.


A dive into our “time-spent-beyond-X” charts helps characterize the GTX 1060 3GB’s apparent slowdown. The GTX 1060 6GB spends about sixteen seconds working on frames that take longer than 16.7 ms to render in our one-minute test run, and the GTX 1060 3GB spends about eight seconds more. It’s important to note, however, that the 3GB card only spends an imperceptible amount of time past 33.3 ms, and it doesn’t get knotted up with any frames that take longer than 50 ms to render. If you choose to push the GTX 1060 3GB past its limits, it seems the only punishment for that willfulness will be reduced frame rates, not a stuttery mess.

 

Hitman (DirectX 12)




You should know what’s in store from Hitman‘s DX12 renderer by now. The Radeons in our test suite see improvements in performance, while the GeForces fall back. Interestingly, the GeForce GTX 970 joins the GTX 1060 3GB at the back of the pack.


Looking at our “time spent beyond” numbers, both the GTX 970 and the GTX 1060 3GB spend considerably more time on frames that take more than 16.7 ms to render than the rest of the cards in our suite. Perhaps not coincidentally, these two cards have the most unusual memory configurations of the bunch. Regardless, the story here remains the same as it has over the last few pages: if you own a GeForce card, DX11 and OpenGL are your friends. If you have a Radeon, you should enable DirectX 12. Moving on.

 

Crysis 3


Here’s a good old DirectX 11 bone for our cards to chew on. Surprisingly, the Radeon R9 Fury and R9 Fury X shadow the GeForce GTX 1070, while the Polaris cards trail the Pascal competition a bit. Crysis 3 doesn’t seem to perturb the GTX 1060 3GB, though—both it and the GTX 1060 6GB deliver solid FPS averages and 99th-percentile frame times.


Crysis 3 also continues our test suite’s run of general smoothness. The 16.7-ms threshold is the only one where any of our cards spend any time of note working on tough frames. Surprisingly, the R9 Furies deliver a much smoother experience than anything save the GTX 1070 here. Meanwhile, the GTX 1060 3GB spends about 14 seconds working on those tough frames, while the GTX 1060 6GB spends about 12. The RX 480 8GB is on par with the GTX 970, and the RX 470 is about five seconds further in the hole.

 

Far Cry 4


Far Cry 4 is another demanding DirectX 11 classic. Here, the Radeon RX 480 pulls even with the GTX 1060 6GB, but the RX 470 can’t catch the GTX 1060 3GB. The Radeon R9 Furies turn in a surprising performance, too.


Discounting a tiny handful of difficult frames that cause our cards to spend a few milliseconds beyond 33.3 ms, the Radeon RX 480 actually beats out all of the GeForce competition in its price range when we consider the 16.7-ms threshold. The GTX 1060 6GB is close behind, though, and the GTX 1060 3GB turns in a better result than both the GTX 970 and the RX 470.

 

Deus Ex: Mankind Divided (DirectX 11)

Deus Ex: Mankind Divided is a thoroughly modern title with complex lighting effects, highly detailed textures, and multiple rendering paths. It’s a challenging game for any card to run well.


In Deus Ex‘s DX11 mode, the RX 480 8GB and the GTX 1060 6GB end up just one frame per second apart in average FPS, our measure of performance potential. Likewise with the GTX 1060 3GB and the RX 470. Even in those close quarters, the RX 480 delivers a better 99th-percentile frame time than the GTX 1060 6GB. The RX 470 and the GTX 1060 3GB are about as smooth by this measure.


Deus Ex has the dubious honor of causing some of our cards to put meaningful numbers on our time-spent-beyond-50-ms charts for the first time in this review. For some reason, some of the Radeons exhibit a major hitch near the beginning of our test run. At least it’s the only place the cards fall victim to that kind of lag.

We’re mostly interested in the time-spent-beyond-16.7-ms mark, where the RX 480 8GB and the GeForce GTX 1060 are neck-and-neck. The RX 470 secures a substantial victory over the GTX 1060 3GB here, though. Like we saw with Hitman‘s DX11 mode, it seems Deus Ex isn’t kidding when it asks for 4GB of video RAM to work with at 2560×1440—the GTX 1060 3GB and (surprisingly) the GTX 970 spend quite a bit more time than the rest of our test suite on frames that take more than 16.7 ms to render.

 

Deus Ex: Mankind Divided (DirectX 12)




Despite some noticeable improvements in smoothness since we first examined its preview, Deus Ex‘s DX12 mode still doesn’t match its DirectX 11 render path for smooth gameplay. If we ignore the frequent spikes in the frame-time plots above, the story remains much the same as it has in every head-to-head DirectX 12 test in this review. Radeons advance, and GeForces fall back. The GeForce GTX 1060 3GB suffers even more here than it did under DirectX 11 mode, and the GTX 970 fares even worse.


Deus Ex‘s DX12 mode eradicates the major hang we saw with Radeons under DX11, but in exchange, our “badness” graphs pick up plenty of the spikiness from those DX12 frame-time plots at the 50-ms and 33.3-ms thresholds. Interestingly, the GeForce GTX 1060 6GB spends less time past 33.3 ms than the GTX 980 does, possibly thanks to its 6GB of RAM.

Move the goalposts to the 16.7 ms threshold, though, and even that 6GB of memory doesn’t seem to help. The Radeon RX 480 8GB leads the midrange pack, while the GTX 980, RX 470, and GTX 1060 6GB all spend about four more seconds working on tough frames here than the RX 480. At the rear, the GTX 1060 3GB and the GTX 970 struggle mightily. The HBM-equipped Furies have a much better time of things than even the RX 480 8GB, though, suggesting that there may be more to this story than memory capacity alone. Still, our accumulating advice about API choice still holds.

 

The Witcher 3






Back to the DirectX 11 classics. In The Witcher 3, both GTX 1060s perform admirably, slotting right in between the GTX 980 above and the RX 480 8GB below. The RX 470 falls to the rear of the pack in both our average-FPS and 99th-percentile frame-time metrics, and the GTX 1070 extends its freakish lead. In this apparently non-VRAM-limited title, we can see that the GTX 1060 6GB and GTX 1060 3GB are pretty closely matched despite the 3GB card’s spec-sheet deficits.


Happily, none of the cards put any meaningful time on the board past our 50-ms and 33.3-ms thresholds, so we can look at the critical 16.7-ms mark straight away. Here, the GTX 1060 duo ends up mid-pack, besting the Polaris-powered Radeon competition.

 

Gears of War 4

The just-released Gears of War 4 offers an intriguing way to examine DirectX 12 performance without the influence of either major graphics-card vendor clouding the proceedings. This game comes straight from Microsoft Studios, and it doesn’t offer a DirectX 11 mode to fall back on. Gamers need Windows 10 to make Gears turn, too, so it’s a thoroughly modern title. We couldn’t capture video of our test run, but we chose a city environment early in the game to see how Gears runs.


With the rather extreme group of settings we chose, Gears of War 4 doesn’t seem to favor one particular GPU vendor or architecture over the other. Gears delivers a clean set of frame-time plots, as well. The Fiji-powered Radeons lead everything except for the GTX 1070 in average-FPS performance, and in turn, the GTX 980 and the GTX 1060 duo pull slightly ahead of the Polaris-powered Radeon cards. Each card matches its average-FPS number with a reasonably solid 99th-percentile frame time.


Even with this DirectX 12 title, we can happily skip right to our time-spent-beyond-16.7-ms results. The GTX 1060 6GB spends considerably less time churning on tough frames than its 3GB counterpart, and both cards lead the Polaris-powered competition for fluid gameplay if maximizing the time spent at or above 60 FPS is your goal. Open and shut.

 

Noise levels

High FPS averages and consistently low frame times don’t mean much if a graphics card sounds like a tornado while churning them out. We used the Faber Acoustical SoundMeter app running on an iPhone 6S Plus to measure the noise levels of each graphics card on our test bench from a distance of 18″ (45.7 cm). The noise floor in our testing environment is 31.1 dBA. We tested each card at idle using the Windows desktop and under load with our Doom test area.
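We report the raw meter readings, but for reference, here’s a minimal sketch of the standard correction for backing the 31.1-dBA noise floor out of a measurement, assuming the card and the room behave as uncorrelated sources:

```python
# Sketch: estimating a card's own contribution to a sound-level reading by
# subtracting the room's noise floor (31.1 dBA in our case). Assumes the
# card and ambient noise are uncorrelated sources; readings in the review
# are raw, uncorrected values.

import math

def source_only_dba(measured_dba, floor_dba=31.1):
    return 10 * math.log10(10 ** (measured_dba / 10) - 10 ** (floor_dba / 10))

print(f"{source_only_dba(40.0):.1f} dBA")  # a 40.0-dBA reading -> ~39.4 dBA from the card alone
```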

At idle, only the RX 480 8GB reference card and the Radeon R9 Fury X make any noise—all of our other test subjects have semi-silent modes that allow them to turn off their fans. The Fury X card we have on hand still makes an annoying, prominent whine that’s not accounted for in these graphs, though. That piercing sound spoils an otherwise excellent performance.

Under our Doom load, the GTX 1060 6GB and the MSI GTX 1070 card add barely any noise to the ambient levels of our testing environment. Given the levels of performance on tap from each card, I’m over the moon with these results. The GTX 1060 3GB card is still quiet, but its more-aggressive fan profile does lead to some slight-but-noticeable fan noise under load. The Fury X and the Maxwell cards we’re working with are all about as loud, and only the R9 Fury, the RX 480 8GB, and the XFX Radeon RX 470 truly make themselves known while running all out.

Power consumption

At idle, the GTX 1060 cards both draw very little power, but the differences on display aren’t that drastic—there’s only a 17W delta between the most- and least-power-hungry cards. Still, the GTX 1060s draw just that little bit less power than their Polaris Radeon competition.

Fire up Doom in all its glory, though, and the differences between process technologies, process generations, and architectures become much more evident. The GTX 1060s draw 35-44W less than the Radeon RX cards under load. Not much more to be said here.

GPU temperatures

The stubby coolers on the pair of EVGA 1060s we tested seem up to the task of keeping them within reasonable temperature ranges. Ambitious overclockers might want to spring for a GTX 1060 with a beefier cooler on board, but Mini-ITX and microATX builders should be thrilled with these cards’ blend of small size, performance, low noise levels, and power efficiency.

 

Conclusions

Before we share our thoughts on the GeForce GTX 1060s, it’s time for another round of TR’s famous value scatter plots, where we chart the performance each card delivers relative to its price. We’re offering up our data in three ways: DirectX 11 and OpenGL results only, DirectX 12 and Vulkan results only, and as a “best API” chart that pulls together the best performance numbers for each card from each game we tested. We’re presenting the “best API” results by default, since we think they offer the best picture of real-world performance. The curious can click around and see how each card did with each API, though. To make our higher-is-better presentation work with 99th-percentile frame times, we’ve converted those figures into their FPS equivalents.
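The conversion itself is nothing fancy: it simply inverts the frame time. Here’s a minimal sketch, using the GTX 1060 6GB’s 24.3-ms result quoted later in these conclusions:

```python
# Sketch: converting a 99th-percentile frame time (ms) into the FPS-equivalent
# figure plotted on the value scatter charts (higher is better).

def frame_time_to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

print(f"{frame_time_to_fps(24.3):.1f} FPS")  # the GTX 1060 6GB's 24.3 ms -> ~41.2 FPS
```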


Going by our latency-sensitive 99th-percentile frame-time measure, the GeForce GTX 1060 3GB comes in slightly ahead of the Radeon RX 470 we have on hand, and it demands slightly less money for the privilege. Not bad, especially when one considers that the GTX 1060 3GB consumes about 35W less power to get there. The GTX 1060 6GB also delivers somewhat smoother gameplay than the Radeon RX 480 8GB, though reference RX 480s sell for slightly less than the EVGA GTX 1060 6GB we tested. Like its little brother, the GTX 1060 6GB contributes less to overall system power draw to do its thing—about 44W less than the RX 480 in our testing. Though the smoothness gap between Radeons and GeForces has tightened considerably in this generation, the green team still holds a slight edge.


Those 99th-percentile numbers are slightly disappointing to see, because it’s clear from our average-FPS-per-dollar measure that the performance potential of these cards isn’t all that different. The RX 480 8GB pulls even with the GTX 1060 6GB by this measure, and the RX 470 only slightly trails the GTX 1060 3GB. With some further polish on its drivers, AMD might be able to make the 99th-percentile graph above look even more like its average-FPS results.

All told, the GeForce GTX 1060 3GB is basically a $200 GeForce GTX 970 (or an even cheaper one, once the rebate on the EVGA card we tested is taken into account). That would be great news for this price point, save for the fact that it seems rather easy to run over that 3GB of RAM with today’s games. Hitman locks out certain graphics settings when it detects less than 4GB of RAM to work with, and Deus Ex: Mankind Divided warns against using our test settings on cards with less than 4GB of RAM—seemingly with good reason, in both cases. The GTX 1060 3GB doesn’t become unusable in those situations, but its performance does drop, even if its frame delivery remains smooth.

Even then, we think it’s hard to pick a winner between the Radeon RX 470 and the GTX 1060 3GB from these results. We were pushing all of our cards to the limit at a 2560×1440 resolution. At those settings, the GTX 1060 3GB is still a faster, smoother card in general than the RX 470. What’s more, the vast majority of gamers still use 1920×1080 monitors, and we think it’ll be harder to run into the 3GB card’s limits there. The RX 470’s extra gig of RAM does seem to be useful today if 2560×1440 gaming is your thing, but we’re not sure that extra memory offsets the card’s higher power consumption and less-smooth delivered gameplay versus the GTX 1060 3GB.

A more concerning threat for the GTX 1060 3GB might be the $200-ish Radeon RX 480 4GB. That card is powered by a fully-enabled Polaris 10 chip. It’s been hard to find RX 480 4GB cards for AMD’s $200 suggested price of late, but Newegg actually has a couple such cards going for near $200 right now. We’d expect availability to improve as time goes on, too. If you really want to have some RAM in reserve for the future, the RX 480 4GB seems like the real foil for the GTX 1060 3GB.

We say as much because of the close race between the Radeon RX 480 8GB and the GTX 1060 6GB. Nvidia didn’t quite deliver a $250 GeForce GTX 980 with its better-endowed GTX 1060, but it came really close—and the RX 480 is right there with it. To emphasize how evenly matched these cards are, the GTX 980’s 52.8-FPS average is just 4% faster than the GTX 1060 6GB’s 50.7-FPS figure. The GTX 980’s 23.1-ms 99th-percentile frame time is just a hair better than the GTX 1060 6GB card’s 24.3-ms result, as well. Take the geometric mean of the RX 480’s results, and you get a 50.9 FPS average and 25.2-ms 99th-percentile frame time, as well. I can’t put a sheet of paper between those numbers, really: these cards are all quite satisfying to game with.
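For reference, the overall figures quoted above come from collapsing per-game results with a geometric mean. Here’s a minimal sketch of that step; the per-game values in the example are placeholders rather than our actual data set:

```python
# Sketch: collapsing per-game average-FPS results into one overall figure
# with a geometric mean. The per-game values below are placeholders, not
# the data set behind the numbers quoted above.

import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

per_game_fps = [62.1, 48.3, 55.0, 41.7, 47.9]  # hypothetical per-game averages
print(f"{geometric_mean(per_game_fps):.1f} FPS")
```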

TR Editor’s Choice, September 2016: EVGA GeForce GTX 1060 6GB SC Gaming and MSI GeForce GTX 1070 Gaming Z

The story doesn’t end with performance alone, though. The GTX 1060 6GB takes the all-around crown with its impressive efficiency and polite manners. Even with its stubby single-fan cooler, the EVGA GTX 1060 6GB SC is practically silent under load, and it draws much less power than the Radeon RX 480 while gaming. Neither of these cards is a power hog, to be fair, but this (or most any) GTX 1060 could easily slip into a tiny Mini-ITX PC without taxing a modest power supply or creating a racket. That’s a prospect that should elate all PC builders, but living-room gamers and dorm-room dwellers should be especially happy with this news. The EVGA GTX 1060 6GB SC card we tested embodies every advance we’ve been led to expect from the move to next-generation process technologies, and I’m happy to send it home with an Editor’s Choice award.

While we’re handing out trophies, MSI’s GeForce GTX 1070 Gaming Z card can also step forward. We already reviewed this particular card in-depth, but our initial review didn’t give it the full credit it deserves. If you’ve read even some of the preceding pages, you’ll know this card never stumbled once in our tests, and it never took anything other than first place for potential or delivered performance. Most impressively, the Gaming Z delivered that stellar performance without sucking power or making more than the barest peep under load. All those virtues make the GTX 1070 Gaming Z a superlative example of what’s possible with next-generation graphics cards, and I’m happy to extend it a TR Editor’s Choice award, too. (Be sure to check out MSI’s less blingy Gaming X card, as well.)

On another side note, AMD should take pride in the fact that its Radeon R9 Fury and R9 Fury X cards have more or less closed the smoothness gap with the GeForce competition from years past. Take The Witcher 3, for example. Dial in the same settings we used for that game in our initial Fury review, and that card’s 99th-percentile frame time falls from the 37.7 ms it turned in a year ago to 22.4 ms today. That’s a remarkable improvement from AMD’s driver team. Take a look at any one of our DirectX 11 titles, in fact, and the R9 Fury offers better potential performance (as measured by average FPS) and gameplay that’s as smooth as or smoother than (as measured by 99th-percentile frame times) every card in our test suite save the GeForce GTX 1070 and the R9 Fury X.

Speaking of the Fury X, it’s enjoyed similar smoothness improvements, but it still can’t catch the GeForce GTX 1070. Owners of either Fury can enjoy much smoother gameplay now than they did a year ago, however, and that’s a big boost for fans of the red team. Whenever AMD’s nascent Vega GPUs arrive, it seems the company is poised to deliver maximum performance from those products on day one, and that’s sorely-needed progress. For now, we wait.

If you enjoyed this review, please consider becoming a TR subscriber. Your contribution helps us to independently obtain hardware like the GeForce GTX 1060 3GB graphics card featured in this review, and it also makes it possible for us to pursue the hours of testing and analysis necessary to deliver the in-depth discussions of performance and value you just enjoyed. TR subscribers get exclusive site benefits, and our Silver subscription tier lets you chip in as little as you like for the privilege. We appreciate your support.

Comments closed
    • gerryg
    • 3 years ago

    I think I’m missing something about the “world famous scatter plots”. When I change the API options on the plots, the prices (x-axis) are changing, or at least look like they are. The price should be a fixed number, right? I would expect only the y-axis performance number to change, so the marks would go up or down, but not move horizontally. Is there someplace where it’s explained why this is happening? I’ll assume for the moment it’s me being dumb and not a bug.

    • torquer
    • 3 years ago

    wait what

    • Meadows
    • 3 years ago

    Very nice card indeed. I got an ASUS flavour of these in early August due to necessity.

    Its power use is very reasonable, despite the fact I upped the boost clock to a round 2100 MHz. It already came with a factory setting of 1950 MHz to begin with. The card doesn’t even need extra voltage so far.

    It’s the first time I’m able to play Witcher 3 with Ultra detail.

    • Srsly_Bro
    • 3 years ago

    Why wasn’t GTA V tested? It has been on steam’s top selling page since it launched. It’s more likely a person is going to play GTA V over any of the other games tested. I’m scratching my head.

    Was one of the most played and purchased games omitted over political reasons?

    Fallout 4 also wasn’t included. Is a donation needed to help fund the purchase?

    I would like to see these games tested by TR and not rely on other sites for information.

      • Jeff Kampman
      • 3 years ago

      GTA V runs well on anything these days and Fallout 4 doesn’t like running with its framerate uncapped. That’s all there is to it, really.

        • chuckula
        • 3 years ago

        [quote<]GTA V runs well on anything these days[/quote<] WE'LL MAKE YOU EAT THOSE WORDS KAMPMAN!! -- Intel IGP team.

          • Chrispy_
          • 3 years ago

          Hah!

          Despite the progress made, Intel’s IGP team still defines “runs well” as:

          – Can execute the code without crashing.
          – Can render successive images without errors.

          There’s nothing there about framerates or resolution though.

        • Srsly_Bro
        • 3 years ago

        Thanks for the response, Jeff. Still, I would like to see it! GTA V and fallout 4 are the games I play the most.

          • DPete27
          • 3 years ago

          By that mindset, they should be testing TF2, LoL, CSGO, Overwatch, etc. But that would be pointless because those games aren’t demanding enough.

          You realize that the importance of a GPU review benchmark is to show comparative performance BETWEEN GPUs right? It’s not about what fps you can get in the game of the month for every individual person on the planet.

            • synthtel2
            • 3 years ago

            I think there’s a place for emphasis in each direction. A world without the kind of reviews done most places (heavy stuff) wouldn’t make much sense, but a lot of people are trying to figure out how well different cards will run [popular game] and don’t care how well it stacks up when loaded as much as practical.

            I mostly don’t play particularly demanding games, but I’m very interested in pushing higher resolutions and framerates. When I next upgrade my GPU, it won’t be so I can play any games I at present can’t, it’ll be so I can enjoy everything in 1440p 85+ fps glory. I can interpolate from this style of review to get an idea of what card will do what I want, but it’s not optimal.

            So long as a whole bunch of games are being tested in a review anyway, I’d welcome a bit more variety in those games.

    • revcrisis
    • 3 years ago

    This is great stuff. I have a 1070 and appreciate the inclusion of the card in the tests. What fascinates me is how absolutely AWFUL DX12 is on Nvidia cards. Title after title show regression compared to DX11. Has there ever been a DX API this terrible for Nvidia cards? It’s really disappointing and I wonder if Microsoft or the actual game developers are to blame? You can’t blame Pascal architecture, because it’s not like the 10xx series are breaking even on DX11 vs DX12 benches. You’re actually LOSING performance by switching to DX12. That makes zero sense to me and it leads me to believe Microsoft’s API just isn’t up to par at the moment.

      • barich
      • 3 years ago

      Were you around for DirectX 9 on the GeForce FX series? It was a disaster.

      • Chz
      • 3 years ago

      It’s more the case that NV’s DX12 drivers are average. AMD’s DX11 drivers *were* awful (they’re average now) and NV’s DX11 drivers are, for all intents and purposes, godlike.

      • Flapdrol
      • 3 years ago

      In this review nvidia dx12 is 3% slower than dx11. It’s not awful, just pointless.

      • jts888
      • 3 years ago

      The effect you’re seeing is Nvidia’s software team being better at shader optimizations than the game studios.

      They can drop in replacement code in DX11 that has ostensibly indistinguishable output with less effort required by the GPU, but with DX12 the driver just passes along thin compiled shader fragments with all the state managed by the client engine.

      It’s not that DX11 is inherently more efficient in any form, it’s that Nvidia can’t leverage its massive software team in the same way with Vulkan/DX12 games.

        • chuckula
        • 3 years ago

        DX12 uses shaders just like in DX11 with minimal differences inside the code of each shader BTW.

        The shaders are one area of the new APIs that have changed very little compared to the old APIs, it’s everything else around how the pipelines are formed and how memory is managed that has changed significantly.

    • highlandr
    • 3 years ago

    Did anyone else notice the 470 and 1070 moved horizontally (price) on the “Best API” scatterplots? A minor thing, but important if you aren’t paying attention.

    • djayjp
    • 3 years ago

    AMD cards performance goes off a cliff when paired with weaker CPUs, unlike the green team (or has this been fixed with both newer APIs?).

    • flip-mode
    • 3 years ago

    [quote<]On another side note, AMD should take pride in the fact that its Radeon R9 Fury and R9 Fury X cards have more or less closed the smoothness gap with the GeForce competition from years past. [/quote<] I find it continually amusing how persistent this pattern is with AMD. That is not a criticism, either; it could be viewed as good or bad depending on one's perspective.

    But it seems to be a pretty reliable bet that when you buy a Radeon it's almost like getting two cards - the card you bought and the card you end up with 2 years later that's generally 15% to 20% faster. From one perspective it's a free upgrade, from the other perspective it's like having to wait two years to get the full performance of your card. The logical problem with the latter perspective, though, is that if you bought the card at launch you must have been happy enough with its performance at the time so the complaint seems hollow.

    Performance of Nvidia's cards seems much more static over time. I don't know how true that is - it is just the impression I get based upon the fact that the competing Radeon often winds up catching up to or passing the competing Geforce over time.

    That leads to the same interesting choice as ever with these cards. Buy a GTX 1060 now knowing it is what it is. Or buy the essentially equivalent RX 480 now and hope that over the next 2 years that pattern holds that the RX 480 will gradually outpace the GTX 1060 as the drivers mature. Wash, rinse, repeat with the RX 580 and GTX 1160.

      • MrJP
      • 3 years ago

      It seems that AMD’s drivers are never as mature as Nvidia’s at time of launch, but on the flip side they tend to keep putting effort into the drivers for longer into a product’s lifespan. Given that pricing tends to settle towards price/performance equivalency soon after launch, it therefore always feels like you’re getting more powerful hardware for your money with AMD and you’ll get to see that benefit in the long run.

      This effect might be weaker in future if there is a shift towards the lower-level APIs, but add this onto the current G-Sync/Freesync situation and Nvidia would need to have a big price/performance advantage to get me to choose one of their cards

      • cegras
      • 3 years ago

      Finally, a relevant place for this excellent post I saved:

      [url<]https://www.reddit.com/r/hardware/comments/526q0f/opinion_polaris_is_so_far_a_huge_disappointment/d7ip3c7/[/url<]

      • Meadows
      • 3 years ago

      The last time I saw a significant performance “increase” with a Geforce due to drivers alone was back when Vista was still brand new. Back then, windowed and borderless windowed modes were just on their way to becoming fashionable, and the NVidia drivers you had for Vista’s launch were, well, quite poop for that kind of usage with the new DWM, which is why I put the word “increase” in quotes. However, several weeks later a new driver had popped up to fix those same issues, and it literally increased performance across the board by 30% or so, at least for me.

        • JustAnEngineer
        • 3 years ago

        [quote=”Meadows”<] The NVidia drivers you had for Vista's launch were, well, quite poop. [/quote<] [url<]http://gizmodo.com/373076/nvidia-responsible-for-nearly-30-of-vista-crashes-in-2007[/url<]

    • Ninjitsu
    • 3 years ago

    Really detailed and well planned review! Good to see lots of new APIs too, and the scatter plot is really nice too.

    Now that there’s more data, it’s troubling to see that Nvidia regresses in DX12/Vulkan. Even though the DX11 code path is clearly good enough, it would have been better to at least have the same level of performance in DX12/Vulkan.

    It is also interesting to see that Nvidia leads in GoW4…I wonder if that’s not because of MS not doing too much with DX12? So it ends up performing similarly to how it would with DX11…speculating here, of course.

    That said, if possible, in cases where the GPUs are clearly not meant for 1440p, would be good to see at least one test with 1080p.

    BTW in Crysis, SMAA is turned to 1x – afaik SMAA is postprocessing. Should be fine to turn it up.

      • synthtel2
      • 3 years ago

      SMAA 1x means just the post-processing step, T2x adds a temporal component, S2x adds some MSAA, and 4x adds both. 1Tx is a temporal algo with improvements over T2x. (I don’t know which of these options Crysis 3 has available.)

    • Klimax
    • 3 years ago

    First: Does Doom even have official support for the 1060 3GB? Those results are what one could expect from an engine which doesn’t know what to do with that card and there is no driver to fix it up.

    Second: Unsurprisingly, DX 12 only shows benefits when drivers are bad. It is too resource intensive to develop and too expensive to maintain. I’d rather see bug fixes and optimizations than chasing fads like low level APIs, and let driver teams handle the GPUs.
    Let’s see: Hitman apparently doesn’t even bother to support the 1060. Tomb Raider struggles to even make it a wash and Deus Ex is an inferior experience.

    All are symptoms of the fact that low level APIs never made sense on PC and don’t make sense now. And won’t make sense in the future.

    Well, it seems that not many games support them, so it seems most devs are at least borderline sane…
    And a reminder: DX12 is a parallel API, not a successor! DX 11 is still the primary API for graphics.

      • Ninjitsu
      • 3 years ago

      Nvidia cards do clearly show a regression, though, which is more concerning. It’s not even AoTS that I can put it down to a quirky engine or benchmark.

      • Krogoth
      • 3 years ago

      Doom works with the 1060 3GiB just fine. You may run into video memory capacity issues if you want to enable all of the eye candy with AA/AF on top. You also have to remember that the 1060 3GiB silicon is more gimped than its 6GiB brethren.

      • NimaV
      • 3 years ago

      You are absolutely right. It’s really simple, but Idk why most people can’t understand this simple fact that low level APIs are not suitable for the PC. It seems like PC gamers forgot that there was a reason the PC always used high level APIs. Low level APIs need a single fixed hardware target for the best results; they just don’t work with the varied and constantly changing hardware of PCs.

    • busmaster@gmail.com
    • 3 years ago

    Odd how he says he can’t pick a winner between the 470 and the 1060 3GB just before admitting the 1060 is better…

    And I guess no surprise there’s no mention of AMD’s bait and switch tactic what with announcing the 200$ RX480 which was nowhere to be found for months while a month later they released the 470 for… 200$…

    • HERETIC
    • 3 years ago

    Jeff,any idea how hot the VR’s were getting.
    At such a low power probably not a problem.
    Some think EVGA’s higher cards are running too hot thru lack of thermal pads.
    [url<]http://www.eteknix.com/evga-gtx-1070-1080-suffering-overheating-issues/[/url<]

    • Krogoth
    • 3 years ago

    Awesome review.

    1060 3GiB is perfect for 2Megapixel gaming and can do some 4Megapixel gaming if you are willing to do some compromises to keep the video memory usage under 3GiB. 1060 6GiB is pretty much the same thing except it can handle 4Megapixel gaming if you don’t care for AA.

    The 480 and 470 are trading blows with their competition as far as performance is concerned but eat-up a little more power but the difference isn’t as large as the previous generation.

    All of these cards are worth the upgrade if you are running Kepler and Tahiti-era hardware and older. If you are a running 970/980 or 290/290X (390/390X). You can probably hold onto them until next-generation of silicon comes around. At that point, hopefully the entry cost into VR gaming will be far more forgiving for those who are interested.

      • Ninjitsu
      • 3 years ago

      [quote<]Awesome review.[/quote<] [i<]gasp[/i<]

    • sweatshopking
    • 3 years ago

    GUIZE. I HAVE A 1060 6GB.

      • chuckula
      • 3 years ago

      YOU SHOULD HAVE BOUGHT THE GCN CARD WITH ASYNCHRONOUS CAPITALIZATION SUPPORT!

        • Flapdrol
        • 3 years ago

        Don’t you mean AYUNROSCHNOSIT ACAZAIONTPLUPPI SOTR?

    • travbrad
    • 3 years ago

    Maybe you just test at 1440p so you have other data/cards to compare it to without having to retest a million things, but almost none of these games have acceptable framerates at 1440p IMO on the 1060 or 480. Most people buying them will have 1080p monitors also. My big takeaway from this is “these cards suck for 1440p”.

      • EzioAs
      • 3 years ago

      Disagree. You dial the settings down a bit, and voila!

      • Jeff Kampman
      • 3 years ago

      We test at 2560×1440 because it puts more demands on GPU performance exclusively, rather than things upstream of the GPU. We get a better picture of GPU performance as a result.

      If you’re concerned about absolute fluidity, maybe these cards will fall short at 2560×1440, but our 99th-percentile frame times and frame-time plots suggest they’re at least pretty consistent in the experience they offer (that is, they’re smooth even at these frame rates).

      I would say the R9 Fury is the lowest-spec card to go with among these if you want a truly fluid experience, but the 1060/480 never felt hitchy or otherwise unpleasant during gameplay, just a bit slow. The wonders of consistent frame times 🙂

        • DoomGuy64
        • 3 years ago

        If you got a 1440p monitor that only did 60hz like the classic 1080p, it would suck. But 1440p monitors today support adaptive sync, which clearly makes the 480 and 1060 viable cards provided you matched them with the right monitor.

          • sweatshopking
          • 3 years ago

          1060 with gsync monitor seems like a strange setup. since gsync costs one million dollars.

          • Ifalna
          • 3 years ago

          Umm. You do realize that the GSync tax alone (around 200€ over here) is pretty much the entire price of a 1060?

          I seriously doubt that people who buy el-cheapo cards start spending north of 600€ for a monitor. 😀

            • Chrispy_
            • 3 years ago

            Yeah, AMD are cleaning up on the “best affordable performance” when you pair something like a $200 RX 470 with a Freesync monitor that has almost no premium. Total cost is maybe $500 for a GPU+adaptive monitor combo, whilst $500 won’t even buy you a G-Sync monitor and you’re still missing a GPU.

            • Ninjitsu
            • 3 years ago

            But why wouldn’t you just spend the $500 for a GPU?

            • Ifalna
            • 3 years ago

            Hey, that’s what I did. Screw adaptive sync if I can just buy a card that hits 60+ 95% of the time. 😀

            • Kretschmer
            • 3 years ago

            It’s tough to peg a 144Hz monitor at cap.

            • Ifalna
            • 3 years ago

            Is there really a perceivable difference between a solid 60FPS and 120+ or are people entering the realm of placebo?

            Granted, going from 30 to 60 is HUGE in terms of feels, but I am not sure whether one can feel more.

            I agree though. One needs ridiculous overkill GPUs (or old games) in order to maintain 144+ FPS.

            • synthtel2
            • 3 years ago

            60 versus 144 is a massive difference, easily distinguishable at a glance. I wouldn’t be surprised if 90-ish could provide most of the benefits at a lot less cost, though. (I’ve seen HFR monitors in action and used them briefly, but haven’t got a chance to test in detail.)

            • Krogoth
            • 3 years ago

            It depends on the material in question.

            For twitchy shooters, fast-pace racing games it is a night and day difference. For other genres and materials, it is noticeable but not earth-shattering.

            • synthtel2
            • 3 years ago

            The question was if it was “perceivable”, which it definitely is, even just moving the mouse around on the desktop or something. How much it’s going to improve your computing experience is a different and much more complex question, and not one I made (or intend to make) any attempt to answer.

            • Ifalna
            • 3 years ago

            Good for me that I am playing slow MMOs then. 😀
            Though the curious bit of me is sad that I have no way or actually testing one of these babies. (short of ordering one and sending it back, which imho is not a nice thing to do)

            • Chrispy_
            • 3 years ago

            Yeah, going from 60 to 144 is one of those, “oh hey, that feels smooth” but you don’t necessarily appreciate it since 60 already seemed smooth if that’s all you’re used to.

            It’s only when you go back to 60 after using 144 that you realise just how bad it really is. 60fps with fixed vsync is a good baseline to aim for these days, but we’ve moved on from the bad old days when you had to choose TN panels to get high refresh rates. Now you can have your cake (wide viewing angles and great colours of IPS and AMVA) [i<]and[/i<] eat it (144Hz or higher).

            • DoomGuy64
            • 3 years ago

            Yes, which makes the 1060 an incredibly stupid purchase for a 1440p 60Hz monitor. In that case I’ll wholeheartedly agree with Anandtech in saying the 1060 is more of a 1080p card, especially when it doesn’t support SLI.

            If you still want 1440p with midrange, then the only thing I can say is get the 480 with a freesync monitor. That’s the only reasonable option available in the price range. Otherwise, you’ll have to pay the premium.

      • Ninjitsu
      • 3 years ago

      Yeah, except for the 1070 and maybe, in certain cases, the Fury X, everything else was clearly meant to run 1080p.

      I do get Jeff’s point about 1440p highlighting differences better, but I also think that it would be clear from a 1080p test as well. Bottlenecks can be shifted using higher settings, and in tests where there’s clearly a CPU bottleneck, 1440p can be tested too.

      OTOH TR isn’t abundant on manpower or time these days, so I’m sorta more relaxed about this compared to when Scott was still here.

    • Billstevens
    • 3 years ago

    Anyone find the latest results for the Fury X and 1070 more interesting than the 1060? AMD’s drivers have brought a lot of classic games up to snuff. My replay through The Witcher 3 has been flawless.

      • Krogoth
      • 3 years ago

      That extra memory bandwidth on the Furies does help out under certain workloads. You will see a similar case for the Big Pascal chips, too.

      • NovusBogus
      • 3 years ago

      The 1060 is certainly a nice GPU but I gotta say it does get completely demolished by the next tier up.

      • geniekid
      • 3 years ago

      It took me a while to get through the review because I kept thinking to myself “I don’t remember the Fury X being this smooth” and looking up older reviews to confirm what this review ultimately concluded as well: driver improvements are real. Given current market prices, the Fury X is surprisingly convincing if you need a water-cooled card for an SFF build or something.

    • Firestarter
    • 3 years ago

    Is it still true that Nvidia cards beat AMD cards in CPU-constrained situations?

      • DoomGuy64
      • 3 years ago

      Not under DX12/Vulkan, and AMD improved DX11 with driver updates. Nvidia still does better there, but it’s not the issue it once was.

    • DoomGuy64
    • 3 years ago

    [quote<]the uninformed builder picking one of these cards off the shelf probably won't notice that more is missing from the GTX 1060 3GB than 3GB of RAM[/quote<]

    That's not all that's missing: [url<]http://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review[/url<]

    [quote<]The GPU’s 10 SMs are divided up into two GPCs, half the configuration of GP104. This means that GP106 can rasterize 32 pixels per clock on the frontend, but the backend ROPs can accept 48 pixels per clock.[/quote<]

    No doubt this is yet another one of the reasons why the 1060 can't quite catch up to the previous gen.

    edit more details:

    [quote<]Peak pixel fill rate (Gpixels/s) 82[/quote<]

    [url<]http://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review/15[/url<]

    [quote<]Beyond3D Pixel Fill Rate: At 54.8 GPixels/second, GTX 1060 trails GTX 980 significantly. The card not only has fewer ROPs, but it has half of the rasterizer throughput (32 pixels/clock) as GTX 980. As we’ve seen in our gaming benchmarks the real-world impact isn’t nearly as great as what happens under these synthetic tests, but it helps to explain why sometimes GTX 1060 is tied with GTX 980, and other times it’s several percent behind. If nothing else, at an architectural level this is what makes GTX 1060 a better 1080p card than a 1440p card.[/quote<]
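    Those two figures are easy to sanity-check from the spec table. Here's a rough back-of-the-envelope sketch (assuming the 6GB card's listed 48 ROPs, 32 pixels/clock of rasterizer throughput, and a ~1708MHz boost clock; the variable names are just for illustration):

        # Rough check of the fill-rate numbers quoted above; spec values are assumptions
        boost_ghz = 1.708   # GTX 1060 boost clock, in GHz
        rops = 48           # back-end ROP throughput, pixels per clock
        raster = 32         # front-end rasterizer throughput, pixels per clock

        rop_limit = rops * boost_ghz       # ~82.0 Gpixels/s, the theoretical peak fill rate
        raster_limit = raster * boost_ghz  # ~54.7 Gpixels/s, close to the measured 54.8

        print(f"ROP-limited: {rop_limit:.1f} Gpix/s, raster-limited: {raster_limit:.1f} Gpix/s")

    In other words, the synthetic test appears to be bumping into the rasterizer limit rather than the ROP limit.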

    • synthtel2
    • 3 years ago

    I’m wondering how you go about choosing settings for games, not just for this review, but in general. In this review, most games are pretty clear, but Hitman and DX:MD are a mix of high/very high/ultra settings that I can’t make much sense of. DX:MD and RoTR also have DoF off (in contrast to everything else), and RoTR has PureHair at very high (compared to the rest of the settings at high) – it operates a lot like HairWorks, and [url=https://images.nvidia.com/geforce-com/international/images/rise-of-the-tomb-raider/rise-of-the-tomb-raider-purehair-gameplay-performance.png<]can[/url<] [url=https://images.nvidia.com/geforce-com/international/images/rise-of-the-tomb-raider/rise-of-the-tomb-raider-purehair-cutscene-performance.png<]be[/url<] expensive enough to be concerning. To be clear, I don't think there's any intentional bias here, I'm just asking out of curiosity.

    • vaultboy101
    • 3 years ago

    I have to say, reading between the lines, it’s clear.

    Since Scott went to AMD, NVIDIA has frozen Tech Report out. It’s pretty obvious to me.

    I feel for Jeff and the Team but there is clearly nothing they can do. I’m guessing NVIDIA don’t trust that TR can be impartial whilst the founder is in a prominent position at AMD…

      • Pancake
      • 3 years ago

      So, let’s try to get Jeff a job at NVidia to even things out.

      • HERETIC
      • 3 years ago

      At the moment NV are far enough in front that they can afford to be bullish/cocky. Let’s hope next year AMD can make a nice comeback, and we’ll see how cocky they are then…

        • Klimax
        • 3 years ago

        Without driver developers, no hardware can fix it. And it would be the ultimate idiocy to bet on low-level APIs.

          • DoomGuy64
          • 3 years ago

          Disagree, since game developers have to optimize more with low-level APIs, and those benefits are quite clear.

          It’s also evident that you can increase hardware efficiency, as with Polaris and Pascal, so that the hardware itself is more efficient at running code that required stronger driver optimization on older cards.

          There is no replacement for good driver optimization, which AMD has clearly done over the last year or so, but it is still possible to improve efficiency with both new hardware and low-level APIs. Claiming otherwise is delusional.

            • cygnus1
            • 3 years ago

            Another way to look at it: the downside of the low-level APIs shifting optimization effort away from driver devs and onto game devs is that the graphical optimization skill of game devs can vary wildly, leading to some pretty poorly optimized games that will likely never get better. The driver devs, on the other hand, are already known to be pretty good on both the green and red teams, and their optimizations can sometimes apply to more than one game.

            While I do realize that the low-level APIs are necessary in order to get the most out of your other, non-GPU hardware resources, they will lead to a sorting of the truly competent game devs from the incompetent ones that hasn’t happened in a while.

            • Freon
            • 3 years ago

            This is now just a battle of Gaming Evolved vs. TWIMTBP. I’m not sure how much difference there really is. It could be worse, because either side’s driver team may not be able to optimize it at all, and if the other brand got to the devs first and told them to write their code one way or another, we’ll just see an indefinite vendor preference.

      • drfish
      • 3 years ago

      Well, clearly they are right to be concerned, just look at all the Editor’s Choice awards the RX series… Oh wait…

        • Tirk
        • 3 years ago

        Indeed: not even an after-the-fact Editor’s Choice for any 480 or 470 card, yet an Editor’s Choice goes to a 1060 retailing close to $300 at Newegg? What?

        [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814487261[/url<]

        I liked the review, but that Editor's Choice seems garbage. Although I will say they did at least mention the RX cards in the system build options, so there is at least that. But if you read the sweet-spot description, it almost sounds like teeth were being pulled for them to even do that. It's like, "If you must have FreeSync then I guess buying the 480 is ok......"

        [url<]https://techreport.com/review/30606/the-tech-report-system-guide-october-2016-edition/4[/url<]

        Keep up with the good reviews, but those recommendations might need some cleaning up.

          • Jeff Kampman
          • 3 years ago

          The EVGA 1060 6GB we reviewed is actually $240 from Newegg. Not sure where you’re getting the $300 figure from unless you’re looking at sellers that aren’t Newegg.

            • Tirk
            • 3 years ago

            That’s a bit misleading, as the Newegg $240 price is not available. Maybe it was available as of your writing, but when I searched, it wasn’t. I guess you could get it if it were in stock, and criticism has been made countless times against the 480 and its lower-price-point availability, which is warranted when you wish to get it at the lower price. When you look at what IS AVAILABLE to buy as of 10/26/2016, as the link in my comment shows, your recommended card IS NOT $240; the lowest available price for the card is closer to $300 when you look at all the resellers on the Newegg site. Maybe you should clarify that the choice award applies at a certain price point, if someone is able to get it at that price. Because if we went by all the out-of-stock item prices on the Newegg site, the cost-value proposition would always sit at MSRP, whether it’s available or not.

            I don’t see why adding clarification to the price point for your choice award would be detrimental in any way, or adding choice awards to any of the RX reviews.

            • Jeff Kampman
            • 3 years ago

            For one, prices on cards vary all the time at Newegg. I don’t pay attention to non-Newegg sellers because they often apply outlandish markups (as you’ve discovered). Your point is invalid regardless because Newegg will accept a backorder for the card we reviewed at the $240 discounted price.

            If you still don’t believe I’m going off a reliable source of pricing info, [url=https://www.amazon.com/EVGA-GeForce-Support-Graphics-06G-P4-6163-KR/dp/B01IPVSLTC/<]Amazon has the same card for $260[/url<]—the price Newegg was selling the card for before it apparently marked it down, and the price I used as the reference point for my decision-making while I was writing this review. Given that context, I stand by my recommendation.

            Furthermore, [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007709%20601205646%20600358543%208000<]there are plenty of GTX 1060 6GB cards selling for $250-$260 on Newegg right now[/url<], even if one was to choose a different card than the one we ended up recommending.

            I'm not sure why you believe I should retroactively apply an award to the RX 470 or the RX 480 we tested; neither one outperformed the comparable GeForces in this review by any measure. Cheers.

            • Tirk
            • 3 years ago

            Unless I’m misreading your graph, the GTX 1060 6GB fell right in line with the price-performance average, like every other recent GPU on your graph. That’s great, since it means all the current GPUs are priced according to their performance at the prices you found.

            I’ll accept your price explanation, but if they are all comparably competitive at their respective price points according to your graph, then why mention only two Nvidia cards as an Editor’s Choice and leave out any mention of a card from the competitor that, by your own graph, is competing equally at its price point? None of the current cards on your graph deviate enough to say they are outperforming their price point in any substantial way, so your explanation of why you’d leave out any recommendation of an AMD card still has me perplexed.

            Please don’t take these comments as an attack against you. I appreciate that you took the time to respond to my concerns and I hope exchanges like this just continue to improve the work you are doing here at techreport.

            • MOSFET
            • 3 years ago

            I bought the very 6GB card reviewed from Newegg on August 9th for $259.

      • Shinare
      • 3 years ago

      I must admit I had no idea about this NVIDIA freezing TR out shenanigans. If this is the case then I guess I need to reevaluate my purchases moving forward as TR is my only place for TRUSTED reviews.

      • K-L-Waster
      • 3 years ago

      Anyone find this ironic given that before Scott left to join AMD he was repeatedly accused of a pro-Nvidia bias? And that with the Fury cards it was actually AMD that withheld review cards from TR?

      My how things change.

      It wasn’t right for AMD to withhold cards then, and it isn’t right for Nvidia to do so now. I tend to buy NV cards, but dirty pool is dirty pool no matter who is playing it.

      • I.S.T.
      • 3 years ago

      Might be a freeze-out, but no one got cards for the 1050/1050 Ti launch.

      I suspect Nvidia is simply not sending out cards to certain sites, rather than singling TR out. PCPer didn’t have a review of the 1050/1050 Ti either, and they use the same review methodology as TR. I don’t know of any other site that does. If other sites like that exist, they might prove or disprove my hypothesis.

      • Lans
      • 3 years ago

      I don’t know how much it would help but I am inclined to write NVIDIA an e-mail to the effect that I will absolutely not buy any video card without reading TR’s review.

    • crystall
    • 3 years ago

    There’s a typo on the first page: delta-color-correction instead of delta-color-compression

      • Jeff Kampman
      • 3 years ago

      Embarrassing! Fixed.

    • Wesmo
    • 3 years ago

    I’m surprised civ 6 hasn’t been added to the benchmark suite yet.

      • ArdWar
      • 3 years ago

      It’s been out for less than a week. I think this review’s data-gathering process was already underway at the time.

      • EzioAs
      • 3 years ago

      It’s pretty new, so it might make the next GPU review.

      • Krogoth
      • 3 years ago

      Civilization 6 isn’t that demanding on the GPU, though. It runs effortlessly on my old-fangled 660 Ti. It is far more dependent on the CPU in the late game, like most strategy games. Besides, the genre doesn’t really benefit that much from having super-smooth framerates.

    • mdkathon
    • 3 years ago

    As an owner of a few cards (GTX 1070, 970, 750) used in a gaming PC and HTPC/console-PCs, I’m still looking forward to some improvements from team green on the software and feature side of things.

    My main rig has triple monitors, which I could DisplayPort daisy-chain with Eyefinity on an R9 290 no problem. AMD has better driver support to get this running, in my opinion, and it’s easier to switch on/off, as I have a second PC I use for work, so there’s a lot of switching going on. It’s not even possible to DisplayPort daisy-chain with Nvidia cards at the moment. In addition, every time the Nvidia drivers update on my HTPCs (all TVs in the house), the resolutions need to be adjusted. Small annoyances, but they are really annoying! I moved from red to green about a year ago all at once and it’s been okay, at best.

    Really hoping AMD has something up their sleeves. I moved to the 970, and then the 1070, due to them being so much more power-efficient in my little ITX case running a Silverstone 450W. It’s amazing, it really is.

    • cygnus1
    • 3 years ago

    Jeff,

    Sorry for nitpick.

    [quote<] the GTX 980's 52.8-FPS average is just 2.1% faster than the GTX 1060 6GB's 50.7-FPS figure. [/quote<] That's a 4% diff. 2.1 is the raw FPS diff.
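    For anyone double-checking the arithmetic, a quick sketch using only the two figures quoted above:

        gtx_980 = 52.8    # FPS, as quoted
        gtx_1060 = 50.7   # FPS, as quoted

        raw_diff = gtx_980 - gtx_1060              # 2.1 FPS
        pct_diff = (gtx_980 / gtx_1060 - 1) * 100  # ~4.1%
        print(f"{raw_diff:.1f} FPS raw, {pct_diff:.1f}% relative")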

      • Jeff Kampman
      • 3 years ago

      I swear, one of these days I’m going to be held at gunpoint and that person will ask me to correctly calculate a percentage. Thanks for pointing that out.

        • cygnus1
        • 3 years ago

        No worries dude. I know you’re in Excel hell while working on these reviews, numbers get pretty tiring pretty quick.

          • morphine
          • 3 years ago

          You absolutely Have. No. Idea.

            • anotherengineer
            • 3 years ago

            MatLab >>>>>>>>>>>>>>>>>>>>>>>>> Excel

    • Raymond Page
    • 3 years ago

    Any chance of getting laptops with a GTX 1060 and 1070 to show a comparison between the laptop and desktop variations with heat and clock speed?

      • Jeff Kampman
      • 3 years ago

      We have a GTX 1070 laptop with one of our writers now. Stay tuned.

    • DragonDaddyBear
    • 3 years ago

    I agree with this card being the superior card. I wouldn’t blame anyone for buying it. What spoils it for me is G-Sync.

    Intel has said they will support DisplayPort’s adaptive sync (AMD FreeSync) in the future. The cost savings of FreeSync monitors over G-Sync ones, plus that outlook, make FreeSync more attractive than a 30W power savings, especially when gerbils are writing in the forums about significant efficiency gains from undervolting.

    • Kretschmer
    • 3 years ago

    Please bench against the 290X or 390X in any review that includes the GTX 970 or Fury X. Hawaii outsold Fiji and will be the cause of many upgrades over the next year.

      • AnotherReader
      • 3 years ago

      I second this though I think a reference 480 is usually equivalent to the 290X.

        • Chrispy_
        • 3 years ago

        Absolutely. GCN is pretty much GCN, so the math backs this up:

        R9 290X =
        1GHz x 2816 GCN CUs = 2816 CUGHz (pronounced “cougar hertz”)

        RX 480 =
        1.27GHz x 2304 GCN CUs = 2926 cougarhertz.

        There are about the same number of injured cougars for each card, difference being some backend optimisations and whatnot that games don’t care about because they’re all optimised for GCN 1.2 that the consoles use.
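        (For the curious, that back-of-envelope math is trivial to reproduce; the clocks are ballpark figures and the counts are really stream-processor counts rather than CUs proper, but here's a rough sketch of the same joke metric:)

            def cougarhertz(clock_ghz, shaders):
                # the comment's joke metric: clock multiplied by shader count
                return clock_ghz * shaders

            r9_290x = cougarhertz(1.00, 2816)   # ~2816
            rx_480  = cougarhertz(1.27, 2304)   # ~2926
            print(rx_480 / r9_290x)             # ~1.04, i.e. raw shader throughput within ~4%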

          • cygnus1
          • 3 years ago

          +3 for cougarhertz, lol

          • tipoo
          • 3 years ago

          How many overwatts per cougarhertz?

          • VinnyC
          • 3 years ago

          Yeah, the GameCube is pretty great

            • derFunkenstein
            • 3 years ago

            Never understood why that was abbreviated GCN. Always seemed to me like NGC made more sense.

      • Jeff Kampman
      • 3 years ago

      I’ll see what I can do about it.

        • moose17145
        • 3 years ago

        As a R9 290 owner…

        I, too, would like to express interest in seeing a 290(X) in these reviews, as it would provide a decent point of reference. It would also be interesting to see how the 290(X)s compare to the 480s and current cards now that they have a couple of years of driver improvements backing them up vs. when the cards were first reviewed and still had fresh drivers.

        tl;dr – insert mile-long wish list here.

          • Chrispy_
          • 3 years ago

          As a 290X > GTX 970 > RX 480 owner, they’re all very similar indeed, with AMD getting an advantage in newer engines and Nvidia getting an advantage in TWIMTBP engines and legacy stuff.

            • I.S.T.
            • 3 years ago

            Honest question: Why would you go from a GTX 970 to an RX 480? They’re awful close in speed.

            • sweatshopking
            • 3 years ago

            I have a 290 system and a 1060 system. The performance in each is enough. I wouldn’t have sidegraded, but for 1080p they are able to push almost enough fps.

            • DoomGuy64
            • 3 years ago

            Theoretically there are a number of reasons: RAM, DX12/Vulkan, driver optimization (Maxwell = dead horse), FreeSync.

            Having a 390 myself, I would say that none of those cards are really an upgrade from one another, and the 290X was likely the fastest. There probably was some buyer’s remorse over the 970, hence the 480, which is the 390’s replacement.

            • Chrispy_
            • 3 years ago

            Very close, but I’m a performance/$ man (silence at adequate speed is better than faster but noisy). The 480 is no Pascal, but it’s the best of the three in terms of power efficiency, and the FreeSync card also played its hand, as per my post below.

            G-Sync is so expensive that it basically nullifies all arguments where cost is any consideration whatsoever.

            • rahulahl
            • 3 years ago

            Maybe because of a cheap freesync monitor?

            • Chrispy_
            • 3 years ago

            Well, yes.
            I had the Predator Z35 monitor on pricewatch for a few months, but it never dipped below €600.

            Then a Z271 appeared for £191.65 (27″ curved 144Hz AMVA FreeSync), and that’s like £400 cheaper than the Nvidia alternative. I could buy two RX 480s for the monitor cost difference alone, but I sold the 970 for not a lot less than I paid for my 480.

      • DragonDaddyBear
      • 3 years ago

      I’d like to see a 7950 or 7970 included and similar from Nvidia from that era, too.

      • DPete27
      • 3 years ago

      Granted, 1.5 years of driver optimizations have passed since [url=https://techreport.com/review/28612/asus-strix-radeon-r9-fury-graphics-card-reviewed/12<]the 390X review[/url<], but the 390X was about 15% faster than the GTX 970 back then. Considering [url=https://techreport.com/review/30328/amd-radeon-rx-480-graphics-card-reviewed/13<]the RX 480 is within 5% of the GTX 970[/url<], I think it's safe to say the 390X will hold the crown until Vega.
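      Composing those two cited figures gives a rough sense of the gap (a sketch using the comment's numbers only, not new measurements, with "within 5%" taken pessimistically for the 480):

          gtx_970 = 1.00
          r9_390x = gtx_970 * 1.15   # ~15% ahead of the 970 in the 390X review linked above
          rx_480  = gtx_970 * 0.95   # "within 5%" of the 970, taken as 5% behind

          print(f"390X over RX 480: ~{(r9_390x / rx_480 - 1) * 100:.0f}%")  # ~21%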

      • anotherengineer
      • 3 years ago

      And the last gen before GCN. I’m still looking to upgrade my Radeon HD 6850 🙂

      Just waiting for the price drops, and for CDN prices to come in at the actual exchange rate and not 1.6.

    • Ifalna
    • 3 years ago

    As an owner of the 1070 GTX (Asus Strix) I can indeed attest to it being a beast. At 1080p … well I do sometimes have the feeling of it looking at me, asking “seriously?!” while being bored to tears b/c my poor 3570K (now OCd to 4.6) still bottlenecks everything. ._.

      • rudimentary_lathe
      • 3 years ago

      Out of curiosity, at what FPS is the 3570K bottlenecking? I would think that four full Ivy Bridge cores at 4.6GHz wouldn’t bottleneck until well past 60 FPS at 1080p in the vast majority of games, if not all of them.

        • Ifalna
        • 3 years ago

        Depends on the game. In Witcher 3 I get around 70-80.
        In WoW and FF-XIV I easily dip down into the 4x range (only in select places with tons of players or when flying across the open world).
        GTA V was around 70 when I tested it unsynched.

    • AnotherReader
    • 3 years ago

    Even though it is late, I always wait for your frametime based reviews. It looks like the RX 480 is closer to the 1060 than was expected. Do you have any custom 480s in the pipeline? Looking at these results, I suspect that a custom 480 would be slightly faster than a custom 1060.

    • Chrispy_
    • 3 years ago

    Oh man, I clicked on the picture without reading the title and it took me a few seconds to realise this was totally unrelated to today’s official launch of the 1050 line.

    Still, it’s good to finally have the TR testing metrics and scatterplots applied to everything from the RX470 to the GTX1070.

    I hate to state the obvious, but I’ve been using [b<]other sites[/b<] for the last [i<]four months[/i<] to get updated GPU reviews on various 1060 comparisons, which is a bit of a shame because TR cut its teeth as the "best place on the web for CPU and GPU reviews". The term "best" is unfortunately less and less relevant when a sizeable chunk of any new product's annual lifespan has passed before TR publishes :'(

    The GTX 1060/RX 480 matchup has perhaps been the single most important mainstream launch pair in the last half decade, so the pain of writing the above paragraph is doubly saddening.

      • Jeff Kampman
      • 3 years ago

      I’m sorry it took so long to get this out, I really am. We have a GTX 1050 en route and I’m hoping to get it done much closer to launch than this review ended up being.

        • Kretschmer
        • 3 years ago

        I’m not at all critical of your timing (buying Rev1 products is a fool’s errand, after all), but would you be able to enlighten us with your review challenges? Can the community help out at all?

          • derFunkenstein
          • 3 years ago

          Back when the 1060 launched, Jeff said TR didn’t get a pre-release review unit. If the 1050 Ti is currently on the way, then that means nobody supplied them with a pre-release unit there, either. Not much the community can do about that, I’m afraid. :-/

          edit: except to lobby Nvidia and the company’s board partners to treat TR like the first-class review site that it is.

            • DragonDaddyBear
            • 3 years ago

            Thanks for reminding us that TR got stiffed at launch.

            • derFunkenstein
            • 3 years ago

            I only brought it up because Kretschmer asked. Another thing to remember is that TR did get the 480 and 470 reviews out on time (and the revised 470 results, too). And part of that, I’m sure, is due to the fact that the 1060 review was already post-launch so it was easier to let it slide and get these other reviews out at launch.

            Fact is, depending on how Vega does and what it looks like as a price/performance ratio, I’d be interested in buying AMD in part due to the fact they haven’t spent the last 10 months ignoring TR’s existence.

            • DragonDaddyBear
            • 3 years ago

            It helps to have an insider 😉

            • synthtel2
            • 3 years ago

            [quote<]Fact is, depending on how Vega does and what it looks like as a price/performance ratio, I'd be interested in buying AMD in part due to the fact they haven't spent the last 10 months ignoring TR's existence.[/quote<] Hear, hear. AMD + Linux would result in notably worse gaming performance for me, but Nvidia's history of obnoxiousness just keeps expanding, to the point that I'd consider paying a lot more for a given performance level just so my money doesn't go to Nvidia.

        • Chrispy_
        • 3 years ago

        No need to apologise to us, I’m always happy to read content here no matter the timing and this is yet another solid and high-quality review.

        I just can’t help feeling that whatever the site’s troubles are, reader numbers = ads served = finance to keep the site running. As sympathetic and patient as existing readers likely are, new readers are attracted by big news on big news days. Google is still a popularity-based search engine, and if TR’s articles aren’t getting hits (because they haven’t been published yet), the site is also going to vanish even further down the list of places potential new readers look for news and articles.

        No doubt all TR staff are acutely aware of this, and I’m restating the obvious, but if there isn’t a long-term plan in place to ensure launch-day (or at least launch-week) articles, the younger generations with their seemingly minuscule attention spans will just click on past TR.

        Sadly, this worsens the existing financial difficulties facing all ad-based media and can [i<]only[/i<] result in one of two horrible fates that I don't want to see befall a site like TR - one that has contributed to the PC/tech demographic so much more than most on the web.

          • Jeff Kampman
          • 3 years ago

          Not to diminish your well-argued points, but all the youngins are on the YouTubes anyway.

            • Chrispy_
            • 3 years ago

            Tis true. They will be the generation that can’t read 🙁

            Old farts like us still use Google too though, we’re not all using Bing on our IE6 kerputer web navigators!

            • drfish
            • 3 years ago

            @ lEst dey rED txts

            • synthtel2
            • 3 years ago

            I haven’t the slightest what my YouTube-favoring peers are on about, though it’s not the first time I’ve disagreed with much of my generation. Keep up the good work! 🙂

            • Ninjitsu
            • 3 years ago

            The one or two odd YT “reviews” i’ve ventured to watch were absolutely nonsense.

            • Anovoca
            • 3 years ago

            Indeed. Linus used to be great for cases and unboxings, but then he started getting that YT cash and started pushing out plop as fast as he could film it. The only clips I can find now that even qualify as a review and not free advertisement are HardwareCanucks’.

            • Anovoca
            • 3 years ago

            Beat them at their own game: live-streamed unboxings with celebrity Twitch streamers as guests to benchmark the games.

            • DragonDaddyBear
            • 3 years ago

            So, you’re saying there’s a chance of a TechReport YouTube channel?

            • llisandro
            • 3 years ago

            How about a video of the static price/performance graph with yakety sax playing in the background?

            • cygnus1
            • 3 years ago

            Ha! I would hijack every chromecast I came across and put that up.

            • ColeLT1
            • 3 years ago

            There is one:

            [url<]https://www.youtube.com/channel/UC9RfLrITn5n2XPKRaWgZXaA[/url<]

            • Mr Bill
            • 3 years ago

            If TR had a YouTubes channel, NVIDIA would give us a review copy. TR could immediately release a blabbity blab video review and then get it into the lab for a more thoughtful point/counterpoint review.

            • DragonDaddyBear
            • 3 years ago

            I was mostly being sarcastic. Unfortunately, you have to go where the people are.

    • deruberhanyok
    • 3 years ago

    I bought one of the EVGA 3GB 1060 SC cards when they launched for use in a system I have hooked up to an HDTV, so I wasn’t concerned about resolutions higher than “2k”. It’s smooth as silk for 1080p gaming.

    At the time, the only 6GB 1060s I could find were selling for over $300. It didn't make any sense to spend $100 more than the 3GB card, and looking at these numbers, [i<]on a system that doesn't go higher than 1080p[/i<], I don't know that it would even be worth the $50 difference now that prices have stabilized. Going to 1440p, yeah, there it would make sense.

    I also have a 4GB RX 470 in another system which is also running a 1080p display, and I'm very happy with that card as well. Really good time to be looking for a $200-$250 video card; plenty of good options.

    Thanks for spending all the time on this article, Jeff, all of the extra API testing and the wide range of cards used was really great. 🙂

    • chµck
    • 3 years ago

    Looks like the 1060 is the price:performance king now. AMD will likely respond soon with a 480 price cut.
    Also, the 480 has a green bar in the Hitman DX11 average-FPS graph.

      • Voldenuit
      • 3 years ago

      480 and 1060 look pretty much on-par in price:performance. If anything, AMD seems to have edged closer in performance compared to the 10-15% performance lead the 1060 used to command at launch.

        • Chrispy_
        • 3 years ago

          Yep. I’m loving the “best API” chart too. DX12/Vulkan performance isn’t a big deal yet, but you have to remember that both the Sony and Microsoft consoles can take advantage of it, and they are the [i<]most profitable[/i<] platforms for game developers and therefore the most relevant for developers to focus on.

          • Kretschmer
          • 3 years ago

          Is Vulkan a no-go with Freesync or is that just a Doom quirk? I purposely sacrificed performance in Doom to remove tearing.

            • DoomGuy64
            • 3 years ago

            Works fine on my 390. Dunno how you’re using it, but you have to match your desktop/game refresh rate with the top freesync range or it won’t turn on. Freesync also must be enabled on your monitor, and in the control panel.

            • Kretschmer
            • 3 years ago

            Freesync worked for you in Doom *with Vulkan*? It works like a charm under OpenGL, but Vulkan would not utilize VRR. It’s a shame, as Vulkan performance was great!

            (Perfect name for a response, by the by.)

            • DoomGuy64
            • 3 years ago

            OK, I’ve found your problem. But first let me state the obvious. GIYF. Nobody else is having this issue, and it is not a bug. The issue is 100% PEBKAC, and you should have done your homework before falsely claiming it doesn’t work.

            If you want to enable adaptive sync, the option must be enabled in the game itself in addition to the monitor, control panel, and desktop refresh rate.

            The option is under:
            VIDEO
            Vertical Sync: Adaptive

            Enable the option in the game, and it will work perfectly fine with Vulkan.

    • southrncomfortjm
    • 3 years ago

    I’m running Gears of War 4 on medium settings with my GTX 760 2GB card and it runs at a pretty solid 60fps. That said, it would be nice to run it at Ultra and 60fps since the textures are a bit muddy and uninteresting at medium. Makes me feel like I’m playing on a console again. The limiting factor really seems to be the VRAM, so a step up to a 1060 6gb may be in order.

    Thanks for the review!

      • Ninjitsu
      • 3 years ago

      Yeah, VRAM is a funny thing. It seemed unimportant at one point in time, but now I realise even a mid range GPU with lots of VRAM can make things look much better simply because of being able to hold higher res textures and shadows.

    • DPete27
    • 3 years ago

    The Hitman Average FPS bar graphs are miscolored according to vendor. Just an FYI.

    • chuckula
    • 3 years ago

    [quote<]The stubby coolers on the pair of EVGA 1060s we tested seem up to the task of keeping them within reasonable temperature ranges.[/quote<]

    As an owner of an EVGA GTX 1080 that's basically a bigger brother to these cards, I note that at least for my card, the default fan profile leaves the fans completely turned off unless & until the GPU hits 60C*. Above that temperature the fans gradually spin up and appear to target a temperature of about 75C while still remaining pretty quiet.

    If these cards are using a similar fan profile, that should be taken into account with this review.

    * Basically, normal non-3D use, including a composited desktop and even accelerated video playback, doesn't require the fans to turn on. Kick off a game, 3D design application, or 3D benchmark and they do fire off, though.

      • DPete27
      • 3 years ago

      My MSI RX 480 does the same thing. I haven’t had it long enough to put it through its paces, but even while folding (100% CPU & GPU usage) the fans hover between 20% and 30%, and temps seem to target 70C (+/- 5C). Granted, I’m still running on a 1080p monitor, but I’d be willing to bet that even at 1440p, the fans could stay off for some less demanding games/settings.
