Nvidia is on quite the roll this year. The GeForce GTX 1070 and GeForce GTX 1080 remain the uncontested performance champions of the high-end graphics card market, thanks in part to AMD’s more mainstream ambitions for its Polaris-powered graphics cards. If that dominance wasn’t enough, Nvidia did itself one better and advanced its single-GPU performance lead with the GP102 chip in the Pascal Titan X. Y’know, just because.
Of course, the green team didn’t ignore the average Joe while it was busy pushing the limits of graphics performance. Back in July, Nvidia introduced the GeForce GTX 1060, its response to the Radeon RX 480 8GB. The $250 GTX 1060 was the first card to play host to a more wallet-friendly Pascal GPU: GP106. The GTX 1060 6GB, as we now know it, immediately went to work against the hard-to-get RX 480 with a slew of readily available aftermarket cards that stickered near Nvidia’s $250 suggested price tag. The mainstream onslaught didn’t stop there, however. A couple of weeks later, Team Green took the wraps off a GTX 1060 with 3GB of RAM that rang in at $200.
The GP106 GPU.
While the GTX 1060 3GB’s name might imply a simple halving of its RAM versus its bigger brother, there’s more going on under the hood of that card than its innocuous name might suggest. The GTX 1060 3GB sustained some cuts to its graphics-processing resources to hit its price target. Nvidia disabled one of the card’s streaming multiprocessor (SM) blocks, dropping the GTX 1060 3GB’s resource allocation to 1152 stream processors and 72 texture units. Contrast that approach with AMD’s Radeon RX 480 4GB, whose only difference from its 8GB cousin is the missing 4GB of RAM. Here’s how the two “GTX 1060s” compare on paper, in convenient tabular form:
| |Base clock (MHz)|Boost clock (MHz)|ROPs|Texture units (int8/fp16)|Peak FP32 (TFLOPS)|Stream processors|Memory bus (bits)|Memory speed (GT/s)|Memory bandwidth (GB/s)|TDP|
|GTX 1060 3GB|1506|1708|48|72/72|3.9|1152|192|8.0|192|120W|
|GTX 1060 6GB|1506|1708|48|80/80|4.4|1280|192|8.0|192|120W|
A block diagram of the GP106 GPU. Source: Nvidia
Nvidia has played this kind of name game with its cards before. Recall that the GeForce GTX 460 came in 768MB and 1GB flavors. Despite the identical name on the box, the lesser GTX 460 was down eight ROPs and had a narrower path to memory than its better-endowed counterpart. We complained about that false equivalency then, and we’re complaining about it now. AMD isn’t ashamed of putting a smaller number on its cut-down Polaris 10 card, the Radeon RX 470, and we don’t think calling the GTX 1060 3GB… well, anything other than a GTX 1060 would have hurt its perception in the marketplace that much.
Thanks to that questionable naming scheme, the uninformed builder picking one of these cards off the shelf probably won’t notice that more is missing from the GTX 1060 3GB than 3GB of RAM—assuming those specs are clearly spelled out on the box at all. In the case of the EVGA cards we have on hand, we found no mention of stream processor or texturing unit counts on the cards’ packaging. We think that Nvidia’s board partners should be more upfront about what buyers are getting if there’s as substantial a difference between cards as there is between these two, even if that information is a little arcane.
| |Peak pixel fill (Gpixels/s)|Peak texel filtering int8/fp16 (Gtexels/s)|Peak FP32 (TFLOPS)|Peak rasterization (Gtris/s)|Memory bandwidth (GB/s)|
|GeForce GTX 1060 3GB|82|123/123|3.9|3.4|192|
|GeForce GTX 1060 6GB|82|137/137|4.4|3.4|192|
|GeForce GTX 960|38|75/75|2.4|2.5|112|
|GeForce GTX 970|61|130/130|3.9|4.7|224|
|GeForce GTX 980|78|156/156|5.3|5.0|224|
|GeForce GTX 1070|108|202/202|7.0|5.0|259|
Some quick math shows that a full-fat GP106 chip has slightly more raw pixel throughput and slightly less texturing muscle than the GTX 980. It’s also slightly less capable than GM204 in sheer number-crunching power and memory bandwidth, although the Pascal architecture’s improved delta-color-compression facility might help make up some of that gap. Of course, both GTX 1060s utterly wipe the floor with the GM206 chip that powered the GTX 960 and the GTX 950. It’s a testament to the power of Pascal that we’re comparing the $250 GTX 1060 6GB to cards that used to cost $350 to $500-ish.
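The “quick math” behind those tables is easy to reproduce: peak rates scale with boost clock times the relevant unit count, and FP32 throughput counts two operations (a fused multiply-add) per shader per clock. Here’s a sketch in Python using the GTX 1060 3GB’s published figures (rounded as in the tables above):

```python
# Back-of-the-envelope peak-rate math for a Pascal-style GPU.
# Each peak rate is boost clock (GHz) times the relevant unit count;
# FP32 throughput counts two FLOPs (one FMA) per ALU per clock.

def peak_rates(boost_mhz, rops, tex_units, shaders, bus_bits, mem_gts):
    ghz = boost_mhz / 1000
    return {
        "pixel_fill_gpix_s": rops * ghz,           # ROPs x clock
        "texel_fill_gtex_s": tex_units * ghz,      # texture units x clock
        "fp32_tflops": shaders * ghz * 2 / 1000,   # 2 FLOPs (FMA) per ALU
        "bandwidth_gb_s": bus_bits / 8 * mem_gts,  # bus width x transfer rate
    }

# GTX 1060 3GB: 1708-MHz boost, 48 ROPs, 72 texture units,
# 1152 shaders, 192-bit bus, 8 GT/s GDDR5
print(peak_rates(1708, 48, 72, 1152, 192, 8.0))
# ~82 Gpixels/s, ~123 Gtexels/s, ~3.9 TFLOPS, 192 GB/s
```

Plug in the 6GB card’s 80 texture units and 1280 shaders and the same math lands on its 137 Gtexels/s and 4.4 TFLOPS figures.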
In another potentially controversial move, Nvidia removed SLI support from both GTX 1060s. At least one set of SLI fingers has been available on every GeForce card in recent memory except for the GTX 750 Ti and below, so this move marks a new era for the spec sheets of budget-friendly GeForces. Some DirectX 12 multi-adapter modes might let gamers harness multiple GTX 1060s in the future, but the option is no longer available in DirectX 11 titles, full stop.
In recent years, we’ve suggested that gamers get the best single graphics card they can afford for the most consistent and smoothest possible performance in games, so we’re not bothered much by this move. Folks willing to tolerate SLI’s inconsistent performance scaling and potential frame-pacing issues will be disappointed by this omission, however, especially considering that the price for a pair of GTX 1060 6GB cards ends up somewhere in between a GTX 1070 and a GTX 1080. If the GTX 1060 6GB delivers on Nvidia’s promise of GTX 980-class performance, a pair of those cards might have approached a GTX 1080 in raw speed, so it’s not hard to imagine why the green team made this choice. Those brave folks willing to pair multiple budget graphics cards for a potential performance boost will need to stick with Radeons for now.
Now that we have the lay of the land for the GTX 1060, let’s see how EVGA has chosen to put the chip to work on a pair of its graphics cards.
EVGA’s diminutive GTX 1060 duo
The GTX 1060’s modest TDP means that monster dual-fan coolers aren’t needed to keep GP106 in check. Nvidia’s board partners have released a number of compact, single-fan versions of the GTX 1060 alongside the usual barrage of dual- and triple-fan beasts. EVGA kindly sent over one of its $260 GTX 1060 6GB SC Gaming cards when we began shaking the trees, but we weren’t as lucky securing a 3GB card for this review. Eventually, we threw in the towel and picked up the logical counterpart to our 6GB test subject from retail: the $210 EVGA GTX 1060 3GB SC Gaming.
Outwardly, these cards seem identical. They use the same cooler, the same 6.8″-long PCB, and the same six-pin power connector. You’d have a hard time telling them apart without squinting at their labels. Just because these cards are tiny doesn’t mean they’re cheaply made, though. A look under the understated plastic shroud of each card reveals a dense aluminum fin array and plenty of copper making contact with the GP106 chip itself. That’s reassuring given EVGA’s factory clock speed boosts over Nvidia’s reference numbers. Here’s a full rundown of each card’s specs compared to the GTX 1060 Founders Edition:
| |Base clock|Boost clock|Memory|Memory speed|Power input|TDP|Price|
|EVGA GeForce GTX 1060 3GB SC Gaming|1607 MHz|1835 MHz|3GB GDDR5|8 GT/s|1x 6-pin|120W|$209.99|
|EVGA GeForce GTX 1060 6GB SC Gaming|1607 MHz|1835 MHz|6GB GDDR5|8 GT/s|1x 6-pin|120W|$259.99|
|GeForce GTX 1060 Founders Edition|1506 MHz|1683 MHz|6GB GDDR5|8 GT/s|1x 6-pin|120W|$299.99|
Four screws are mercifully all that stands in the way of removing these cards’ heatsinks. Once those screws are out and the single four-pin fan connector is unplugged, the heatsink lifts off to reveal a neat application of thermal paste on a copper contact plate. Two beefy copper heatpipes run over this plate and into the aluminum fin array above. Simple, clean, and effective. EVGA’s engineers didn’t include a contact plate for cooling either card’s voltage regulators or memory chips, but the blow-down fan should keep enough air moving over those critical components to make sure that design choice isn’t an issue.
Not everything about these cards is the same, though. Once we started testing this duo, we noticed that our GTX 1060 6GB card ran considerably quieter than the 3GB version under load. It seems our 6GB card came flashed with EVGA’s “silent” firmware, while the 3GB card we grabbed off the shelf wasn’t so lucky. EVGA used to offer this special firmware to owners of its cards on a case-by-case basis, but no longer. Seeing as how one can set custom fan curves for either card in EVGA’s PrecisionX OC software, that’s probably not a big deal. We didn’t perform any such tweaking before testing either card, however, so the noise and thermal results you see in this review represent straight-from-the-factory performance.
Our testing methods
As always, we did our best to deliver clean benchmarking runs. We ran each of our test cycles three times on each graphics card tested, and our final numbers incorporate the median of those results. Aside from each vendor’s graphics drivers, our test system remained in the same configuration throughout the entire test.
|Processor||Intel Core i7-6700K|
|Motherboard||ASRock Z170 Extreme7+|
|Memory size||16GB (2 DIMMs)|
|Memory type||G.Skill DDR4-3000 (2x 8GB)|
|Chipset drivers||Intel Management Engine, Intel Rapid Storage Technology|
|Audio||Integrated Z170/Realtek ALC1150 (Realtek drivers)|
|Storage||Two Kingston HyperX 480GB SSDs|
|Power supply||SeaSonic SS-660XP2|
|OS||Windows 10 Pro with Anniversary Update|
Our thanks to ASRock, G.Skill, Kingston, and Intel for their contributions to our test system, and to EVGA, MSI, AMD, and XFX for contributing the graphics cards we’re reviewing today.
| |Driver revision|GPU base clock (MHz)|GPU boost clock (MHz)|Memory clock (MHz)|Memory size (MB)|
|XFX Radeon RX 470 RS 4GB|Radeon Software 16.10.1|–|1256|1750|4096|
|Radeon RX 480 8GB|Radeon Software 16.10.1|1120|1266|2000|8192|
|Asus Strix Radeon R9 Fury|Radeon Software 16.10.1|–|1000|500|4096|
|AMD Radeon R9 Fury X|Radeon Software 16.10.1|–|1050|500|4096|
|MSI GeForce GTX 970 Gaming 4G|GeForce 373.06|1114|1253|1753|4096|
|MSI GeForce GTX 980 Gaming 4G|GeForce 373.06|1190|1291|1753|4096|
|MSI GeForce GTX 1070 Gaming Z 8G|GeForce 373.06|1632|1835|2027|8192|
|EVGA GeForce GTX 1060 3GB SC Gaming|GeForce 373.06|1607|1835|2000|3072|
|EVGA GeForce GTX 1060 6GB SC Gaming|GeForce 373.06|1607|1835|2000|6144|
For our “Inside the Second” benchmarking techniques, we now use a software utility called PresentMon to collect frame-time data from DirectX 11, DirectX 12, OpenGL, and Vulkan games alike. We sometimes use a more advanced tool called FCAT to capture exactly when frames arrive at the display, but our testing has shown that it’s not usually necessary to use this tool in order to generate good results for single-GPU setups.
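For the curious, here’s a minimal sketch of how per-frame present intervals like the ones PresentMon logs turn into our two headline numbers, the average FPS and the 99th-percentile frame time. The frame times below are made up for illustration; PresentMon itself simply records how long each frame took:

```python
# Sketch: deriving average FPS and the 99th-percentile frame time
# from a list of per-frame render times in milliseconds, as a tool
# like PresentMon would log them. Frame times here are invented.

def summarize(frame_times_ms):
    total_s = sum(frame_times_ms) / 1000
    avg_fps = len(frame_times_ms) / total_s
    # 99th-percentile frame time: the time that 99% of frames beat.
    ordered = sorted(frame_times_ms)
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return avg_fps, ordered[idx]

times = [16.2, 15.8, 17.1, 16.5, 33.0, 16.0, 16.4, 15.9, 16.1, 16.3]
avg, p99 = summarize(times)
print(round(avg, 1), p99)  # ~55.8 FPS average, 33.0-ms 99th percentile
```

Note how a single slow frame barely dents the average but dominates the 99th-percentile figure, which is exactly why we lean on the latter for smoothness.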
You’ll note that aside from the Radeon RX 480 and Radeon R9 Fury X, our test card stable is made up of non-reference designs with boosted clock speeds and beefy coolers. Many readers have called us out on this practice in the past for some reason, so we want to be upfront about it here. We bench non-reference cards because we feel they provide the best real-world representation of performance for the graphics card in question. They’re the type of cards we recommend in our System Guides, and we think they provide the most relatable performance numbers for our reader base. When we mention a “GTX 1060” or “Radeon RX 470” in our review, for example, just be sure to remember that we’re referring to the custom cards in the table above.
With that exposition out of the way, let’s talk results.
Doom
id Software’s 2016 Doom revival is a blast to play, and it’s also plenty capable of putting the hurt on today’s graphics cards. We selected the game’s Ultra preset with a couple of tweaks and dialed up the resolution to 2560×1440 to try and figure out whether any of the graphics cards on the bench had made a deal with the devil.
Both GTX 1060s are off to a solid start under Doom‘s OpenGL renderer. The GTX 1060 6GB is just a hair off the GTX 980 in average FPS, and its 99th-percentile frame time is only a bit higher than the fully-enabled GM204 card’s. Meanwhile, the GTX 1060 3GB turns in results indistinguishable from the GeForce GTX 970. As we’ve come to expect, however, the Radeons fare poorly with Doom‘s OpenGL mode—the R9 Fury can’t even best the GeForce GTX 1060 3GB, and the R9 Fury X likewise can’t get past the GTX 1060 6GB.
And no, the GTX 1070’s performance above is no mistake. It’s just that much swifter than everything else here. We had to double-check, too.
These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. And since it matters for Doom, 8.3 ms works out to 120 FPS, a figure that folks with high-refresh-rate monitors will want to be hitting more often than not.
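Those thresholds all come from the same reciprocal relationship: the frame-time budget in milliseconds is 1000 divided by the target frame rate. Here’s a sketch of that conversion along with one reasonable reading of the “time spent beyond X” tally, which accumulates the time spent past the threshold on every frame that blows its budget (the frame times are made up):

```python
# A frame-time budget is just the reciprocal of the target frame
# rate: 1000 ms / target FPS. The "time spent beyond X" tally then
# accumulates, for each frame slower than the threshold, the time
# spent past it. Frame times below are invented for illustration.

def ms_threshold(target_fps):
    return 1000 / target_fps

def time_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

print(round(ms_threshold(60), 1))   # 16.7 ms
print(round(ms_threshold(120), 1))  # 8.3 ms
```

A card that renders every frame inside the budget scores a clean zero at that threshold, which is what you want to see.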
None of the cards in our test spend any time beyond 50 ms or 33 ms, so the most meaningful result is to consider how much time they spend working on tough frames that would drop frame rates below 60 FPS. By this measure, only the Radeon R9 Fury, the RX 480, and the RX 470 spend noticeable amounts of time under 60 FPS. The GeForces all turn in fine performances here, even at 2560×1440 with settings cranked to ultra. The GTX 1070, overachiever that it is, only spends eight seconds of our one-minute test run past 8.3 ms, too. It’s clear from our results that OpenGL is not the ideal API for Radeons, however, so let’s turn the tables and see how our competitors perform with Doom‘s Vulkan implementation.
As we’ve come to expect, Doom‘s Vulkan renderer benefits Radeons and hobbles GeForces a bit. The R9 Fury X rockets into second place in both our average-FPS and 99th-percentile frame-time measures, followed closely by the R9 Fury. Both GTX 1060s fall toward the back of the pack, but the GTX 1060 3GB card takes the switch especially hard. Its average frame rate drops behind the GTX 970’s, and its 99th-percentile frame time is also significantly worse than the Maxwell card’s.
The 3GB card also happens to have the smallest amount of RAM here. We can’t conclusively say that the size of the GTX 1060’s 3GB memory pool is the cause of this larger-than-expected performance drop, but it’s the only explanation that seems to make sense. That said, the GTX 1060 3GB’s frame-time plot doesn’t exhibit any large or repeated frame-time spikes, so its performance at least degrades in a way that doesn’t harm the user experience much. Of course, the real answer here is to stick with OpenGL if you’re a GeForce owner.
Another sign that the GTX 1060 3GB might be struggling because of the amount of RAM onboard is the relatively large amount of time it spends past 16.7 ms in our measures of “badness.” The GTX 1060 6GB has no such issues—it spends under a second working on similarly-challenging frames. Even the GTX 970 and its unusual memory configuration fare better. Regardless, the numbers above tell a simple story: GeForce owners shouldn’t bother with Vulkan if they want maximum Doom performance, but the Radeon faithful should absolutely enable it.
Rise of the Tomb Raider (DirectX 11)
All of the cards we tested end up delivering relatively similar performance in Rise of the Tomb Raider‘s DX11 mode, save the GeForce GTX 1070 at the front of the pack and the RX 470 at the rear. Strangely, the Radeon R9 Fury and R9 Fury X deliver practically the same average frame rate and 99th-percentile frame times, despite the Fury X’s resource advantage. Still, none of the cards’ frame-time plots offer any reason for concern. Rise of the Tomb Raider just happens to be a demanding game, and both average frame rates and the 99th-percentile frame times we collected attest to that fact.
Our time-spent-beyond-X graphs offer a bit more insight into the tightly-packed results above. The GTX 1070 almost never drops below 60 FPS, of course. Both Radeon R9 Furies turn in respectable results, as well. The GTX 1060 6GB offers a smoother gaming experience than the RX 480, and the GTX 1060 3GB spends about five fewer seconds than the RX 470 on difficult scenes that would drop frame rates below the 60-FPS threshold. Let’s see if a switch to RoTR‘s DirectX 12 renderer puts some space between these cards.
Rise of the Tomb Raider (DirectX 12)
The move to DirectX 12 with this version of Rise of the Tomb Raider and the latest drivers from AMD and Nvidia actually helps some cards for the first time we can recall. The R9 Fury and R9 Fury X both get small FPS boosts from the API switch, though the other Radeons aren’t as fortunate. Meanwhile, the GeForce cards either regress slightly or see minor improvements, depending on the card. The differences are quite small either way, though. 99th-percentile frame times barely change between APIs, and whether a given card improves or worsens by this measure largely seems to be a crapshoot.
Our “time-spent-beyond-X” graphs do suggest definite improvements in smoothness for the Radeon R9 Fury and R9 Fury X. Both of those cards spend less time working on difficult frames in DX12 mode than they do in DX11. As we’d expect from the results above, the other Radeons exhibit either no change in performance or a slight worsening. The GeForce cards, including the GeForce GTX 1060s, generally perform slightly worse than they do in DX11. Once again, switching DX12 on probably isn’t a good idea for owners of the green team’s cards.
Hitman (DirectX 11)
Among the games we tested for this review, Hitman (along with Doom) will lock out certain graphics settings if a card doesn’t have enough memory. We threw caution to the wind and disabled Hitman‘s safeguards, since they prevented us from testing the game with our chosen settings on the GTX 1060 3GB. We’ll be scrutinizing the GTX 1060 3GB’s performance in this title to see whether performance degrades if one disobeys the game’s advice.
Going by the results above, flipping Hitman‘s safeguards off produces performance results similar to those we saw with Doom‘s Vulkan renderer. The GTX 1060 3GB doesn’t exhibit any untoward spikiness in its frame-time plot, and nothing felt amiss with the card during our test run, but its performance does trail that of the cards with 4GB or more of memory. The GTX 1060 6GB’s extra muscle and extra memory seem to give it a substantial performance advantage in both our average-FPS and 99th-percentile frame-time measures. At least these results add credence to the idea that when the GTX 1060 3GB does run into its memory limits, the resulting performance degradation isn’t catastrophic—things just run slower.
A dive into our “time-spent-beyond-X” charts helps characterize the GTX 1060 3GB’s apparent slowdown. The GTX 1060 6GB spends about sixteen seconds working on frames that take longer than 16.7 ms to render in our one-minute test run, and the GTX 1060 3GB spends about eight seconds more. It’s important to note, however, that the 3GB card only spends an imperceptible amount of time past 33.3 ms, and it doesn’t get knotted up with any frames that take longer than 50 ms to render. If you choose to push the GTX 1060 3GB past its limits, it seems the only punishment for that willfulness will be reduced frame rates, not a stuttery mess.
Hitman (DirectX 12)
You should know what’s in store from Hitman‘s DX12 renderer by now. The Radeons in our test suite see improvements in performance, while the GeForces fall back. Interestingly, the GeForce GTX 970 joins the GTX 1060 3GB at the back of the pack.
Looking at our “time spent beyond” numbers, both the GTX 970 and the GTX 1060 3GB spend considerably more time on frames that take more than 16.7 ms to render than the rest of the cards in our suite. Perhaps not coincidentally, these two cards have the most unusual memory configurations of the bunch. Regardless, the story here remains the same as it has over the last few pages: if you own a GeForce card, DX11 and OpenGL are your friends. If you have a Radeon, you should enable DirectX 12. Moving on.
Crysis 3
Here’s a good old DirectX 11 bone for our cards to chew on. Surprisingly, the Radeon R9 Fury and R9 Fury X shadow the GeForce GTX 1070, while the Polaris cards trail the Pascal competition a bit. Crysis 3 doesn’t seem to perturb the GTX 1060 3GB, though—both it and the GTX 1060 6GB deliver solid FPS averages and 99th-percentile frame times.
Crysis 3 also continues our test suite’s run of general smoothness. The 16.7-ms threshold is the only one where any of our cards spend any time of note working on tough frames. Surprisingly, the R9 Furies deliver a much smoother experience than anything save the GTX 1070 here. Meanwhile, the GTX 1060 3GB spends about 14 seconds working on those tough frames, while the GTX 1060 6GB spends about 12. The RX 480 8GB is on par with the GTX 970, and the RX 470 is about five seconds further in the hole.
Far Cry 4
Far Cry 4 is another demanding DirectX 11 classic. Here, the Radeon RX 480 pulls even with the GTX 1060 6GB, but the RX 470 can’t catch the GTX 1060 3GB. The Radeon R9 Furies turn in a surprising performance, too.
Discounting a tiny handful of difficult frames that cause our cards to spend a few milliseconds beyond 33.3 ms, the Radeon RX 480 actually beats out all of the GeForce competition in its price range when we consider the 16.7-ms threshold. The GTX 1060 6GB is close behind, though, and the GTX 1060 3GB turns in a better result than both the GTX 970 and the RX 470.
Deus Ex: Mankind Divided (DirectX 11)
Deus Ex: Mankind Divided is a thoroughly modern title with complex lighting effects, highly detailed textures, and multiple rendering paths. It’s a challenging game for any card to run well.
In Deus Ex‘s DX11 mode, the RX 480 8GB and the GTX 1060 6GB end up just one frame per second apart in that measure of performance potential. Likewise with the GTX 1060 3GB and the RX 470. Even in those close quarters, the RX 480 delivers a better 99th-percentile frame time than the GTX 1060 6GB. The RX 470 and the GTX 1060 3GB are about as smooth by this measure.
Deus Ex has the dubious honor of causing some of our cards to put meaningful numbers on our time-spent-beyond-50-ms charts for the first time in this review. For some reason, some of the Radeons exhibit a major hitch near the beginning of our test run. At least it’s the only place the cards fall victim to that kind of lag.
We’re mostly interested in the time-spent-beyond-16.7-ms mark, where the RX 480 8GB and the GeForce GTX 1060 are neck-and-neck. The RX 470 secures a substantial victory over the GTX 1060 3GB here, though. Like we saw with Hitman‘s DX11 mode, it seems Deus Ex isn’t kidding when it asks for 4GB of video RAM to work with at 2560×1440—the GTX 1060 3GB and (surprisingly) the GTX 970 spend quite a bit more time than the rest of our test suite on frames that take more than 16.7 ms to render.
Deus Ex: Mankind Divided (DirectX 12)
Despite some noticeable improvements in smoothness since we first examined its preview, Deus Ex‘s DX12 mode still doesn’t match its DirectX 11 render path for smooth gameplay. If we ignore the frequent spikes in the frame-time plots above, the story remains much the same as it has in every head-to-head DirectX 12 test in this review. Radeons advance, and GeForces fall back. The GeForce GTX 1060 3GB suffers even more here than it did under DirectX 11 mode, and the GTX 970 fares even worse.
Deus Ex‘s DX12 mode eradicates the major hang we saw with Radeons under DX11, but our “badness” graphs collect lots of the spikiness from our DX12 results in exchange at 50 ms and 33.3 ms. Interestingly, the GeForce GTX 1060 6GB spends less time past 33.3 ms than the GTX 980 does, possibly thanks to its 6GB of RAM.
Move the goalposts to the 16.7 ms threshold, though, and even that 6GB of memory doesn’t seem to help. The Radeon RX 480 8GB leads the midrange pack, while the GTX 980, RX 470, and GTX 1060 6GB all spend about four more seconds working on tough frames here than the RX 480. At the rear, the GTX 1060 3GB and the GTX 970 struggle mightily. The HBM-equipped Furies have a much better time of things than even the RX 480 8GB, though, suggesting that there may be more to this story than memory capacity alone. Still, our accumulating advice about API choice still holds.
The Witcher 3
Back to the DirectX 11 classics. In The Witcher 3, both GTX 1060s perform admirably, slotting right in between the GTX 980 above and the RX 480 8GB below. The RX 470 falls to the rear of the pack in both our average-FPS and 99th-percentile frame-time metrics, and the GTX 1070 extends its freakish lead. In this apparently non-VRAM-limited title, we can see that the GTX 1060 6GB and GTX 1060 3GB are pretty closely matched despite the 3GB card’s spec-sheet deficits.
Happily, none of the cards put any meaningful time on the board past our 50-ms and 33.3-ms thresholds, so we can look at the critical 16.7-ms mark straight away. Here, the GTX 1060 duo ends up mid-pack, besting the Polaris-powered Radeon competition.
Gears of War 4
The just-released Gears of War 4 offers an intriguing way to examine DirectX 12 performance without the influence of either major graphics-card vendor clouding the proceedings. This game comes straight from Microsoft Studios, and it doesn’t offer a DirectX 11 mode to fall back on. Gamers need Windows 10 to make Gears turn, too, so it’s a thoroughly modern title. We couldn’t capture video of our test run, but we chose a city environment early in the game to see how Gears runs.
With the rather extreme group of settings we chose, Gears of War 4 doesn’t seem to favor one particular GPU vendor or architecture over the other. Gears delivers a clean set of frame-time plots, as well. The Fiji-powered Radeons lead everything except for the GTX 1070 in average-FPS performance, and in turn, the GTX 980 and the GTX 1060 duo pull slightly ahead of the Polaris-powered Radeon cards. Each card matches its average-FPS number with a reasonably solid 99th-percentile frame time.
Even with this DirectX 12 title, we can happily skip right to our time-spent-beyond-16.7-ms results. The GTX 1060 6GB spends considerably less time churning on tough frames than its 3GB counterpart, and both cards lead the Polaris-powered competition for fluid gameplay if maximizing the time spent at or above 60 FPS is your goal. Open and shut.
High FPS averages and consistently low frame times don’t mean much if a graphics card sounds like a tornado while churning them out. We used the Faber Acoustical SoundMeter app running on an iPhone 6S Plus to measure the noise levels of each graphics card on our test bench from a distance of 18″ (45.7 cm). The noise floor in our testing environment is 31.1 dBA. We tested each card at idle using the Windows desktop and under load with our Doom test area.
At idle, only the RX 480 8GB reference card and the Radeon R9 Fury X make any noise—all of our other test subjects have semi-silent modes that allow them to turn off their fans. The Fury X card we have on hand still makes an annoying, prominent whine that’s not accounted for in these graphs, though. That piercing sound spoils an otherwise excellent performance.
Under our Doom load, the GTX 1060 6GB and the MSI GTX 1070 card add barely any noise to the ambient levels of our testing environment. Given the levels of performance on tap from each card, I’m over the moon with these results. The GTX 1060 3GB card is still quiet, but its more-aggressive fan profile does lead to some slight-but-noticeable fan noise under load. The Fury X and the Maxwell cards we’re working with are all about as loud, and only the R9 Fury, the RX 480 8GB, and the XFX Radeon RX 470 truly make themselves known while running all out.
At idle, the GTX 1060 cards both draw very little power, but the differences on display here aren’t that drastic—there’s only a 17W delta between the most- and least-power-hungry cards here at idle. Still, the GTX 1060s draw just that little bit less power than their Polaris Radeon competition.
Fire up Doom in all its glory, though, and the differences between process technologies and GPU architectures become much more evident. The GTX 1060s draw 35-44W less than the Radeon RX cards under load. Not much more to be said here.
The stubby coolers on the pair of EVGA 1060s we tested seem up to the task of keeping them within reasonable temperature ranges. Ambitious overclockers might want to spring for a GTX 1060 with a beefier cooler on board, but Mini-ITX and microATX builders should be thrilled with these cards’ blend of small size, performance, low noise levels, and power efficiency.
Before we share our thoughts on the GeForce GTX 1060s, it’s time for another round of TR’s famous value scatter plots, where we chart the performance each card delivers relative to its price. We’re offering up our data in three ways: DirectX 11 and OpenGL results only, DirectX 12 and Vulkan results only, and as a “best API” chart that pulls together the best performance numbers for each card from each game we tested. We’re presenting the “best API” results by default, since we think they offer the best picture of real-world performance. The curious can click around and see how each card did with each API, though. To make our higher-is-better presentation work with 99th-percentile frame times, we’ve converted those figures into average FPS.
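The conversion mentioned above is a simple reciprocal, sketched here so you can sanity-check the scatter plots yourself:

```python
# Converting a 99th-percentile frame time into an FPS-equivalent
# figure so it can share a higher-is-better axis with average FPS.
def ms_to_fps(frame_time_ms):
    return 1000 / frame_time_ms

print(round(ms_to_fps(24.3), 1))  # a 24.3-ms frame time ~ 41.2 FPS
```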
Going by our latency-sensitive 99th-percentile frame-time measure, the GeForce GTX 1060 3GB comes in slightly ahead of the Radeon RX 470 we have on hand, and it demands slightly less money for the privilege. Not bad, especially when one considers that the GTX 1060 3GB consumes about 35W less power to get there. The GTX 1060 6GB also delivers somewhat smoother gameplay than the Radeon RX 480 8GB, though reference RX 480s sell for slightly less than the EVGA GTX 1060 6GB we tested. Like its little brother, the GTX 1060 6GB contributes less to overall system power draw to do its thing—about 44W less than the RX 480 in our testing. Though the smoothness gap between Radeons and GeForces has tightened considerably in this generation, the green team still holds a slight edge.
Those 99th-percentile numbers are slightly disappointing to see, because it’s clear from our average-FPS-per-dollar measure that the performance potential of these cards isn’t all that different. The RX 480 8GB pulls even with the GTX 1060 6GB by this measure, and the RX 470 only slightly trails the GTX 1060 3GB. With some further polish on its drivers, AMD might be able to make the 99th-percentile graph above look even more like its average-FPS results.
All told, the GeForce GTX 1060 3GB is basically a $200 GeForce GTX 970 (or an even cheaper one, once the rebate on the EVGA card we tested is taken into account). That would be great news for this price point, save for the fact that it seems rather easy to overrun that 3GB of RAM with today’s games. Hitman locks out certain graphics settings when it detects less than 4GB of RAM to work with, and Deus Ex: Mankind Divided warns against using our test settings on cards with less than 4GB of RAM—seemingly with good reason, in both cases. The GTX 1060 3GB doesn’t become unusable in those situations, but its performance does drop, even if its frame delivery remains smooth.
Even then, we think it’s hard to pick a winner between the Radeon RX 470 and the GTX 1060 3GB from these results. We were pushing all of our cards to the limit at a 2560×1440 resolution. At those settings, the GTX 1060 3GB is still a faster, smoother card in general than the RX 470. What’s more, the vast majority of gamers still use 1920×1080 monitors, and we think it’ll be harder to run into the 3GB card’s limits there. The RX 470’s extra gig of RAM does seem to be useful today if 2560×1440 gaming is your thing, but we’re not sure that extra memory offsets the card’s higher power consumption and less-smooth delivered gameplay versus the GTX 1060 3GB.
A more concerning threat for the GTX 1060 3GB might be the $200-ish Radeon RX 480 4GB. That card is powered by a fully-enabled Polaris 10 chip. It’s been hard to find RX 480 4GB cards for AMD’s $200 suggested price of late, but Newegg actually has a couple such cards going for near $200 right now. We’d expect availability to improve as time goes on, too. If you really want to have some RAM in reserve for the future, the RX 480 4GB seems like the real foil for the GTX 1060 3GB.
We say as much because of the close race between the Radeon RX 480 8GB and the GTX 1060 6GB. Nvidia didn’t quite deliver a $250 GeForce GTX 980 with its better-endowed GTX 1060, but it came really close—and the RX 480 is right there with it. To emphasize how evenly matched these cards are, the GTX 980’s 52.8-FPS average is just 4% faster than the GTX 1060 6GB’s 50.7-FPS figure, and the GTX 980’s 23.1-ms 99th-percentile frame time is just a hair better than the GTX 1060 6GB card’s 24.3-ms result. Take the geometric mean of the RX 480’s results, and you get a 50.9-FPS average and a 25.2-ms 99th-percentile frame time, as well. I can’t put a sheet of paper between those numbers, really: these cards are all quite satisfying to game with.
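The geometric mean mentioned above is how we roll per-game results into a single figure, since it keeps one outlier title from dominating the summary. A minimal sketch, using made-up per-game FPS numbers rather than our actual results:

```python
import math

# Hypothetical per-game average FPS results for one card across a test suite.
fps_results = [62.0, 45.3, 58.1, 41.7, 49.9, 53.4]

# Geometric mean: the nth root of the product of n values,
# computed via logs to avoid overflow on long result lists.
geo_mean = math.exp(sum(math.log(x) for x in fps_results) / len(fps_results))
```

The same aggregation works for 99th-percentile frame times; the geometric mean always sits at or below the arithmetic mean, penalizing a card whose results are uneven across games.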
MSI GeForce GTX 1070 Gaming Z
The story doesn’t end with performance alone, though. The GTX 1060 6GB takes the all-around crown with its impressive efficiency and polite manners. Even with its stubby single-fan cooler, the EVGA GTX 1060 6GB SC is practically silent under load, and it draws much less power than the Radeon RX 480 while gaming. Neither of these cards is a power hog, to be fair, but this (or most any) GTX 1060 could easily slip into a tiny Mini-ITX PC without taxing a modest power supply or creating a racket. That’s a prospect that should elate all PC builders, but living-room gamers and dorm-room dwellers should be especially happy with this news. The EVGA GTX 1060 6GB SC card we tested embodies every advance we’ve been led to expect from the move to next-generation process technologies, and I’m happy to send it home with an Editor’s Choice award.
While we’re handing out trophies, MSI’s GeForce GTX 1070 Gaming Z card can also step forward. We already reviewed this particular card in-depth, but our initial review didn’t give it the full credit it deserves. If you’ve read even some of the preceding pages, you’ll know this card never stumbled once in our tests, and it never took anything other than first place for potential or delivered performance. Most impressively, the Gaming Z delivered that stellar performance without sucking power or making more than the barest peep under load. All those virtues make the GTX 1070 Gaming Z a superlative example of what’s possible with next-generation graphics cards, and I’m happy to extend it a TR Editor’s Choice award, too. (Be sure to check out MSI’s less blingy Gaming X card, as well.)
On another side note, AMD should take pride in the fact that its Radeon R9 Fury and R9 Fury X cards have more or less closed the smoothness gap with the GeForce competition from years past. Take The Witcher 3, for example. Dial in the same settings we used for that game in our initial Fury review, and that card’s 99th-percentile frame time falls from the 37.7 ms it turned in a year ago to 22.4 ms today. That’s a remarkable improvement from AMD’s driver team. Take a look at any one of our DirectX 11 titles, in fact, and the R9 Fury offers better potential performance (as measured by average FPS) and gameplay that’s as smooth as or smoother than (as measured by 99th-percentile frame times) that of every card in our test suite save the GeForce GTX 1070 and the R9 Fury X.
Speaking of the Fury X, it’s enjoyed similar smoothness improvements, but it still can’t catch the GeForce GTX 1070. Owners of either Fury can enjoy much smoother gameplay now than they did a year ago, however, and that’s a big boost for fans of the red team. Whenever AMD’s nascent Vega GPUs arrive, it seems the company is poised to deliver maximum performance from those products on day one, and that’s sorely-needed progress. For now, we wait.
If you enjoyed this review, please consider becoming a TR subscriber. Your contribution helps us to independently obtain hardware like the GeForce GTX 1060 3GB graphics card featured in this review, and it also makes it possible for us to pursue the hours of testing and analysis necessary to deliver the in-depth discussions of performance and value you just enjoyed. TR subscribers get exclusive site benefits, and our Silver subscription tier lets you chip in as little as you like for the privilege. We appreciate your support.