
Nvidia’s GeForce GTX 960 graphics card reviewed

Scott Wasson, Former Editor-in-Chief

If you haven’t been around here for long, you may not be familiar with the world’s smallest chainsaw. This crucial, mythic piece of equipment is what the smart folks at GPU companies use to cut down their chips to more affordable formats.

Say that you have a fairly robust graphics chip like the GM204 GPU that powers the GeForce GTX 980, with four Xbox Ones worth of shader processing power on tap. That’s pretty good, right? But it’s also expensive; the GTX 980 lists for $549. Say you want to make a more affordable product based on this same tech. That’s when you grab the tiny starter pull cord between your thumb and index finger and give it an adorable little tug. The world’s smallest chainsaw sputters to life. Saw the GM204 chip exactly in half, blow away the debris with a puff of air, and you get the GM206 GPU that powers Nvidia’s latest graphics card, the GeForce GTX 960.

For just under two hundred bucks, the GTX 960 gives you half the power of a GeForce GTX 980—or, you know, two Xbox Ones worth of shader processing grunt. Better yet, because it’s based on Nvidia’s ultra-efficient Maxwell architecture, the GTX 960 ought to perform much better than its specs would seem to indicate. Can the GTX 960 live up to the standards set by its older siblings? Let’s have a look.


There’s lots o’ GTX 960-based trouble brewing in Damage Labs

The sawed-off Maxwell: GM206
I suppose there are several things to be said at this juncture.

First, although “half a GTX 980” might not sound terribly sexy, I think this product is a pretty significant one for Nvidia. The GeForce GTX 970 and 980 have been a runaway success, so much so that the company uncharacteristically shared some sales numbers with us: over a million GTX 970 and 980 cards have been sold to consumers, a huge number in the realm of high-end graphics. What’s more, we have reason to believe that estimate is already pretty dated. The current number could be nearly twice that.

But most people don’t buy high-end graphics cards. Even among PC gamers, less expensive offerings are more popular. And the GTX 960 lands smack-dab in the “sweet spot” where most folks like to buy. If the prospect of “way more performance for about $200” sounds good to you, well, you’re definitely not alone.

Also, there is no chainsaw. I probably made an awful lot of hard-working chip guys cringe with my massive oversimplification above. Although the GM206 really does have half of nearly all key graphics resources compared to the GM204, it’s not just half the chip. These things aren’t quite that modular—not that you’d know that from this block diagram, which looks for all the world like half a GM204.


A simplified block diagram of the GM206. Source: Nvidia.

The GM206 has two graphics processing clusters (GPCs), almost complete GPUs unto themselves, with four shader multiprocessor (SM) units per cluster, for eight in total. Here’s how the chip stacks up to other current GPUs.

| GPU | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fab process |
|---|---|---|---|---|---|---|---|---|
| GK106 | 24 | 80/80 | 960 | 3 | 192 | 2540 | 214 | 28 nm |
| GK104 | 32 | 128/128 | 1536 | 4 | 256 | 3500 | 294 | 28 nm |
| GK110 | 48 | 240/240 | 2880 | 5 | 384 | 7100 | 551 | 28 nm |
| GM206 | 32 | 64/64 | 1024 | 2 | 128 | 2940 | 227 | 28 nm |
| GM204 | 64 | 128/128 | 2048 | 4 | 256 | 5200 | 398 | 28 nm |
| Pitcairn | 32 | 80/40 | 1280 | 2 | 256 | 2800 | 212 | 28 nm |
| Tahiti | 32 | 128/64 | 2048 | 2 | 384 | 4310 | 365 | 28 nm |
| Tonga | 32 (48) | 128/64 | 2048 | 4 | 256 (384) | 5000 | 359 | 28 nm |
| Hawaii | 64 | 176/88 | 2816 | 4 | 512 | 6200 | 438 | 28 nm |

As you can see, the GM206 is a lightweight. Its die is only a little larger than that of the GK106 GPU that powers the GeForce GTX 660 and the Pitcairn chip behind the Radeon R9 270X. Compared to those two, though, the GM206 has a narrower memory interface. In fact, the GM206 is the only chip in the table above with a memory path that narrow; GPUs of this size typically have wider interfaces.

The GM206 may be able to get away with less thanks to the Maxwell architecture’s exceptional efficiency. Maxwell GPUs tend to like high memory frequencies, and the GTX 960 follows suit with a 7 GT/s transfer rate for its GDDR5 RAM. So there’s more throughput on tap than one might think. Beyond that, this architecture makes very effective use of its memory bandwidth thanks to a new compression scheme that can, according to Nvidia’s architects, reduce memory bandwidth use between 17% and 29% in common workloads based on popular games.
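If you want to sanity-check those figures, the arithmetic is simple enough. Here’s a quick sketch: the raw number falls straight out of the bus width and transfer rate, while the “effective” numbers simply assume Nvidia’s claimed 17-29% compression savings hold, so treat them as illustrative rather than measured.

```python
# Rough math for the GTX 960's memory bandwidth.
bus_width_bits = 128       # GM206 memory interface width
transfer_rate_gtps = 7.0   # GDDR5 transfer rate in GT/s

raw_gbps = bus_width_bits / 8 * transfer_rate_gtps
print(f"Raw bandwidth: {raw_gbps:.0f} GB/s")   # 112 GB/s

# Hypothetical "effective" bandwidth if Nvidia's claimed compression
# savings (17-29% less traffic for the same work) pan out.
for savings in (0.17, 0.29):
    print(f"Acts like roughly {raw_gbps / (1 - savings):.0f} GB/s "
          f"at {savings:.0%} savings")
```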

Interestingly, Nvidia identifies the Radeon R9 285 as the GTX 960’s primary competitor. The R9 285 is based on a much larger GPU named Tonga, which is the only new graphics chip AMD introduced in 2014. (The R9 285 ships with a 256-bit memory interface, although I still believe that the Tonga chip itself probably is capable of a 384-bit memory config. For whatever reason—perhaps lots of inventory of the existing Hawaii and Tahiti chips—AMD has chosen not to ship a card with a fully-enabled Tonga onboard.) Even with several bits disabled, the R9 285 has a much wider memory path and more resources of nearly every kind at its disposal than the GM206 does. If the GTX 960’s performance really is competitive with the R9 285, it will be a minor miracle of architectural efficiency.

The new GeForce GTX 900 series
With the addition of the 960, the GeForce GTX 900 series now extends from $199 to $549. Like its big brothers, the GTX 960 inherits the benefits of being part of the Maxwell generation. Those include support for Nvidia’s nifty Dynamic Super Resolution feature and some special rendering capabilities that should be accessible via DirectX 12. Furthermore, with a recent driver update, Nvidia has made good on its promise to deliver a new antialiasing mode called MFAA. MFAA purports to achieve the same quality as 4X multisampling, the most common AA method, with about half the performance overhead.

Also, the GTX 960 has one exclusive new feature: full hardware support for decoding H.265 video. Hardware acceleration of H.265 decoding should make 4K video playback smoother and more power-efficient. This feature didn’t make the cut for the GM204, so only the GTX 960 has it.

| GPU | Base clock (MHz) | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Shader processors | Memory path (bits) | GDDR5 transfer rate | Memory size | Peak power draw | Intro price |
|---|---|---|---|---|---|---|---|---|---|---|
| GTX 960 | 1126 | 1178 | 32 | 64 | 1024 | 128 | 7 GT/s | 2 GB | 120W | $199 |
| GTX 970 | 1050 | 1178 | 64 | 104 | 1664 | 256 | 7 GT/s | 4 GB | 145W | $329 |
| GTX 980 | 1126 | 1216 | 64 | 128 | 2048 | 256 | 7 GT/s | 4 GB | 165W | $549 |

While the GTX 970 and 980 have 4GB of memory, the GTX 960 has 2GB. That’s still a reasonable amount for a graphics card in this class, although the creeping memory requirements for games ported from the Xbone and PS4 do make us worry a bit.

Notice that the GTX 960’s peak power draw, at least in its most basic form as Nvidia has specified, is only 120W. That’s down from 140W in this card’s most direct spiritual predecessor, the GeForce GTX 660. Maxwell-based products just tend to require less power to achieve even higher performance.

A card for every style
After the success of the GTX 970 and 980, Nvidia’s partners are understandably eager to hop on the GTX 960 bandwagon. As a result, Damage Labs is currently swimming in GTX 960 cards. Five of ’em, to be exact. The interesting thing is that each one of them is different, so folks are sure to find something to suit their style.

In many ways, the Asus Strix GTX 960 is the most sensible of the cards we have on hand. It’s the shortest one—only 8.5″ in length—and is the only one with a single six-pin PCIe aux power input, which is technically all the GTX 960 requires. Even so, the Strix has higher-than-stock GPU clock speeds and is the lone product in the bunch with a tweaked memory clock.

The Strix 960 was also first to arrive on our doorstep, so we’ve tested it most extensively versus competing GPUs.

Pictured above is EVGA’s GTX 960 SSC, or Super Superclocked. I suppose that name speaks for itself. True to expectations, the SSC has the highest GPU frequencies of any of these GTX 960 cards. At 1279MHz base and 1342MHz boost, it’s well above Nvidia’s reference clocks.

The SSC also has an unusual dual BIOS capability. The default BIOS has a fan profile similar to the rest of these cards: it tells the fans to stop completely below a certain temperature, like ~60°C. Above that, the fan ramps up to keep things cool. That’s smart behavior, but it’s not particularly aggressive. If you’d like to overclock, you can flip a DIP switch on the card to get the other BIOS, which has a more traditional (and aggressive) fan speed profile.
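To illustrate the general idea of that semi-passive behavior, here’s a toy fan-curve sketch. The threshold and ramp values are assumptions chosen for illustration, not anything pulled from EVGA’s actual firmware.

```python
def fan_duty(gpu_temp_c, passive_below_c=60, full_speed_at_c=90):
    """Toy semi-passive fan profile: fans stop entirely below the
    threshold, then ramp linearly toward 100% duty cycle as the GPU
    approaches full_speed_at_c. Numbers are illustrative guesses,
    not taken from any shipping BIOS."""
    if gpu_temp_c < passive_below_c:
        return 0.0
    ramp = (gpu_temp_c - passive_below_c) / (full_speed_at_c - passive_below_c)
    return min(1.0, ramp)

for temp in (45, 59, 65, 75, 90):
    print(f"{temp}°C -> {fan_duty(temp):.0%} fan speed")
```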

Also, notice the port config on the EVGA card. There are three DisplayPort outputs, one HDMI, and one DVI. A lot of GTX 970 cards shipped with dual DVI and only two DisplayPort outputs, which seemed like a raw deal to me. Happily, most GTX 960 cards follow this one’s lead. Via those three DP outputs, they can drive a trio of G-Sync or 4K (or both!) monitors.

Gigabyte sent us a pair of very different GTX 960 offerings, both clad in striking flat black. The shorter and more practical of the two is the GTX 960 Windforce, with a more-than-adequate dual-fan cooler. The G1 Gaming version of the GTX 960 ups the ante with a massive cooler sporting triple fans and a max cooling capacity of 300W. That’s total overkill—of exactly the sort I like to see.

Both of these cards have six-phase power with a 150W limit. Gigabyte says they’ll deliver higher boost clocks, even under an extreme load like Furmark, as a result. We’ll have to test that claim.

Another distinctive Gigabyte feature is the addition of a second DVI output in conjunction with triple DisplayPort outputs. Gigabyte calls this setup Flex Display. Although the GPU can’t drive all five outputs simultaneously for 3D gaming, I like the extra flexibility with respect to port types.

Last, but by no means least, is the MSI GeForce GTX 960 Gaming 2G. This puppy has a gorgeous Twin Frozr cooler very similar to the one used on MSI’s GTX 970, and that card took home a TR Editor’s Choice award for good reason. In addition to fully passive, fan-free operation below a temperature threshold—a feature all of these GTX 960 cards share—the Gaming 2G’s two fans are controlled independently. The first fan spins up to keep the GPU cool, while the other responds to the temperature of the power delivery circuitry.

Also, notice that these cards have only a single SLI connector at the front. That means the GTX 960 is limited to dual-GPU operation; it can’t participate in three- and four-way teams.

| Card | Base clock (MHz) | Boost clock (MHz) | GDDR5 clock (MHz) | Power connector | Length | Intro price |
|---|---|---|---|---|---|---|
| GTX 960 reference | 1126 | 1178 | 1753 | 6-pin | N/A | $199 |
| Asus Strix GTX 960 | 1253 | 1317 | 1800 | 6-pin | 8.5″ | $209 |
| EVGA GTX 960 SSC | 1279 | 1342 | 1753 | 8-pin | 10.25″ | $209 |
| Gigabyte Windforce GTX 960 | 1216 | 1279 | 1753 | Dual 6-pin | 10″ | $209 |
| Gigabyte G1 Gaming GTX 960 | 1241 | 1304 | 1753 | Dual 6-pin | 11.25″ | $229 |
| MSI GTX 960 Gaming 2G | 1216 | 1279 | 1753 | 8-pin | 10.75″ | $209-219 |

Here’s a summary of the GTX 960s pictured above. Although Nvidia has set the GTX 960’s base price at $199, each of these products offers a little extra for a bit more dough. I’d certainly be willing to spring another 10 or 15 bucks to avoid the somewhat noisy blower from reference versions of the GeForce GTX 660 and 760.

Our testing methods
We’ve tested as many different competing video cards against the new GeForces as was practical. However, there’s no way we can test everything our readers might be using. A lot of the cards we used are renamed versions of older products with very similar or even identical specifications. Here’s a quick table that will decode some of these names for you.

| Original | Closest current equivalent |
|---|---|
| GeForce GTX 670 | GeForce GTX 760 |
| GeForce GTX 680 | GeForce GTX 770 |
| Radeon HD 7870 GHz | Radeon R9 270X |
| Radeon HD 7950 Boost | Radeon R9 280 |
| Radeon HD 7970 GHz | Radeon R9 280X |

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.

We didn’t use Fraps with Civ: Beyond Earth. Instead, we captured frame times directly from the game engine itself using the game’s built-in tools. We didn’t use our low-pass filter on those results.
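For the curious, the filtering step amounts to something like the sketch below. This is a minimal illustration of a three-frame moving average over frame times, not the exact script we use.

```python
def three_frame_average(frame_times_ms, window=3):
    """Smooth per-frame render times with a simple moving average,
    roughly mimicking the effect of Direct3D's three-frame submission
    queue. A sketch of the idea, not our actual analysis tooling."""
    smoothed = []
    for i in range(len(frame_times_ms)):
        start = max(0, i - window + 1)
        chunk = frame_times_ms[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# One nasty spike among otherwise steady ~16.7-ms frames:
print(three_frame_average([16.7, 16.9, 42.0, 17.1, 16.8]))
```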

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

| Component | Details |
|---|---|
| Processor | Core i7-3820 |
| Motherboard | Gigabyte X79-UD3 |
| Chipset | Intel X79 Express |
| Memory size | 16GB (4 DIMMs) |
| Memory type | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz |
| Memory timings | 9-9-9-24 1T |
| Chipset drivers | INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.6.0.1093 |
| Audio | Integrated X79/ALC898 with Realtek 6.0.1.7071 drivers |
| Hard drive | Kingston HyperX 480GB SATA |
| Power supply | Corsair AX850 |
| OS | Windows 8.1 Pro |
| Graphics card | Driver revision | Base clock (MHz) | Boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|---|---|---|---|---|---|
| Radeon R9 270X | Catalyst 14.12 Omega | | 925 | 1250 | 3072 |
| Radeon HD 7950 Boost | Catalyst 14.12 Omega | | 925 | 1250 | 3072 |
| Radeon R9 285 | Catalyst 14.12 Omega | | 973 | 1375 | 2048 |
| XFX Radeon R9 280X | Catalyst 14.12 Omega | | 1000 | 1500 | 3072 |
| Radeon R9 290 | Catalyst 14.12 Omega | | 947 | 1250 | 4096 |
| GeForce GTX 560 Ti | GeForce 347.25 | 900 | | 1050 | 1024 |
| GeForce GTX 660 | GeForce 347.25 | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 760 | GeForce 347.25 | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 770 | GeForce 347.25 | 1046 | 1085 | 1753 | 2048 |
| Asus Strix GTX 960 | GeForce 347.25 | 1253 | 1317 | 1800 | 2048 |
| Asus Strix GTX 970 | GeForce 347.25 | 1114 | 1253 | 1753 | 4096 |

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Sizing ’em up
Do the math involving the clock speeds and per-clock potency of these cards, and you’ll end up with a comparative table that looks something like this:

| GPU | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
|---|---|---|---|---|---|
| Radeon R9 270X | 34 | 84/42 | 2.7 | 2.1 | 179 |
| Radeon HD 7950 Boost | 30 | 104/52 | 3.3 | 1.9 | 240 |
| Radeon R9 285 | 29 | 103/51 | 3.3 | 3.7 | 176 |
| Radeon R9 280X | 32 | 128/64 | 4.1 | 2.0 | 288 |
| Radeon R9 290 | 61 | 152/76 | 4.8 | 3.8 | 320 |
| GeForce GTX 560 Ti | 26 | 53/53 | 1.3 | 1.6 | 128 |
| GeForce GTX 660 | 25 | 83/83 | 2.0 | 3.1 | 144 |
| GeForce GTX 760 | 25 | 99/99 | 2.4 | 4.1 | 192 |
| GeForce GTX 770 | 35 | 139/139 | 3.3 | 4.3 | 224 |
| GeForce GTX 960 | 38 | 75/75 | 2.4 | 2.5 | 112 |
| Asus Strix GTX 960 | 42 | 84/84 | 2.7 | 2.6 | 115 |
| Asus Strix GTX 970 | 61 | 130/130 | 4.2 | 5.0 | 224 |

Sorry it’s so huge. I tested a lot of graphics cards.
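In case you’d like to reproduce the table, here’s the gist of the math for the stock GTX 960, using the boost clock and unit counts from the spec table earlier. It’s a back-of-the-envelope sketch; expect small rounding differences, and keep in mind that real-world clocks often run above the rated boost.

```python
# Theoretical peak rates for the stock GTX 960 (reference boost clock).
boost_ghz   = 1.178   # GPU boost clock in GHz
rops        = 32      # pixels per clock
tex_units   = 64      # int8 texels filtered per clock
shaders     = 1024    # shader processors
tris_clock  = 2       # rasterized triangles per clock
bus_bits    = 128     # memory interface width
gddr5_gtps  = 7.0     # GDDR5 transfer rate

print("Pixel fill:   %.1f Gpixels/s" % (rops * boost_ghz))
print("Texturing:    %.1f Gtexels/s" % (tex_units * boost_ghz))
print("Shader math:  %.2f tflops"    % (shaders * 2 * boost_ghz / 1000))  # 2 flops per FMA
print("Rasterizer:   %.1f Gtris/s"   % (tris_clock * boost_ghz))
print("Bandwidth:    %.0f GB/s"      % (bus_bits / 8 * gddr5_gtps))
```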

As you can see, even with relatively high memory clocks, the GTX 960 just has less memory bandwidth than anything else we tested. Versus its closest competitor from the Radeon camp, the R9 285, the GTX 960 has less of everything except for peak pixel fill rate. (Maxwell GPUs tend to have pretty high pixel rates, likely because Nvidia has tuned them for the new era of high-PPI displays.)

Of course, the numbers above are just theoretical. We can measure how these graphics cards actually perform by running some directed tests.

Traditionally, performance in this color fill test has been limited by memory bandwidth. Few cards reach their theoretical peaks. But check this out: even though it has less memory bandwidth on paper than anything else here, the GTX 960 takes third place in this test. That’s Maxwell’s color compression tech at work. In fact, the only two cards ahead of the GTX 960 are based on GPUs that also have delta-based color compression.

Texture sampling rate is another weakness of the GTX 960 on paper. In this case, the card’s delivered performance is also relatively weak.

The GTX 960’s tessellation and particle manipulation performance, however, is pretty much stellar. This is a substantial improvement over the GTX 660 and 760. In TessMark, the GTX 960 is faster than any Radeon we tested, up to the R9 290.

On paper, the GTX 960 gives up nearly a full teraflop to the Radeon R9 285—2.4 versus 3.3 teraflops of peak compute throughput. In these directed tests, though, the GTX 960 only trails the R9 285 slightly. That’s Maxwell’s vaunted architectural efficiency shining through again.

Now, let’s see how these numbers translate into performance in real games.

Far Cry 4


Click the buttons above to cycle through the plots. Each card’s frame times are from one of the three test runs we conducted for that card.

The GTX 960 starts off with a pretty astounding performance in Far Cry 4. However you want to cut it—the frame time plots, the simple FPS average, or our time-sensitive 99th percentile frame time metric—the GTX 960 looks to be every bit the equal of the GeForce GTX 770. To be clear, the GTX 770 is based on a fully-enabled GK104 chip and is essentially the same thing as the GeForce GTX 680, which was selling for $499 a few short years ago. The GTX 770 has almost double the memory bandwidth of the 960, yet here we are.

The R9 285 nearly ties the GTX 960 in terms of average FPS. But have a look at the R9 285’s frame time plot. Thanks to some intermittent but frequent spikes common to all of the Radeons, the R9 285 falls well behind the GTX 960 in our 99th percentile metric. That means animation just isn’t as fluid on the Radeon.

We can better understand in-game animation fluidity by looking at the “tail” of the frame time distribution for each card, which shows us what happens in the most difficult frames.


The Radeon cards’ frame times start to trend upward at around the 92% mark. By contrast, the curves for most of the GeForce cards only trend upward during the last two to three percent of frames. Some of the frames are difficult for any GPU to render, but the Radeons struggle more often.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
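If you want to compute metrics like these from your own frame-time logs, the basic versions look something like the sketch below. It’s a minimal illustration of the 99th-percentile and “time spent beyond X” calculations, not our exact tooling.

```python
def percentile_frame_time(frame_times_ms, pct=99.0):
    """Nearest-rank percentile: the frame time that pct percent
    of frames come in under."""
    ordered = sorted(frame_times_ms)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def time_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds spent past the threshold: the 'badness' measure."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [16.0, 16.5, 17.0, 18.0, 35.0, 55.0]   # toy data, not real results
print(percentile_frame_time(frames))            # 55.0 ms with this tiny sample
print(time_beyond(frames, 50.0))                # 5.0 ms spent beyond 50 ms
print(time_beyond(frames, 16.7))                # time spent beyond the 60-FPS mark
```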

For the most part, these results confirm what we’ve already seen above. The R9 285 encounters more “badness” at each threshold than the GTX 960 does. The GTX 960 is just delivering smoother animation here. However, revisit the comparison between the GTX 960 and the GeForce GTX 770. Although these two cards have almost identical 99th percentile frame times, the GTX 960 actually fares better in our “badness” metrics. So even though the GTX 960 is based on a smaller chip with less memory bandwidth, its performance is less fragile than the GTX 770’s.

DOTA 2
The two ultra-popular MOBAs don’t really require much GPU power. I tried League of Legends and found that it ran at something crazy like 350 FPS on the GTX 960. Still, I wanted to test one of these games to see how the various GPUs handled them, so I cranked up the quality options in DOTA 2 and raised the display resolution to 2560×1440. Then I played back a portion of a game from the Asia championship tourney, since I lack the skillz to perform at this level myself.


The big numbers in the FPS averages above are nice, but pay attention to the 99th percentile frame time for a better sense of the animation smoothness. Everything down to the GeForce GTX 760 renders all but the last one percent of frames in less than 16.7 milliseconds. That means nearly every single frame arrives at a 60-FPS pace or better, not just on average but in the general case.

I’m not sure what happened with the Radeon HD 7950. This is the same basic product as the Radeon R9 280, and it should be more than fast enough to perform well here. For some reason, it ran into some slowdowns, as the frame time plots show.



I don’t think the GTX 960 and the Radeon R9 285 could be more evenly matched than they are in our DOTA 2 test. Then again, you don’t need much video card to run at a constant 60 FPS in this game.

What’s interesting, for those of us with high-refresh gaming displays, is the time spent beyond 8.3 milliseconds. (At 120Hz, the frame-to-frame interval is 8.3 ms.) As you can see, getting a faster video card will buy you more time below the 8.3-ms threshold. That means smoother animation and more time at 120 FPS, which is DOTA 2‘s frame rate cap.

Civilization: Beyond Earth
Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.




The contest between the R9 285 and the GTX 960 is again incredibly close, but the R9 285 has a slight edge overall in the numbers above, perhaps due in part to the lower overhead of the Mantle API.

Borderlands: The Pre-Sequel


The R9 285 is clearly ahead in the FPS average, but it trails the GTX 960 quite a bit in our time-sensitive 99th percentile metric. Flip over to the Radeons’ frame time plots and you’ll see why: regular, intermittent frame time spikes as high as 45 milliseconds. The magnitude of these spikes isn’t too great, and they don’t totally ruin the fluidity of the game when you’re playing, remarkably enough. Still, they seem to affect about four to five percent of frames rendered, and they’re pretty much constant. Interestingly enough, we’ve seen this sort of problem before on Radeon cards in Borderlands 2. AMD eventually fixed this problem with the Catalyst 13.2 update. It’s a shame to see it return in the Catalyst Omega drivers.



As one would expect, the frame time curves and badness metrics reflect the frame time spikes on the Radeons.

Alien: Isolation


Here’s another case where the R9 285 wins the FPS sweeps but trails in the 99th percentile frame time score. You can see the “fuzzy” nature of the frame times on the mid-range Radeons’ plots, but really it’s nothing to write home about. We’re talking about a couple of milliseconds worth of difference in the 99th percentile frame time.



The Talos Principle
I wanted to include one more game that would let me do some automated testing, so I could include more cards. So I did. Then I tested the additional cards manually in all of the games you saw on the preceding pages. Since I am functionally insane. Anyhow, here are the results from the nicely automated FPS benchmark in The Talos Principle public test, which is freely available on Steam. This new game from Croteam looks good and, judging by the first few puzzles, makes Portal seem super-easy.

Oh, also, I’ve included all five of the GTX 960 cards we talked about up front here, to give us a glimpse of how they compare.

Wow, so there’s not much daylight between the different variants of the GTX 960. We’re talking a total range of less than a single frame per second at 2560×1440.

What this tells me is that the differences between the GTX 960 cards, such as they are, probably won’t be apparent without overclocking. The fastest card of the bunch, by a smidgen, is the Asus Strix GTX 960, which also happens to have a higher memory clock than the rest of the bunch. Hmm. My plan is to overclock these puppies in a follow-up article, so we can see how they differ when pushed to their limits. Stay tuned for that.

Power consumption
Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

Here’s where the GM206’s smaller size and narrower memory interface pay dividends: power consumption. Our test rig’s power use at the wall socket with an Asus Strix GTX 960 installed is 78W lower than with a Radeon R9 285 in the same slot. As you’ve seen, the two cards perform pretty much equivalently, so we’re looking at a major gap in power efficiency.

Noise levels and GPU temperatures
These new video card coolers are so good, they’re causing us testing problems. You see, the noise floor in Damage Labs is about 35-36 dBA. It varies depending on things I can’t quite pinpoint, but one notable contributor is the noise produced by the lone cooling fan always spinning on our test rig, the 120-mm fan on the CPU cooler. Anyhow, what you need to know is that any of the noise results that range below 36 dBA are running into the limits of what we can test accurately. Don’t make too much of differences below that level.

Yeah, these big coolers are pretty much overkill for the GTX 960 at stock clocks. The smallest one of them all, on the Asus Strix GTX 960, is quiet enough under load to hit our noise floor—we’re talking whisper quiet here—while keeping the GPU at 60°C. Jeez.

The one bit of good news for the AMD camp is that these coolers are such overkill. MSI uses essentially the same Twin Frozr cooler on its R9 285, and that card also reaches our noise floor, even though it draws substantially more power and turns it into heat.

Conclusions
As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest.


Although it started life at $249, recent price cuts have dropped the Radeon R9 285’s price on Newegg down to $209.99, the same price as the Asus Strix GTX 960 card we used for the bulk of our testing.

At price parity, the GTX 960 and R9 285 are very evenly matched. The R9 285 has a slight advantage in the overall FPS average, but it falls behind the GeForce GTX 960 in our time-sensitive 99th percentile metric. We’ve seen the reasons why the R9 285 falls behind in the preceding pages. I’d say the 99th percentile result is a better indicator of overall performance—and the GTX 960 leads slightly in that case. That makes the GTX 960 a good card to buy, and for a lot of folks, that will be all they need to know.

It’s a close race overall, though. Either card is a decent choice on a pure price-performance basis. AMD and its partners have slashed prices recently, perhaps in anticipation of the GTX 960’s introduction, without making much noise about it. Heck, the most eye-popping thing on the plot above is that R9 290 for $269.99. Good grief. In many of these cases, board makers are offering mail-in rebates that effectively take prices even lower. Those don’t show up in our scatter plots, since mail-in rebates can be unreliable and kind of shady. Still, AMD apparently has decided to move some inventory by chopping prices, and that has made the contest between the GTX 960 and the R9 285 very tight indeed.

That’s not quite the whole story. Have a look at this plot of power consumption versus performance.

In virtually every case, you’ll pay more for the Radeon than for the competing GeForce in other ways—whether it be on your electric bill, in terms of PSU requirements, or in the amount of heat and noise produced by your PC. The difference between the R9 285 and the GeForce GTX 960 on this front is pretty dramatic.

Another way to look at the plot above is in terms of progress. From the GTX 660 to the GTX 960, Nvidia improved performance substantially with only a tiny increase in measured power draw. From the GTX 770 to 970, we see a similar performance increase at almost identical power use. By contrast, from the R9 280X to the R9 285—that is, from Tahiti to Tonga—AMD saw a drop in power efficiency. Granted, the R9 285 probably isn’t Tonga’s final form, but the story still isn’t a good one.

Nvidia has made big strides in efficiency with the introduction of Maxwell-based GPUs, and the GeForce GTX 960 continues that march. Clearly, Nvidia has captured the technology lead in GPUs. Only steep price cuts from AMD have kept the Radeons competitive—and only then if you don’t care about your PC’s power consumption.

I’ll have more to say about the different flavors of the GTX 960 we have on hand here in Damage Labs after I’ve had a little more time to put them through the wringer. For now, though, if you aren’t interested in overclocking, you might as well take your pick. They’re all blessed with more cooling than strictly necessary, and they’re all whisper-quiet. What’s not to like?

Enjoy our work? Pay what you want to subscribe and support us.
