Nvidia’s GeForce GTX 960 graphics card reviewed

If you haven’t been around here for long, you may not be familiar with the world’s smallest chainsaw. This crucial, mythic piece of equipment is what the smart folks at GPU companies use to cut down their chips to more affordable formats.

Say that you have a fairly robust graphics chip like the GM204 GPU that powers the GeForce GTX 980, with four Xbox Ones' worth of shader processing power on tap. That’s pretty good, right? But it’s also expensive; the GTX 980 lists for $549. Say you want to make a more affordable product based on this same tech. That’s when you grab the tiny starter pull cord between your thumb and index finger and give it an adorable little tug. The world’s smallest chainsaw sputters to life. Saw the GM204 chip exactly in half, blow away the debris with a puff of air, and you get the GM206 GPU that powers Nvidia’s latest graphics card, the GeForce GTX 960.

For just under two hundred bucks, the GTX 960 gives you half the power of a GeForce GTX 980—or, you know, two Xbox Ones' worth of shader processing grunt. Better yet, because it’s based on Nvidia’s ultra-efficient Maxwell architecture, the GTX 960 ought to perform much better than its specs would seem to indicate. Can the GTX 960 live up to the standards set by its older siblings? Let’s have a look.


There’s lots o’ GTX 960-based trouble brewing in Damage Labs

The sawed-off Maxwell: GM206

I suppose there are several things to be said at this juncture.

First, although “half a GTX 980” might not sound terribly sexy, I think this product is a pretty significant one for Nvidia. The GeForce GTX 970 and 980 have been a runaway success, so much so that the company uncharacteristically shared some sales numbers with us: over a million GTX 970 and 980 cards have been sold to consumers, a huge number in the realm of high-end graphics. What’s more, we have reason to believe that estimate is already pretty dated. The current number could be nearly twice that.

But most people don’t buy high-end graphics cards. Even among PC gamers, less expensive offerings are more popular. And the GTX 960 lands smack-dab in the “sweet spot” where most folks like to buy. If the prospect of “way more performance for about $200” sounds good to you, well, you’re definitely not alone.

Also, there is no chainsaw. I probably made an awful lot of hard-working chip guys cringe with my massive oversimplification above. Although the GM206 really does have half of nearly all key graphics resources compared to the GM204, it’s not just half the chip. These things aren’t quite that modular—not that you’d know that from this block diagram, which looks for all the world like half a GM204.


A simplified block diagram of the GM206. Source: Nvidia.

The GM206 has two graphics processing clusters, almost complete GPUs unto themselves, with four shader multiprocessor (SM) units per cluster, for eight in all. Here’s how the chip stacks up to other current GPUs.

| GPU | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fab process |
|---|---|---|---|---|---|---|---|---|
| GK106 | 24 | 80/80 | 960 | 3 | 192 | 2540 | 214 | 28 nm |
| GK104 | 32 | 128/128 | 1536 | 4 | 256 | 3500 | 294 | 28 nm |
| GK110 | 48 | 240/240 | 2880 | 5 | 384 | 7100 | 551 | 28 nm |
| GM206 | 32 | 64/64 | 1024 | 2 | 128 | 2940 | 227 | 28 nm |
| GM204 | 64 | 128/128 | 2048 | 4 | 256 | 5200 | 398 | 28 nm |
| Pitcairn | 32 | 80/40 | 1280 | 2 | 256 | 2800 | 212 | 28 nm |
| Tahiti | 32 | 128/64 | 2048 | 2 | 384 | 4310 | 365 | 28 nm |
| Tonga | 32 (48) | 128/64 | 2048 | 4 | 256 (384) | 5000 | 359 | 28 nm |
| Hawaii | 64 | 176/88 | 2816 | 4 | 512 | 6200 | 438 | 28 nm |

As you can see, the GM206 is a lightweight. Its die is only a little larger than the GK106 GPU that powers the GeForce GTX 660 or the Pitcairn chip behind the Radeon R9 270X. Compared to those two, though, the GM206 has a narrower memory interface. At 128 bits, it's the only chip in the table above with a memory path that narrow; GPUs of this size typically have wider interfaces.

The GM206 may be able to get away with less thanks to the Maxwell architecture’s exceptional efficiency. Maxwell GPUs tend to like high memory frequencies, and the GTX 960 follows suit with a 7 GT/s transfer rate for its GDDR5 RAM. So there’s more throughput on tap than one might think. Beyond that, this architecture makes very effective use of its memory bandwidth thanks to a new compression scheme that can, according to Nvidia’s architects, reduce memory bandwidth use by 17% to 29% in common workloads based on popular games.
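If you want to put rough numbers on that, here's a bit of napkin math, sketched in Python, showing what a 128-bit bus at 7 GT/s delivers and what Nvidia's claimed 17-29% compression savings would translate to in effective bandwidth. Treat it as an illustration of the claim, not a measurement.

```python
# Napkin math for the GTX 960's memory subsystem (illustrative only).
bus_width_bits = 128      # GM206 memory interface width
transfer_rate_gts = 7.0   # GDDR5 transfer rate, GT/s

raw_bandwidth_gbs = bus_width_bits / 8 * transfer_rate_gts
print(f"Raw bandwidth: {raw_bandwidth_gbs:.0f} GB/s")   # -> 112 GB/s

# Nvidia claims its delta color compression trims bandwidth use by 17-29%
# in common game workloads. Treated as an effective-bandwidth boost:
for savings in (0.17, 0.29):
    effective = raw_bandwidth_gbs / (1 - savings)
    print(f"{savings:.0%} savings -> ~{effective:.0f} GB/s effective")
# -> roughly 135-158 GB/s, in the neighborhood of a 192-bit card like the GTX 660
```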

Interestingly, Nvidia identifies the Radeon R9 285 as the GTX 960’s primary competitor. The R9 285 is based on a much larger GPU named Tonga, which is the only new graphics chip AMD introduced in 2014. (The R9 285 ships with a 256-bit memory interface, although I still believe that the Tonga chip itself probably is capable of a 384-bit memory config. For whatever reason—perhaps lots of inventory of the existing Hawaii and Tahiti chips—AMD has chosen not to ship a card with a fully-enabled Tonga onboard.) Even with several bits disabled, the R9 285 has a much wider memory path and more resources of nearly every kind at its disposal than the GM206 does. If the GTX 960’s performance really is competitive with the R9 285, it will be a minor miracle of architectural efficiency.

The new GeForce GTX 900 series

With the addition of the 960, the GeForce GTX 900 series now extends from $199 to $549. Like its big brothers, the GTX 960 inherits the benefits of being part of the Maxwell generation. Those include support for Nvidia’s nifty Dynamic Super Resolution feature and some special rendering capabilities that should be accessible via DirectX 12. Furthermore, with a recent driver update, Nvidia has made good on its promise to deliver a new antialiasing mode called MFAA. MFAA purports to achieve the same quality as 4X multisampling, the most common AA method, with about half the performance overhead.

Also, the GTX 960 has one exclusive new feature: full hardware support for decoding H.265 video. Hardware acceleration of H.265 decoding should make 4K video playback smoother and more power-efficient. This feature didn’t make the cut for the GM204, so only the GTX 960 has it.

| Card | GPU base clock (MHz) | GPU boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Shader processors | Memory path (bits) | GDDR5 transfer rate | Memory size | Peak power draw | Intro price |
|---|---|---|---|---|---|---|---|---|---|---|
| GTX 960 | 1126 | 1178 | 32 | 64 | 1024 | 128 | 7 GT/s | 2 GB | 120W | $199 |
| GTX 970 | 1050 | 1178 | 64 | 104 | 1664 | 256 | 7 GT/s | 4 GB | 145W | $329 |
| GTX 980 | 1126 | 1216 | 64 | 128 | 2048 | 256 | 7 GT/s | 4 GB | 165W | $549 |

While the GTX 970 and 980 have 4GB of memory, the GTX 960 has 2GB. That’s still a reasonable amount for a graphics card in this class, although the creeping memory requirements for games ported from the Xbone and PS4 do make us worry a bit.

Notice that the GTX 960’s peak power draw, at least in its most basic form as Nvidia has specified, is only 120W. That’s down from 140W in this card’s most direct spiritual predecessor, the GeForce GTX 660. Maxwell-based products just tend to require less power to achieve even higher performance.

A card for every style

After the success of the GTX 970 and 980, Nvidia’s partners are understandably eager to hop on the GTX 960 bandwagon. As a result, Damage Labs is currently swimming in GTX 960 cards. Five of ’em, to be exact. The interesting thing is that each one of them is different, so folks are sure to find something to suit their style.

In many ways, the Asus Strix GTX 960 is the most sensible of the cards we have on hand. It’s the shortest one—only 8.5″ in length—and is the only one with a single six-pin PCIe aux power input, which is technically all the GTX 960 requires. Even so, the Strix has higher-than-stock GPU clock speeds and is the lone product in the bunch with a tweaked memory clock.

The Strix 960 was also first to arrive on our doorstep, so we’ve tested it most extensively versus competing GPUs.

Pictured above is EVGA’s GTX 960 SSC, or Super Superclocked. I suppose that name tells you most of what you need to know. True to expectations, the SSC has the highest GPU frequencies of any of these GTX 960 cards. At 1279MHz base and 1342MHz boost, it’s well above Nvidia’s reference clocks.

The SSC also has an unusual dual BIOS capability. The default BIOS has a fan profile similar to the rest of these cards: it tells the fans to stop completely below a certain temperature, like ~60°C. Above that, the fan ramps up to keep things cool. That’s smart behavior, but it’s not particularly aggressive. If you’d like to overclock, you can flip a DIP switch on the card to get the other BIOS, which has a more traditional (and aggressive) fan speed profile.

Also, notice the port config on the EVGA card. There are three DisplayPort outputs, one HDMI, and one DVI. A lot of GTX 970 cards shipped with dual DVI and only two DisplayPort outputs, which seemed like a raw deal to me. Most GTX 960s are like this one. Via those three DP outputs, they can drive a trio of G-Sync or 4K (or both!) monitors.

Gigabyte sent us a pair of very different GTX 960 offerings, both clad in striking flat black. The shorter and more practical of the two is the GTX 960 Windforce, with a more-than-adequate dual-fan cooler. The G1 Gaming version of the GTX 960 ups the ante with a massive cooler sporting triple fans and a max cooling capacity of 300W. That’s total overkill—of exactly the sort I like to see.

Both of these cards have six-phase power with a 150W limit. Gigabyte says they’ll deliver higher boost clocks, even under an extreme load like Furmark, as a result. We’ll have to test that claim.

Another distinctive Gigabyte feature is the addition of a second DVI output in conjunction with triple DisplayPort outputs. Gigabyte calls this setup Flex Display. Although the GPU can’t drive all five outputs simultaneously for 3D gaming, I like the extra flexibility with respect to port types.

Last, but by no means least, is the MSI GeForce GTX 960 Gaming 2G. This puppy has a gorgeous Twin Frozr cooler very similar to the one used on MSI’s GTX 970, and that card took home a TR Editor’s Choice award for good reason. In addition to fully passive, fan-free operation below a temperature threshold—a feature all of these GTX 960 cards share—the Gaming 2G’s two fans are controlled independently. The first fan spins up to keep the GPU cool, while the other responds to the temperature of the power delivery circuitry.

Also, notice that these cards have only a single SLI connector at the front. That means the GTX 960 is limited to dual-GPU operation; it can’t participate in three- and four-way teams.

| Card | GPU base clock (MHz) | GPU boost clock (MHz) | GDDR5 clock speed (MHz) | Power connector | Length | Intro price |
|---|---|---|---|---|---|---|
| GTX 960 reference | 1126 | 1178 | 1753 | 6-pin | N/A | $199 |
| Asus Strix GTX 960 | 1253 | 1317 | 1800 | 6-pin | 8.5″ | $209 |
| EVGA GTX 960 SSC | 1279 | 1342 | 1753 | 8-pin | 10.25″ | $209 |
| Gigabyte Windforce GTX 960 | 1216 | 1279 | 1753 | Dual 6-pin | 10″ | $209 |
| Gigabyte G1 Gaming GTX 960 | 1241 | 1304 | 1753 | Dual 6-pin | 11.25″ | $229 |
| MSI GTX 960 Gaming 2G | 1216 | 1279 | 1753 | 8-pin | 10.75″ | $209-219 |

Here’s a summary of the GTX 960s pictured above. Although Nvidia has set the GTX 960’s base price at $199, each of these products offers a little extra for a bit more dough. I’d certainly be willing to spring another 10 or 15 bucks to avoid the somewhat noisy blower from reference versions of the GeForce GTX 660 and 760.

Our testing methods

We’ve tested as many different competing video cards against the new GeForces as was practical. However, there’s no way we can test everything our readers might be using. A lot of the cards we used are renamed versions of older products with very similar or even identical specifications. Here’s a quick table that will decode some of these names for you.

| Original | Closest current equivalent |
|---|---|
| GeForce GTX 670 | GeForce GTX 760 |
| GeForce GTX 680 | GeForce GTX 770 |
| Radeon HD 7870 GHz | Radeon R9 270X |
| Radeon HD 7950 Boost | Radeon R9 280 |
| Radeon HD 7970 GHz | Radeon R9 280X |

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
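For the curious, here's a minimal sketch of the sort of three-frame filter described above. It isn't our exact analysis code, just an illustration of a trailing moving average applied to frame times in milliseconds.

```python
def moving_average(frame_times_ms, window=3):
    """Trailing moving average over per-frame render times (ms)."""
    out = []
    for i in range(len(frame_times_ms)):
        chunk = frame_times_ms[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A single 60 ms hitch gets spread across the three-frame submission window:
print(moving_average([16.7, 16.7, 60.0, 16.7, 16.7]))
# -> [16.7, 16.7, ~31.1, ~31.1, ~31.1]
```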

We didn’t use Fraps with Civ: Beyond Earth. Instead, we captured frame times directly from the game engine itself using the game’s built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

| Processor | Core i7-3820 |
| Motherboard | Gigabyte X79-UD3 |
| Chipset | Intel X79 Express |
| Memory size | 16GB (4 DIMMs) |
| Memory type | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz |
| Memory timings | 9-9-9-24 1T |
| Chipset drivers | INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.6.0.1093 |
| Audio | Integrated X79/ALC898 with Realtek 6.0.1.7071 drivers |
| Hard drive | Kingston HyperX 480GB SATA |
| Power supply | Corsair AX850 |
| OS | Windows 8.1 Pro |

| Card | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|---|---|---|---|---|---|
| Radeon R9 270X | Catalyst 14.12 Omega | — | 925 | 1250 | 3072 |
| Radeon HD 7950 Boost | Catalyst 14.12 Omega | — | 925 | 1250 | 3072 |
| Radeon R9 285 | Catalyst 14.12 Omega | — | 973 | 1375 | 2048 |
| XFX Radeon R9 280X | Catalyst 14.12 Omega | — | 1000 | 1500 | 3072 |
| Radeon R9 290 | Catalyst 14.12 Omega | — | 947 | 1250 | 4096 |
| GeForce GTX 560 Ti | GeForce 347.25 | 900 | — | 1050 | 1024 |
| GeForce GTX 660 | GeForce 347.25 | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 760 | GeForce 347.25 | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 770 | GeForce 347.25 | 1046 | 1085 | 1753 | 2048 |
| Asus Strix GTX 960 | GeForce 347.25 | 1253 | 1317 | 1800 | 2048 |
| Asus Strix GTX 970 | GeForce 347.25 | 1114 | 1253 | 1753 | 4096 |

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Sizing ’em up

Do the math involving the clock speeds and per-clock potency of these cards, and you’ll end up with a comparative table that looks something like this:

| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
|---|---|---|---|---|---|
| Radeon R9 270X | 34 | 84/42 | 2.7 | 2.1 | 179 |
| Radeon HD 7950 Boost | 30 | 104/52 | 3.3 | 1.9 | 240 |
| Radeon R9 285 | 29 | 103/51 | 3.3 | 3.7 | 176 |
| Radeon R9 280X | 32 | 128/64 | 4.1 | 2.0 | 288 |
| Radeon R9 290 | 61 | 152/76 | 4.8 | 3.8 | 320 |
| GeForce GTX 560 Ti | 26 | 53/53 | 1.3 | 1.6 | 128 |
| GeForce GTX 660 | 25 | 83/83 | 2.0 | 3.1 | 144 |
| GeForce GTX 760 | 25 | 99/99 | 2.4 | 4.1 | 192 |
| GeForce GTX 770 | 35 | 139/139 | 3.3 | 4.3 | 224 |
| GeForce GTX 960 | 38 | 75/75 | 2.4 | 2.5 | 112 |
| Asus Strix GTX 960 | 42 | 84/84 | 2.7 | 2.6 | 115 |
| Asus Strix GTX 970 | 61 | 130/130 | 4.2 | 5.0 | 224 |

Sorry it’s so huge. I tested a lot of graphics cards.

As you can see, even with relatively high memory clocks, the GTX 960 just has less memory bandwidth than anything else we tested. Versus its closest competitor from the Radeon camp, the R9 285, the GTX 960 has less of everything except for peak pixel fill rate. (Maxwell GPUs tend to have pretty high pixel rates, likely because Nvidia has tuned them for the new era of high-PPI displays.)
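If you'd like to reproduce the peak numbers above yourself, the math is just per-clock resources multiplied by clock speed. Here's a quick sketch, in Python, using the reference GTX 960's figures; small differences from the table come down to rounding and exactly which clock is assumed.

```python
# Rough peak-rate math for a reference-clocked GTX 960 (boost clock assumed throughout).
boost_ghz  = 1.178
rops       = 32     # pixels per clock
tex_units  = 64     # texels filtered per clock (int8)
shaders    = 1024
tris_clock = 2      # rasterized triangles per clock
bus_bits   = 128
mem_gts    = 7.0    # GDDR5 transfer rate

print(f"Pixel fill:    {rops * boost_ghz:6.1f} Gpixels/s")       # ~37.7
print(f"Texel filter:  {tex_units * boost_ghz:6.1f} Gtexels/s")  # ~75.4
print(f"Shader math:   {shaders * 2 * boost_ghz / 1000:6.1f} tflops (2 flops per ALU per clock, FMA)")
print(f"Rasterization: {tris_clock * boost_ghz:6.1f} Gtris/s")   # ~2.4
print(f"Bandwidth:     {bus_bits / 8 * mem_gts:6.1f} GB/s")      # 112
```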

Of course, the numbers above are just theoretical. We can measure how these graphics cards actually perform by running some directed tests.

Traditionally, performance in this color fill test has been limited by memory bandwidth. Few cards reach their theoretical peaks. But check this out: even though it has less memory bandwidth on paper than anything else here, the GTX 960 takes third place in this test. That’s Maxwell’s color compression tech at work. In fact, the only two cards ahead of the GTX 960 are based on GPUs that also have delta-based color compression.
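Here's a hedged bit of napkin math showing why a strong fill-rate result implies the compression is earning its keep. It assumes a 32-bit render target and pure writes, which is a simplification of what a real fill-rate test does.

```python
# Why beating the raw bandwidth math implies compression at work (napkin math,
# assuming a 32-bit-per-pixel render target and write-only fill).
peak_fill_gpixels = 38.0   # reference GTX 960 theoretical peak, from the table above
bytes_per_pixel = 4
raw_bandwidth_gbs = 112.0  # 128-bit bus at 7 GT/s

print(f"Bandwidth needed to sustain the peak: {peak_fill_gpixels * bytes_per_pixel:.0f} GB/s")  # ~152
print(f"Uncompressed fill limit: {raw_bandwidth_gbs / bytes_per_pixel:.0f} Gpixels/s")          # ~28
# Any delivered rate meaningfully above ~28 Gpixels/s has to come from the
# compression reducing the bytes actually written, not from raw bandwidth.
```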

Texture sampling rate is another weakness of the GTX 960 on paper. In this case, the card’s delivered performance is also relatively weak.

The GTX 960’s tessellation and particle manipulation performance, however, is pretty much stellar. This is a substantial improvement over the GTX 660 and 760. In TessMark, the GTX 960 is faster than any Radeon we tested, even the R9 290.

On paper, the GTX 960 gives up nearly a full teraflop to the Radeon R9 285—2.4 versus 3.3 teraflops of peak compute throughput. In these directed tests, though, the GTX 960 only trails the R9 285 slightly. That’s Maxwell’s vaunted architectural efficiency shining through again.

Now, let’s see how these numbers translate into performance in real games.

Far Cry 4


Click the buttons above to cycle through the plots. Each card’s frame times are from one of the three test runs we conducted for that card.

The GTX 960 starts off with a pretty astounding performance in Far Cry 4. However you want to cut it—the frame time plots, the simple FPS average, or our time-sensitive 99th percentile frame time metric—the GTX 960 looks to be every bit the equal of the GeForce GTX 770. To be clear, the GTX 770 is based on a fully-enabled GK104 chip and is essentially the same thing as the GeForce GTX 680, which was selling for $499 a few short years ago. The GTX 770 has almost double the memory bandwidth of the 960, yet here we are.

The R9 285 nearly ties the GTX 960 in terms of average FPS. But have a look at the R9 285’s frame time plot. Thanks to some intermittent but frequent spikes common to all of the Radeons, the R9 285 falls well behind the GTX 960 in our 99th percentile metric. That means animation just isn’t as fluid on the Radeon.

We can better understand in-game animation fluidity by looking at the “tail” of the frame time distribution for each card, which shows us what happens in the most difficult frames.


The Radeon cards’ frame times start to trend upward at around the 92% mark. By contrast, the curves for most of the GeForce cards only trend upward during the last two to three percent of frames. Some of the frames are difficult for any GPU to render, but the Radeons struggle more often.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
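For readers new to these metrics, here's roughly how the 99th-percentile and "time spent beyond X" numbers fall out of a list of frame times. This is a simplified sketch, not our exact analysis code.

```python
import math

def percentile_99(frame_times_ms):
    """99th-percentile frame time: 99% of frames finished at least this quickly."""
    ordered = sorted(frame_times_ms)
    return ordered[math.ceil(0.99 * len(ordered)) - 1]

def time_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds accumulated past a threshold across all frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# 50 ms ~ 20 FPS, 33.3 ms ~ 30 FPS, 16.7 ms ~ 60 FPS, 8.3 ms ~ 120 FPS
frames = [14.0, 15.5, 16.0, 52.0, 18.0, 15.0, 35.0, 16.0]   # made-up sample data
print(percentile_99(frames))        # 52.0 in this tiny sample
print(time_beyond(frames, 33.3))    # ~20.4 ms of "badness" past the 30-FPS mark
print(time_beyond(frames, 16.7))    # ~54.9 ms past the 60-FPS mark
```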

For the most part, these results confirm what we’ve already seen above. The R9 285 encounters more “badness” at each threshold than the GTX 960 does. The GTX 960 is just delivering smoother animation here. However, revisit the comparison between the GTX 960 and the GeForce GTX 770. Although these two cards have almost identical 99th percentile frame times, the GTX 960 actually fares better in our “badness” metrics. So even though the GTX 960 is based on a smaller chip with less memory bandwidth, its performance is less fragile than the GTX 770’s.

DOTA 2

The two ultra-popular MOBAs don’t really require much GPU power. I tried League of Legends and found that it ran at something crazy like 350 FPS on the GTX 960. Still, I wanted to test one of these games to see how the various GPUs handled them, so I cranked up the quality options in DOTA 2 and raised the display resolution to 2560×1440. Then I played back a portion of a game from the Asia championship tourney, since I lack the skillz to perform at this level myself.


The big numbers in the FPS averages above are nice, but pay attention to the 99th percentile frame time for a better sense of the animation smoothness. Everything down to the GeForce GTX 760 renders all but the last one percent of frames in less than 16.7 milliseconds. That means nearly every single frame comes in at 60 FPS. That’s not just an average, but the general case.

I’m not sure what happened with the Radeon HD 7950. This is the same basic product as the Radeon R9 280, and it should be more than fast enough to perform well here. For some reason, it ran into some slowdowns, as the frame time plots show.



I don’t think the GTX 960 and the Radeon R9 285 could be more evenly matched than they are in our DOTA 2 test. Then again, you don’t need much video card to run at a constant 60 FPS in this game.

What’s interesting, for those of us with high-refresh gaming displays, is the time spent beyond 8.3 milliseconds. (At 120Hz, the frame-to-frame interval is 8.3 ms.) As you can see, getting a faster video card will buy you more time below the 8.3-ms threshold. That means smoother animation and more time at 120 FPS, which is DOTA 2‘s frame rate cap.

Civilization: Beyond Earth

Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.




The contest between the R9 285 and the GTX 960 is again incredibly close, but the R9 285 has a slight edge overall in the numbers above, perhaps due in part to the lower overhead of the Mantle API.

Borderlands: The Pre-Sequel


The R9 285 is clearly ahead in the FPS average, but it trails the GTX 960 quite a bit in our time-sensitive 99th percentile metric. Flip over to the Radeons’ frame time plots and you’ll see why: regular, intermittent frame time spikes as high as 45 milliseconds. The magnitude of these spikes isn’t too great, and they don’t totally ruin the fluidity of the game when you’re playing, remarkably enough. Still, they seem to affect about four to five percent of frames rendered, and they’re pretty much constant. Interestingly enough, we’ve seen this sort of problem before on Radeon cards in Borderlands 2. AMD eventually fixed this problem with the Catalyst 13.2 update. It’s a shame to see it return in the Catalyst Omega drivers.



As one would expect, the frame time curves and badness metrics reflect the frame time spikes on the Radeons.

Alien: Isolation


Here’s another case where the R9 285 wins the FPS sweeps but trails in the 99th percentile frame time score. You can see the “fuzzy” nature of the frame times on the mid-range Radeons’ plots, but really it’s nothing to write home about. We’re talking about a couple of milliseconds worth of difference in the 99th percentile frame time.



The Talos Principle

I wanted to include one more game that would let me do some automated testing, so I could include more cards. So I did. Then I tested the additional cards manually in all of the games you saw on the preceding pages. Since I am functionally insane. Anyhow, here are the results from the nicely automated FPS benchmark in The Talos Principle public test, which is freely available on Steam. This new game from Croteam looks good and, judging by the first few puzzles, makes Portal seem super-easy.

Oh, also, I’ve included all five of the GTX 960 cards we talked about up front here, to give us a glimpse of how they compare.

Wow, so there’s not much daylight between the different variants of the GTX 960. We’re talking a total range of less than a single frame per second at 2560×1440.

What this tells me is that the differences between the GTX 960 cards, such as they are, probably won’t be apparent without overclocking. The fastest card of the bunch, by a smidgen, is the Asus Strix GTX 960, which also happens to have a higher memory clock than the rest of the bunch. Hmm. My plan is to overclock these puppies in a follow-up article, so we can see how they differ when pushed to their limits. Stay tuned for that.

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

Here’s where the GM206’s smaller size and narrower memory interface pay dividends: power consumption. Our test rig’s power draw at the wall socket with an Asus Strix GTX 960 installed is 78W lower than with a Radeon R9 285 in the same slot. As you’ve seen, the two cards perform pretty much equivalently, so the Radeon is staring down a major deficit in power efficiency.

Noise levels and GPU temperatures

These new video card coolers are so good, they’re causing us testing problems. You see, the noise floor in Damage Labs is about 35-36 dBA. It varies depending on things I can’t quite pinpoint, but one notable contributor is the noise produced by the lone cooling fan always spinning on our test rig, the 120-mm fan on the CPU cooler. Anyhow, what you need to know is that any of the noise results that range below 36 dBA are running into the limits of what we can test accurately. Don’t make too much of differences below that level.

Yeah, these big coolers are pretty much overkill for the GTX 960 at stock clocks. The smallest one of them all, on the Asus Strix GTX 960, is quiet enough under load to hit our noise floor—we’re talking whisper quiet here—while keeping the GPU at 60°C. Jeez.

The one bit of good news for the AMD camp is that these coolers are just as much overkill when strapped to a Radeon. MSI uses essentially the same Twin Frozr cooler on its R9 285, and that card also reaches our noise floor, even while drawing substantially more power and turning it into heat.

Conclusions

As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest.


Although it started life at $249, recent price cuts have dropped the Radeon R9 285’s price on Newegg down to $209.99, the same price as the Asus Strix GTX 960 card we used for the bulk of our testing.

At price parity, the GTX 960 and R9 285 are very evenly matched. The R9 285 has a slight advantage in the overall FPS average, but it falls behind the GeForce GTX 960 in our time-sensitive 99th percentile metric. We’ve seen the reasons why the R9 285 falls behind in the preceding pages. I’d say the 99th percentile result is a better indicator of overall performance—and the GTX 960 leads slightly in that case. That makes the GTX 960 a good card to buy, and for a lot of folks, that will be all they need to know.

It’s a close race overall, though. Either card is a decent choice on a pure price-performance basis. AMD and its partners have slashed prices recently, perhaps in anticipation of the GTX 960’s introduction, without making much noise about it. Heck, the most eye-popping thing on the plot above is that R9 290 for $269.99. Good grief. In many of these cases, board makers are offering mail-in rebates that effectively take prices even lower. Those don’t show up in our scatter plots, since mail-in rebates can be unreliable and kind of shady. Still, AMD apparently has decided to move some inventory by chopping prices, and that has made the contest between the GTX 960 and the R9 285 very tight indeed.

That’s not quite the whole story. Have a look at this plot of power consumption versus performance.

In virtually every case, you’ll pay more for the Radeon than for the competing GeForce in other ways—whether it be on your electric bill, in terms of PSU requirements, or in the amount of heat and noise produced by your PC. The difference between the R9 285 and the GeForce GTX 960 on this front is pretty dramatic.
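To put the electric-bill angle in rough perspective, here's a quick estimate of what the 78W gap we measured at the wall adds up to over a year. The hours of gaming per day and the price per kWh are assumptions for illustration, not data from our testing; the total scales linearly with both.

```python
# What a 78 W difference at the wall adds up to, under assumed usage.
delta_watts   = 78     # Strix GTX 960 vs. R9 285, measured at the wall under load
hours_per_day = 2      # assumed gaming time (illustrative)
rate_per_kwh  = 0.12   # assumed electricity price, $/kWh (illustrative)

kwh_per_year = delta_watts / 1000 * hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * rate_per_kwh:.2f}/year")
# -> roughly 57 kWh and $7 a year at these assumptions
```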

Another way to look at the plot above is in terms of progress. From the GTX 660 to the GTX 960, Nvidia improved performance substantially with only a tiny increase in measured power draw. From the GTX 770 to 970, we see a similar performance increase at almost identical power use. By contrast, from the R9 280X to the R9 285—that is, from Tahiti to Tonga—AMD saw a drop in power efficiency. Granted, the R9 285 probably isn’t Tonga’s final form, but the story still isn’t a good one.

Nvidia has made big strides in efficiency with the introduction of Maxwell-based GPUs, and the GeForce GTX 960 continues that march. Clearly, Nvidia has captured the technology lead in GPUs. Only steep price cuts from AMD have kept the Radeons competitive—and only then if you don’t care about your PC’s power consumption.

I’ll have more to say about the different flavors of the GTX 960 we have on hand here in Damage Labs after I’ve had a little more time to put them through the wringer. For now, though, if you aren’t interested in overclocking, you might as well take your pick. They’re all blessed with more cooling than strictly necessary, and they’re all whisper-quiet. What’s not to like?


Comments closed
    • deruberhanyok
    • 5 years ago

    Several days after the release, I’m left with this thought:

    These look nice, but it feels like $210 is too expensive. I’d rather have seen 4GB for this price, and a 2GB version for $170-$180ish.

      • sschaem
      • 5 years ago

      I see the same going forward, even at 1080p, 2GB will be a limit with game art.

      And when places like newegg is selling 4GB R9-290 for $240, and TR showing those card are 50% to twice faster in games as the GTX 960, I cant see myself recommending this card to anyone beside thermally limited HTPC situations.

    • TheEmrys
    • 5 years ago

    Didn’t OpenCL used to be a part of gpu testing?

    • sweatshopking
    • 5 years ago

    Let me sum up discussion:

    ARRRGFFHHH NERD FAN BOY RAGE ABOUT MY FAV COMPANY!!!!

      • Anovoca
      • 5 years ago

      You forgot to slander people who disagree.

    • ClickClick5
    • 5 years ago

    Nice write up. I do however enjoy my new 980!

    After reading through the comments…..why is there fighting over such a mid level card? I expect fanboys from each side to fight with the high end cards, but not the mid level cards.

    Sheeshe people, stop whining and go be productive!

      • Terra_Nocuus
      • 5 years ago

      What gets me are the people blasting the 960 for being terrible at 4K.

      Seriously?

    • Rza79
    • 5 years ago

    A couple things to note:

    – You review the Asus Strix as it is the normal GTX 960. It’s not. It’s around 10% faster than a normally clocked GTX 960. In your benchmark graphs you just write GTX 960, making it for anyone who didn’t check the ‘our testing methods’ page seem like you’re testing the normal GTX 960.

    – The power usage of your R9 285 seems unrealistically high. Even higher than the 280X, which doesn’t make any sense. If I compare this to a tech website of which I’m 100% sure they’re unbiased (http://www.computerbase.de/2015-01/nvidia-geforce-gtx-960-im-test/9/ – spiele last), then the 285 uses less power than the 280X, as is to be expected. Maybe it's because you're also using an overclocked R9 285 and calling it everywhere a R9 285 (973Mhz vs 918Mhz).

    To compare your testing methods to those of Computerbase:

    – You have a habit of using OC cards and leaving them at their respective OC frequencies and some are even overvolted, losing any possibility for a correct power consumption graph.

    – Computerbase: They always use both standard and OC clocks (and mention it accordingly). For this particular review, they took the average of all OC cards as the OC frequency.

    • Froz
    • 5 years ago

    I have a couple of questions about the test.

    Please mind that I’m not some crazy person believing TR conspiracy with Nvidia, I don’t even think that TR is particularly sided with Nvidia versus AMD.

    However, I still have those questions.

    1) I wonder why is the Far Cry 4 test just walking around, without shooting anything? No explosions, no flaming arrows, nothing like that. I haven’t played the game, but in FC3 my framerate would drop during the combat (R9 270) by quite high margin, so if I wanted fluid game I had to lower graphic settings.

    It is similar with the Alien game test.

    I suspect that’s because it’s hard to set up a test that would be highly repetitive and would involve fighting. However, you could still shoot something, throw explosives or something like that (in FC4, don’t know about Alien).

    Additionally, in Borderlands you do fight and shoot a lot. It even seems like it’s pretty much impossible to play this test in the exactly same way. So – why do you do that in Borderlands and not in other games?

    2) It seems to me that low overal 285 scores in 99th percentile frame time come from Borderlands results. It’s quite obvious it’s a bug (maybe bad design in game or something in drivers, not important). Not only there is a chance it could be fixed (we can’t be sure about that), but it’s just one game. Personally I would throw away this result from the overal results as not representative to average game.
    Anyway, my question is – can you calculate and post overal results without Borderlands results? I wonder if 285 would still be worse then 960 or maybe the difference would be neglible.

    Thank you in advance. I appreciate your work, really nice review. Although I agree with most commenters that 960 is not worth buying right now, with such low 290 price.

    • ronch
    • 5 years ago

    AMD really needs to fix their load power consumption numbers.. both with CPUs and GPUs.

      • smilingcrow
      • 5 years ago

      It does look bad to be so far behind on both fronts. Doubly so with GPUs as they are on the same process node.

    • thecoldanddarkone
    • 5 years ago

    Not fast enough for me to ditch the 7950 boost. That was almost half the cost :/

    • DragonDaddyBear
    • 5 years ago

    Poor AMD can’t catch a break. The charts don’t look to good in frames/$ or frames/watt. I hope they have something special planned to “Maxwell” (shrink their die and improve performance/watt) their tech. Their GPU is all they really have at the moment that is worth buying.

      • DragonDaddyBear
      • 5 years ago

      So, why the down votes? Are the AMD fans that mad?

      Note: I do own a 7950 and am happy with it.

    • Chrispy_
    • 5 years ago

    This is amazing stuff. Tonga does cool things with only 2GB of RAM and a 256-bit bus. Maxwell is keeping up with a 128-bit bus. What happens when we get 512-bit cards with quadruple the core counts?

      • derFunkenstein
      • 5 years ago

      “What happens when we get 512-bit cards with quadruple the core counts?”

      You'll suddenly have a reason for 2160p @ 120Hz

        • auxy
        • 5 years ago

        Hype HYPE HYYYYPEE!!!

          • derFunkenstein
          • 5 years ago

          I have a feeling that 2160p @ 120Hz would be something to behold, though so would the cost. 😆

      • Ninjitsu
      • 5 years ago

      Your bank account becomes empty.

        • Chrispy_
        • 5 years ago

        Becomes?
        I have a computer, snowboarding, mountain biking, and consumer electronics addiction.
        It’s never not empty 😛

          • anotherengineer
          • 5 years ago

          I have 2 kids and a mortgage.

          It’s always empty lol

            • derFunkenstein
            • 5 years ago

            PREACH

    • Orb
    • 5 years ago

    “On paper, the GTX 960 gives up nearly a full teraflop to the Radeon R9 285—2.4 versus 3.3 teraflops of peak compute throughput. In these directed tests, though, the GTX 960 only trails the R9 285 slightly. That’s Maxwell’s vaunted architectural efficiency shining through again.”

    I don’t think you know what you’re talking about. Peak FLOPS achieved in a benchmark is not related to ‘efficiency’ of the architecture but mostly how well the test is written and a bit on how good the compiler is. Some architectures can do pretty well on a MAD only test like Nvidia and AMD but some not so good like ARM.

    But carry on cause if it’s from Nvidia then it must be made of pure gold. And one more thing, your 99% percentile testing methodology is flawed – it doesn’t prevent you guys to only test a particular subset of the game where a vendor might be faster than other. It’s hard to believe otherwise as it’s pretty obvious you guys like Nvidia more than AMD.

      • Damage
      • 5 years ago

      Hm.

      There is a long tradition in benchmarking of testing flops rates with directed tests on real chips that’s meant as a companion to theoretical peak numbers. Delivered performance matters. Of course, delivered performance in actual user workloads matters even more.

      What seems to be happening in this case is:

      Theoretical: R9 285 is way better than GTX 960
      Directed flops test: R9 285 only delivers a little higher throughput than GTX 960
      FPS averages: R9 285 ever so slightly leads GTX 960
      99th percentile frame times: R9 285 trails GTX 960

      So we’ve covered the gamut from theoretical peaks to directed shader tests to real workloads. I think that’s a pretty good way of covering things, and it does indeed turn out that Maxwell does more in real workloads than one would expect given its deficit in theoretical peak flops.

      What I mean by “efficiency” is that Maxwell seems to squeeze more performance out of its given resources–including shader ALUs, texture filtering units, memory bandwidth, and watts. This efficiency is most fully expressed in the real-world 99th percentile results, but it also shows up in the directed shader tests, where the GTX 960 achieves more of its theoretical max than the R9 285 does.

      I think that should be clear to the impartial reader.

      As for the 99th percentile frame time method being flawed because of the game mix, I don’t see your point. No method of measurement prevents those who employ it from making poor choices otherwise. In the case of frame times vs. average FPS, though, I think we’ve demonstrated that the finer granularity of frame time testing is always a superior way to look at actual in-game smoothness.

      I’ll let the reader judge whether our selection of games is somehow biased. We tested Civ BE with Mantle and Alien Isolation, both Gaming Evolved titles. We tested DOTA 2 and The Talos Principle, both independent. We tested Far Cry 4 and Borderlands: TPS, both TWIMTBP titles. I don’t advocate for perfect balance in marketing partnerships for our gaming tests–what people might want to play is more important–but in this case, we have it.

      I don’t appreciate the insinuation that we chose these games for nefarious purposes, but civility remains in short supply among fanboys, whose motivations continue to baffle and fascinate me at the same time.

        • guardianl
        • 5 years ago

        It seems silly to say this in reply to such a well written and reserved response, but to Orb I say…

        “Bitch, you just got owned.” 😛

        • f0d
        • 5 years ago

        “civility remains in short supply among fanboys, whose motivations continue to baffle and fascinate me at the same time.”

        me too.!!

        we are actually in a pretty good era where you can make almost any choice of graphics card at whatever pricepoint you want to spend at and get a damn good card for your money

        no point saying stuff like the original poster. both the 285 and the 960 are great cards at the same price point

        edit.. err wrote 950 instead of 960... derp

        • sweatshopking
        • 5 years ago

        I’m not sure why you bother with this every time. I’d just make a “our testing methodology” page and just link people there. You spend more time than you should replying to these guys.

          • Redocbew
          • 5 years ago

          That’s one of things that I actually really like about TR. I think Damage and crew here do a really good job of knowing when to respond and when to just nuke someone from space and forget about it. It’s not always a straightforward decision.

            • VincentHanna
            • 5 years ago

            Nuking someone from space, because they voice an opinion is NEVER the correct option.

            • Redocbew
            • 5 years ago

            That’s a bit like saying there are no stupid questions. It’s a nice idea and a good general rule. For most reasonable people it works out that way, but not everyone on the intarweb is reasonable.

            • VincentHanna
            • 5 years ago

            In practice, what it amounts to, at least on this site, is bullying and the suffocation of discourse.

            Also, that's a bad analogy. More like saying that going on a long tirade about how stupid someone's questions are, makes you an A__. If, indeed there are stupid questions, just ignore them, or respond to them appropriately. There is no situation where making yourself look like the bigger jerk is appropriate.

            • Spunjji
            • 5 years ago

            “Suffocation of discourse” is a matter of perspective. I grow weary of seeing that old nag of an argument being trotted out every single time someone shuts down a thoroughly baseless opinion.

            It’s quite simple: Not all perspectives are valid. When they’re not, there is no reason to further entertain them.

            NB: I am referring here to Damage’s specific variant of “nuking from space” which is to thoroughly undermine the accuser with facts. I am not referring to calling the other persona moron and downvoting them without further comment.

            • VincentHanna
            • 5 years ago

            Then I guess we are talking about entirely different things.

            I was referring to terminating a discussion by means of moderatory powers when they FAIL to produce any argument of value to support their opinion…

            I have absolutely no problem with continuing the discussion, utilizing facts. Of course I don’t believe that’s what damage actually means when he threatens to Nuke someone from space. Its certainly not SOP on these forums.

            • JustAnEngineer
            • 5 years ago

            [quote=”Redocbew”<] Not everyone on the internet is reasonable. [/quote<] Penny Arcade called it the [url=http://www.penny-arcade.com/comic/2004/03/19/<]G.I.F.T.[/url<]* * Not safe for work.

            • Techgoudy
            • 5 years ago

            Normally I’d agree, but when you read Orbs post it comes off more as an attack on TR than anything else. I don’t know what it will take to get people to understand that TR isn’t taking sides, and that they can find articles from TR also talking about issues that plague Nvidia cards.

        • Orb
        • 5 years ago

        “Directed flops test: R9 285 only delivers a little higher throughput than GTX 960″

        That directed FLOPS test has zero validity whatsoever. You don’t even know what magic instructions that test is spewing out after going through shader compiler.

        ” it does indeed turn out that Maxwell does more in real workloads than one would expect given its deficit in theoretical peak flops.”

        Real-world workloads are seldom compute limited – they are 99% of the time bandwidth limited. Also, the final FPS also depends on how much redundant work you can throw away – both companies employ some general optimizations for this but you can always do better. Having a better gameworks program also helps as you have time to analyze their code and replace those HLSL/GLSL/GNM shaders with your PTX ones.

        “As for the 99th percentile frame time method being flawed because of the game mix”

        I didn’t say your game mix was flawed or your reported numbers are bogus. But every game doesn’t give out constant fps numbers in every level – they vary a lot and where to measure perf is very important. No one can guarantee if a reviewer’s bias won’t come in the way. Workloads go from compute heave to bandwidth heavy all the time and one architecture might be better in one than other.

          • Damage
          • 5 years ago

          “That directed FLOPS test has zero validity whatsoever. You don't even know what magic instructions that test is spewing out after going through shader compiler.”

          We just reported the results from a couple of ALU-intensive synthetic tests. We're not even reporting flops per se. Just comparing relative delivered performance vs. theoretical. Interestingly, it turns out the results track more closely with real game performance than the theoretical peaks. You can take them or leave them, but I find the data points informative.

          These tests are a well-known optimization target for everyone. I don't see what knowing the instruction mix earns you. You seem to want them to do work they're not meant to do. They are not CPU or even compute benchmarks, just graphics shader tests meant to be similar to what a game might ask a GPU to do.

          “Real-world workloads are seldom compute limited - they are 99% of the time bandwidth limited.”

          “Workloads go from compute heave to bandwidth heavy all the time”

          So which is it? 99% bandwidth limited, or constant shifting of the constraint from compute to bandwidth depending on the workload?

          If you were right about the 99% bandwidth constraint situation, a lot of GPU architects would be out of a job. An intern can press the "give it more bandwidth" button over and over, if that's all that's required. My money is on "it's more complex than that," since texture filtering, memory bandwidth, shader throughput, polygon throughput, CPU/driver constraints, tessellation data flow, and many other things seem to contribute to the performance picture these days. Which is why we play games on the cards we test and report those results.

          “and one architecture might be better in one than other.”

          But how will we know without directed tests of graphics shader performance? 🙂

          As for the issue of *where* in each game we tested, I still don't see how this question relates to 99th percentile frame times versus FPS averages. One must test a portion of gameplay in either case. It is true that game workloads can vary from place to place, though I think you may be overstating the degree to which that matters for cross-GPU comparisons. We attempt to select a reasonably representative section of the game for our testing scenarios--and then we post public videos of the sections we tested, so folks know exactly how we tested. Some measure of trust is involved, of course, but folks are welcome to verify. I believe that is the tradition in such cases.

            • chuckula
            • 5 years ago

            I think we need to do a scatter plot of Patience and Politness.

            I would be way way WAY down on the low-end of both axes, while Damage would probably peg the upper right-hand corner. Hell, we could make it a relative scale based on a unit called “the Damage” where everyone else would only asymptotically approach 1 as they became increasingly Zen.

            • MarkG509
            • 5 years ago

            I thought Damage’s P&P was more of a Step Function. Super nice till you get “nuked from orbit”.

            • DPete27
            • 5 years ago

            “approach 1 as they became increasingly Zen.”

            AMD fanboy comment. (joke about AMD Zen: https://techreport.com/news/27728/details-leak-out-on-amd-first-zen-based-desktop-cpus)

            • Orb
            • 5 years ago

            “a couple of ALU-intensive synthetic tests”

            Only one supposedly – perlin noise. It’s not a good directed compute test. A directed test sets out to find out absolute performance in one metric and I don’t think perlin noise can do a good job at it. It’s more like a real-world perlin noise shader. A bad data point is worse than having no data point.

            “So which is it? 99% bandwidth limited, or constant shifting of the constraint from compute to bandwidth depending on the workload?”

            Both. When I say 99% bandwidth limited, I mean that most of the time CPUs are waiting for memory to feed them data. They are doing nothing most of the time. The term compute heavy is relative – it doesn’t mean that CPUs aren’t waiting for data to comeby, but they are relatively busier.

            “An intern can press the “give it more bandwidth” button over and over, if that’s all that’s required.”
            There are smarter ways to achieve more bandwidth and hence more CPU utilization which is what most architects do! Re-using data in caches and throwing away redundant work are two of the most effective bandwidth reduction techniques on GPUs. Everything on CPU (and more on GPU) depends on memory to serve you data before you can operate on it. Even simple stuff like triangle rasterization needs 9 floats to be fetched from vertex buffers and its generally the memory bandwidth which limits the triangle rate. There are numerous examples like this.

            “My money is on “it’s more complex than that,” since texture filtering, memory bandwidth, shader throughput, polygon throughput, CPU/driver constraints, tessellation data flow, and many other things seem to contribute to the performance picture these days.”

            Everything is constrained by memory bandwidth, and not the processing power of shader cores or even the individual fixed function units. There’s a reason why bilinear texture filtering rate is same as no filtering rate on all GPUs – cause it’s bandwidth constrained.

            “But how will we know without directed tests of graphics shader performance? :)”

            Exactly, but you need good directed tests like beyond3d used to have and similar to what Hardware.fr has. I don’t think Futuremark has good directed tests. I can do a much better job of writing them.

            “We attempt to select a reasonably representative section of the game for our testing scenarios–and then we post public videos of the sections we tested, so folks know exactly how we tested”

            Good to know but a 2 min clip doesn’t justify the complete performance characteristics and variations of a 20-100 hour game.

            • Ninjitsu
            • 5 years ago

            So you want them to test for 20 hours per game?

            My problem is that you’re not really spelling out solutions. We all use these tests as another data point and correlate it with other sites for the larger picture. I know first hand how long it takes to test a single GPU, let alone so many.

            And: If things are 99% bandwidth dependent (from what I’ve been gathering for the last 4 years, that’s not true, it varies between memory bandwidth and actual processing), then Maxwell pulls off more performance with less bandwidth, and by your own volition it implies a more efficient architecture.

            Which is what Scott said too.

            • Orb
            • 5 years ago

            I have a better solution – use machine learning. We already do this internally.

            “And: If things are 99% bandwidth dependent (from what I’ve been gathering for the last 4 years, that’s not true, it varies between memory bandwidth and actual processing)”then Maxwell pulls off more performance with less bandwidth, and by your own volition it implies a more efficient architecture.

            You’re just measuring bandwidth to the main memory bus but there are a lot more memory spaces on a GPU which determine performance – shared memory, L1 texture cache, shared L2 cache etc. They are used internally to place shader constants, interpolated attributes for PS, vertex data , render-targets etc.

            Maxwell has stuff like compressing render-target writes on the fly which saves quite a lot of bandwidth in deferred rendering engines. I am not sure if GCN has a similar optimization to reduce bandwidth. It also has one more important optimization and it works pretty well in some games but not in others. Nvidia hasn’t shared this optimization with the world for whatever reason but I know since I used to work there.

            • Ninjitsu
            • 5 years ago

            Yeah, now I’m not sure you know what you’re talking about (at all). Pixel shader workloads are compute bound, you’re overstating the importance of bandwidth.

          • DPete27
          • 5 years ago

          I doubt TR is playing through the games recording framerates for an extended period of time and then cherry-picks a specific segment that favors AMD or Nvidia to show in their review article. The test sequences are random. Nothing else to it.

        • BestJinjo
        • 5 years ago

        “As for the 99th percentile frame time method being flawed because of the game mix, I don’t see your point. No method of measurement prevents those who employ it from making poor choices otherwise. In the case of frame times vs. average FPS, though, I think we’ve demonstrated that the finer granularity of frame time testing is always a superior way to look at actual in-game smoothness.”

        The problem is your limited game testing choices ironically resulted in an erroneous conclusion this time. Although you might have had good intentions, the frame times of a card like 960 against a 280X even are absolutely horrible as has been proven by other sites:

        http://www.techspot.com/review/948-geforce-gtx-960-sli-performance/

        Also, you failed to address the 2GB of VRAM bottleneck, and how that will impact gamers keeping this card for 2 years, not considering at all that some might enjoy Skyrim/GTA V mods, or that games like Shadow of Mordor and AC Unity/Wolfenstein already use more than 2GB.

        You also threw a generalized statement about AMD cards running hot and loud despite plenty of quiet and cool after-market R9 290 cards. You do realize that Gigabyte Windforce 3X 290X and Sapphire Tri-X R9 290X don't even exceed 75C at load? Yet, in your review you come off as if R9 290 sells only in reference form factor:

        http://www.tomshardware.com/reviews/radeon-r9-290-and-290x,3728-6.html

        Ask yourself how as an honest reviewer that's supposed to provide good recommendations for agnostic PC gamers you managed to not find many flaws with a $200 GTX960 and used the argument that MIRs don't even matter? Just because you find rebates something you do not like does not mean they do not work as long as you fill them out and send them on time. I mean even if you hate rebates, it's plain obvious that 960 is poor product at $200 because not even 2 of those cards at $400 in SLI can beat R9 290/970 on average.

        http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_960_SLI/23.html

        You could have simply stated that the entire $200 GPU market right now is a market best skipped and that the gamer is better off waiting until the 960 drops to $150 or 970 falls in price or 960Ti comes out or wait for a deal on an R9 290 with a quiet after market cooler. Instead you found almost nothing wrong with a GTX960, even failing to account that it barely beats a 660 by 30% despite the latter being 2 years old. Was this a serious review attempt? A far cry from the TR of old.

        • the
        • 5 years ago

        By chance, could Maxwell be using its special function unit or the double precision unit (with single precision casting) alongside the regular single precision ALUs for some of these synthetic compute tests? That’d give a boost to Maxwell’s theoretical numbers and the real world performance difference would mimic the same theoretical delta.

        • sschaem
        • 5 years ago

        To be fair, no test in this review was pure compute
        (to allow a direct comparison of the quoted theoretical peak compute numbers).

        But your conclusion still stands: Maxwell is the more efficient architecture at all levels.

        Now, it's also worth saying that this doesn't make Maxwell-based cards the better product.

        And the game selection will always be a sore point, because it can't be made fair.
        The best you guys can do is spend more time running an even wider range of games, and in the conclusion also offer a way to view the value chart per game, not just the composite.

        Maybe your readers can tell you which games matter to them, and you pick the top 10?

      • MadManOriginal
      • 5 years ago

      You really ought to stop writing replies, you’re starting to look silly. Reviews from all other sites align well with the TR results. No amount of nitpicking over wording is going to change that.

        • BestJinjo
        • 5 years ago

        I am not questioning game specific data at TR. What I am saying is the conclusion had glaring omissions such as not discussing VRAM limitations, or addressing that cards like R9 290/970 are simply worth spending extra $ for, or better yet for a budget gamer wait a bit more for a 960Ti card which is bound to happen as NV won’t leave a gap between $200 and $330 forever. Preference for perf/watt over price/performance is also evident when cards like R9 280 sell for $150 and R9 290 are $250. The reviewer could simply state that to him perf/watt overrides price/performance but blatantly ignoring 1 metric for the other doesn’t show an objective conclusion.

        As far as my comment on frame times, it’s perfectly valid because unlike your statement that other sites agree with TR’s data, it’s clearly NOT the case for frame times. 960 shows worse frame time delivery than R9 280X series in a wider variety of games at other sites.

        If you think we should all be closed-minded and only look at TR for reviews, that's not how the Internet works. Using the 285 as a reference point doesn't exactly instill much confidence in the 960, as the 285 itself is an underwhelming, overpriced, VRAM-gimped product at $200. Only with an absolutely fixed budget of $200 is the 960 the winner, and that's a very closed-minded approach to take for a PC gamer intending to keep this card for 2+ years. I am sure you would agree. Previous cards like the 8800GT 256MB and 8800GTS 320MB were a good history lesson.

      • Ryu Connor
      • 5 years ago

      Please review rule #13.

      [url<]https://techreport.com/forums/viewtopic.php?f=32&t=83924[/url<]

        • Orb
        • 5 years ago

        I don't work for AMD, if that's what you are implying. I just don't like Nvidia 😉
        Anyway, you won't hear from me from now on.

          • Ryu Connor
          • 5 years ago

          No, I don’t think you work for AMD, but the rule does say any hardware company.

          Feel free to continue to contribute. Opposing insight is important.

          • BlackDove
          • 5 years ago

          You seem to be OK with a company that sold multi-GPU cards that gave worse performance than a single card, and 220W CPUs that are outperformed by a 77W one, lol.

      • lilbuddhaman
      • 5 years ago

      Joined: Fri May 30, 2014 1:27 pm

      Gotta lurk more man.

    • Mat3
    • 5 years ago

    AMD should put their Tonga improvements on something like a 270X to have a closer competitor (in terms of performance, power and die size) to this 960.

    • Ninjitsu
    • 5 years ago

    Clearly a 1080p card, and that’s okay for $200. Heck, my GTX 560 cost me that much, and it’s NOT a 1080p card, except for some games.

    Anyway, that said, I’m curious about the price-performance chart: Is it using data at 1080p, 1440p, or a mix of both?

    If it's either of the last two options, I'd question the decision because, again, it's not really meant for anything above 1080p. And yeah, the results of the scatter plot would be similar, but I'm curious whether any bottlenecks at 1440p aren't a factor at 1080p.

    • UberGerbil
    • 5 years ago

    So with low power/noise and hardware decoding of H.265, this looks like the perfect card (for me) to buy now with an eye towards migrating it into an HTPC in a year or two, when I might actually have a 4K living room display and H.265 media to play on it. I hate being on the GPU upgrade treadmill, so having a potential second life for them makes me much more likely to buy. (Of course for a pure HTPC, integrated Intel graphics may be enough — especially 12-24 months from now — but if I have a “free” dGPU available, even if it’s a generation old, I’ll take it).

    One thing related to this: I really appreciate the board lengths included in the table, but it would be useful if there was an indication when they are overheight and if so, by how much. From the product shots it looks like only the MSI is too tall for its britches?

      • cobalt
      • 5 years ago

      "One thing related to this: I really appreciate the board lengths included in the table, but it would be useful if there was an indication when they are overheight and if so, by how much."

      I second this request. Guessing by newegg pics isn't all that reliable, and I happen to own a case without much height clearance, if any.

      • Airmantharp
      • 5 years ago

      The H.265 support will also be important for those wanting to process 4k video; Samsung’s NX1, the first camera to use H.265 as its recording format, requires videographers to pipe videos through their in-house transcoder to render them in a format that leading real-time editors can access natively.

      Having that extra processing power would certainly speed up the process, especially if it’s properly utilized by software vendors (Adobe etc.), and more especially if vendors of consumer-level video-capable cameras start shifting to H.265 in order to reduce the storage load and/or increase the quality of the resulting video.

    • f0d
    • 5 years ago

    Reading the 99th-percentile FPS per dollar makes this seem like a great card for the price.
    Reading the FPS per dollar makes the AMD cards seem like a much better choice.

    Overall I think this is a great card for someone who hasn't purchased a GFX card for a few generations. If your last card was a GTS 450 or a 5770 and you generally get the budget cards and not the higher-end ones, then this is a great card for you, IMO.

    • sschaem
    • 5 years ago

    Good card, terrible price.

    It feel that since the 7 serie cards, nvidia is charging for the brand name more then anything else.

      • Platedslicer
      • 5 years ago

      [quote<]It feels like for as long as nVidia has had better marketing than AMD/ATI (pretty much forever), they've been charging for the brand name more than anything else.[/quote<] There, better I think.

      • jihadjoe
      • 5 years ago

      If we’re talking about pricing, I feel the 600 series was worse.

    • extreme
    • 5 years ago

    Thanks for covering Dota. Performance looks excellent with these mid-range cards. I hope Dota will get included again when iGPUs get reviewed next time; I don't think the picture is as rosy with them. My experience and benchmarks with Dota on AMD Trinity are in this thread:

    [url<]http://www.anandtech.com/comments/8291/amd-a10-7800-review-65w-kaveri/413383[/url<]

    • gerryg
    • 5 years ago

    With the cost differences between G-Sync and FreeSync monitors, perhaps "total system cost" might play into the value conversation for gamers hunting for low- to mid-range new gaming PC builds? Are there any practically identical monitors yet where the only difference is the sync tech?

    AMD needs to get in gear, hoping the next GPU/GCN architecture is much much more efficient, both in performance per clock and power usage.

    • USAFTW
    • 5 years ago

    I wish Nvidia would've priced it lower. I mean, when you think about it, it has a 128-bit memory interface, only 2 gigs of memory, and a modest GPU on a tried-and-true process; compared to the competition, it should be cheaper to make.
    Personally, I would prefer a smaller, more lightweight design in terms of PCB, fan, heat pipes, and DDs that knocks 50 dollars off the list price.
    Otherwise, solid card. I don't understand why Damage would drop Battlefield 4 from the list, though.
    Oh, and AMD is screwed. Resorting to selling an expensive 512-bit, humongous GPU at just 270 dollars doesn't seem like a smart long-term business strategy.

      • puppetworx
      • 5 years ago

      My thoughts also. For the consumer, there isn’t much of a performance progression here but the fact that they’re achieving it with such a small die is great for NVidia.

    • codedivine
    • 5 years ago

    Scott, this card does have conservative rasterization etc. that is missing from the little Maxwell 750 Ti, correct?

      • Damage
      • 5 years ago

      Yeah, should have all the DX12 stuff from GM204.

    • The Egg
    • 5 years ago

    It’s amazing that the 960 can even be competitive with the 280X despite having a 128-bit interface (38% of the bandwidth) and a fraction of the shading power.
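
    Just as a sanity check on that bandwidth figure, a rough sketch (assuming the reference memory clocks of 7 GT/s for the GTX 960 and 6 GT/s for the 280X):

        # Rough memory bandwidth comparison; reference memory clocks assumed.
        def bandwidth_gb_s(bus_width_bits, transfer_rate_gt_s):
            return bus_width_bits * transfer_rate_gt_s / 8.0  # bits -> bytes

        gtx_960 = bandwidth_gb_s(128, 7.0)   # ~112 GB/s
        r9_280x = bandwidth_gb_s(384, 6.0)   # ~288 GB/s
        print(gtx_960 / r9_280x)             # ~0.39, i.e. roughly 38-39%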

      • BestJinjo
      • 5 years ago

      280X is essentially the same product as a December 2011 announced HD7970. What’s more amazing is that for barely more $, an R9 290 delivers almost 50% more performance and doubles the VRAM of the 960. That is amazing. Never could you buy so much more performance for so little more money by stepping up to the next class of card.

        • Damage
        • 5 years ago

        Stop spamming these comments or I’ll nuke you from orbit. You’ve said your bit in five or six threads. Just stop.

    • derFunkenstein
    • 5 years ago

    I was one of those who was skeptical of the leaked specs (which turned out to be 100% on the money) and didn't think that, even with Maxwell's architectural improvements, it would provide any significant performance improvement over what it replaced. I am eating some serious crow here. This is really impressive – makes my GTX 760 look kind of old. I won't be replacing it with a 960, mind you – I'd be better off at 1440p getting a 970 – but I'm impressed nonetheless.

    • LoneWolf15
    • 5 years ago

    Thanks for the great review, guys. It also gave me a chance to see how the 280X stood up to the GTX 970 in a modern setup (and since I went from the former to the latter, that's useful for me).

    One thing of note – this has happened multiple times before, and for the third time in my ownership experience, I'm disappointed by an nVidia decision to put out new high-end cards with all the gaming toys but without the advanced video playback hardware, so that only the later-released mid-level hardware gets it. It gets old; many of us aren't just gamers, and we use that big LCD monitor for video playback too.

    When you pay the big bucks for a card, you expect it to come with all the features.

      • willmore
      • 5 years ago

      My G210 says ‘hi’ as does the GF520/GF610 next to it.

      So, yeah, WTF, nVidia? Maybe they assume that they can do in shaders what they didn’t do in dedicated hardware?

    • Damage
    • 5 years ago

    Asus has decided to price the Strix GTX 960 at $209.99 rather than $214.99. I’ve edited the article to reflect this change.

      • anotherengineer
      • 5 years ago

      And $269.99 CAD for the OC version:

      http://www.ncix.com/detail/asus-geforce-gtx-960-strix-c5-105121.htm

      Edit: hmm, the weird thing is that R9 285 prices have actually gone up to match the GTX 960. I could have gotten an XFX DD 285 last week for $210, sigh.

        • LocalCitizen
        • 5 years ago

        blame Canada!
        actually, due to lowered Canadian Dollar, expect things to get more expensive soon.

          • anotherengineer
          • 5 years ago

          Ya, down to 80 cents just in a few months, after being at par for so long. It’s the oil prices that really hurt our dollar.

          Sigh

          Guess I won’t be buying anything, problem solved, take that economy!!

    • Bensam123
    • 5 years ago

    I don't understand why so much emphasis is being put on power consumption. It doesn't matter; it's like a 100W light bulb that's only on when you're playing games. Maybe if you're talking about mining, that's one thing, and Maxwell does make a difference there, but not for the average user. Same with mobile: it definitely makes a difference in mobile, but not desktop.

    R9-290 is still definitely the best value, especially considering you can get them for around $230 sometimes and they’re selling for $180 on eBay right now.

    That being said, I’m sure the price graph will be quite a bit different in a month when AMD readjusts their prices as per usual. I’m surprised TR doesn’t do chart updates on a regular basis, it would be a good meaningful content injection.

      • cobalt
      • 5 years ago

      It does matter. First, we don’t all have full tower cases with 10 fans. I’m on a fairly compact mATX case myself, and I may go smaller in the future.

      Plus, cooling a hot card insufficiently may affect its performance — see the 290/290X performance variation.

      And then, noise. Yes, it’s possible to cool 300W without going deaf, but it’s also possible to cool 150W with the fans shut down entirely.

      And finally, power efficiency can be a limiting factor — when we get our next set of 300W monster cards, the one with the best power efficiency is going to have the highest performance.

      So yeah, a few bucks of power consumption isn’t critical, and maybe in your case the noise doesn’t change much. But you can’t say power efficiency doesn’t matter as a general rule.

        • Jonsey
        • 5 years ago

        I have an R9 290 in a mATX case and it’s not that loud… the PSU makes more noise. It doesn’t throttle. Granted it’s not a reference design, but most people don’t have those anyway.

          • cobalt
          • 5 years ago

          Oh, of course it’s possible to cool a big hungry card — I believe I used the words “without going deaf” above — it’s just harder, and only the better non-reference designs managed it from what I can tell. 🙂 I was taking issue with the original assertion that power consumption flat-out “doesn’t matter”.

        • sschaem
        • 5 years ago

        I have a 290X in a relatively small case, no issues. And I have one case fan.

        How, you ask? I have a reference model where all the heat is exhausted outside the case.
        And because the case fan brings fresh air in from the front, the card intakes cool air.
        I have the fan set to max 50%, and unless I run FurMark it's at 1GHz, rock steady.

        People have been fooled by review sites about the R9 reference design…

        A $180 (eBay) reference R9 290 is about 50% faster than the $210 GTX 960 (can't find one on eBay).

        And I can assure you the power bill at the end of the year will be no different if you are a normal person with a job (not playing 8 hours a day, every day).
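
        To put a number on that, here's a quick back-of-the-envelope sketch; the hours of gaming per day and the electricity rate are assumptions, not measurements:

            # Annual cost of an extra ~100W while gaming (assumed: 2 h/day, $0.12/kWh).
            extra_watts = 100
            hours_per_day = 2
            dollars_per_kwh = 0.12
            kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365   # ~73 kWh
            print(kwh_per_year * dollars_per_kwh)                       # ~$8.76 per year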

        • mesyn191
        • 5 years ago

        Stop with the hyperbole. You suck at it. You don’t need a full tower case or 10 fans.

        One decent inlet fan and one exhaust fan are all that's needed for decent temps with a 300W GPU and a 120W CPU. Sometimes not even that, if your PSU has a 120mm+ fan on it.

        Power efficiency has nothing to do with highest performance either. Mobile phone GPU’s have incredible power efficiency for instance but their peak performance is a joke compared to even most mid range GPUs from a couple of years ago.

        For desktop PC’s that aren’t SFF or iTX, which is to say nearly all of them, power efficiency really doesn’t matter all that much.

          • cobalt
          • 5 years ago

          "Stop with the hyperbole. You suck at it."

          Apparently I don't suck at it, because you understood it as hyperbole. But then you seemed to interpret it literally anyway, and built a strawman around it, all at the same time as insulting me. Neat trick!

          Let me try this again: power consumption has some relevance, and my point was that you can't make a blanket statement that it's irrelevant. Less power means less heat, so you can use a smaller case, use less cooling, and generate less noise. You seem to recognize this, because you added a qualifier to Bensam's assertion to make it more accurate. He said "It doesn't matter", and you said it "really doesn't matter all that much". Considering my whole point was that ignoring power consumption as an absolute isn't a good idea, I think you and I aren't exactly disagreeing here.

          But that's only one aspect -- cost and cooling. I pointed out an additional situation which you ignored -- in a power-constrained environment, lower power translates to higher performance. Here, you can't even make an argument that it doesn't matter all that much.

          Oh, and your mobile phone GPU argument is ridiculous, and you know it. You say power efficiency has nothing to do with mobile GPU performance, and you make that case by comparing the performance of a 1W mobile GPU with a 150W desktop GPU.

            • mesyn191
            • 5 years ago

            "because you understood it as hyperbole."

            Actually, I didn't. There is no indication in your original post that you intended hyperbole at all. My use of that word was meant to be critical.

            "power consumption has some relevance.... Less power means less heat, so you can use a smaller case, use less cooling, and generate less noise."

            Goalpost shifting; that wasn't your original statement at all. Also, less cooling doesn't necessarily mean less noise. Good 120mm+ fans at moderate RPMs can be fairly quiet, move large volumes of air easily, and fit in many cases. Some of the tinier cases might not be able to fit them, and then you can be stuck with high-pitched, noisy small fans. I did say SFF/ITX cases are an exception for a reason.

            "because you added a qualifier to Bensam's assertion...my whole point was that ignoring power consumption as an absolute isn't a good idea"

            People can be quite argumentative and hyperbolic on the internet, so I added that qualifier so that no one would be all, "what about a system that puts out literally 5000W of heat, huh?". For nearly all situations, even some fairly niche ones like SLI/CF systems that use tons of power and put out lots more heat than most will ever deal with, power consumption isn't really a problem.

            The only time power usage and heat really become an issue is when you're trying to use TECs or phase-change coolers to get sub-ambient or sub-zero temps on your CPU or GPU. I've done single-stage phase change before, and even then it's not that bad in terms of money spent on power at all. My power bill went up maybe $15-18/month over what it was before. I was dumping ~3x the amount of heat into the house too; in summer, room temp went from the mid 70's to the mid 80's F. A significant change, but hardly a burden, especially since it was typically 90F+ outside. Now, multi-stage phase change... THAT is a different story. It's like running a second central AC unit in your house, and it's not unheard of to have to install extra breakers/power lines to do it.

            People were using CF/SLI for years without issue before the R9 290/X showed up, and they typically used more power than a single R9 290/X, yet suddenly the R9 290/X's power consumption and heat are beyond all reason? If you want a single-card example, nV's GF400-series cards had similar temps and power consumption, but almost no one freaked out about it then or now. The few who did complain about the temps and power of the GF400 series weren't anywhere near as vocal or hyperbolic as people are today about the R9 290/X. The hypocrisy is clear-cut and obvious.

            Edit: yes, I realize your comments were about power consumption and heat in general and not the R9 290/X specifically, but those cards are the red-headed stepchild in any discussion about the power consumption and heat of PCs and GPUs these days. It's almost impossible to complain about heat/power consumption and not bring them up, which is why I was using them in my comments as an 'extreme' example. Also, the OP was talking about the R9 290 as a value buy. Scare quotes throughout my post are used for sarcastic purposes.

            "But I pointed out an additional situation which you ignored -- in a power-constrained environment, lower power translates to higher performance."

            No, I addressed it; why else do you think I brought up mobile phone GPUs? Desktops generally aren't a power-constrained environment, though. Even SFF/ITX desktops can handle 95W CPUs/GPUs. It's the really small stuff like NUCs that runs into any issues with power/heat.

            "Oh, and your mobile phone GPU argument is ridiculous, and you know it."

            You made a general-case statement without qualifiers. But even then, it's nearly unheard of for power/heat to cause performance issues in a desktop setting. The R9 290/X with a reference cooler and a bugged default fan profile is about the only 'common' case that exists, and it's easily fixed by raising the fan limit 5% or so for the few people where it really was an issue.

            • cobalt
            • 5 years ago

            You quoted me -- "power consumption has some relevance.... Less power means less heat, so you can use a smaller case, use less cooling, and generate less noise." -- and answered, "Goalpost shifting; that wasn't your original statement at all."

            What? That was almost the entire content of my original post. First, I pointed out that case sizes vary (thus implying smaller cases are more favorable to less power-hungry cards) and included a mention of fan quantity, then explicitly called out the noise involved in cooling a card which draws more power. That's a pretty damn direct match to my statement quoted above. If you call that goalpost shifting, I think you're comparing my summary against someone else's post.

            You also quoted me -- "But I pointed out an additional situation which you ignored -- in a power-constrained environment, lower power translates to higher performance." -- and replied, "No, I addressed it; why else do you think I brought up mobile phone GPUs? Desktops generally aren't a power-constrained environment, though."

            Just pulling TDPs off of Wikipedia: GTX 280, 480, 780, Titan Black, 2900 Pro, 5870, 6970, 7970 -- all are 250W. That's where high-end single-chip solutions have been living for a number of years in the desktop space. If your architecture is more power efficient than your competition and you're constrained to 250W, you win.

            I'll repeat myself: I think we're in general agreement. I was just trying to add some context to the "power consumption doesn't matter" discussion, and frankly I think I was rather polite about it. I apologize if you were offended by my use of the phrase "10 case fans", and I'll try to be more diplomatic in my future posts.

            • mesyn191
            • 5 years ago

            "That was almost the entire content of my original post."

            Nope. Only a few sentences of your OP talked about that stuff, and only as a springboard to make hyperbolic statements about how much heat/power the R9 290/X uses.

            "If your architecture is more power efficient than your competition and you're constrained to 250W, you win."

            Power efficiency isn't performance, it's performance per watt, and if power efficiency were the most important thing in the desktop market we'd all be using phone GPUs, SLI/CF probably wouldn't exist, and no one would care about gaming at 4K or even 1440p. Power efficiency is a nice thing to have in the desktop market, but it's performance that actually matters most of all in that form factor.

            "I think we're in general agreement."

            We really aren't, and I think I've been as clear as I can over these last few posts. I'm not offended per se by you or your posting style. I'm just sick and tired of the hyperbole and FUD over the R9 290/X's power usage, heat, and noise.

        • Bensam123
        • 5 years ago

        You don't need a full tower case with a bunch of fans to get rid of the heat, especially with a rear blower (which you should have if you have a mini case to begin with).

        If you've looked at other reviews on TR, you know noise isn't a big issue with a good cooler, even while dissipating an extra 100W. Chances are that if you're running a game you'll have headphones or speakers on, which will cover up what little extra noise it emits, so this is a moot point anyway. I suppose you could be weird and play games without sound… You could argue that.

        We aren't talking about cards on the bleeding edge of performance; these are mid-range cards, and even then you can dump a lot of power into a card through a PCIe connector… That's what they do with SLI/CrossFire cards.

      • Krogoth
      • 5 years ago

      It mostly has to do with thermal management.

      Less power consumption = less heat dissipation = smaller HSFs/lower-RPM fans required = less noise.

      Just for reference, my old system (OC'ed Q6600 with an HD 4850) was a loud SOB that needed that level of cooling/noise to keep it stable. My current rig (3570K with a 660 Ti) is whisper-quiet at idle, and the 660 Ti only whirs a little bit after a long gaming session.

      It is a night-and-day difference. I'm not going back to the loud-SOB nonsense if I can help it.

        • Bensam123
        • 5 years ago

        OC’d systems are a completely different ball of wax. Thermal envelopes go wildly out of proportion when you start OCing.

          • Krogoth
          • 5 years ago

          Q6600 B3 chips at stock were toasty units to begin with.

      • derFunkenstein
      • 5 years ago

      if all else is equal (and at $200, it is) then I think I’d go with the one that will cost me the least to run. Not that electricity is a huge expenditure for me, but why cost myself any extra money if I don’t have to?

        • September
        • 5 years ago

        plus this makes a lot of sense in a mITX or HTPC case, especially if it is in a cabinet or other place with poor airflow.

        HEAT BAD.

          • derFunkenstein
          • 5 years ago

          Yup. I can’t think of a single instance in which you’d want to go with the card that uses more power.

        • auxy
        • 5 years ago

        Because you can get an R9 290 for the-price-of-lunch more, and it demolishes the 960? ( *´艸`)

          • derFunkenstein
          • 5 years ago

          Where are you getting lunch? Lunch for how many people?

            • auxy
            • 5 years ago

            Hehe. (*‘∀‘)

            Lunch for a few days, then! It’s really not that much more tho. $40!

            I know people who piss away $40 on junk they don’t need or even want every day. I’m not saying that you or I are one of those people, but c’mon, it’s $40. For like double-or-more the performance. Really? (´・ω・`) Like, really? Are you really gonna make the argument for the GTX 960 here?

            • derFunkenstein
            • 5 years ago

            no, for $250 I could be convinced to get an R9 290.

            The 960 isn’t a good fit for my 1440p monitor in the first place.

            • sweatshopking
            • 5 years ago

            I bought my MSI 4G 290 for $290 CAD, and I'm very happy with it. I wish I could pull a constant 60fps in Rome 2 @ 1080p, though, but I don't think there is a card going that can do that.

            • BestJinjo
            • 5 years ago

            Asus R9 290 – $240 USD on Newegg:
            http://www.newegg.com/Product/Product.aspx?Item=N82E16814121842&cm_re=asus_r9_290-_-14-121-842-_-Product

            PowerColor R9 290 - $250 CDN at NCIX:
            http://www.ncix.com/detail/powercolor-radeon-r9-290-turbo-24-104017-1356.htm

            In Canada, a GTX 960 starts at $250 CDN. TR brushes aside these amazing values on the R9 290 and assumes that most of them run hot and loud, which isn't even remotely true. They also fail to talk about the horrible frame times of a GTX 960 vs. an even slower R9 280X, never mind an R9 290:

            http://www.techspot.com/review/948-geforce-gtx-960-sli-performance/

            Even smaller sites are testing 8 games, while TR removes Crysis 3 and BF4 from their review, with no DAI either. I really expect much better professional reviews, with solid conclusions and mention of poor frame times, from a site that, along with PCPerspective, brought gamers' attention to this very issue! Also, failing to discuss the 2GB VRAM limitation despite many games already exceeding 2GB of VRAM. Really poor judgement call recommending this card for anything other than $150.

            • Bensam123
            • 5 years ago

            Not a reputable resource, but on eBay you can get them for around $180 right now, used.

        • Bensam123
        • 5 years ago

        It's next to nil, and the way AMD prices things, you'll be looking at a pretty decent discount on competitive chips. That's why it really doesn't matter, but TR usually doesn't keep tracking prices and updating these charts, so something like this 'sticks', even if in the future you're talking about a $50 difference for pennies on your electric bill.

        The 970, for instance, was a 'great' deal for a month; then AMD adjusted their prices and it wasn't. This will probably end up the same way, but we only go back to these reviews and look at the charts at the end, which don't change.

      • sschaem
      • 5 years ago

      I saw the same thing; my takeaway from the review data:

      At retail, both cards are about the same price, ~$220 (290 vs. 960).

      Biggest difference: the 960 is about 50% slower, only comes with 2GB, and has no FreeSync.

      The surprise actually is how the 290 is often even faster than the $100-more-costly GTX 970.

      And that gap grows at higher resolutions.

        • cobalt
        • 5 years ago

        Uh, where are you finding R9 290’s for $220? I see on Newegg mostly these: $310 ($290MIR), $290 ($260MIR), and $310 ($290MIR). Amazon prices are about the same. That’s a perfectly good price, of course, but not $220. (Edit: I do see a temporary promo code on one of them which gets you to about $240.)

          • auxy
          • 5 years ago

          [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814150697&nm_mc=AFC-C8Junction&cm_mmc=AFC-C8Junction-_-na-_-na-_-na&cm_sp=&AID=10446076&PID=3938566&SID=<]Currently seeing $249 at Newegg.[/url<]

          • south side sammy
          • 5 years ago

          In the past 2 weeks I saw a 290 for $180 at the egg. You have to be vigilant. I’ve seen prices change as much as 3 times on the same item in a matter of hours. You need to buy it when you see it without thought that it will be there or be the same price in ten minutes time.

      • f0d
      • 5 years ago

        100W light bulbs? Are you crazy?!
        Get some LED bulbs in there!

        • mesyn191
        • 5 years ago

        Not cost effective yet even though there have been large cost drops over the last few years. CFL’s still have the best bang for the buck.

          • DragonDaddyBear
          • 5 years ago

          They are when you factor in that you have to buy one LED bulb for every 5+ CFLs over 10 years. That alone puts them at parity.

            • mesyn191
            • 5 years ago

            CFLs have typical lifespans of 7-10 years, so I don't know what you're talking about.

            An anecdotal "I had some CFLs die earlier than that" is not an effective rebuttal to this, either, BTW. I've seen LED bulbs die in less than a year too. That doesn't mean that all, most, or any significant number of LED bulbs will die in less than a year. Just as with cars, appliances, or other electronics, sometimes you get a lemon. Or an edge-usage case.

            • DragonDaddyBear
            • 5 years ago

            Show me the numbers over the life of an LED vs. a CFL vs. an incandescent. Do the math and you'll find that after ~10 years (roughly half an LED, 5 CFLs, and 10 incandescents) the numbers are slightly in favor of an LED at worst (depending on energy costs and expected bulb life).
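
            For what it's worth, here's one way that math can be sketched out; every price, wattage, and usage figure below is an assumption (rough circa-2015 numbers for a 60W-equivalent socket), so the exact crossover will vary:

                # 10-year cost of one 60W-equivalent socket at 3 h/day and $0.12/kWh (assumed).
                hours = 3 * 365 * 10
                dollars_per_kwh = 0.12

                def ten_year_cost(watts, bulb_price, bulbs_needed):
                    energy_cost = watts / 1000.0 * hours * dollars_per_kwh
                    return energy_cost + bulb_price * bulbs_needed

                print(ten_year_cost(60, 1.00, 10))  # incandescent: ~$89
                print(ten_year_cost(14, 2.00, 5))   # CFL:          ~$28
                print(ten_year_cost(9, 8.00, 1))    # LED:          ~$20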

            • mesyn191
            • 5 years ago

            I'm going by the manufacturers' data, which you can find on the wiki or even the product box. 7-10 years isn't an unusual claim for bulb lifespan, nor unheard of IRL.

            CFLs still win out for now, particularly if you need high-lumen-output bulbs. Think 100-120W incandescent equivalents or better, which still run around $20 a pop and aren't even all that powerful, BTW. I have several 400W-equivalent CFLs that have been running non-stop for 3 years now for lighting in my garage, and they only cost around $15/bulb when I got them.

            http://www.amazon.com/LimoStudio-Studio-Photography-Fluorescent-Spectrum/dp/B005FRCUHY/ref=pd_sim_sbs_hi_2?ie=UTF8&refRID=1AR9BHKYY76PY0NA8FAN

            LED bulbs that come anywhere near that light output are impossible to find at even 2, 3, 4, or even 5x that price.

            • BlackDove
            • 5 years ago

            If you like damaging your retinas.

          • Andrew Lauritzen
          • 5 years ago

          I agree (but still prefer LEDs myself). Regardless, his point is true for either technology 🙂

        • jihadjoe
        • 5 years ago

        100W worth of LEDs!

        • Bensam123
        • 5 years ago

        And that only makes a big difference when you have a dozen of them on at any given moment. How many GPUs do you plan on running in tandem? Octo-SLI?

        You could make an argument for a quad-SLI rig, and I would give it to you; power draw does matter in that case… A lot of other things also matter in that case… but you know, anecdotal and such.

        • BlackDove
        • 5 years ago

        You should read what I wrote in this thread about LED and CFL bulbs.

        They destroy your eyes. WLEDs and CFLs should be banned, not incandescents.

        [url<]https://techreport.com/forums/viewtopic.php?f=29&t=101307&start=30[/url<]

      • travbrad
      • 5 years ago

      I don’t think it’s really power consumption itself that is the issue, but rather the other factors that come from having higher power consumption, ie more heat and louder fans.

      The 290 has great performance for the price, but it’s much louder than the other cards tested here, and while it may be fine in winter the extra 100W into your room is a lot of extra heat during the summer months. Even my GTX660 generates a lot of heat while gaming, making my office room noticeably warmer during the summer, and a GTX660 has pretty much the lowest power consumption of all the cards they tested here.

        • sweatshopking
        • 5 years ago

        get a better card. my 4g card is pretty much silent.

        • Bensam123
        • 5 years ago

        100W isn't a lot, even in the summer. People are vastly overestimating the impact of 100W on your life; that's why I have a problem with the power consumption bit in the article. It simply doesn't matter.

        It's not even a 100W light bulb on all the time; it's on whenever you play games, and you yourself can easily judge how often that happens.

        Mobile and compute tasks (like mining) are the only places it matters.

          • travbrad
          • 5 years ago

          Granted my home office isn’t huge (12x12ft), and depending on the room you may not see a big difference, but in my case I DO feel a noticeable difference. I’ve even used a thermometer and the room gets 4-5F warmer when gaming with the door open, and about 8-9F with the door closed. There is no over or underestimating going on here, unless my thermometer is faulty.

          I’m not in the market for a new graphics card right now anyway, but if I was I’d have to think over whether the 290s heat is worth the incredible performance it offers for the price. I wouldn’t get a 2GB card like a 960 at this point either though.

          Not many people use 100W incandescents these days either, so that seems like a strange comparison to make. I have 2 lightbulbs in my office room, which use about 30W of power combined.

            • Bensam123
            • 5 years ago

            It's a useful comparison. There are generations raised around incandescents, so you can get a pretty good idea of the impact on your power bill as well as the amount of heat they emit.

      • Andrew Lauritzen
      • 5 years ago

      It may not matter a huge amount to a consumer looking to buy a desktop GPU to throw into a big case (noise still a factor of course), but it’s *hugely* important for the companies involved in terms of their ability to target various form factors and generally be profitable.

      Chip area and transistors are also reported and relevant in the same sort of way – you probably don’t care if you’re just looking for the best GPU to buy for Game X, but they are an indicator of the profitability and flexibility of the company to produce different SKUs and earn money.

      I suspect that on TR there are as many folks interested in those sorts of industry and architecture trends and such as there are who are seriously considering buying a GTX 960 (for instance). As a 980 owner I have zero interest in buying one, but the relevant numbers about it are still interesting for those reasons.

      • anubis44
      • 5 years ago

      “I don’t understand why so much emphasis is being put on power consumption. It doesn’t matter, 100w light bulb that’s only on when you’re playing games. Maybe if you’re talking about mining, that’s one thing and Maxwell does make a difference there, but not for the average user. Same with mobile, definitely makes a difference in mobile, but not desktop.”

      The only plausible explanation behind this blatantly pro-nVidia review, which shows a disappointingly cut-down GTX960 unable to beat the 5 month old R9 285 for the same price, is (and I hate to say this, but…) that TechReport must be getting some kind of benefit for being so pro-nVidia. I don’t know what that could be. I am loath to imagine they’re getting any money for this extreme bias, but I can’t see too many alternatives. Nvidia is clearly trying to lock customers into higher priced products with proprietary technologies, like PhysX and G-sync. Why would any intelligent person willingly support these monopolistic practices? Even if AMD cards were a little MORE expensive than the nVidia equivalents, I’d stay away from nVidia for this reason alone. Yet time and again, we hear TechReport recommend nVidia cards, even when all else is equal or in AMD’s favour. Why? Why do that? nVidia doesn’t need any help marketing, so please cut this nonsense out.

      • ronch
      • 5 years ago

      Given two chips that perform similarly but one eats less power, which would you choose if price wasn’t a factor? In reality though, performance and power efficiency command higher prices. That’s how the semiconductor industry works and it does make sense. And it applies not just to semiconductors. Finesse/refinement costs money.

      • VincentHanna
      • 5 years ago

      Speaking as the proud owner of 2 GTX 580s in SLI, I agree with you.

      In-game performance is everything.

      • EndlessWaves
      • 5 years ago

      A little thing called global warming.

      There’s no good reason for a graphics card to draw 100W+. Limited performance increase but a reduction in power draw is a more responsible choice than cranking up the clockspeed to deliver as much performance as possible at the cost of efficiency.

      With the general fashion for smaller and smaller systems we need to get back to the situation where a top of the range card pulls 30W rather than 300W. It’ll mean a temporary stall in performance but it’ll be better in the long run, resulting in more desirable systems and greener computing.

      • Prestige Worldwide
      • 5 years ago

      I’m with you on this one. For me, here are the points I look at when making a purchase, in order of descending importance.

      1. Price
      2. Performance
      3. Comparative feature set outside of just being able to run games at X FPS (Shadowplay in hardware vs Game DVR in software, etc)
      4. Temps
      5. Power consumption

      To me, power efficiency is really more of a "nice to have" feature. The GTX 970 I'm running at 1465 MHz is quite nice; I appreciate the efficiency of the architecture and its large overclocking headroom. The GTX 670s I used to run were also pretty good performers at the time, considering the sub-200W TDP.

      The GTX 295 I used before was a monster and I got it used for $100 with waterblock installed. It heated up my waterloop with its 2 GT200b chips.

      I actually have no clue how power-efficient the HD 4870 512mb I had before that was. It wasn’t even part of the decision.

      I saw GTX 280 launch at $650 along with the GTX 260 at $399 one week and the HD 4870 launch for $299, laughed, and pulled the trigger.

      Granted, I live somewhere where power costs $0.08/kWh, which is among the lowest prices in North America. I can understand those whose electricity costs are higher feeling like this is of higher importance.

      But I honestly don’t think it has ever been a deciding factor in any of my hardware purchases.

    • Krogoth
    • 5 years ago

    960 is great at 2Megapixel gaming and for all intents and purposes a direct replacement for the old-fangled 660 unlike the throw-away binned GK104-based 760.

      • derFunkenstein
      • 5 years ago

      For all intensive porpoises I have no idea what “old-flanged” means.

    • MathMan
    • 5 years ago

    Every once in a while when reading a GPU review I think to myself: “it’s probably a whole lot of effort to write these, but other than that, I could do this.”

    And then I read that adorable introduction about the world’s smallest chainsaw and I go “nah, forget it.”

    That was a fun read!

      • Pez
      • 5 years ago

      I think Scott should speak with IBM and see if they can fab him one 😀 Things should have moved on from spelling their own names in atoms by now, right?

      [url<]https://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV1003.html[/url<]

    • south side sammy
    • 5 years ago

    As I look around the web, it seems the "fix" is in. Nobody, not even HardOCP, is testing this card above 1080p. Yes, I know it's this card's target point. So what? The power of Nvidia… I personally would like to see those higher resolutions pop up just to have a look.
    I also noticed that the card really falls off with massive frame drops in some benchmarks. You'd think they would learn to keep the minimums up to playable levels by now..?

      • puppetworx
      • 5 years ago

      [url=http://www.pcper.com/reviews/Graphics-Cards/NVIDIA-GeForce-GTX-960-2GB-Review-GM206-199<]PCPer[/url<] tested at 2560x1440. It doesn't fare well in frame times against the competition.

        • MathMan
        • 5 years ago

        So I followed your link to check those bad 1440p results and… They’re nowhere to be found. There’s one game where the 960 does considerably worse at 1080p than the 285, but that’s about it.

        And other than that, they are always very similar.

          • puppetworx
          • 5 years ago

          Try looking at the Frame Variance and FPS by Percentile charts.

          TechPowerUp (http://www.techpowerup.com/reviews/ASUS/GTX_960_STRIX_OC/29.html) shows the R9 285/280X on top at higher-than-1080p resolutions as well, though they use game benchmarks and average FPS rather than real-world game tests and frame times.

          Also, I missed it before, but as Damage points out, they did test DOTA 2 (https://techreport.com/review/27702/nvidia-geforce-gtx-960-graphics-card-reviewed/6) and The Talos Principle (https://techreport.com/review/27702/nvidia-geforce-gtx-960-graphics-card-reviewed/10) at 2560x1440, and the 280X and 285 indeed come out on top.

          Edits: Added the 280X to the TechPowerUp summary.

            • MathMan
            • 5 years ago

            Looking at TPU numbers confirms what I’m seeing at PCPer against a R9 285:
            1080p: GTX960 ahead by 1%
            1440p: behind by 1%
            4k: behind by 5%

            Technically one is faster than the other, but no one could distinguish those cards in a double-blind test, so they're basically even.

            As much as these head-to-head benchmarks are fun to read, it’s worth keeping things in perspective.

        • south side sammy
        • 5 years ago

        I’ll check that out.

      • Damage
      • 5 years ago

      Maybe read our article, then?

        • south side sammy
        • 5 years ago

        I did read it. I saw the Dota thing at higher res… a mention, anyway. Did I miss another? I kind of miss the games people are playing, like Crysis 3, Battlefield 3 and 4, etc.

        If you run PhysX games, how did the card do with it enabled? And why so many CPU-intensive games in the test?

        Glad I didn't mention not liking my card pumping heat into my case and having to rely on case fans and power supply fans to pull heat out as the motherboard and card sizzle before their fan kicks in. It's also a cheap way to promote less power consumption without really giving that much of a real advantage. I would rather have the fans constantly running, keeping the temps down from the get-go; the whole system will stay cooler that way.

        I gotta say straight up… right now it retails for about $200. Not worth it. Maybe a buck and a half, and that would be pushing it, IMO. I also don't like the terrible drops in frames. Reminds me of my 660, the second-worst card I ever owned. This card, with its limitations, at that price point… I see the Nvidia crew sitting around a table saying, "Let's see what we can get away with…"

        At a time when the PC market is in a great transition heading to higher resolutions (everybody on the web is pushing it), I just don't know… Why a card like this?

          • Damage
          • 5 years ago

          The tested settings for each game are in the article. 2560×1440 was widely used in order to ensure GPU-limited testing.

          I don’t think this selection of games is particularly CPU intensive.

          I think more people are playing DOTA 2 and Civ than Crysis 3 and BF4, but you can check the stats yourself. Just wanted to include newer games. Will consider going back to old FPSes in the future if the people want it.

            • sweatshopking
            • 5 years ago

            While I agree with using games more people are playing, and I think you're right about the numbers, the thing is that Dota runs on the Source engine. Almost any discrete card will run it well.

            • anotherengineer
            • 5 years ago

            Still good info to know, lots of games run on the source engine. People check out reviews to see what gains they can expect, from purchasing a new card.

            I think a good card review should test some of the most common game engines.

      • derFunkenstein
      • 5 years ago

      Way to not look at the graphs. lol

      I don’t think this is really a decent choice for a 1440p GPU. It’s faster than the 760 I’m using at the same resolution, but I’m already not running certain things comfortably. My pick would be a 970.

        • Airmantharp
        • 5 years ago

        It’s faster than my GTX670, essentially the same as your GTX760, and I’m running a single 2GB card at 1600p. Turning off a few settings in demanding games isn’t a huge compromise to make given that I have a 60Hz monitor.

          • Krogoth
          • 5 years ago

          The 670 is faster than the 760 (same memory bandwidth, but the 760 only has 6 SMX blocks instead of the 7 found in the 670). The 960 is roughly in the same range as a normal 680, with the 770 nudging ahead by a little bit.

          • derFunkenstein
          • 5 years ago

          They’re all in a line as far as performance goes, and I can’t in good conscience recommend something that’s only marginally faster than what I have for a 1440p monitor. It won’t be long before it’s not enough.

          note that I got my monitor about 7 months after I got my video card. If I’d bought them both close together (or had any idea I’d get a 1440p display) I’d have gone for a stronger GPU.

      • jihadjoe
      • 5 years ago

      Aside from the first game, every other test seemed to be at 1440p. If you need higher resolutions, TPU tested at up to 4K (no frame times, though).

      • f0d
      • 5 years ago

      going by the final 99th percentile FPS the 960 actually has a much smoother framerate than the competition

      going through all the charts again the 960 definitely has a smoother framerate and less framedrops than its amd competitor (you did click amd mid right?)

    • the
    • 5 years ago

    Huh, I wasn't expecting clock speeds this high on the GM206, though the difference between my expectations and what nVidia actually shipped was only 50MHz or so. I guess that's all that was necessary to keep it competitive with the R9 285. nVidia is making out like a bandit here, as the GTX 960 has to be cheaper than the R9 285 to make.

    AMD still has hope in this segment as Tonga isn’t fully enabled and can go to 4 GB (or 6 GB) of memory with relative ease. I think this is why we haven’t seen a R9 285X yet is that AMD was waiting on the GTX 960 to launch to reposition Tonga. I have a feeling that Tonga will be rebranded the R9 370 and R9 370X in the coming months as AMD refreshes the high end. Until then, nVidia is the company to go to for a video card less than $200.

    The one thing I’d like to see is a direct comparison between two GTX 960 in SLI vs. a single GTX 980 with clock speeds normalized. Considering that the GTX 960 is ‘half’ a GTX 980, it’d be a good test of how well SLI scales compared to a single chip with the same specs.

      • EzioAs
      • 5 years ago

      TechPowerUp tested SLI. When SLI works and at 1080p, it’s pretty much the performance of a GTX 980/970.

      [url<]http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_960_SLI/[/url<]

        • the
        • 5 years ago

        Thanks for the find.

        Glancing through, the results are pretty much what I expected: GTX 980-class results at lower resolutions and a clear breakaway when the resolution reaches 4K. The weird thing is that the GTX 980 rarely approaches being twice as fast, though the GTX 960 SLI setup more often flirts with that mark. I would have presumed that the SLI overhead was larger than scaling everything on-die. The data doesn't align quite where you'd expect it to.

          • Damage
          • 5 years ago

          The data are just a bunch of FPS averages. Of course AFR SLI will scale well in that specific way, but that doesn’t mean you’re getting a better gaming experience. I wrote some articles on this topic you might find interesting. 😉

            • auxy
            • 5 years ago

            Yep. A lot of sites like this “logical increments” site I’ve been hearing about lately are recommending SLI of weaker cards over stronger single cards and it just makes me roll my eyes. (They’re also recommending FX-6350 over a Core i3, which is just… sigh.)

            • the
            • 5 years ago

            True. Perhaps you know of when a site that does good frame pacing benchmarks will have results ready? 🙂

            I'm pretty confident that the frame times would be better on the GTX 980 due to SLI overhead. I also had the hypothesis that the same overhead would let the GTX 980 generally come out on top in terms of raw frame rate regardless of resolution. There are also a few SLI oddities in the presented data: a few games still don't like multi-GPU scaling and show negative results. Then there are some broken games like Assassin's Creed Unity thrown in that probably haven't been patched up enough to merit making a video card buying decision on. Actually, has Assassin's Creed been patched up enough to make a buying decision on it as a game?

            OTOH, GM206 -> GM204 is a good indicator of how well nVidia can scale their silicon, for GM200 predictions. Back-of-the-envelope calculations would put a 3072-shader, 96-ROP, 384-bit-wide GM210 chip at ~7.47 billion transistors and a 569 mm^2 die size. The die size would be in line with recently leaked pictures of the GM200 (https://techreport.com/news/27684/nvidia-big-maxwell-gpu-appears-online).
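
            For reference, that estimate falls out of a simple linear extrapolation of the GM206-to-GM204 step (assuming the published figures of roughly 2.94B transistors / 227 mm^2 for GM206 and 5.2B / 398 mm^2 for GM204):

                # Extend the GM206 -> GM204 delta one more step to guess at "big Maxwell".
                gm206 = (2.94, 227)   # (billions of transistors, die size in mm^2)
                gm204 = (5.20, 398)

                big_maxwell = tuple(b + (b - a) for a, b in zip(gm206, gm204))
                print(big_maxwell)    # roughly (7.46, 569): ~7.5B transistors, ~569 mm^2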

        • BestJinjo
        • 5 years ago

        Nowhere close to a 980. Did you even bother looking at the scores?
        http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_960_SLI/23.html

        The glaring issue is that $400 960 SLI is only about as fast as a single R9 290 that costs $250.

          • EzioAs
          • 5 years ago

          Like I said, if SLI works (properly). That means you can't use the performance summary charts. If you look at the games where SLI works, such as Ryse: Son of Rome, the SLI GTX 960s perform somewhere between a GTX 970 and a GTX 980, and in COD: Advanced Warfare, two GTX 960s even beat the GTX 980. These games prove my point that they perform around the GTX 970/980.

          Now, in games where SLI doesn't work properly, say AC: Unity or Wolfenstein: The New Order, the GTX 970 and 980 will definitely stomp all over the two GTX 960s. Since these games are also taken into the performance summary scores, they will obviously drag down the relative performance of the SLI GTX 960s. Then you also have to factor in resolution. It seems like as the pixel count increases, the gap between 2x GTX 960 and a GTX 980 widens (which is no surprise, tbh).

          Hope I made it clear.

    • Flapdrol
    • 5 years ago

    Right. The R9 285 was a complete sales flop, for obvious reasons, and now nvidia thinks it’s a good idea to release a card that performs and costs exactly the same.

      • cobalt
      • 5 years ago

      That's not quite a fair comparison — they're targeting the 285's new price-performance ratio, not the one that it was released with. As the article points out, the 285 is almost 20% cheaper than when it was released.

      Edit to clarify: I don't think the 960 is exactly an amazing performer for the price right now — which I assume is part of the point you're trying to make — but I suspect it will actually sell okay.

    • guardianl
    • 5 years ago

    Great review!

    So FC4,
    – 960 57 FPS
    – 270X 52 FPS

    Percentile times are a little more in favor of the 960, but considering the 960 tested here is overclocked, and I'm assuming the 270X in the charts is stock while overclocked versions are going for as little as $155 (http://www.newegg.com/Product/Product.aspx?Item=N82E16814127761&cm_re=radeon_270x-_-14-127-761-_-Product), I'm not sure the 960 looks as good in real life.

    The 960 seems like a pretty meh card overall. Other than perf/watt, it's in the same performance segment as a Radeon 270X, which has been readily available at $180 or less for over a year.

    Like everyone else, I don't really understand the Radeon 285. What purpose does it serve in the retail channel?
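
    To make that concrete, a crude FPS-per-dollar take on the Far Cry 4 numbers above; the ~$200 street price for the GTX 960 and the $155 270X deal are assumptions based on the prices quoted in this thread:

        # Crude FPS per dollar from the FC4 averages quoted above (assumed prices).
        print(57 / 200.0)   # GTX 960:  ~0.29 FPS per dollar
        print(52 / 155.0)   # R9 270X:  ~0.34 FPS per dollar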

      • sweatshopking
      • 5 years ago

      285 sucked.
      but you’re right. the 270x is a better value. this is why I usually end up with AMD. they’re just cheaper.

    • Eggrenade
    • 5 years ago

    I nominate the Xbone for the unit of graphics horsepower to be used for all cross-architecture comparisons and rough assessments.

    • Ryhadar
    • 5 years ago

    Wow, really impressive. I’m held back from upgrading my 7870, though, due to the whole G-Sync versus Freesync thing going on. I want a new monitor more than I want a new GPU, and I definitely want some of that variable refresh rate goodness.

    I'm leaning more towards a FreeSync/Adaptive-Sync monitor, mainly due to pricing, but also because it's part of the DisplayPort standard and not tied down to a GPU manufacturer. But the 900-series cards, like my 7870, only come with DisplayPort 1.2 and not the 1.2a (and up) necessary for FreeSync/Adaptive-Sync to work. So if I get a GTX 960, I'm stuck with G-Sync as my only option.

    I wonder if others are in a similar situation?

    Anyway, I’m not even thinking about upgrading until summer, so I’ve got plenty of time to mull it over.

    • EzioAs
    • 5 years ago

    Thanks for the review, I wish you could benchmark more games though. Still, I can take a look at other sites.

    The card is almost what I’d expect after the rumours and leaked benchmarks. I don’t think it’s a bad card for people starting a new mid-range system. However, for current GTX 660 owners like me, who usually buy mid-range cards, I’d say skip this card if you’re looking to upgrade in the near future and get something higher end like a GTX 970 or an R9 280X/290 (if you don’t mind the power consumption much).

    Yes, they cost significantly more, but they also offer significantly higher performance and more memory (and the GTX 970 is also more power efficient). At least, that’s what I’ll be doing in 2-3 months. Or, if you’re still satisfied, keep your current card and wait for the next generation.

    TLDR: IMO, the GTX 960 is not a worthwhile upgrade from a GTX 660 (or GTX 760).

      • Pez
      • 5 years ago

      Agree, nice review Scott but some more games would be nice 🙂

      Would be nice to see some Dragon Age results in there, since it’s incredibly demanding. The 2GB and 128-bit limits of the card really start to show in more demanding games and at higher resolutions/settings.

        • Techgoudy
        • 5 years ago

        What disappoints me is the 2GB memory configuration. I would have liked to see 2GB and 4GB models or just 4GB models.

          • EzioAs
          • 5 years ago

          I think there were some rumours that there would be cards with 4GB of memory. They’ll probably be available later. I personally think you shouldn’t bother with the 4GB version; just get a GTX 970 or the speculated 960 Ti.

            • Krogoth
            • 5 years ago

            4GiB makes no sense for the 960. It just doesn’t have the processing power to handle content and resolutions where 4GiB of VRAM begins to matter. It is about as useful as an old GeForce4 MX with 256MiB of VRAM (yes, that did exist).

            • Concupiscence
            • 5 years ago

            I really thought the GeForce4 MX cards topped out at 128 MB. Huh. Even that was garish overkill… though I suspect the 960 could squeeze at least a bit more life out of 4 GB.

            • Krogoth
            • 5 years ago

            The 960 is somewhat slower than a fully fledged GK104, which doesn’t stand a chance when games start to “require” 4GiB or more of VRAM.

            • Concupiscence
            • 5 years ago

            I think you’re right. This is just a weird product, and it’s not even twice as fast as the GTX 750 Ti, which consumes half the power.

            • Krogoth
            • 5 years ago

            Architecture efficiencies drop off when you try to super-scale the same underlying design, which is why the 980 (GM204 is roughly four GM107 cores glued together) isn’t 4x faster than a 750 Ti, and likewise the 960 (GM206 is roughly two GM107 cores glued together) isn’t twice as fast as a 750 Ti.

            • Concupiscence
            • 5 years ago

            Sage observations. I feel silly for quibbling, and in and of itself it’s a fine part. All of that said, I’m definitely not buying a 960 to replace my 660 Ti…

            • yokem55
            • 5 years ago

            The games that will have a problem with this don’t need to use a full 4GB in order for 4GB to be useful – they just need to use more than 2GB. And turning down the processor-intensive graphics options (SSAO, tessellation, PhysX, etc.) doesn’t have the same impact as having to crank down the texture and shadow resolution. I think a $250 4GB version of this would be a very reasonable option.

            And VRAM goes a long way in extending the life of a video card. My 2GB 560 Ti is still a reasonable GPU ~3.5 years on. Had I gotten the 1GB version, I probably would have had to upgrade a year and a half ago while missing out on some of the crazy texture packs for Skyrim and the like. It even tends to handle things better than the 1.5GB 580s of the era, simply because it has more RAM to work with. This is why I’m really hoping an 8GB version of the 970 comes around. If I’m spending several hundred dollars on a board, I’d like to only have to do so every 3-4 years.

            But maybe this is Nvidia’s point of limiting the VRAM options in this round of GPU’s – knee-capping the usable life span to turn out a shorter upgrade cycle….

            • EzioAs
            • 5 years ago

            I agree. I didn’t say it made sense, just that I think there were rumors about it last month.

            I am not impressed with the GTX 960. There are just much better options currently available at around the same price.
            Yes, it’s more power efficient than Kepler and AMD’s current lineup, but those cards are also more efficient than their predecessors, so even the current highest-end single-GPU setup wouldn’t draw much more than 400W.

            Most people with a mid-range gaming PC probably have more than enough PSU headroom that higher-end GPUs shouldn’t pose a problem (unless it’s a cheap generic PSU). Most new builders asking for advice or reading reviews tend to buy proper 500W units and above.

            In the end, the card will suit people looking for a really silent, low-power, or SFF system, but that is not what the average gamer today is running.

      • Pville_Piper
      • 5 years ago

      I’m leaning that way as well. My friend just picked up a 970 and he is hitting 100+ FPS at (I think) 1080p. Real tough to decide. With Microsoft saying that current cards won’t be able to fully utilize DX12, it really makes the decision hard. Go cheap for 1-2 years and get close to 60 FPS out of the 960, or go expensive for 1-2 years before the DX12 titles kick in with the 970. Hmmmnnn…

        • EzioAs
        • 5 years ago

        I’m thinking DX12 is unlikely to be relevant for the first year or two after its release. Might as well just get the GTX 970 now and upgrade a couple of years later.

    • yokem55
    • 5 years ago

    Looks like a decent chip, but with only 2GB of VRAM, this thing is going to have a hard time as texture packs creep up, leading to a much shorter prime lifespan. Any word on 4GB versions?

      • sweatshopking
      • 5 years ago

      I personally wouldn’t buy a 2GB card at this stage.

        • anotherengineer
        • 5 years ago

        Well, it depends whether you have a 1080p monitor and will be sticking with it for the next while.

        Some of the info out there shows 2GB is enough for 1080p.
        [url<]http://www.guru3d.com/articles-pages/crysis-3-graphics-performance-review-benchmark,8.html[/url<]
        [url<]http://www.guru3d.com/articles-pages/battlefield-4-vga-graphics-performance-benchmark,11.html[/url<]

        Hopefully TR starts charting VRAM usage in future reviews.

          • travbrad
          • 5 years ago

          How long is that “next while” though? A lot of people hang onto the same GPU for a few years, and I could easily see games using more than 2GB in the next few years, even at 1080p.

        • travbrad
        • 5 years ago

        I haven’t run into any problems at 1080p with my 2GB 660 running out of VRAM so far, but yeah if I was buying a new card at this point I’d want more than that just for future games. There are already a few games where I’ve seen 1.5-1.8GB usage, and that will surely go up.

      • BlackDove
      • 5 years ago

      Pretty sure it’s enough for 1920x1080, which is still common enough for people interested in a $200 GPU.

      If you want to run things all on ultra at higher resolutions, you need a more powerful GPU than this anyway, so more VRAM would be a waste for systems that just need a good, cheap GPU.

        • jessterman21
        • 5 years ago

        It is enough for 1080p right now, but in a year, who knows? Plus, DSR is so easy that it’s tough not to want to downsample games… which increases VRAM usage a lot.

          • BestJinjo
          • 5 years ago

          This is incorrect. Some modern games already show cards with 2GB of VRAM running out of VRAM:

          AC Unity – [url<]http://abload.de/img/acu_19203bkni.png[/url<]
          Shadow of Mordor – [url<]http://abload.de/img/som_1920kukvw.png[/url<]
          TPU confirms the same – [url<]http://www.techpowerup.com/reviews/MSI/GTX_960_Gaming/22.html[/url<]

          Also, you won’t be able to access the highest resolutions in Titanfall and Wolfenstein NWO without incurring massive stutters. Future games will continue to demand even more VRAM as we get 2nd- and 3rd-gen console ports.

            • jessterman21
            • 5 years ago

            Okay, except for one broken game and one with an Ultra setting that uses uncompressed textures.

        • _ppi
        • 5 years ago

        You have to realize consoles now have 8 GB shared memory. And these consoles are now becoming the baseline for graphics quality.

          • f0d
          • 5 years ago

          [quote<]You have to realize consoles now have 8 GB shared memory. And these consoles are now becoming the baseline for graphics quality.[/quote<]
          With 6GB usable (2GB reserved by the OS), and that pool is then shared by the CPU and GPU. Lots of my games nowadays use 3-4GB of memory on the CPU side alone, and that would leave around 2-3GB of memory for the graphics card if it were a console with 8GB.

          I think people are really overdoing the “8GB on consoles” thing, and while I do agree it could become an issue eventually, I’m not going to buy something now for something that might happen in the future. When my games start to require 4GB+ of VRAM, then I will consider a 4GB card; they will be cheaper by then. Until then, my GTX 670 2GB has been just fine.
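          A quick budget using those estimates (the OS reservation and CPU-side figures are the commenter’s guesses, not official numbers):

```python
# Back-of-the-envelope console memory budget from the figures above.
total_gb = 8            # unified memory pool on current consoles
os_reserved_gb = 2      # estimated OS reservation
cpu_side_gb = (3, 4)    # estimated game-logic/asset usage on the CPU side

usable_gb = total_gb - os_reserved_gb
graphics_budget = [usable_gb - c for c in cpu_side_gb]
print(f"Usable after OS: {usable_gb} GB")
print(f"Left for graphics: {min(graphics_budget)}-{max(graphics_budget)} GB")
```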

            • auxy
            • 5 years ago

            Keep in mind that the consoles do not have this division of RAM vs. VRAM. Duplicating assets creates lots of overhead.

          • Krogoth
          • 5 years ago

          You realize that it is shared memory and non-graphical content consumes a lot of memory as well?

          It is a non-issue for the intended market of the 960. The 960 doesn’t have the power to handle content that “requires” 4GiB or more of memory.

      • NovusBogus
      • 5 years ago

      I’m hoping they’ll do a Ti or Boost edition in the $250 range, with moar rams and maybe a bigger bus. It’s not a horrible card for the price point but they’ve left a huge open gap between the $200 (realistically $180ish) 960 and $330 970.

    • sweatshopking
    • 5 years ago

    I couldn’t get the drop-down menu to let me pick The Talos Principle in Opera 25.

    Also, I picked up a 290 and it’s doing well. I bought the 4GB MSI with two fans and put it into a microATX case. Temps are good, and it chugs along nicely with my 4790K. TBH, I had a Maxwell GPU, the 860M, and before that a 290X, and I liked the 290X enough to pick up a 290 when building another tower. The performance per dollar is excellent, and since I’m in Canada, the price disparity is even more pronounced than in the USA. I’d buy one again, even given the current pricing and situation. I wouldn’t buy an AMD CPU pretty much ever, but I seem to prefer their GPUs to Nvidia’s. Not sure why, except perhaps that for FPS/$ they seem to come out on top in Canada.

    • Ratchet
    • 5 years ago

    I was wondering which card I should upgrade to. The only things I care about are (in order): price, performance, and features.

    I was saving for a 970, thinking that was the best option these days, but the scatter charts in the conclusion seem to indicate the 285? I would love to see an article with a massive performance review looking at the current graphics cards and how they compare to one another in the areas that truly matter.

      • sweatshopking
      • 5 years ago

      Wait till the new AMD stuff drops if you’re not in a major rush.

      • Flapdrol
      • 5 years ago

      I’d keep saving for the 970

        • derFunkenstein
        • 5 years ago

        I agree. The 960 is nice – a big improvement over where we were – but it’d be tough to continue to ignore the 970.

          • Ratchet
          • 5 years ago

          But the dilemma I’m facing is that the 290 seems to be better than the 970 (I said 285 in my first post, meant to say 290). Considering my priorities, where’s the upside of the 970? Why spend a lot more for a card that is pretty much right on par performance wise, according to the chart?

            • Damage
            • 5 years ago

            The R9 290 is a great deal right now, as long as you don’t mind the extra ~100W of power use vs. the GTX 970. Seriously cheap for the hardware involved and its performance.

            • Ratchet
            • 5 years ago

            The extra 100W is no concern. Like I said: price > performance >>> features. Everything else is nothing.

            I wanted to move away from AMD, though. I’ve been using ATI stuff since… well, a long time, because they always seem to have the price/performance advantage right when I’m upgrading, and I want to see how things really are with Nvidia’s high-end hardware these days. I have some cash put aside for the 970, but this new scatter chart threw a monkey wrench into my plans. I can’t justify the extra $100 for a card that performs the same.

            A nice big update to the GPU landscape with regards to price/performance would sure be nice.

            • mesyn191
            • 5 years ago

            Other than the features, which you said you don’t care about, you won’t see a difference in single-card usage. Even if you did care about the features, they aren’t worth the price difference.

            • sweatshopking
            • 5 years ago

            Yeah, I have a 290, and it’s great. I couldn’t justify spending the extra $100 for a 970.

      • BestJinjo
      • 5 years ago

      Price and performance top of your list?

      The Asus R9 290 for $240 is the best card you can get right now. About 45% more performance and double the VRAM compared to a GTX 960, for $40 more. Performance will be very close to a 970.
      [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814121842&cm_re=asus_r9_290-_-14-121-842-_-Product[/url<]

      Performance: [url<]http://www.techpowerup.com/reviews/MSI/GTX_960_Gaming/29.html[/url<]

      The power consumption difference between an R9 290 and a 970 in a total system configuration is not that large: [url<]http://www.techspot.com/review/946-nvidia-geforce-gtx-960/page7.html[/url<]

      Noise levels are easily taken care of by the after-market DCUII cooler: [url<]http://www.youtube.com/watch?v=pYxmW4JiJs8[/url<]

        • Ratchet
        • 5 years ago

        I’ve decided to wait and see what AMD does in the coming months. If whatever they come out with is a good Price/Performance value I’ll look at the way NVIDIA adjusts their prices to compete and see what the best bang for my buck is then.

    • DPete27
    • 5 years ago

    (Didn’t read the WHOLE article) So by using faster memory, Nvidia overcame the narrow memory interface?

      • BlackDove
      • 5 years ago

      No, it’s a new compression technique they used to overcome it.

      That’s how GM204, with a 256-bit bus, can outperform GK110, which had a 384-bit bus.
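      For a rough sense of how compression helps a narrow bus, here’s a sketch; the 7 GT/s data rates are the usual GDDR5 speeds for these cards, and the ~1.3x compression factor is an assumption for illustration, not a measured figure:

```python
# Back-of-the-envelope memory bandwidth, raw vs. "effective" with delta color
# compression. The compression factor is illustrative; real savings vary by
# workload.

def raw_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Raw bandwidth in GB/s: (bus width in bits / 8) * transfer rate in GT/s."""
    return bus_width_bits / 8 * data_rate_gtps

compression = 1.3   # assumed average benefit of Maxwell's color compression

for name, bits in [("GTX 960 (GM206, 128-bit)", 128),
                   ("GTX 980 (GM204, 256-bit)", 256),
                   ("GTX 780 Ti (GK110, 384-bit)", 384)]:
    raw = raw_bandwidth_gbs(bits, 7.0)   # all three use ~7 GT/s GDDR5
    # Kepler's older compression is treated as the baseline (no bonus) here.
    eff = raw * compression if "GM" in name else raw
    print(f"{name}: {raw:.0f} GB/s raw, ~{eff:.0f} GB/s effective")
```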

      • Krogoth
      • 5 years ago

      Maxwell architecture efficiencies and having very fast GDDR5 chips.

      Nevertheless, the 960 does show its limitations when you try to push 4 megapixels with healthy doses of AA/AF.

    • Pville_Piper
    • 5 years ago

    Nice review… Gonna make it harder to decide whether I want to bust the bank and get the 970 or go light and get the 960 when I upgrade from my 660. With Microsoft saying that current GPUs won’t run the full DX12 feature set, the 960 might be a good interim solution. By the time games fully utilize DX12, the 960 will be a couple of years old and ready to be upgraded. The one thing that scares me is that the 960 feels like the 750 Ti: a very capable card, but full-on FPS games are gonna run away from it pretty quickly with that small memory bus. Maybe they will come out with a Ti version that has a wider bus and more RAM.
    Any idea on the BF4 Ultra frame rates for the 960?
    Also, if you can, I would love to see this GPU used in the testing of the G-Sync screen you are testing. Most reviews of G-Sync seem to focus on the 144Hz realm, and to me it seems like G-Sync could help a mid-range card run higher settings more fluidly.

      • DancinJack
      • 5 years ago

      Not sure it’s ULTRA, but at least some AA….

      [url<]http://www.techpowerup.com/reviews/MSI/GTX_960_Gaming/11.html[/url<]

        • Pville_Piper
        • 5 years ago

        Most of the sites show about a 13-15 FPS increase for full Ultra, although one site had the frame rate pegged at only 38 FPS at 1080p!

      • jessterman21
      • 5 years ago

      Hang on to your 660 a little while longer! 35-40% more performance isn’t worth the money.

      The inevitable 1280-core, 192-bit, 3GB GTX 960 Ti is the one I’m waiting for.

    • ultima_trev
    • 5 years ago

    It only took about three years, but one can now finally have HD 7970 performance for 200 dollars and under 140 watts. I finally have a card that won’t overtax my 350-watt power supply and is worthy of replacing my HD 7850… I only wish AMD had managed to do such a thing first. 🙁 Sorry AMD, but this is my next GPU purchase.

      • JustAnEngineer
      • 5 years ago

      You could have purchased a good power supply for $50.

      • tipoo
      • 5 years ago

      *Scratches head*
      So why not a new power supply in all that time? If power draw and efficiency and the whales are concerns, a higher efficiency one could likely offset whatever you’re using now.

      • BlackDove
      • 5 years ago

      Seriously? I don’t like AMD much, but get a real PSU.

      • ronch
      • 5 years ago

      My man, it’s time you caught up with the rest of us and got a 600W or better PSU.

        • ultima_trev
        • 5 years ago

        Can’t afford the PSU or the beefed-up electricity bill that would come with it.

          • MadManOriginal
          • 5 years ago

          PSU capacity has nothing to do with power draw, other than slight differences due to efficiency. Components draw the power that they will draw.

            • Anonymous Coward
            • 5 years ago

            I believe a higher-capacity PSU will be less efficient running at low load than a PSU that is correctly sized for the load, the quality of the PSUs being the same.

            • JustAnEngineer
            • 5 years ago

            [url=http://www.seasonicusa.com/Platinum_Series_XP2.htm<]Mine[/url<] is more efficient at any load than yours. :p [url<]http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story&reid=326[/url<]

            • anotherengineer
            • 5 years ago

            I have the 460W 😉

            [url<]http://www.seasonicusa.com/Platinum_Series_FL2.htm[/url<]

            • MadManOriginal
            • 5 years ago

            True, but in reality with the idle power draw of today’s components, you need to look at absolute numbers too. The difference is sub-10W. Turn off a light or your power vampire electronics and you get the same power savings.
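            For a rough sense of scale, here’s the annual-cost math; the hours per day and the $0.12/kWh rate are assumptions, so plug in your own numbers:

```python
# Rough annual electricity cost of a given power-draw difference.
def annual_cost_usd(delta_watts: float, hours_per_day: float,
                    usd_per_kwh: float = 0.12) -> float:
    kwh_per_year = delta_watts / 1000 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

# ~100 W extra at the wall while gaming (assumed 3 hours/day)
print(f"100 W for 3 h/day: ${annual_cost_usd(100, 3):.0f}/year")
# ~10 W extra at idle, machine left on around the clock
print(f"10 W for 24 h/day: ${annual_cost_usd(10, 24):.0f}/year")
```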

    • anotherengineer
    • 5 years ago

    Well from a quick review of the review, the fast memory speed Nvidia has gone with seems to help quite a bit.

    Hopefully AMD will implement the same memory chip speed, or HBM, in Trinidad (the Curacao successor) and in future mid- to high-end cards.

    The other huge “LOAD” power savings seems to be variable 3D voltage and frequency Nvidia has implemented.
    [url<]http://www.techpowerup.com/reviews/MSI/GTX_960_Gaming/33.html[/url<]

    1.030V @ 1190MHz vs. the R9 285 at 1.19V @ 985MHz is quite a difference indeed.
    [url<]http://www.techpowerup.com/reviews/Sapphire/R9_285_Dual-X_OC/29.html[/url<]

    If AMD implemented the same thing in their VBIOS, I bet load power consumption would be a lot closer, which would make sense since the chips are made at the same foundry on the same process.

    Damage, I may have skipped over it, but does the GTX 960 support DisplayPort 1.2a+ and adaptive sync?
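    To put rough numbers on that voltage/clock comparison, here’s a first-order sketch using the usual CMOS dynamic-power relation (P ∝ C·V²·f); it ignores differences in die size, transistor count, and workload between the two chips, so it only shows how much the operating point itself contributes:

```python
# First-order CMOS dynamic power scaling: P_dyn is proportional to C * V^2 * f.
# Only the quoted operating points are compared; chip-to-chip differences in
# capacitance and activity are ignored, so this is not a power prediction.

def v2f(volts: float, mhz: float) -> float:
    return volts ** 2 * mhz

gtx960 = v2f(1.030, 1190)   # Strix GTX 960 operating point quoted above
r9_285 = v2f(1.190, 985)    # R9 285 Dual-X operating point quoted above

print(f"Energy per switching event (V^2): {(1.190 / 1.030) ** 2:.2f}x higher on the 285")
print(f"V^2 * f term (capacitance held equal): {r9_285 / gtx960:.2f}x higher on the 285")
```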

      • sweatshopking
      • 5 years ago

      no adaptive sync.

        • BlackDove
        • 5 years ago

        You sure? Seems to be a working option in the drivers on GPUs as old as the 560ti.

          • Ryhadar
          • 5 years ago

          Are you sure you’re not seeing Adaptive V-Sync?

            • BlackDove
            • 5 years ago

              Lol, I did. Oops.

        • anotherengineer
        • 5 years ago

        Thanks

    • Duct Tape Dude
    • 5 years ago

    Out of curiosity, why did the 270X get cut out of the power consumption charts? I was following performance among the 960, 285, and 270X (since that’s closest to the 7870 I have) and was a bit confused when it suddenly dropped out.

    Nothing major–just would be nice to follow the same competition throughout the review.

      • Damage
      • 5 years ago

      Time constraints for testing.

        • Duct Tape Dude
        • 5 years ago

        Oh ok.

        For anyone else curious here’s the 7870:
        [url<]https://techreport.com/review/22573/amd-radeon-hd-7870-ghz-edition/10[/url<]

        And here’s the 270X:
        [url<]https://techreport.com/review/25466/amd-radeon-r9-280x-and-270x-graphics-cards/10[/url<]

        Note the testing rig is different! But it’s still good for ballpark figures.

      • Essence
      • 5 years ago

      Why do you think? To make the 960 turd look better.

      Those “games” aren’t demanding, which is why the card just about edges the 280 by a couple of frames but loses to the 285 and especially the 280X. The 280X is the same price as the 960, but the 280X is superior in every way, including at higher resolutions and in more bandwidth-demanding games, for 40W of extra usage, which is nothing. And remember, this is [b<]aftermarket vs. reference[/b<].

      The 280X and 285 are actually cooler and quieter, judging from the charts.

      AnandTech not doing a review looks a bit fishy; it makes you wonder what Nvidia’s demands to review sites were (maybe overclocked vs. stock?).

      [quote<][b<]The Strix 960 was also first to arrive on our doorstep, so we’ve tested it most extensively versus competing GPUs[/b<][/quote<]

      Not surprised one bit that you think testing the fastest 960 cards vs. stock AMD cards is OK, and with only a couple of games.

      [quote<][b<]We “sometimes use a tool called FCAT” to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average[/b<][/quote<]

      That’s not what everyone used to preach, and it’s most likely because Nvidia doesn’t allow it any more, because their cards suck or don’t work with it.

    • anotherengineer
    • 5 years ago

    Scott.

    Top of pg. 2, I think you have to flip the memory sizes around on the 980 and 960.

    Cheers

      • Damage
      • 5 years ago

      Fixed!
