Nvidia’s GeForce GTX 950 graphics card reviewed

So far, 2015 has given us a bounty of high-end graphics cards. If you want to spend somewhere north of three hundred bucks on a graphics card—and heck, maybe three times that—then you have plenty of shiny new options from which to choose. I’ve spent a borderline scandalous amount of time testing graphics cards in 4K resolutions in the past six months.

Playing with the fancy toys is nice, but it’s not very realistic. The vast majority of PC gamers play at resolutions of 1080p or below, and for the most part, you can get away with using a much more affordable graphics card when you “only” have a couple of megapixels to paint.


Gigabyte’s GeForce GTX 950 Xtreme Gaming

That’s pretty much the rationale behind the GeForce GTX 950, a new entry in Nvidia’s graphics card lineup that promises a nice mix of price and performance. The GTX 950 isn’t especially revolutionary in technology terms. It’s based on a somewhat hobbled version of the same GM206 graphics chip used in the GeForce GTX 960. So no new silicon here. But the GTX 950 does bring a full Maxwell 200-series feature set to cards priced well under $200, and its performance is more than credible. In fact, as Nvidia points out, this card is “faster than any current console,” with the obvious targets being the Xbox One and PlayStation 4. As a result, the GTX 950 ought to be a popular choice for an awful lot of PC gamers, especially those folks with 1080p displays.

If you keep on top of these things, you may be aware that the GTX 950 has been on the market for over a month now. Our review has been lingering deep in the bowels of Damage Labs for . . . reasons. However, our delay has been somewhat fortuitous. Not only do we have our hands on three different versions of the GTX 950, but we also have a couple of variants of the competing card from the red team, the Radeon R7 370. We should be able to give you a deeper look not just at these mid-range GPUs but also at the particular cards you might be buying.

           | Base clock (MHz) | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Stream processors | Memory path (bits) | Memory transfer rate (Gbps) | Memory size | Peak power draw | Price
GTX 750 Ti | 1020 | 1085 | 16 | 40  | 640  | 128 | 5.4 | 2 GB | 60W  | $119.99
GTX 950    | 1024 | 1188 | 24 | 48  | 768  | 128 | 6.6 | 2 GB | 90W  | $159.99
GTX 960    | 1126 | 1178 | 32 | 64  | 1024 | 128 | 7.0 | 2 GB | 120W | $199.99
GTX 970    | 1050 | 1178 | 56 | 104 | 1664 | 256 | 7.0 | 4 GB | 145W | $329.99
GTX 980    | 1126 | 1216 | 64 | 128 | 2048 | 256 | 7.0 | 4 GB | 165W | $549.99

The table above offers a look at how the GTX 950 fits into the current GeForce lineup. The mighty-mite GTX 750 Ti has dropped to $119.99 in order to make room for its new big brother. Since it’s based on a larger chip with more of everything, the GTX 950 represents a nice step up at $159.99.

At just 90 watts, the GTX 950 technically would only require a single six-pin aux power input to do its thing. Board makers like Gigabyte, whose Xtreme Gaming version of the 950 is pictured above, have taken things up a notch, as they tend to do. The cards we have from both Gigabyte and EVGA feature an eight-pin input. The extra juice might ensure more overclocking headroom. The Xtreme Gaming also happens to look like Batman’s hovercraft, which I find strangely appealing.

EVGA offers no fewer than four versions of the GTX 950, and the FTW edition shown above is the most extreme, with base and boost clocks of 1203 and 1405MHz, respectively. (Incidentally, those are exactly the same rated clock speeds Gigabyte has assigned to the Xtreme Gaming.) The FTW edition is selling for $179.99 at Newegg, a price that may be justified by the presence of a “full-sized” board and cooler combo that measures just over 10″ in length.

Asus has taken a bit of a different approach with some of its mid-range graphics cards recently, and the Strix GTX 950 follows suit. This card has a capable dual-slot cooler, but it sports a six-pin power input and has somewhat more conservative clock frequencies of 1165/1355MHz. Don’t discount the Strix’s performance based on those specs just yet, though. We’ve found that actual clock speeds—and thus performance—with today’s GeForces depend quite a bit on cooling and board power delivery, and Asus has a good record on this front.

Since the Strix was the first example of the GeForce GTX 950 to arrive in Damage Labs, we’ve used it to represent this GPU in the bulk of the performance results on the following pages. That said, we have tested the various flavors of the GTX 950 against one another, as well.

The Radeon R7 370

This look at the GTX 950 allows us to devote some attention to its closest competitor, the Radeon R7 370, which also made its debut in recent months. The R7 370 is fascinating because it’s a reasonably competitive modern graphics card based on the AMD Pitcairn GPU, a chip first introduced aboard the Radeon HD 7870 and 7850 in March of 2012. Pitcairn also had starring roles aboard the Radeon R9 270X, R9 270, and R7 265.

Three and a half years seems like a long time in the high-tech realm, and Pitcairn shows its age by not supporting some features introduced in newer Radeons. For instance, the R7 370 doesn’t have the TrueAudio DSP for the acceleration of in-game sound effects, and its video decoder and encoder hardware isn’t ready for 4K data rates or encoding types. Even more notably, the R7 370’s display hardware isn’t capable of working with FreeSync variable-refresh displays, one of the niftiest innovations in PC gaming in recent years. (The GTX 950 can work with variable-refresh displays based on Nvidia’s G-Sync standard.)

        | Base clock (MHz) | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Stream processors | Memory path (bits) | Memory transfer rate (Gbps) | Memory size | Peak power draw | Price
GTX 950 | 1024 | 1188 | 24 | 48 | 768  | 128 | 6.6 | 2 GB       | 90W  | $159
R7 370  |      | 975  | 32 | 64 | 1024 | 256 | 5.7 | 2 GB, 4 GB | 110W | $149

One other notable feature Pitcairn lacks is delta-based color compression, which recent GPUs have used to squeeze more throughput out of their given memory bandwidth. The R7 370 makes up for this shortcoming the honest way: by using a 256-bit-wide path to its GDDR5 memory, double the width of the GTX 950’s memory interface. In fact, the R7 370 sports wider, more robust hardware in almost every respect compared to its competition—and, in a classic AMD move, its starting price is ten bucks cheaper than the GeForce, too.
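
To give a flavor of what that compression buys newer chips, here is a toy sketch of the general idea behind delta-based color compression: store one anchor value per tile of pixels plus small differences, which need far fewer bits when neighboring pixels are similar. This is only an illustration; the actual hardware schemes are proprietary and considerably more sophisticated.

```python
# Toy illustration of delta-based color compression (not the actual hardware algorithm):
# keep one anchor pixel per tile and encode the rest as small deltas.
def delta_encode(tile):
    anchor = tile[0]
    return anchor, [p - anchor for p in tile[1:]]

def delta_decode(anchor, deltas):
    return [anchor] + [anchor + d for d in deltas]

tile = [200, 201, 199, 200, 202, 200, 198, 200]   # similar neighboring pixel values
anchor, deltas = delta_encode(tile)
assert delta_decode(anchor, deltas) == tile        # lossless round trip
print(anchor, deltas)   # the small deltas need far fewer bits than full 8-bit values
```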

Oddly enough, the tables above will tell you that the R7 370 uses a slightly cut-down version of the Pitcairn GPU. The full chip has 1280 stream processors and 80 texels per clock of filtering power. Evidently, AMD prefers to disable some of Pitcairn's units and crank up clock speeds instead. The R7 370 is almost identical to the Radeon R7 265, yet the 370 has a 50MHz higher boost clock and slightly faster memory. AMD says it has implemented “new features at the micro-code level which enable higher overall performance.” I suspect most of the changes have to do with the PowerTune dynamic clocking algorithm, which the firm has refined incrementally over time.

Speaking of clock speeds, MSI has taken things even further with its R7 370 Gaming 2G, slapping one of its formidable coolers onto Pitcairn and raising the boost clock to 1050MHz. MSI is asking $159.99 for the Gaming 2G at Newegg.

Sapphire has chosen a more compact dual-slot cooler and a more conservative 985MHz boost clock for its Nitro R7 370, but this card has an ace that the MSI card lacks: 4GB of GDDR5 memory. Cards in this class have gotten along quite well with 2GB of RAM to date, but doubling up to 4GB could give the Nitro a bit of future-proofing. The Nitro is going for $169.99 at Newegg, but Sapphire also sells a 2GB version for 20 bucks less. Similarly, MSI offers an R7 370 Gaming 4G for $179.99.

Our testing methods

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
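
For the curious, here is a minimal sketch of the kind of three-frame moving average described above, using made-up frame times in milliseconds; our actual processing scripts may differ in the details.

```python
# Minimal sketch of a three-frame trailing moving average over frame times.
# In practice the input would be a Fraps frame-time log; here it's a made-up list (ms).
def moving_average(frame_times, window=3):
    """Average each frame time with the frames immediately preceding it."""
    smoothed = []
    for i in range(len(frame_times)):
        chunk = frame_times[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw = [16.7, 16.5, 16.9, 42.0, 16.6, 16.8]   # one isolated spike
print(moving_average(raw))                   # the spike gets spread across three frames
```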

We didn’t use Fraps with Civ: Beyond Earth or Battlefield 4. Instead, we captured frame times directly from the game engines using the games’ built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor: Core i7-5960X
Motherboard: Gigabyte X99-UD5 WiFi
Chipset: Intel X99
Memory size: 16GB (4 DIMMs)
Memory type: Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s
Memory timings: 15-15-15-36 1T
Hard drive: Kingston SSDNow 310 960GB SATA
Power supply: Corsair AX850
OS: Windows 10 Pro
                         | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB)
MSI Radeon R7 370        | Catalyst 15.7.1 |      | 1050 | 1425 | 2048
EVGA GeForce GTX 650     | GeForce 355.60  | 1059 |      | 1250 | 1024
Zotac GeForce GTX 750 Ti | GeForce 355.60  | 1033 | 1111 | 1350 | 2048
Asus Strix GTX 950       | GeForce 355.69  | 1165 | 1355 | 1653 | 2048
MSI GeForce GTX 960      | GeForce 355.60  | 1216 | 1279 | 1753 | 2048

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Sizing ’em up

Multiply the peak clock speeds by the peak per-clock rates of these graphics cards, and you end up with this:

                           | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s)
EVGA GeForce GTX 650       | 17 | 34/34 | 0.8 | 1.1 | 80
Zotac GeForce GTX 750 Ti   | 18 | 44/44 | 1.4 | 1.1 | 86
Asus Strix GeForce GTX 950 | 43 | 65/65 | 2.1 | 2.7 | 106
MSI GeForce GTX 960        | 41 | 82/82 | 2.6 | 2.6 | 112
MSI Radeon R7 370          | 34 | 67/34 | 2.2 | 2.1 | 182
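
For reference, those theoretical peaks fall straight out of the spec tables: multiply the per-clock resources by the rated boost clock. Here is a minimal sketch of that arithmetic for the Asus Strix GTX 950; real cards often run above or below their rated boost, so treat these as ballpark figures.

```python
# Minimal sketch: theoretical peaks for the Asus Strix GTX 950 from its specs.
boost_ghz = 1.355          # rated boost clock, in GHz
texels_per_clk = 48        # texels filtered per clock
stream_processors = 768    # shader ALUs
mem_rate_gbps = 6.6        # GDDR5 transfer rate per pin
mem_bus_bits = 128         # memory interface width

filtering = texels_per_clk * boost_ghz                     # ~65 Gtexels/s
shader_tflops = stream_processors * 2 * boost_ghz / 1000   # FMA counts as 2 flops/clock, ~2.1 tflops
bandwidth = mem_rate_gbps * mem_bus_bits / 8               # ~106 GB/s

print(f"{filtering:.0f} Gtexels/s, {shader_tflops:.1f} tflops, {bandwidth:.0f} GB/s")
```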

Compare the peak rates for the R7 370 and GTX 950, and you’ll notice the disparities right away. The GTX 950 has substantially higher peak rates for pixel fill, fp16 texture filtering, and rasterization. The two cards have comparable shader arithmetic rates. Meanwhile, the R7 370 has a rather cavernous advantage in terms of memory bandwidth.

Of course, those are just theoretical peak rates. Our fancy Beyond3D GPU architecture suite measures true delivered performance using a series of directed tests.

The GTX 950’s advantage in the fill rate department is even larger in practice than in theory, likely because of its color compression capability. The R7 370’s ROP throughput is probably limited by memory bandwidth, a bottleneck that newer GPUs appear to have sidestepped thanks to compression.

By the way: GPU nerd alert. The GTX 950’s victory over its bigger brother, the GTX 960, suggests something odd about how Nvidia is disabling units in the GM206 in order to create the 950. Based on what we know about the Maxwell architecture, we’d expect the GTX 950’s six SM units to be limited to 24 pixels per clock of peak throughput, four pixels per clock for each SM. Instead, the GTX 950 is clearly hitting 32 pixels per clock of throughput in this test. Something unexpected must be happening.

Could the firm be disabling half of an SM across four different SMs, killing off the texturing and ALU hardware while retaining the full datapaths to the ROPs? That setup would be unconventional, but it would explain things. I’ve asked Nvidia what’s up and will post an update if we get an answer.

This bandwidth test measures GPU throughput using two different textures: an all-black surface that’s easily compressed and a random-colored texture that’s essentially incompressible.

The R7 370 fares pretty well here thanks to an abundance of raw memory bandwidth, even though it shows no evidence of any color compression at all. The GTX 950, however, nearly matches the R7 370’s throughput when compressing a texture of a single color.

The GTX 950 and R7 370 are very evenly matched when dealing with texture formats that are eight bits per color channel and 32 bits per color channel. When filtering a 16-bit format, though, the GTX 950 is roughly twice as fast as the R7 370. How much this difference matters in practice will depend on the texture formats used by individual games.

Nvidia has long had an architectural advantage in polygon throughput, both in terms of raw rasterization rates and DX11-style tessellation with geometry expansion. That situation persists here, with the GTX 950 doubling the performance of the R7 370 in TessMark.

There’s remarkable parity in shader processing power between the GTX 950 and R7 370. Other aspects of AMD’s GCN architecture are starting to look dated, but its shader arrays remain very respectable.

GTA V

Forgive me for the massive number of screenshots below, but GTA V has a ton of image quality settings.


Here’s our first look at performance in a real game, and none of the GPUs show signs of the big spikes and slowdowns we often see. Rockstar has done a nice job with GTA V on that front. As a result, the FPS averages and the 99th-percentile frame times tend to match up pretty closely.

Beyond that, the GTX 950 is pretty clearly faster than the Radeon R7 370 here. Truth be told, though, both cards are quite capable of running GTA V at these settings.

Those nice, flat frame-time plots above produce equally flat percentile curves. Even the most difficult-to-produce frames don’t cause much trouble for most of these cards. Look back a couple of generations, though, and the GeForce GTX 650 is overmatched; it’s consistently slow in this scenario.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame, and 8.3 ms is a relatively new addition that equates to 120Hz, for those with fast gaming displays.
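
One reasonable way to compute a “time spent beyond X” figure is to sum, for each frame slower than the threshold, only the portion of its render time past that threshold. Here is a minimal sketch with made-up frame times; our actual accounting scripts may differ slightly in the details.

```python
# Minimal sketch: accumulate "time spent beyond X" from per-frame render times (ms).
def time_beyond(frame_times_ms, threshold_ms):
    # Count only the time past the threshold for frames slower than it.
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical run: a mostly steady 60 FPS with a single 70-ms hitch.
frames = [16.7] * 300 + [70.0]
for x in (50.0, 33.3, 16.7, 8.3):
    print(f"time beyond {x} ms: {time_beyond(frames, x):.1f} ms")
```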

On our measure of “badness,” the GTX 950 and R7 370 are nearly equal. Neither card spends much time working on frames that take longer than 16.7 milliseconds to produce, which means they both offer a nearly flawless, steady 60 frames per second.

The Witcher 3


The frame-time plots in this game are a little less consistent, which is more typical of PC games generally. The worst offender here looks to be the GeForce GTX 750 Ti, which shows the sort of inconsistency we’ve also seen from older, Kepler-based GPUs in this game. The R7 370 struggles with some 50-60-ms spikes, as well. Those are large enough to be felt as hiccups during gameplay.

The GTX 950 is generally faster than the R7 370, as reflected in the FPS average, and it’s quicker when dealing with more difficult frames, as the 99th-percentile result reflects. In fact, in this case, there’s little practical advantage for the GTX 960 over the 950, as the percentile results show.


With the exception of ye olde GTX 650, all of these cards do a credible job of avoiding the worst slowdowns: frames that take longer than 50 ms to produce. The R7 370’s few spikes do push it past that threshold a bit. The real difference comes at 33 ms, a mark the GTX 950 exceeds for only a single millisecond, while the R7 370 spends more than a third of a second working on frames that take longer than 33 ms—and are thus slower than a constant 30-FPS rate.

Civilization: Beyond Earth

Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. That only seemed fair, since the game supports it.



The GTX 950 maintains its comfortable lead over the R7 370 here—and notably, once again, the 950 stays consistently below the 33-ms threshold with each frame produced, while the R7 370 does not.

Battlefield 4

BF4 appears to run better on the R7 370 (and on other Radeons using recent drivers) in DirectX 11 than in Mantle, so the results for the R7 370 below come from the game’s DX11 mode.



Both the GTX 950 and the R7 370 handle BF4 on Ultra pretty well, with 99% of frames produced in less than 33 ms—or more than 30 FPS. The persistent reality is that the GTX 950 is somewhat faster, though, and that advantage translates into better performance according to every one of our advanced metrics.

Ashes of the Singularity

Here’s our first look at the Ashes benchmark, which lets us test performance in both DirectX 11 and 12 using code from a pre-release version of this upcoming game. DirectX 12 is a new graphics programming API built into Windows 10 that promises lower CPU overhead, better threading, and fuller use of the GPU’s potential. (We’ve also tested the GTX 950 and Radeon R7 370 in the Fable Legends benchmark right here.)

Click through the results below to see frame time plots from both versions of DirectX.



All of the cards appear to hit a tough spot early in the benchmark where there are one or more really slow frames. The slowdown appears to be worse on the GeForces than on the R7 370, and the problem is further heightened on the GeForces in DX12.

Although these spikes are huge, over half a second long on some cards, they’re surrounded by enough other frames that they don’t drag down the FPS averages. If you’ve seen FPS-only results elsewhere, rest assured, they’re probably not terribly misleading as a result of this hiccup. Ahem.

Forgive me for not sorting the results in the bar graphs; doing so would require some extra work in Excel, and let’s face it: I’m not doing that.


Overall, the GeForces perform a little slower in the DX12 version of this benchmark than in DX11. Meanwhile, the R7 370 benefits from the switch to DX12. Still, the Radeon remains slower than the GTX 950 overall. And, truth be told, none of these cards are producing frames quickly enough over time to provide a good gaming experience. We might have to lower the display resolution or dial back the image quality to achieve acceptable performance.

A look at the various cards

Since we have three examples of the GTX 950 and two of the R7 370 on hand, I figured we should take a moment to look at the individual cards. Board makers tend to put a lot of effort into making their products stand out from the crowd these days, and these five cards illustrate how that trend plays out.

GeForce GTX 950 cards from Gigabyte, Asus, and EVGA

                               | Base clock (MHz) | Boost clock (MHz) | GDDR5 clock speed (MHz) | Power connector | Length | Height above PCIe slot top | Price
Asus Strix GTX 950             | 1165 | 1355 | 1653 | 6-pin | 8.6″  | 0.75″ | $169
EVGA GTX 950 FTW               | 1203 | 1405 | 1653 | 8-pin | 10.1″ | 0.25″ | $179
Gigabyte GTX 950 Xtreme Gaming | 1203 | 1405 | 1750 | 8-pin | 8.9″  | 0.25″ | $179

These three cards offer tremendous variety in terms of looks, cooling capacity, and physical dimensions. As I’ve noted, the Asus is the most compact member of the group, while the EVGA offers a cooler of gratuitous length and clock speeds to match. The Gigabyte Xtreme Gaming would represent a middle point between the two, except it matches the EVGA’s GPU clocks and throws in a ~100MHz higher memory clock, as well.


Sapphire’s Nitro R7 370 and MSI’s R7 370 Gaming 2G

                      | Boost clock (MHz) | GDDR5 clock speed (MHz) | Power connector | Length | Height above PCIe slot top | Memory capacity | Price
MSI R7 370 Gaming 2G  | 1050 | 1425 | 6-pin | 10.1″ | 1.0″ | 2GB | $159
Sapphire Nitro R7 370 | 985  | 1400 | 6-pin | 8.4″  | 0.2″ | 4GB | $187

MSI’s take on the R7 370 is easily the largest card we’ve encountered in this class, with a cooler that looks like it belongs on a much larger, power-hungrier GPU. I really dig the looks of these MSI coolers, though, so why not slap one on Pitcairn?

Sapphire’s Nitro looks understated by comparison, but its dual-fan cooler ought to be more than sufficient for its mission. The Nitro has a potential advantage lurking under that cooler in the form of extra memory capacity—4GB rather than 2GB.

Here’s how the cards stack up.

Gigabyte’s Xtreme Gaming takes the top spot among the GTX 950s, and MSI’s R7 370 leads the Sapphire Nitro. We’re mostly looking at small differences here, especially among the GeForces.

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

The fact that the Maxwell-based GTX 950 is more power-efficient than the Pitcairn-based R7 370 probably shouldn’t be a surprise to most folks. We’re talking distinctly different generations of products. Still, the gap between the two cards, in terms of total system power draw when installed, is relatively minor at just 10-15W while running a game. We’ve already seen that the GTX 950 tends to outperform the R7 370, though, so its power efficiency is undoubtedly superior overall.

I suppose it should also come as no surprise that the fastest individual GTX 950 and R7 370 cards are the ones with the highest power draw under load. For instance, Gigabyte’s Xtreme Gaming beats out the EVGA GTX 950 FTW in the benchmarks by an eyelash, and it also requires a few more watts to do its thing.

Noise levels and GPU temperatures

Don’t put too much stock in the differences between the cards in the 30-31 dBA range. That’s near the noise floor in Damage Labs, and those differences are more likely attributable to fluctuations in ambient noise than anything else. In fact, I believe all of the GTX 950 and R7 370 cards simply stop their fans from spinning when idling at the Windows desktop. This semi-passive fan policy is a recent innovation, and everybody now seems to be using it, which is most excellent.

Meanwhile, ye olde GeForce GTX 650 just seems to keep its fan running at a constant speed, which is kind of sad.

These new mid-range cards all fall within a range of a couple of decibels under load, and none of them are particularly loud. Sapphire’s R7 370 is the noisiest of the bunch, thanks to its relatively small cooler and the Radeon’s somewhat higher power draw (and thus heat production). Gigabyte’s take on the GTX 950 registers just a bit higher on the sound level meter than its competitors, almost surely because it’s aggressively tuned to keep the GPU cool, as our peak temperature readings indicate.

In the end, none of the cards run all that hot. The highest temperature we saw was only 69°C, well below the temps in the low 90s that we’ve seen in past products.

I think these results attest to the fact that board makers are serious about producing solid products in this price range. If anything, some of the bigger coolers may be a little excessive—not that I’m complaining.

Conclusions

As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest. We’ve converted our 99th-percentile frame time results into FPS, so that higher is better, in order to make this layout work. These overall numbers are produced using a geometric mean of the results from all of the games tested. The use of a geomean should limit the impact of outliers on the overall score.
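
To make the scoring concrete, here is a minimal sketch, with made-up numbers, of converting 99th-percentile frame times into FPS and folding the per-game results into one index with a geometric mean; the spreadsheet behind our plots may differ in the details.

```python
# Minimal sketch: convert 99th-percentile frame times (ms) to FPS equivalents,
# then combine per-game results with a geometric mean to limit outlier impact.
import math

def pct99_to_fps(frame_time_ms):
    return 1000.0 / frame_time_ms

def geomean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-game 99th-percentile frame times for one card, in ms.
per_game_ms = {"GTA V": 14.2, "The Witcher 3": 28.5, "Civ: BE": 21.0, "BF4": 18.3}
fps_equiv = [pct99_to_fps(t) for t in per_game_ms.values()]
print(f"overall 99th-percentile FPS index: {geomean(fps_equiv):.1f}")
```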

Since it’s an early automated benchmark based on a not-yet-released game, I decided to leave the Ashes of the Singularity results out of our overall performance index.


Our game-by-game results were pretty consistent in demonstrating that the GeForce GTX 950 outperforms the Radeon R7 370. The gap is a little larger when we focus on advanced metrics like the 99th-percentile frame time than it is with raw FPS. That’s a testament to the driver optimization work Nvidia has done in recent years in order to ensure smooth gameplay.

If you’re looking for a video card in this price range, the GeForce GTX 950 is an obvious choice. Heck, I’d have a hard time justifying paying more for a GTX 960, given the results on the preceding pages. Not only is the GTX 950’s value proposition strong, but this GPU itself lands in a nice spot by offering fluid gaming at decent image quality levels at 1080p.

That value proposition gets even stronger when you consider the excellence of the board offerings from Asus, EVGA, and Gigabyte. Heck, I’m not sure I could pick a favorite among them. If forced to do so, I’d probably single out the Gigabyte Xtreme Gaming card, which looks great and is the fastest overall for only ten bucks more than the Asus. However, the Gigabyte board wants an eight-pin power input. If your PSU only offers a six-pin PCIe connector, the Asus Strix GTX 950 is a very safe bet.

As for the Radeon R7 370, this GPU can’t quite keep pace with the GTX 950, and AMD has built in a discount in recognition of that fact. I’m just amazed by how credible an option the R7 370 looks to be given that it’s based on a three-and-a-half-year-old chip. The GPU’s performance is generally decent at 1080p, and both MSI and Sapphire offer cards with reasonable power consumption and noise levels. Frankly, I think the biggest omission from the R7 370 is support for FreeSync-style variable-refresh displays. To see one in action is to want one.

I have to admit: I am a little perplexed about why AMD didn’t give the R7 370 a fully enabled version of Pitcairn with more stream processors and texturing capacity. AMD could have closed the gap in performance with the GTX 950 by shipping that config. Not doing so seems strange to me.

Also, I wish we had time to address the question of whether slapping 4GB of memory onto this class of card, as Sapphire does with the Nitro, offers tangible benefits. As you may have noticed, the 2GB cards we tested handled the games and settings we used quite well, with few major hiccups. We may have to explore the 2GB-versus-4GB issue in a future article with a more challenging set of games and image quality options.

For now, I’d like to end by pointing out something that may be obvious to many of our readers—but not all of them. The games we used in this review tend to be pretty demanding, as these things go. Quite a few popular titles will run even smoother when paired with the GPU power in this class of product. I had hoped to test Dota 2 to illustrate that fact, but the game got “reborn” with a massive update after I’d collected 95% of my results. I chose to exclude it rather than to re-test, but my preliminary results showed both the R7 370 and the GTX 950 maintaining a near-steady 60 FPS throughout our test scenario at 1080p. This was a fairly grueling section of gameplay, as Dota 2 goes. The FPS averages were in excess of 100 FPS for both cards.

Other MOBAs, like League of Legends, are even less demanding than Dota 2.

I suppose what I’m saying is, for most gamers playing the most popular games, this class of GPU hardware offers ample power. More is better, of course, and I love me some fast graphics chips—but I wouldn’t be shy about recommending a GTX 950 or R7 370 to most folks. You really can’t go wrong.


Comments closed
    • vargis14
    • 4 years ago

    I still cannot believe AMD has not made a full fledged full sized tonga GPU for the PC with everything unlocked with the rumoured 384bit memory bus. I really do not know what they are waiting for, I mean the 28nm process is mature as it is going to get so I see no excuses.

    Get with it AMD!

      • JustAnEngineer
      • 4 years ago

      Tonga Pro (Radeon R9-285/R9-380) didn’t appear in this review of competing graphics cards, let alone any possible future graphics card using a hypothetical Tonga XT GPU.

        • Mr Bill
        • 4 years ago

        Is the Tonga GPU mostly for pro graphics workstations? Maybe the R9-285 and R9-380 are diverted into consumer cards because did not bin high enough to be used in the FirePro series professional cards.

    • ronch
    • 4 years ago

    Am I the only one who thinks most coolers that come with graphics cards these dates are ugly? Lots of protrusions, lots of what seem like careless designs, like the manufacturer just chose whatever seemed like it could fit the card and span its entire length, never mind if it’s a little longer. Honestly, most cards today look really messy.

    • MHLoppy
    • 4 years ago

    Seems to be a typo with the load temps graph – the Nitro’s apparently rocking out at 69 dBA instead of 69 degrees.

      • Damage
      • 4 years ago

      Doh. Fixed.

        • MHLoppy
        • 4 years ago

        Thanks Scott (Y)

    • SomeOtherGeek
    • 4 years ago

    Batman has a hovercraft?!?

    As always, nice review!

    • Bensam123
    • 4 years ago

    I can’t help but think having data points a couple price points above the intended product would help give people context and a better overall view of what the review is on. It looks like the comparisons are missing half the data.

    Also the 960s can be routinely found on Newegg for around $160, which kinda makes this card completely pointless. That’s also putting aside the R9-380s, which can also be found for about same price point.

    • maroon1
    • 4 years ago

    Why use Core i7-5960X ?

    Most people who will buy GTX 950 will most likely be using core i3 or i5 at most, and nvidia known to perform better will slower CPU, at least in DX11

      • auxy
      • 4 years ago

      Because the point of the testing is to get a near-academic idea of the graphics cards’ performance. Using the fastest processor around helps eliminate any possible CPU bottleneck. (´・ω・`)

        • Firestarter
        • 4 years ago

        Is the budget minded buyer interested in an academic comparison though? I think they would rather just see how these cards compare in a setting that is actually relevant to them, like a current Intel i3 or maybe yesteryears AMD or Intel quad cores. Knowing how these GPUs themselves stack up when unconstrained is nice to know, but ultimately doesn’t actually help Tech Report’s readership, if you ask me.

        Specifically, I think testing these GPUs with suitable CPUs would improve the quality of the System Guides. In the September edition, the GTX 950 is recommended for budget builds (together with a Pentium) with no mention of the R7 370 or any AMD card in that range. If they could point to benchmarks of both cards with a dual-core CPU to tell us why this is, that would have been very helpful. Instead, we have numbers with an 8-core monster, and we're none the wiser.

          • auxy
          • 4 years ago

          I don’t really have a strong opinion on the topic. ¯\_ȌᴥȌ_/¯

          I do think that if you want that sort of information you should probably go to other hardware blogs as well as TR. TR stands alone in the field of near-scientific testing of GPUs. However, I will admit I don’t really know of anyone doing that kind of real-world testing. I would do it myself if I had the hardware! (*’▽’)

          With that said, I do prefer an academic viewpoint since it ultimately gives us "purer" information.

      • tipoo
      • 4 years ago

      I’d like to see low end GPUs start including low end CPUs paired with them as well. Though that would mean double the testing so I can see why not. The high end CPU is selected to show the GPU at it’s best, but you’re right, there are differences under DX11 in how well they do on low end CPUs.

        • Ninjitsu
        • 4 years ago

        It’s a bad idea, I feel. If the user upgrades their CPU down the line, but can’t upgrade the GPU, they stand to chose a GPU that may not have shown much of delta earlier but is the new bottleneck now.

          • tipoo
          • 4 years ago

          Future upgrades will always throw off these considerations, things should be tested for the sets they’ll get this GPU with now. They could also upgrade the GPU but not the CPU in the future and further be hampered by the CPU. Digital Foundry showed AMD cards on lower end CPUs tend to do worse, but every low to mid range card is tested on cutting edge CPUs.

          I’d like to see an i3 at least thrown into the low end GPU tests. I’m not asking for the exclusion of the high end CPU, just the addition of the low end one in a few tests. So there’s nothing to worry about on either side.

      • fbertrand27
      • 4 years ago

      I agree!! I understand the value of “pure” GPU results, but having more “realistic” CPU pairing results might provide lots of value to buyers.

      i.e. the extra boost of a higher-price GPU might NOT be apparent on a slower CPU, hence not worth the cost!

      It could give very interesting, much different and useful scatterplots (which are already brilliant BTW). It'd be great to compare those! (pure GPU vs realistic CPU pairing)

      Perhaps only for low-end GPU's, to reduce testing time when unnecessary.

    • albundy
    • 4 years ago

    wouldn’t the Radeon R9 380 4GB be a better performer since they are both at similar price points? Not sure why the R7 was only included in the benchmarks.

    • Chrispy_
    • 4 years ago

    I find it so frustrating that Nvidia haven’t managed to hit the magic 75W TDP with any of their products in a while. There’s a non-trivial market sector of people wanting “the fastest card that doesn’t need a PCIe power connector”

    Most notably any machine using a PicoPSU, or the vast number of store-bought computers that have inflated feature lists but puny sub-250W power supplies that effectively lock you out of upgrades.

      • HisDivineOrder
      • 4 years ago

      I think that’s mostly the fault of the die size they’re currently using. I’d wager to guess you’ll see a new product that does just what you want when they use Pascal and a dieshrink to get there.

      In a year or so, I’d imagine an HBM-based solution might even progress the narrative a bit further, given the HBM-related power savings. Probably not next year though, given the newness of HBM2.

      • Damage
      • 4 years ago

      Are you not including the GTX 750 Ti in this conversation? Seems like a solid option for that sort of thing. Check out the Zotac cards here:

      https://techreport.com/review/26050/nvidia-geforce-gtx-750-ti-maxwell-graphics-processor/2

      Or... do you just want them to use ALL 75W because reasons?

        • Chrispy_
        • 4 years ago

        Yeah, I’m not discounting the 750Ti, it’s a decent performer on 60W but it’s significantly slower than the 950. My frustration is more down to the 750Ti launching over 18 months ago – I was hoping for some process refinement or efficiency tweaks to get more performance into the magic 75W bracket, but that hasn’t happened and probably won’t until Pascal and HBM2 I guess.

        If I had to cite reasons, I guess the biggest one is that the 750Ti can no longer be considered a 1080p card with today's games. 18 months ago there were very few situations where a 750Ti wasn't hitting 1080p60. At the end of 2015 you're going to need to crank quite a few of those sliders to the left or put up with lower framerates, which means abandoning vsync or getting the horrible 60-30-60 judder that plagues the fixed-refresh displays at the budget end of the price spectrum. Maybe it's sloppier programming and lazier ports, maybe it's developers starting to realise the potential of the next-gen console GPUs and making games genuinely more demanding. At any rate, I've mentally upgraded the minimum requirement for 1080p gaming to a 950 or Pitcairn XT (7870 or 270X) and not the re-rebrand of the 7850 which is also starting to look a little weak these days 😉

        But yeah, nice review as always. It's a shame you're behind with these things but late is better than you getting burnt out and hating what you do.

          • MathMan
          • 4 years ago

          I don’t think you’ll see HBM(2) in this market segment anytime soon. GDDR5 will be just fine.

          But a 16nm small Pascal or AMD-equivalent with GDDR5 could do very well in a 75W envelope.

      • Nevermind
      • 4 years ago

      If your aim is to put something close to the 75w threshold you’re going to cause instability on some mobos. The power connector is there for a reason.

      Yeah I’d like it if they ran on ambient heat also.

      • vargis14
      • 4 years ago

      Chrispy I mentioned this earlier in the comments but no love for me…anyway AMD and Nvidia need a PCIE power connectorless card that outperforms the 750ti and AMD’s old HD7750.

      Also I know the adapters for 2 molex into a single 6 pin power connector but a lot of prebuilts only come with a single molex connector and if a better performing card that uses say 80 watt at full peak power and molex connector would be ideal then a adapter cable IMHO. It is not like any PC has no molex connector available for a low power card that can handle 1080p on high or ultra in some cases.

      A GTX 950 could be the answer if they just had a bios that would detect when a 6pin power connector is not used so the clocks would lower a bit to keep peak power around 70 watts. It does not take much to drop 20 watts, plus instead on NV making a GT 950 they could sell the more expensive GTX 950 and make more money if they just had a smart bios or even a dual bios with a switch.

    • ca_steve
    • 4 years ago

    Thanks for the review.

    “We may have to explore the 2GB-versus-4GB issue in a future article with a more challenging set of games and image quality options. ”

    Would love to see this @ 1080p.

    • The Egg
    • 4 years ago

    I’m nitpicking, but doesn’t the GTX970 have 56 ROPs?

    Back on topic, the 750 Ti and 950 are both solid values. That said, anything below a 750 should be removed from the lineup, as they don’t offer customers enough for their money over ever-improving integrated graphics.

      • Damage
      • 4 years ago

      I’m nitpicking, but the GTX 970 has 64 ROPs. 🙂

      Due to data-path limitations between the SMs and the ROP units, it is only capable of 56 pixels per clock of peak ROP throughput. I’ve corrected the table in this review showing the 970’s per-clock ROP pixel rate. Thanks for noticing.

    • anotherengineer
    • 4 years ago

    I am still at a loss why AMD does not use GDDR5 up to 7GHz like Nvidia does?? One would think the small added expense and small power increase would hopefully warrant a justified performance improvement??

    edit – and the 370 is still a decent card considering how old the silicon is.

    edit 2 – but just goes to show how much the mid-range has stagnated, cards are the same price and same performance as 3 years ago, so still not much point in upgrading. Not like the 3850 to 4850 to 5850 to 6850 (same on Nvidia’s side) jumps we have seen with the mid-range price remaining stable. Wait then again, there really hasn’t been a ‘new’ successor to the 7850 though.

      • vargis14
      • 4 years ago

      I wonder if the AMD chips need the memory bandwidth anyways?

      Would be cool to test by reducing the memory speed on a say a 280x, 380 along with AMD’s slower R5 and R7 rebrands ETC.

      This would tell us if the AMD card would benefit from the 7gbps memory, which is the real question right? Or is it already fast enough to keep the GPU itself fed?

      If anyone is interested I could test this with my HD 7750…..I wonder what benchmarks to use?
      3dmark11, Valley ETC ??

        • anotherengineer
        • 4 years ago

        Since they didn’t do a 270 review
        https://www.techpowerup.com/reviews/AMD/HD_7850_HD_7870/29.html
        http://www.techpowerup.com/reviews/MSI/GTX_950_Gaming/33.html

        Be nice to see a GPU-only, a RAM-only, and a combined OC table for actual fps increases.

        • NoOne ButMe
        • 4 years ago

        AMD’s chips should need the bandwidth.

        AMD’s approach this generation seeing where the power usage for GDDR5 was going aimed at maximizing pref/watt without inflating die size to much.

        The clocks for the initial 7000 series were likely the ideal pref/watt GDDR5 that hit about the Bandwidth that AMD needed for the parts.

        NVidia’s approach was to boost the GDDR5 up as fast as possible which has hurt their pref/watt of their cards. Yes, stripping away the RAM Nvidia’s GPUs are even farther ahead in pref/watt than indicated.

        For benchmarks, I would test with a max memory OC, stock memory, and a percentage reduction equal to the max increase you go (so, if max OC for memory was 20% test with 80%, 100% and 120% of stock bandwidth or so).

        I would try running a few different benchmarks/games until you find ones that get a relatively large different (say, at least a quarter the score different of the memory clock) with memory clocks different.

        Would be a very neat exploration. Maybe if you can make fancy graphs Damage would let it be published here ;)!

      • Damage
      • 4 years ago

      Adding faster GDDR5 isn’t that simple. The GPU’s memory interface has to be able to support higher frequencies, as well, which it may not be capable of doing. The chips Nvidia was selling when Pitcairn came out were not capable of really fast GDDR5 speeds. That changed with Kepler and then Maxwell 100- and 200-series parts, but it required real work to make it happen.

        • anotherengineer
        • 4 years ago

        Of course. I just think if it is ‘refined’ silicon, one would think they could have made those changes over 3 years. But I guess not.

          • NoOne ButMe
          • 4 years ago

          But does AMD want to add faster GDDR5? Original Hawaii cards used about 50-60W for their memory subsystem of 4GB of GDDR5 @ 5Ghz @ 512b bus.

          The 780ti on the other hand with it ‘s 3GB of 7GBps of GDDR5 @ 384bit bus used 70-80W of power.

          Partially due to Nvidia having a less advanced GDDR5 memory controller, partially due to using compression technologies to catch up in overall bandwidth and partially due to higher clocks.

          I don’t know the breakdown, but, I imagine that it would cost AMD a good 5W for every 64b moved from ~5 –> 7GBps as well as increased cost of RAM due to having to buy 7GBps RAM.

          Now, it is possible the power difference is almost entirely due to the compression that Nvidia uses, but, I find that unlikely.

            • MathMan
            • 4 years ago

            I’ve seen the argument made at other places that NVidia memory controllers are ‘less advanced’.

            I’m struggling to see how this can be justified?

            To me, an advanced memory controller is one that can run at very high speeds (due to nature of signal degradation on PCBs, high speeds means lower S/N margin, means more complex circuitry to recover the signal) and it’s one that extracts most performance out of the resources that it has.

            When we look at Kepler, gk104 ran at 7Gbps, when Tahiti only ran at 6Gpbs. And the GTX770 was roughly equal performance-wise with a 256 bits bus against a 384 bit bus.

            Looking at Maxwell and later AMD offerings, this gap has only increased. With a 256-bit wide gm204 being competitive against a 512-bit Hawaii.

            And that’s not even talking about power consumption.

            Now it’s hard to separate system aspects (compression, caching, ROPs) from lower level details (MC efficiency alone), but I don’t see any arguments to claim that today’s NVidia memory controllers are in any way worse than those of AMD and more indications that the reverse is true.

            • NoOne ButMe
            • 4 years ago

            AMD memory controllers get better pref/watt and get better utilization. The second one is the big one.
            Memory bus GDDR5 AMD: 4800, 5000, 6000, 7000, GCN gen 3. 5 generations.
            Nvidia: 200 series rush, 400 series, 600 series, 900 series. 4 generations

            For example, pulling from the Fury X review and this one…
            780ti- ~59% of (non-compressible) utilization from it’s 336GBps memory bus.
            ASUS 290x- ~76% of (non-compressible) utilization from it’s 346GBps memory bus.
            980- ~77% of utilization from it’s 224GBps.
            980ti/TitanX averaged get ~70% of their 336GBps memory bus.
            Fury X (first generation HBM!)- ~65% utilization.
            370- ~78% utilzation.
            ASUS strix 950- ~77%.

            So, Kepler had about a 60% utilziation as a 3rd generation memory controller for GDDR5, AMD’s 4th had ~76-77%.

            Nvidia’s 4 generation controller still sometimes struggles to keep up with AMD’s 4th generation depending on its implimentation.

            AMD’s first generation HBM controller has 65% utilziation.

            Given NVidia’s 3rd try at GDDR5 memory controller was weaker than AMD’s first generation HBM controller what does that tell you?

            Also, NVidia being able to hit their 7GBps+ speeds came at a cost, their memory controllers on 28nm are huge to everything AMD made excluding Tahiti. Hawaii’s memory controller is smaller than GK110.

            So, NVidia has a larger, higher power usage and less utilization without compression (which drives up power!).

            I would solidly call that inferior. The compression technology is very good, but, well. I think that cutting it and going with a wider slower bus for NVidia would give consumers better products for their 28nm products. You could probably drop a GTX 980 to under 150W this way.

            Also, Fermi controller was under 50%, AMD’s have all been over 50% since the 5000 series. Unsure about the 4800 series controller. Never got good info on it.

            AMD/ATI has been ahead of NVidia for years in memory technology overall. Mostly because they tend to put more R&D into pushing them. Which, is fair. They spend more money on it, they do it better. Just what is expected. That is actually one of the nice things about HBM for NVidia. Even if they really f-ck up and make a terrible memory controller it has so much headroom to clock and power usage affected by clocking is so much less than GDDR5 was.

            Of course, I don’t expect their engineers to f-ck up. The only reason that the control for the first two generations was so bad had to do with rushing it the first time and that Fermi had a large host of problems from conception to realization. And TSMC 40nm woes did not help the many problems.

            • MathMan
            • 4 years ago

            It’s kind of irrelevant that the size of the PHYs are larger on the Nvdiia implementation: that’s simply a consequence of them reaching higher speeds. That’s clearly an engineering trade-off, but it says little about one or the other being more advanced.

            And when I look at the Maxwell numbers, its efficiency is basically identical to Hawaii, despite running at faster clocks (which impact the efficiency due to a relatively higher rate of refresh cycles.)

            Another consideration is that fill rate efficiency is only one parameter, and a pretty bad one at that: it shows peak rate of the MC, but it says little about the performance for mixed loads, when you don’t deal with long burst (which are very easy for a scheduler to handle.)

            As for power consumption: how can you make this argument when there is no way to separate the memory controller power rails? All we know is that HBM is lower power, but there are no comparable numbers for NVidia. (And there won’t be separate power rails there either.)

            Feel free to dwell on the past if that makes you feel better, but the conclusion from your numbers is that NVidia is currently just as efficient as AMD for GDDR5. And it’s capable of running at higher speeds. IOW: there’s little evidence of it being less advanced, quite the contrary.

            As for being able to extract overall performance out of a certain amount of available BW, it’s obviously no contest, as is shown once again in this review.

            • NoOne ButMe
            • 4 years ago

            There are labs that can look at this stuff. Having got some numbers from them NVidia’s high end cards have had over 30% of their power used on memory for a while while AMD has keep keeping around 25% for a while.

            You don’t have to believe me, I don’t have a hard source I can give you after all.

            And, let me state this clearly: NVidia’s Kepler controllers were far larger than AMD’s while having much less effective bandwidth. Over twice the size for the same bus width (exlcuding Tahiti, which had a huge bus compared to every other 28nm AMD card).

            With AMD basically sitting still for 3 years and hundreds of millions of dollars in total R&D (not all on the memory bus of course!) to get equal utilization with higher power.

            And, AMD’s memory bus has been shown to operate at up to 6.5GBps at stock. They’re capable of reaching 7GBps if they wanted to engineer it.

            But, here’s the truth: It wouldn’t be good for consumers or AMD. Memory manufacturers certainly wouldn’t mind it however.

            • Klimax
            • 4 years ago

            Don’t mistake inability to use bandwidth for worse memory controller. There is no equivalency whatsoever. Also don’t mistake physical size for that either.

            You are skipping a lot to reach your conclusion which is completely unsupported by anything. You are comparing incomparable and getting thus nonsense.

            • MathMan
            • 4 years ago

            This is gold!

            Of course AMD controllers consume only 25% of power as opposed to 30% for NVidia: that’s extremely easy to achieve when your non-MC logic consumes power like a Hummer.

    • vargis14
    • 4 years ago

    On another note AMD was doing so well with the HD 7750 that was..I repeat WAS the top performing low power card that did not need a PCIE power connector.
    I really feel AMD has dropped the ball on the Pre built 1st time gamers not releasing a card that performs better than a GTX 750ti/hd 7750.

    Also is there any reason why no low power card could be made to accept a molex connector like the good old days to supply a tiny bit of power for pre builts that have no PCIE power connectors?
    It is a large market that I feel is not being attacked by AMD at all really with any good performing card besides the old HD7750. I believe it still is AMD’s best performing card out that does not need a PCIE power connector ?

      • JustAnEngineer
      • 4 years ago

      vargis14 asked: "Is there any reason why no low power card could be made to accept a molex connector like the good old days to supply a tiny bit of power for pre builts that have no PCIE power connectors?"

      That's why you'll find an adapter like this one included in the box with many new graphics cards:
      http://www.newegg.com/Product/Product.aspx?Item=N82E16812423173

        • vargis14
        • 4 years ago

        Tons of OEM PC builders like Dell, Acer, Gateway, the list goes on and on, but most prebuilts with 300watt and under PSU’s only come with a extra molex or 2 and Sata power connectors. which is plenty to power any 50-70 watt graphics card like my current Gateway tower Sporting a 300watt PSU powering aHIS iCooler HD7750 that runs at with stock clocks of 800mhz on the core and 1125 on the memory…in fact it runs flawlessly at 1125mhz core and amazingly 1400 mhz memory easily outperforming the HD7700. It is a gem of a card I know 🙂

        I also have a Slim Gateway with a i3 2120 8gb ram with a HD6570 running OC’ed that is powered by a dinky 220 watt PSU.

        As we all know PSU requirements are greatly overestimated by NV and AMD just so they can make sure that power is not a problem when installing a bricked new graphics card.

          • UberGerbil
          • 4 years ago

          So buy a molex-PCIE adapter cable like JAE mentions, if the card you bought didn't include one. They're literally a couple of bucks. Even Walmart sells them (that's a 6-pin, but still), which says a lot about the market segment you're talking about:
          http://www.walmart.com/ip/41538913?wmlspartner=wlpa&selectedSellerId=130&adid=22222222227029397491&wl0=&wl1=g&wl2=c&wl3=58014663911&wl4=&wl5=pla&wl6=98279398631&veh=sem

      • auxy
      • 4 years ago

      Lots of new PSUs don’t have molex connectors. (‘ω’)

      • Demetri
      • 4 years ago

      My understanding is that the segment you’re talking about is a very low-margin segment; not much money to be made. When you’re bleeding the way AMD is, you just have to slow it as much as you can (focus on the high dollar chips like Fiji) and hope the medic arrives soon (Microsoft? Samsung? Anybody?)

    • vargis14
    • 4 years ago

    I really wish Nvidia would make a spinoff of the 950 that did not need the power adapter and could run off any 16x slots 75 watts of power…possibly called the GT 950 with slightly lower clocks. I know they could do it with out decreasing performance much at all. Or give the GTX 950 a Bios that when installed without a 6 pin PCIE power connector it runs at a 70 watts power envelope. That would be a good way for Nvidia to sell a ton of GTX 950’s that is for sure

    It would be the best card to install in pre-built PC’s like Dell’s Acers’s Gateways etc. Making a true replacement for the GTX 750ti that is still at the moment the top performing graphics card for anyone that want to get into PC gaming a power a HDTV/1080p at a low cost and halfway decent performance. But I still feel Nvidia is dropping the ball on a full DX12 GTX750ti replacement. Especially with The performance of the current Haswell i3’s and current and coming Slylake i3’s and even Pentium Pre built PC’s begging to get into gaming.
    IMHO of course.

      • travbrad
      • 4 years ago

      I think an updated 750ti style card (very low power consumption) would be cool, but the fact is they have no competition whatsoever from AMD in that area, so from a business perspective why not just keep selling the 750ti?

      • tipoo
      • 4 years ago

      “possibly called the GT 950 with slightly lower clocks”

      That’s a terrible marketing name.

      • Yumi
      • 4 years ago

      Green edition, just like the GeForce 9800 GT Green Edition from a long time ago.

      The “big” problem with targeting such a card, is that they start to get close to high end IGP performance, making it a bad perf/$$$ comparison.

      • TruthSerum
      • 4 years ago

      We talked about this before.

      You don’t want a card trying to squeeze into a power envelope’s outer limits.

      For obvious reasons.

        • Chrispy_
        • 4 years ago

        Actually, Nvidia’s powertune algorithms are ridiculously good and they already have all the experience necessary from butting their mobile GPU’s right up against the TDP limits of laptop coolers. If you furmark a Maxwell GPU and log GPU-Z at its fastest sampling rate, you’ll see that Nvidia never EVER goes more than 2.5% above the rated TDP.

        "For obvious reasons"

        What are your "obvious reasons", because as far as I can tell, you're just scaremongering without any evidence.

    • ronch
    • 4 years ago

    AMD just can’t continue lopping a few bucks just to beg people to choose their inferior products over Nvidia’s. Not only would most sensible folks gladly fork over the extra cash for Nvidia, AMD also earns a few bucks less from each card they manage to sell. So, fewer units sold because the competition is only slightly more expensive and are better products overall, and less money for each unit sold. Can’t be a winning strategy for AMD. They’d better come up with winning solutions soon.

      • HisDivineOrder
      • 4 years ago

      When you’re treading water, you’re not worried about how well you move your arms. By the end, you’re just trying to keep moving in any way you can, even if you have to reach over and grab a few old pieces of rotten wood to stay up.

    • ronch
    • 4 years ago

    This could actually be my next graphics card.

      • joselillo_25
      • 4 years ago

      I would wait until the DX12 drama plays out.

        • Meadows
        • 4 years ago

        Why, will AMD have new cards out by then?

          • travbrad
          • 4 years ago

          They will become more powerful than you can possibly imagine.

            • K-L-Waster
            • 4 years ago

            This is the end of their insignificant rebellion. Come over to the green side.

            • Nevermind
            • 4 years ago

            I’ll never join you. You betrayed and murdered my power bill.

            • HisDivineOrder
            • 4 years ago

            No. AMD IS your power bill.

            • Nevermind
            • 4 years ago

            That’s not true… That’s impossiburu!

            • Voldenuit
            • 4 years ago

            Obi-wan lied. *I* am your power bill!

            EDIT: Ninja’d.

            • K-L-Waster
            • 4 years ago

            And in the future, Nvidia and Intel will be defeated by Ewoks!!

            • albundy
            • 4 years ago

            there’s nothing green about team green.

    • Meadows
    • 4 years ago

    [quote<]"When filtering the fp16 format, though, the GTX 950 is roughly twice as fast as the R7 370."[/quote<] You probably meant [i<]int16[/i<].

    • NeelyCam
    • 4 years ago

    Fury Nano…?

      • Herem
      • 4 years ago

      Wasn’t the Nano review only supposed to take a couple of days to complete?

      • TwoEars
      • 4 years ago

      My thoughts exactly.

      Did something happen Scott? Did you elect to not review the AMD card since you weren’t sent one officially? The current review is an nvidia card… I guess that’s by pure chance right… wuah wuah wuah 😀

        • Damage
        • 4 years ago

        The GTX 950 came out before the R9 Nano. I have been way behind, but I am working on things in order.

        A review like this one takes substantially longer than three days to produce on any humane work schedule. My productivity is perceptibly lower since I hit a wall after about six months of working through weekends and late-night writing on deadlines. I have since dialed it back, slowing my pace of publication.

        The sly accusation of bias is nice, though. Thanks for that. Makes my day a little brighter!

          • TwoEars
          • 4 years ago

          Thanks Scott, no worries. I was only curious. Take your time, get well 🙂

      • derFunkenstein
      • 4 years ago

      I don’t know that this is anybody’s reasoning on the topic – I’m only guessing. But when it comes to “yet another $650 card whose only buyers knew they were buying it before the reviews” vs. “a card that people might actually buy,” I’d go with the latter every time.

    • NoOne ButMe
    • 4 years ago

    Very nice. $10 for a solid ~5% performance bump with both at stock, plus a lot more overclocking headroom to get that to a nice 15-20%.

    So AMD will presumably release its fully enabled bins now, which should mean a slight price drop on the 370 and a 370X that is slightly faster than the 950 (at stock) for the same price(?).

    Good for consumers! Yay!

    • JustAnEngineer
    • 4 years ago

    It might be interesting to compare to some other cards in a similar price range to the GeForce GTX960 that was included in the review.

    [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600489396%20600565503&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$130 -30MIR[/url<] / [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600565503%20600489396%20600007787&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$155 -20MIR[/url<] Radeon R7-265/370 2 GiB / 4 GiB
    [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600582123%20600007782&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$155 -10MIR[/url<] GeForce GTX950 2 GiB
    [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600481061%20600473875&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$173 -30MIR[/url<] / [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600473875%20600481061%20600007787&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$190 -20MIR[/url<] Radeon R9-270/270X 2 GiB / 4 GiB
    [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600531760%20600565502%20600007782&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$180 -10MIR[/url<] / [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600531760%20600565502%20600007787&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$200 -20MIR[/url<] Radeon R9-285/380 2 GiB / 4 GiB
    [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600007782%20600537575&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$180[/url<] / [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600537575%20600007787&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$221 -30MIR[/url<] GeForce GTX960 2 GiB / 4 GiB
    [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%204814%20600489223&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$200 -30MIR[/url<] Radeon R9-280 3 GiB
    [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%20600473876%204814&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$215 -20MIR[/url<] Radeon R9-280X 3 GiB
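
    To turn that into a straight post-rebate comparison, the arithmetic is trivial to script. A quick sketch using the sticker prices and mail-in rebates quoted above; prices drift week to week, so treat the figures as a snapshot rather than gospel:

[code<]
# (card, sticker price in $, mail-in rebate in $) -- snapshot from the list above
cards = [
    ("Radeon R7-265/370 2 GiB", 130, 30),
    ("Radeon R7-265/370 4 GiB", 155, 20),
    ("GeForce GTX950 2 GiB",    155, 10),
    ("Radeon R9-270/270X 2 GiB", 173, 30),
    ("Radeon R9-270/270X 4 GiB", 190, 20),
    ("Radeon R9-285/380 2 GiB", 180, 10),
    ("Radeon R9-285/380 4 GiB", 200, 20),
    ("GeForce GTX960 2 GiB",    180,  0),
    ("GeForce GTX960 4 GiB",    221, 30),
    ("Radeon R9-280 3 GiB",     200, 30),
    ("Radeon R9-280X 3 GiB",    215, 20),
]

# Sort by net price after the rebate and print a quick comparison table.
for name, price, rebate in sorted(cards, key=lambda c: c[1] - c[2]):
    print(f"{name:26s} ${price - rebate:>3d} net  (${price} - ${rebate} MIR)")
[/code<]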

      • auxy
      • 4 years ago

      Keep in mind the R9 270 is a fully enabled Pitcairn GPU with 1280 shaders, while the R7 370 is not, just to be even more confusing.

        • JustAnEngineer
        • 4 years ago

        Doh! Fixed.

        P.S.: I wish that Bruno would write a preview function for editing comment posts.

          • auxy
          • 4 years ago

          And fix that niggling unicode-on-the-64th-character bug!

      • Mr Bill
      • 4 years ago

      I second this, especially for the upcoming Nano review. Sapphire makes a $207 2GB mITX R9-380 that would suit the Nano review, and XFX offers a 4GB Double Dissipation R9-380 for $220. I’m curious how Tonga performs with 4GB and a bit of vavoom.

    • the
    • 4 years ago

    AMD probably went with the specs for the R7 370 because it didn’t know exactly how the GTX 950 would perform. If need be, it could always launch an R7 370X to compete more aggressively and slash the price of the R7 370 accordingly. If the R7 370 held up, there would be no need for the extra SKU in the lineup.

    AMD might also want to distinguish the R7 370X by packing 4 GB of memory as standard, but it needs GDDR5 prices to come down for that to fit into the appropriate price bracket.

      • HERETIC
      • 4 years ago

      I’m guessing Apple scabbed all the “X”s for their AIO monstrosities…

        • NoOne ButMe
        • 4 years ago

        Not realistic, except maybe for Tonga. Pitcairn is so old, and 28nm supply is plentiful enough, that AMD should have no problem producing fully enabled chips. Unless TSMC yields recently took a nosedive only for smaller 28nm chips.

          • HERETIC
          • 4 years ago

          Just copied this off a notebook review site:

          “The design of the Radeon R9 M290 series has its roots to the Pitcairn chip as found on the desktop Radeon HD 7870. The M290X in particular is similar to its desktop Radeon HD 7870 counterpart with 1280 1D shader cores and 80 texture units, but has a core clock of only 900 MHz (boost) versus the 1000 MHz clock of the HD 7870.”

          Best guesstimate:
          Bin 1: chips that run at low voltage with less heat; sell to Apple as the M290/M290X.
          Bin 2: chips that run at full speed but don’t pass bin 1; stockpile until there are enough to release a 370X.
          Bin 3: partially broken chips; sell as the 370.

    • Krogoth
    • 4 years ago

    TL;DR version: the 950 is slightly faster and slightly more expensive than the 370.

    You really cannot go wrong with either choice if budget is a severe concern. Otherwise, if you can spare an extra $100+, you can get a much faster 290X or 970.

      • NoOne ButMe
      • 4 years ago

      An extra $100 can get you a 290; an “extra” $150+ can get you a 290X/390*/970.

      Honestly, an “extra” 66-100%+ of the card’s price is hardly what I would call “extra.”

      An extra $30-50 or so to get a 4GB 380/960 is what I would really call “extra.”

      *you can get a 390 for an extra $150 or so, not a 390x as originally stated.

        • Krogoth
        • 4 years ago

        The 290X/390X and 970 are much faster than the 960 and 950, especially if you want to throw in AA/AF.

        GM206 becomes severely memory bandwidth starved when you want to run AA/AF.

          • Meadows
          • 4 years ago

          But that’s not an “extra”, that’s a whole different class of product.

            • nanoflower
            • 4 years ago

            Which was Krogoth’s point. By spending the extra $100 you get a much better product (if you have the money to spare.) Don’t get distracted by the other post using ‘extra’ to mean the GPU when Krogoth was only using ‘extra’ for the budget.

            • NoOne ButMe
            • 4 years ago

            Which for people on a budget for a gaming PC with this type of card would be quite a bit!

            Spending $100-150 on CPU+mobo, up to $50 on RAM, $50 total on PSU+case, and $150 on the GPU.

            An extra $100 on a $350-400 build is a substantial increase.

            Of course, it is usually better for them to spend more on the CPU to get an Intel i5 K-series and then buy a more expensive GPU later down the road.

            • Krogoth
            • 4 years ago

            Get a gaming console instead if you have that kind of budget.

            • Meadows
            • 4 years ago

            Consoles are more expensive than PCs. The cost of the games themselves is so high that a low-end gaming PC will not only give comparable performance but save considerable money year after year, too.

            • Krogoth
            • 4 years ago

            Wrong.

            The cost of AAA games is the same on both platforms. The only games that are cheaper on the PC are old and indie titles.

            Unlike with a gaming console, you either have to spend time getting an “open-source” OS working or buy a legit copy of Windows for $99 or more. You will also have to fiddle around with drivers and other nonsense, which may cost you more time and $$$.

            PCs have never been cheaper than gaming consoles; that has never been the platform’s strength. The main strengths of a gaming PC are that you get a more powerful system (better framerates and graphical fidelity) at a cost, better peripheral support, and [b<]massive mod support[/b<].

            • I.S.T.
            • 4 years ago

            Really, it depends. Discounts for games are easier to find on PC, but consoles have the used market…

            • Meadows
            • 4 years ago

            High-profile games regularly go on sale for the PC even at launch, bringing $60 titles down to $50, sometimes $45. You get even better deals if you only wait 6-12 months, while such a thing is nearly nonexistent for the consoles.

            You don’t have to “fiddle around” with drivers, you just have to install them. It’s not 1998 anymore, you know.

            If you get a pre-built computer, a Windows licence will often be included and you still end up with a machine comparable to a console in both performance and price; [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16883220689<]sometimes even cheaper, too[/url<]. Perfectly fine for 720p gaming (which, let's face it, is what consoles are about nowadays).

            Even if you decide to get a proper pre-built PC for around $600 (which includes a Windows licence and a low-midrange GPU for 1080p gaming), the surplus upfront cost of around $200 will have been saved on games in 1-2 years, and over that time you will have had better/more fun and enjoyed a system that's good for more than one thing.
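
            If you want to sanity-check that claim, the break-even math is a one-liner. The inputs below are assumptions (how many full-price games you buy, and how big the typical PC discount is, vary from person to person), but they show the shape of the argument:

[code<]
# Back-of-the-envelope break-even for the console-vs-PC game-price argument.
# Every input here is an assumption; plug in your own numbers.
pc_premium = 200.0      # extra up-front cost of the PC vs a console, in $
console_game = 60.0     # typical AAA launch price on console, in $
pc_game = 45.0          # typical discounted PC launch price, in $
games_per_year = 8      # full-price titles actually bought per year

savings_per_year = (console_game - pc_game) * games_per_year
years_to_break_even = pc_premium / savings_per_year
print(f"Saving ${savings_per_year:.0f}/year on games, the ${pc_premium:.0f} "
      f"premium is repaid in {years_to_break_even:.1f} years")
[/code<]

            With those figures it lands at roughly 1.7 years, which is in line with the 1-2 year estimate above; bigger discounts on older titles only shorten it.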

            • Meadows
            • 4 years ago

            One more point: people do frequently compare PCs explicitly marketed as “for gaming” to consoles, and in that sense it’s true that they’re noticeably more expensive up front, but comparing that level of PC performance to consoles is comparing apples to oranges. They’re a different class of product entirely, no matter the primary purpose.

            • Ninjitsu
            • 4 years ago

            Well, in India, PC games at retail usually cost 1/3rd of what they do on consoles, with only Activision and EA selling at full $60 for the last two or three years.

            MGSV for example costs $15 right now.

            • Meadows
            • 4 years ago

            I’m aware of a similar relationship in Europe, but the difference is much less jarring.

            • Meadows
            • 4 years ago

            I’m not distracted, we’re all talking about the budget. In this category of product, $100 is not an “extra” but a large step up.

            It’s like going to an estate agent to look at the available homes, and he says “we have this nice little home for $150,000, but don’t worry, if you save up another $100,000 (what?), then you can have this entirely better home instead”.

            • Chrispy_
            • 4 years ago

            With you here.

            Not only that, but the “extra” $150 for a 290x or 390 doesn’t account for the increase in GPU power. You can run a 950 on a 300W PSU but I’d be recommending 550W+ for a Hawaii-based card, especially the factory OC ones that always seem to be on discount.

            If it’s a new build, that’s an extra $50 for the PSU. If you have to replace a PSU to accommodate the more potent card then pricing, value and budget all go to hell.

            • JustAnEngineer
            • 4 years ago

            If it’s a new build, the difference in power supply pricing is more like $25 than $50.

            Small:
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817103083<]$60[/url<] Sparkle R-FSP400-60ETN (31 A @ +12 V, 80+ Platinum = 92% efficient at half-load)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817151117<]$63[/url<] SeaSonic SSR-360GP (30 A @ +12 V, 80+ Gold = 90% efficient at half-load)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817139058<]$63 -20MIR[/url<] Corsair CS450M (35½ A @ +12 V, 80+ Gold, Modular)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817151127<]$69[/url<] SeaSonic SSP-450RT (37 A @ +12 V, 80+ Gold)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817151146<]$38[/url<] SeaSonic SSP-300ST (24 A @ +12 V, 80+ Bronze = 85% efficient at half-load)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817139026<]$40½[/url<] Corsair CX430 (32 A @ +12 V, 80+ Bronze)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817151076<]$49[/url<] SeaSonic SS-400ET (30 A @ +12 V, 80+ Bronze)

            Medium:
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817371055<]$86[/url<] Antec EA-550 Platinum (43 A @ +12 V, 80+ Pt)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817151119<]$82[/url<] SeaSonic SSR-550RM (45 A @ +12 V, 80+ Gold, Modular)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817139059<]$81 -20MIR[/url<] Corsair CS550M (43 A @ +12 V, 80+ Gold, Modular)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817151094<]$61 -5MIR[/url<] SeaSonic S12II-520 (40 A @ +12 V, 80+ Bronze)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817139048<]$65 -10MIR[/url<] Corsair CX600M (46 A @ +12 V, 80+ Bronze, Modular)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817139028<]$65[/url<] Corsair CX600 (46 A @ +12 V, 80+ Bronze)
            [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16817151096<]$74 -5MIR[/url<] SeaSonic S12II-620 (48 A @ +12 V, 80+ Bronze)

            • Chrispy_
            • 4 years ago

            Yes, you’re right if you compare individual PSUs against individual PSUs, but this is a budget card aimed at budget builders.

            A GTX950 is either going into a new or used off-the-shelf PC with an anaemic <300W OEM PSU, or it’s potentially going to be added to a new build using a PSU+case bundle. Whilst there’s a lot of cheap Chinese junk with non-80+ supplies and dubious wattage claims on the market, there are also many bundles from Antec, Coolermaster, Rosewill, and Thermaltake where you can get a decent case and a proper 80+ Bronze PSU for significantly less than the price of a name-brand 80+ supply alone.

            True, these supplies may not be great but for a GTX950 that doesn’t matter. Case+PSU+cooling for a 250W GPU might easily total $100 more than a $50 case+PSU bundle that’s just fine for a GTX950 build. Add the extra $100 for the more expensive GPU and you’re suddenly looking at a $200 premium on top of a $400 build…

            At least there are options for those with weak PSUs and poor case cooling. The GTX950 doesn’t have to be the best value on the planet if it fits enough peoples’ needs.
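
            The rule of thumb behind those wattage recommendations is easy to make explicit: add up the worst-case draw of the parts, then leave enough headroom that the supply isn’t running flat out. A rough sketch with assumed component figures (the 90 W GPU number is the GTX 950’s rated board power; the ~275 W Hawaii figure and the rest are ballpark guesses for a budget build):

[code<]
def recommended_psu_watts(parts, headroom=0.4):
    """Sum worst-case component draw, then add headroom so the PSU
    runs in its efficient mid-load range rather than at its limit."""
    total = sum(parts.values())
    return total, total * (1 + headroom)

base_parts = {              # ballpark worst-case draws in watts (assumptions)
    "CPU (locked i3/i5)": 65,
    "Motherboard + RAM":  40,
    "SSD/HDD + fans":     20,
}

for gpu_name, gpu_watts in [("GTX 950", 90), ("R9 290X, factory OC", 275)]:
    parts = dict(base_parts, **{f"GPU ({gpu_name})": gpu_watts})
    load, psu = recommended_psu_watts(parts)
    print(f"{gpu_name} build: ~{load:.0f} W load -> shop for a ~{psu:.0f} W+ supply")
[/code<]

            With those inputs you land at roughly 300 W for a GTX950 build and around 560 W for a factory-overclocked Hawaii card, which is the same gap described above.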

      • DoomGuy64
      • 4 years ago

      The best bang-for-your-buck card is the 380/285, which wasn’t included in the benchmarks. I’ve seen either card selling for less than $200, and they should be slightly faster than the 960. Some of the 380s have 4GB as well. The 970/290 is only necessary for resolutions higher than 1080p.

        • Mr Bill
        • 4 years ago

        [quote<]Some of the 380's have 4GB as well.[/quote<] [url=http://www.amazon.com/XFX-Dissipation-Display-Graphics-R9-380P-F24M/dp/B00ZFND0RM<]XFX 4GB 380[/url<]

    • Anovoca
    • 4 years ago

    just curious if you got a chance (maybe separate from this review) to test the stock card/cooler.

      • Damage
      • 4 years ago

      Nope. I don’t have a reference board, and I’m not sure there is one for the GTX 950, aside from the GTX 960 board design. It may just be a CAD file at this point.

        • Chrispy_
        • 4 years ago

        Yeah, I’m not sure anyone would want a reference board at this price point.

        The metal-shrouded vapor chamber cooler that debuted with the Titan and was used again on the 780 series and 980 series is awesome (and expensive).

        The [url=http://international.download.nvidia.com/geforce-com/international/images/geforce-gtx-960/nvidia-geforce-gtx-960-photo-3.png<]plastic TTM cooler[/url<] that has been doing the rounds since the GTX660 is pretty poor. It feels nasty, its actual cooling performance is "adequate" if you're being kind, and the worst aspect by far is the audible idle growl and the obnoxious whine under load.

          • auxy
          • 4 years ago

          Vapor chamber?! What the heck? It’s just a heatsink and fan! I had a GTX TITAN for a long time and took it apart a bunch of times. It’s seriously just a heatsink and fan! ( ゚Д゚)

            • Chrispy_
            • 4 years ago

            What you think is just a thicker chunk of copper on the bottom of your Titan’s heatsink block is a vapor chamber. They’re really thin, but they’re effectively large, flat heatpipes.

            [url=http://www.ocaholic.ch/modules/xcgal/albums/userpics/nVidiaGFGTX780/nVidia_GeForce_GTX_780_17.jpg<]780/Titan/980Ti/Titan X vapor chamber[/url<]

            [url=http://cdn.overclock.net/0/06/900x900px-LL-06d19d99_heatpipe.jpeg<]This is the cheap version[/url<] that the 770, and then the 980 and OEM 970, got. It fits into the same shroud as the more expensive vapor chamber, but it is just a copper slab with three flat C-shaped heatpipes sandwiched between the other side and the fins. As you can see, it's nigh-on impossible to tell the difference once they're hidden in the cooler's baseplate - the only hint is that the vapor chamber has [url=http://i.imgur.com/a9ikEPY.jpg<]rounded corners poking through the baseplate[/url<].

            • auxy
            • 4 years ago

            Huh. I didn’t know that. Thanks Chrispy_! (*‘ω‘ *)

            • Chrispy_
            • 4 years ago

            Now I’m tempted to buy a cheap, faulty 780 from eBay and see if I can swap out the heatpipe block for the more effective vapor chamber.

            It was only when looking for photos to link you above that I realised just how similar they are – perhaps similar enough to be a straight swap?

            • Mr Bill
            • 4 years ago

            I keep looking for a cheap CPU vapor chamber for similar reasons; but so far no luck.

    • auxy
    • 4 years ago

    I don’t really understand why AMD used the 7850 again either. They must have just had some extra GPUs lying around.

    I also feel pretty vindicated in pointing out how bad of a value the GTX 960 is these last few months! (‘ω’)

      • NovusBogus
      • 4 years ago

      Last decent AMD GPU, I suppose. Not that Nvidia didn’t do the same thing with the 8800 GTS, of course. At least it’s better this way than giving us a brand new Rendozer architecture that has ten thousand shader cores but can’t keep up with Minecraft.

        • auxy
        • 4 years ago

        Ehh? What? What about Bonaire, Hawaii, Tonga? All newer than this slightly-retarded Pitcairn and great GPUs. (Well, Bonaire is too small for me to care about, but it’s a nice GPU anyway.)

        My point was just that I don’t know why they used Pitcairn Pro instead of Pitcairn XT on the R9 370.

          • NoOne ButMe
          • 4 years ago

          People see an Nvidia GPU stripped of all the compute resources and FP64, put it up against an AMD GPU that has them (or, in Fiji’s case, one that is painfully aimed at compute throughput), go “well, it loses in da games,” and call it not decent.

          I mean, being able to cut the bus to two-thirds of the original and get better performance per shader with a minor architecture revision is weak.

          But cutting all the FP64 out, completely redesigning the shaders, and halving the bus size for about the same performance? That’s epic (GM206).

          Neither side has had any “bad” GPUs on 28nm. Unless your focus is purely on gaming, in which case Fiji has shown itself to be a “bad” GPU.

            • nanoflower
            • 4 years ago

            That’s the thing. If I don’t care about compute throughput, then why should I care that some company has a product that gives me great compute throughput but less performance in the areas I do care about? Most people don’t care about compute throughput on their GPU, since they buy a GPU for gaming only. The only time that wasn’t the case was for a short while during the Bitcoin madness, when a lot of cards were being sold to build mining farms, but that has gone away due to better solutions becoming available (as everyone expected would happen).

            • NoOne ButMe
            • 4 years ago

            Calling it the “last decent AMD GPU” is the point I was arguing against.

            Good for gamers? Maybe not. Not a decent-or-better GPU? What are you smoking?

            That’s my point. The only 28nm AMD GPU with any chance of being classified as “below decent” is Fiji, and I don’t believe Nvidia has had any 28nm GPUs classified below “decent” either.

          • JustAnEngineer
          • 4 years ago

          The omission of Tonga Pro (Radeon R9-285 / R9-380) from the comparison when a slightly more expensive GeForce GTX960 was included is unfortunate.

            • HERETIC
            • 4 years ago

            Remember the golden rule: “We will continue to send you products for review; we expect an honest review, but with positive spin.”

            Omitting the 285/380 allowed positive spin.

      • NoOne ButMe
      • 4 years ago

      TL;DR: If you can sell a cut down part for the price of a full part and be competitive you should.

      Do you mean used the 7850, or used Pitcairn?

      The cut-down Pitcairn was enough to fill a gap in the market, meaning AMD doesn’t have to worry about binning issues and can almost always get 90%+ of the dies on a wafer to market.

      If you mean why Pitcairn at all: it is still competitive. The cost of GM206, excluding the R&D for Maxwell itself, is at least in the tens of millions. Nvidia had to spend tens of millions of dollars to make a die that is competitive with Pitcairn in performance and slightly better in performance per watt.

      Why spend tens of millions of dollars on a new part when your old one is competitive? It just goes to show how GCN beat Kepler in perf/watt once you stripped out the compute (as most Kepler consumer parts did), and how Maxwell only barely beats GCN with an extra few hundred million spent.

      Now, the moment you stop talking about stock clocks… Maxwell’s improvements over Kepler become super impressive, and it puts GCN in the slammer there, at the cost of being slightly behind in total power draw but still ahead in perf/watt.

      • derFunkenstein
      • 4 years ago

      It takes time and money to design something from scratch, neither of which AMD has right now. Pitcairn isn’t terrible, which doesn’t hurt. What hurts is that they turned off a bunch of the GPU.

        • auxy
        • 4 years ago

        [quote=”auxy, in another post,”<]My point was just that I don't know why they used Pitcairn Pro instead of Pitcairn XT on the R9 370.[/quote<]

          • HERETIC
          • 4 years ago

          See my reply to NoOne ButMe.

      • Demetri
      • 4 years ago

      I didn’t understand it either, but some of the explanations seem to make sense; basically it sounds like they’re just trying to fill the segment in the cheapest way possible. Pitcairn was a very good and efficient chip at the time which is why they’ve been able to hang on to it for this long, but the thing is, why buy this for $150 when there are plenty of used 7800 cards on the market with the same chip for half the price? Just last week I bought a 7870 on Ebay for $70. That has the full GPU, and I’ve been able to overclock it to where it’s pretty much on par with a reference 7970/280X. It does have 2GB of VRAM instead of 4, but I don’t think you’re going to need it at the resolution and settings that this chip is most comfortable at.

        • auxy
        • 4 years ago

        The R7 370 has much faster memory and core clocks than the 7850, and the 7970/280X comes with 3GB.

        Pardon me for saying so, but allow me to express my very sincere doubt that you have managed to overclock a Pitcairn XT (Radeon HD 7870/R9 270) enough to match a card with half again the fillrate and half again the memory bandwidth. (‘ω’)

          • Demetri
          • 4 years ago

          I was comparing it to the 370 when I mentioned the 2GB vs 4GB.

          [url<]http://www.guru3d.com/articles_pages/radeon_hd_7870_overclock_guide,14.html[/url<]

          Mine is clocked the same as theirs: 1200MHz on the core (+200 over stock) and 1450 on the memory (+250), with a voltage bump. It can get very close to a 7970 in some tests/games; in others, it’s more like a 7950, or halfway between that and a stock-clocked 7870. And those are reference Tahiti boards; overclocked, it would be a different story, as Tahiti overclocks VERY well. Probably the best card you can get for $100 if you want to overclock is a 7950. So... I would agree with you that I overstated how fast it is, but I do think it’s an excellent card for the prices it can be had for.

        • Bensam123
        • 4 years ago

        A lot of eBay AMD cards are ex-miner hardware. You could, and probably still can, get an R9-290 for around $170-180 if you watch for it.

          • K-L-Waster
          • 4 years ago

          Which is probably contributing to this:

          [url<]https://techreport.com/news/29133/more-cuts-amd-to-reduce-global-workforce-by-5[/url<]

      • HisDivineOrder
      • 4 years ago

      You use what you have. You have what you make. And AMD hasn’t made much lately.

      • slaimus
      • 4 years ago

      This card is AMD’s G92b. When everyone was stuck on 55nm while 40nm GPUs were having problems, Nvidia likewise reused its G92b chip for the 8800GT/GTS, 9800GT/GTS/GTX, GTS 150, and GTS 250. Those cards competed fairly well against the Radeon 3870, 4850, and 5770.

    • Mikael33
    • 4 years ago

    I figured the 370 was in trouble when I saw its theoreticals compared to the 950, and my suspicion was confirmed. Not sure why AMD would release this when you can get a much better card for a measly $10 more, but then again, it’s AMD.

      • travbrad
      • 4 years ago

      [quote<]not sure why AMD would release this when you can get a much better card for a measly $10 more, then again it's AMD.[/quote<]

      When they released the R7 370 (a couple of months ago), the GTX 950 didn't exist. At the time Nvidia had a big gap in its lineup between the 960 and the 750ti, which the R7 370 fit into reasonably well. I would expect price drops on the 370 and/or a 370X to compete with the 950.

      Despite what some people seem to think, AMD is just a company trying to maximize profits. They will sell their GPUs and CPUs for as much as they can get away with, just like Nvidia and Intel.

      • NoOne ButMe
      • 4 years ago

      Yeah, it’s a shame that AMD released the GTX 950. Oh, wait. GTX cards aren’t AMD’s products.

    • christos_thski
    • 4 years ago

    I’m curious how the 370 compares to the 7870, where the Pitcairn core first debuted. In any case, both cards (the 950 and 370) seem capable, and it probably boils down to the particular rebates and prices which one you end up choosing…

      • JustAnEngineer
      • 4 years ago

      As Auxy pointed out, R7-370 is a faster version of HD7850.

      • Damage
      • 4 years ago

      I suppose I could have tested it, but the comparison seems like a lot of hair-splitting. The 7870 is a fully-enabled Pitcairn with slightly lower GPU clocks and much lower memory speeds than the R7 370. Folks who are saying the R7 370 is “a faster 7850” are glossing over the extent to which “faster” makes the R7 370 similar to a 7870 in practice.

      Peak rates for the HD 7870/R7 370:

      32/34 Gpixels/s fill rate
      80/67 Gtexels/s filtering
      156/182 GB/s memory bandwidth (!)
      2.6/2.2 Tflops arithmetic

      Now consider that those are peaks, and AMD has said PowerTune has been refined recently to improve performance. That likely means more residency at higher clocks for the newer card.

      Any edge the old 7870 has in texturing, or shader math likely gets swallowed up by the 370’s big advantage in memory bandwidth, in the overall mix.

      Combine that with the fact that we’ve seen and tested Pitcairn in five different SKUs prior to the 370, each with slightly different mixes of resources, and I didn’t see the point of testing again. I still don’t, really. I mean, I know everybody likes more testing, but it’s time and energy that could be spent on more interesting subjects.
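
      If anyone wants to reproduce those peak figures, they fall straight out of unit counts times clocks. A minimal sketch; the unit counts are the public Pitcairn configurations, while the clocks are approximations (the R7 370 numbers assume a factory-boosted card running around 1.05GHz, roughly what the shipping cards we're discussing do), so expect small rounding differences from the list above:

[code<]
def peak_rates(rops, tmus, shaders, core_ghz, bus_bits, mem_gtps):
    """Theoretical peaks: fixed-function units x core clock, plus
    bus width x per-pin transfer rate for memory bandwidth."""
    return {
        "fill (Gpix/s)":  rops * core_ghz,
        "texel (Gtex/s)": tmus * core_ghz,
        "arith (Tflops)": 2 * shaders * core_ghz / 1000,  # FMA = 2 flops
        "mem BW (GB/s)":  bus_bits / 8 * mem_gtps,
    }

cards = {
    # name: (ROPs, TMUs, shaders, core GHz, bus bits, mem GT/s) -- clocks approximate
    "HD 7870 (Pitcairn XT)": (32, 80, 1280, 1.000, 256, 4.8),
    "R7 370 (Pitcairn Pro)": (32, 64, 1024, 1.050, 256, 5.7),
}

for name, spec in cards.items():
    rates = peak_rates(*spec)
    print(name + ": " + ", ".join(f"{k} {v:.1f}" for k, v in rates.items()))
[/code<]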

        • NoOne ButMe
        • 4 years ago

        It would have varied game by game. It makes me wonder if the Beyond3D benchmark creator(s) will ever look at a wide mix of games to see what average compression Nvidia gets; that would be more informative for gaming performance than what it can currently show.

        The lost memory bandwidth for Pitcairn here should only hit performance by 2-3%. Pitcairn never seemed to be memory-bottlenecked. Of course, I’ve never seen anyone play with its memory bus much.

    • DrDominodog51
    • 4 years ago

    A 750 Ti is now a great value with the price reductions, and it’s fairly competent at 1920×1080 if you don’t mind lowering the details a bit in newer titles. I would recommend going for the 380/960 if you want more than the 750 Ti.

      • Peldor
      • 4 years ago

      Yeah, it’s even cheaper than its placement in the price/performance chart suggests. Looks to be $129 in the charts rather than $119. And there are several below that at retail already.

    • chrcoluk
    • 4 years ago

    Sorry I laughed after the first paragraph.

    The 960 is not even good enough for 2015+ 1080p games, never mind the 950, so why is the reviewer labelling this as a 1080p card? Did Nvidia put that in the review guide?

    Of course it’s fine for last-gen 1080p games such as Dark Souls and other games released prior to 2014.

      • UberGerbil
      • 4 years ago

      The first paragraph of the review says absolutely nothing about 1080p. The first (and only) paragraph of the summary does. Which suggests to me you commented without actually reading the review. So downthumb for you.

        • Nevermind
        • 4 years ago

        You mean.. there’s more?

      • auxy
      • 4 years ago

      I’m an AMD fangirl and even I find this post obnoxious. Banned in 3 … 2 … 1 …

      • K-L-Waster
      • 4 years ago

      If by 2015+ you mean DX12 games, uhhh, when are those actually shipping? By the time there are more than 50, odds are every card in this review will be ready for replacement.

        • _ppi
        • 4 years ago

        But not everyone buying a card now will be able to buy a new one next year.

        Which is why a bit of foresight helps. Too much foresight can actually hurt, though (*cough*AMD*cough*).

          • K-L-Waster
          • 4 years ago

          Fair point. However, one has to ask how many people fit in all of these groups:

          * are buying a budget card today
          * need to run DX12 games that won’t be released for another 9 months at max 1080p quality
          * aren’t willing to buy another budget card in 12 months

          I’m sure there are people who fit that description, but I doubt it’s an enormous percentage of the market.

      • Gyromancer
      • 4 years ago

      I’m not exactly sure where you are getting the idea that the 960 can’t run 2015 games at 1080p. I ran a 960 for a couple of months on a 2560×1440 monitor with almost no hiccups at all in 2015 games, including Project CARS. Our review of the 960 shows the same. You seem to be quite confused.

        • Meadows
        • 4 years ago

        Define “almost without hiccups” please, as it’s not a number.

        • derFunkenstein
        • 4 years ago

        I don’t have any experience with the GTX 960, but based on my experience with a GTX 970 at 1440p I don’t think I’d enjoy the 960 at that resolution. Everybody has his own tolerances, though, and I think it’s most likely a great card for 1080p gaming.

        • Ninjitsu
        • 4 years ago

        I think the benchmarks are evidence enough: the 960 is acceptable at 1080p, so chrcoluk is not correct. However, I highly doubt a 960 is a good idea at 1440p without seriously reducing detail levels and rendering complexity.

        Currently, the 1080p-at-60-fps mark is a 970 in general, or a 290/390 with DX12.

      • NoOne ButMe
      • 4 years ago

      I’m currently running a game released in 2015 at 1080p with maxed settings on something roughly equal to a little below a 750 Ti… It runs at about 45 fps, bottlenecked by my mobile quad-core Haswell i7. Turn off all the CPU-intensive stuff and it shoots to 55-60+ fps.

      Most games released in 2015 will run at 1080p just fine with this card. Most big-budget games and large indie titles? Unlikely.

      However, most games come from smaller-budget studios or small indies.

      • Krogoth
      • 4 years ago

      The 960 operates fine with games at 1080p and below unless you want to throw in the works (SSAA and AF); it may struggle a bit with certain titles under those conditions.

      The intended market for the 960 doesn’t care about AA or AF.

        • Meadows
        • 4 years ago

        AF hasn’t mattered for performance for years, by the way.

          • Krogoth
          • 4 years ago

          It still incurs considerable overhead when you start going beyond 2 megapixels.

          • auxy
          • 4 years ago

          Errm, no. AF can still be a huge performance hit if the scene has very complex geometry. Tessellation increases overhead for AF by a large portion of the tessellation factor.

          Don’t get me wrong, I would never, ever, ever recommend against using it! AF is not optional! (*’▽’)

        • mnemonick
        • 4 years ago

        [quote<]The intended market for 960 doesn't care for AA or AF.[/quote<]

        As a Gigabyte 960 owner,* I wouldn't say that. I do enjoy a bit of AA, and AF (8x or better) is an absolute must. The added performance really pays off in more demanding games like Witcher III. I bought my card because I went small (Carbide Air 240, etc.) for my new PC, and it's worked out very well so far.

        *[sub<]I do agree that at MSRP it's not a great deal, but I got mine for about the price of a 950 after a discount and a rebate. [i<]And[/i<] I got The Witcher III for free. :D[/sub<]

        [sub<]Edits: typos![/sub<]

        • Ninjitsu
        • 4 years ago

        I have a [i<]GTX 560[/i<], and I care for AA and AF.

      • maxxcool
      • 4 years ago

      USER STATISTICS
      Joined: Thu Sep 24, 2015 10:43 am
      Last visited: Wed Sep 30, 2015 5:58 pm
      Total posts: 0
      (0.00% of all posts / 0.00 posts per day)

      • maxxcool
      • 4 years ago

      Bad shill is shill

      • Voldenuit
      • 4 years ago

      ROY TAYLOR, IS THAT YOU?

      Fair enough.

      • maroon1
      • 4 years ago

      No, even the GTX 950 should be good enough for 2015 and 2016 games at 1080p.

      Some games might not run at 60 fps or ultra settings, but you don’t need 60 fps or ultra settings to play games. Also, the image-quality difference between ultra and high settings is small in most modern games, yet you pay a big performance penalty for running them on ultra.

        • Ninjitsu
        • 4 years ago

        It’s not, really – the 950 is perfect for 1366×768 at max settings, however. You’ll get at least 60 fps.

      • albundy
      • 4 years ago

      “not good enough” …your subjective reasoning is so well laid out in detail that you’ve totally convinced me!

      • killadark
      • 4 years ago

      Lets all join the BOOO WAGON, BOOOO down voted
