AMD’s Radeon HD 7950 graphics processor

If you’re plugged into the PC hardware and gaming worlds at all, you probably already know about the Radeon HD 7970, AMD’s world-beating graphics card that debuted recently. As the world’s first GPU built with 28-nm fabrication tech, the 7970 asserted its dominance in our tests by humbling the prior king of the hill, the GeForce GTX 580, through higher performance and lower power consumption. Not only that, but the 7970 is based on a new GPU architecture—oh-so creatively named Graphics Core Next—that establishes rough parity with Nvidia in terms of GPU-computing capabilities, as well. What’s not to like about that?

Well, the price, for one thing.

As the most desirable single-GPU graphics card in the world, the 7970 commands a premium heftier than Khloé Kardashian—prices start at 550 bucks and go up from there, depending on the variant. Right now, street prices are hovering somewhere between $550 and $600 at online vendors, although we’d expect things to settle down a bit in the coming weeks. Any way you cut it, that’s a tremendous ransom for a graphics card, one that most folks—even most dedicated PC gamers—will be hesitant to fork out.

Happily, AMD is practicing the time-honored tradition of product segmentation, spinning off a new version of the product that’s slightly hobbled and somewhat cheaper, in order to service multiple points along the supply and demand curves. (Yes, sexy, I know. Economics has that undeniable allure.) That’s where the subject of our attention today, the Radeon HD 7950, comes into the picture. AMD has tied the Radeon HD 7970 to the bed, handed the axe to Kathy Bates, and told her to swing away. The result is a graphics card that may never quite live up to its former potential but is much easier to catch in the wild. And heck, once it’s healed up, you might never know about that harrowing experience in a secluded cabin. Its final specs are pretty darned good, after all.

                                     Radeon HD 7970     Radeon HD 7950
GPU core clock                       925 MHz            800 MHz
ROP pixels/clock                     32                 32
Peak pixel fill rate                 30 Gpixels/s       26 Gpixels/s
Filtered texels/clock (int8/FP16)    128/64             112/56
Peak bilinear filtering (int8/FP16)  118/59 Gtexels/s   90/45 Gtexels/s
Shader ALUs                          2048               1792
Peak shader arithmetic               3.8 TFLOPS         2.9 TFLOPS
Memory spec                          3GB GDDR5          3GB GDDR5
Memory bus width                     384 bits           384 bits
Memory transfer speed                5500 MT/s          5000 MT/s
Memory bandwidth                     264 GB/s           240 GB/s
Max board power                      250W               200W

The same GPU silicon, a chip code-named Tahiti, drives both of these graphics cards. In the 7970, it’s at the height of its massively parallel powers, with 32 “compute units” (CUs) and clock speeds approaching one gigahertz, quite a lot for a graphics chip. For the 7950, AMD has disabled four of Tahiti’s CUs, leaving 28 CUs intact and operational. That means a minor reduction in several key per-clock rates, including FLOPS and texels filtered. AMD’s recommended clock speeds are also down, from 925 to 800 MHz, further tempering the 7950’s potential throughput. The consequences of these changes may look pretty big on paper. After all, the 7950 gives up nearly a teraflop of computing power and an equal percentage of texture filtering prowess versus the 7970.

But Tahiti arguably has an abundance of those attributes. What hasn’t changed so much may be more consequential. All of the chip’s ROP units—used for pixel output and antialiasing—remain intact, as do all six of its memory controllers. Furthermore, memory clocks are down less than 10%, so the resulting drop in memory bandwidth is smaller than Rick Santorum’s share of a primary vote, from 264 to 240 GB/s. These things, memory bandwidth and ROP throughput, are perhaps more likely to be at a premium in a Tahiti-based graphics card running today’s games. In other words, the 7950 may not be giving up much in terms of real-world gaming performance.
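
If you’d like to sanity-check those peak figures, they all fall out of simple arithmetic on unit counts and clock speeds. Here’s a quick back-of-the-envelope sketch in Python; the two-FLOPs-per-ALU factor assumes one fused multiply-add per clock, the usual convention for quoting peak shader rates.

```python
# Back-of-the-envelope peak rates for the Tahiti cards in the table above.
# Assumes 2 FLOPs (one fused multiply-add) per ALU per clock, the usual
# convention for quoting peak shader arithmetic.

def peak_rates(alus, tex_units, rops, core_mhz, mem_mts, bus_bits):
    gflops    = alus * 2 * core_mhz / 1000.0       # peak shader GFLOPS
    gtexels   = tex_units * core_mhz / 1000.0      # peak int8 Gtexels/s
    gpixels   = rops * core_mhz / 1000.0           # peak Gpixels/s
    bandwidth = mem_mts * (bus_bits / 8) / 1000.0  # memory GB/s
    return gflops, gtexels, gpixels, bandwidth

# Radeon HD 7970: 32 CUs -> 2048 ALUs and 128 texture units, 32 ROPs
print(peak_rates(2048, 128, 32, 925, 5500, 384))
# -> (3788.8, 118.4, 29.6, 264.0)

# Radeon HD 7950: 28 CUs -> 1792 ALUs and 112 texture units, 32 ROPs
print(peak_rates(1792, 112, 32, 800, 5000, 384))
# -> (2867.2, 89.6, 25.6, 240.0)
```

Notice how the shader and texturing rates fall by roughly 24% while bandwidth falls only 9%: the CU count and core clock both took a haircut, but the full 384-bit memory interface survived.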

The most notable thing it’s giving up? About a hundred bucks’ worth of sticker shock. AMD says prices for 7950 cards will start at $449 (by which it means $450 minus a penny). As if to underscore the relatively small reductions in this product’s real-world performance, AMD expects the 7950 to undercut the GeForce GTX 580 on price while offering superior performance. Since the 7970 isn’t that much faster than the GTX 580, well, you do the math.


The 7950 with its spiritual predecessor, the Radeon HD 6950

Thanks to the drops in clock speed and the number of active transistors, the 7950’s max power requirement is 200W, or 50W lower than the 7970’s. That difference allows the 7950 to get away with only two six-pin auxiliary power inputs, cementing its man-of-the-people status. The 7970’s eight-pin power input will likely require a different, more expensive class of PSU.

Beyond that change, AMD’s reference version of the 7950 is superficially very similar to the 7970, with the same 10.5″ board length and the same glossy plastic cooling shroud. Most of the first wave of cards available will probably look like the reference model, until the various board makers have had time to develop custom variants. Then again, there are already exceptions to that rule.

Stacking the deck

XFX’s Radeon HD 7950 Black Edition

Even before we received a reference version of the Radeon HD 7950 from AMD, XFX’s custom Black Edition of the card touched down in Damage Labs. This card is based on the same circuit board design as the reference cards—board makers haven’t yet had a chance to build enhanced or cost-reduced versions of the PCB—but otherwise, it has been transformed with XFX’s own cooler and a swanky custom expansion slot cover with a red DVI port.

With dual fans and that lightweight aluminum shroud, the XFX cooler looks like an obvious upgrade from the stock unit, though we’ll have to test it to be sure. Believe it or not, the more important changes to XFX’s 7950 Black Edition aren’t visible to the naked eye.

You see, for several years now, AMD has set the default clock speeds for its GPUs relatively high, leaving little wiggle room for individual board makers to create hot-clocked Radeon cards. Meanwhile, Nvidia has left ample headroom in its lineup, and board makers have taken advantage by shipping a much broader variety of GeForce cards, some with pretty drastically increased default clocks. Although it wreaked havoc at times on our carefully laid plans for testing equivalent products, Nvidia’s looser business model created some nice options, especially for smart customers.

With the Radeon HD 7900 series, AMD has decided two can play that game. Tahiti seems to have quite a bit of headroom in it, and board makers are capitalizing on that fact. The default GPU core clock for XFX’s Radeon HD 7950 Black Edition is 900MHz, a full hundred megahertz beyond the 7950’s stock speed—and only 25MHz shy of the 7970’s. What’s more, the 7950 Black Edition’s memory runs at 5500 MT/s, exactly matching the stock 7970 both in transfer rate and total bandwidth. XFX says the 7950 Black Edition will fetch $499 at Newegg, so the performance gains won’t come for free, but this card should come extremely close to matching the 7970’s real-world performance for that price.

XFX’s Radeon HD 7970 Black Edition

That’s not to say the 7970 is entirely threatened by hot-clocked 7950s. AMD’s apparent policy change extends to the Radeon HD 7970, too, courtesy of the Tahiti chip’s ample clock speed headroom. Pictured above is XFX’s 7970 Black Edition, with the same dual-fan-and-aluminum cooler. This card’s default GPU core speed is a nice, round 1GHz; its memory clock runs a bit faster than stock, too, at 5700 MT/s. That’s enough to give the 7970 Black Edition a clear edge on the 7950 Black Edition, restoring balance to the Force.

The arrival of these hot-clocked Radeons in Damage Labs presented us with something of a dilemma. With fairly wide gaps between the different variants, we’d prefer to test both the stock-clocked and hopped-up versions of each, but time constraints made that impractical—as did our fancy new GPU testing methods, which we believe are the best in the industry, but which require a lot more manual intervention than a scripted test that spits out an FPS average. What to do?

Well, for some time now, our means of dealing with hot-clocked GeForces has been to ensure that we’re testing actual, shipping products and then factor both the higher performance and any price premium into the mix. Now that similarly hopped-up Radeons have finally arrived, gosh darn it, we’re going to keep that approach and go for broke, with hopped-up cards nearly across the board. After all, we’ve already tested the stock 7970 in our initial review. Also, we wanted to include a 3GB version of the GeForce GTX 580 in the mix, and the only one we had available was a hot-clocked board from Zotac. Yes, this review will be a little bit like a home-run derby involving Barry Bonds and Mark McGwire at the height of their, er, chemically enhanced powers. Or, you know, like the old SNL skit about the all-drug Olympics.

But at least most of the key participants will be on the juice.

Of course, even without all of these varied clock speeds, the big two GPU makers have a crazy number of fine gradations in their product lineups—especially Nvidia, at present. Tracking ’em all can be daunting, as the table below illustrates, and it’s limited to the cards we tested along with the stock reference clocks from the GPU makers.

                        GPU core    Peak pixel     Peak bilinear filtering  Peak shader       Memory transfer  Memory bandwidth
                        clock (MHz) fill rate      int8/FP16 (Gtexels/s)    arithmetic        speed (MT/s)     (GB/s)
                                    (Gpixels/s)                             (TFLOPS)
GeForce GTX 560 Ti      882         26             53/53                    1.3               4008             128
Asus GTX 560 Ti         900         29             58/58                    1.4               4200             134
GeForce GTX 560 Ti 448  732         29             41/41                    1.3               3800             152
Zotac GTX 560 Ti 448    765         31             43/43                    1.4               3800             152
GeForce GTX 570         732         29             44/44                    1.4               3800             152
GeForce GTX 580         772         37             49/49                    1.6               4000             192
Zotac GTX 580 AMP²!     815         39             52/52                    1.7               4104             197
Radeon HD 6950          800         26             70/35                    2.3               5000             160
Radeon HD 6970          880         28             84/42                    2.7               5500             176
Radeon HD 7950          800         26             90/45                    2.9               5000             240
XFX HD 7950 Black       900         29             101/50                   3.2               5500             264
Radeon HD 7970          925         30             118/59                   3.8               5500             264
XFX HD 7970 Black       1000        32             128/64                   4.1               5700             274

Just think. Half of the folks who found this article via Google glanced at that table, their eyes glazed over, and they left to go find a video review on YouTube where some dude raves about frame rates for three minutes. Those of you still left will understand how minor the differences between the different models of cards can be.

We’ll admit the new Radeons have some of the biggest deltas we’ve seen between stock and hot-clocked versions of the same model, which is why we’ve taken a closer look at all of the different 7900-series variants in the later portions of this review, including some performance testing and the power and acoustic tests. In the earlier pages of this review, please note, the Radeon HD 7950 and 7970 and all of the GeForce models will be represented by hot-clocked cards.

One other dynamic we should point out is the very tight grouping of several Nvidia cards, including the GeForce GTX 560 Ti 448 and the GTX 570. As we said in our review, the GTX 560 Ti 448 amounts to a temporary price cut on the GTX 570. At 765MHz, the Zotac GTX 560 Ti 448 card performs almost identically to the stock GTX 570 but costs less, so it’s easily the better buy. Given those facts, we’ve chosen to exclude the GTX 570 from this contest.

Another look at geometry throughput

Since we’ve already covered Tahiti and the GCN architecture in some depth in our Radeon HD 7970 review, those who are unfamiliar with this chip owe it to themselves to read that article before finishing this one. Before we move on, though, let’s pause for an architecture-related note. One thing we saw in that review was relatively poor performance from the 7970 in TessMark, a problem we attributed to driver issues, since Tahiti is purportedly much improved for tessellation. Now, AMD claims to have resolved those driver problems. Here’s how TessMark looks with the new drivers.

Wow. In our original review, the 7970 scored so well in our Direct3D tessellation test with Unigine Heaven that we attributed its victory, potentially, to other factors like pixel shader throughput coming into play. Now, we have to reevaluate that sentiment. Looks like the 7970 is legitimately as fast as, or faster than, the GeForce GTX 580 with moderate to high levels of tessellation. As one might expect, the 7950 isn’t far behind the 7970 in TessMark, since the clock speeds of the two XFX cards are only 100MHz apart.

Now, on to the games…

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor          Core i7-980X
Motherboard        Gigabyte EX58-UD5
North bridge       X58 IOH
South bridge       ICH10R
Memory size        12GB (6 DIMMs)
Memory type        Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1333MHz
Memory timings     9-9-9-24 2T
Chipset drivers    INF update 9.2.0.1030, Rapid Storage Technology 10.8.0.1003
Audio              Integrated ICH10R/ALC889A with Realtek 6.0.1.6482 drivers
Hard drive         Corsair F240 240GB SATA
Power supply       PC Power & Cooling Silencer 750 Watt
OS                 Windows 7 Ultimate x64 Edition, Service Pack 1, DirectX 11 June 2009 Update

                                         Driver revision           GPU core     Memory clock  Memory size
                                                                   clock (MHz)  (MHz)         (MB)
Asus GeForce GTX 560 Ti DirectCU II TOP  ForceWare 290.53 beta     900          1050          1024
Zotac GeForce GTX 560 Ti 448             ForceWare 290.53 beta     765          950           1280
Zotac GeForce GTX 580 AMP²!              ForceWare 290.53 beta     815          1026          3072
Radeon HD 6950                           Catalyst 8.921.2-120119a  800          1250          2048
Radeon HD 6970                           Catalyst 8.921.2-120119a  880          1375          2048
XFX Radeon HD 7950 Black Edition         Catalyst 8.921.2-120119a  900          1375          3072
XFX Radeon HD 7970 Black Edition         Catalyst 8.921.2-120119a  1000         1425          3072

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at its Ultra quality settings with FXAA enabled.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The Elder Scrolls V: Skyrim

Our test run for Skyrim was a lap around the town of Whiterun, starting up high at the castle entrance, descending down the stairs into the main part of town, and then doing a figure-eight around the main drag.

Since these are pretty capable graphics cards, we set the game to its “Ultra” presets, which turn on 4X multisampled antialiasing. We then layered on FXAA post-process antialiasing, as well, for the best possible image quality without editing an .ini file.

The plots above show the time required to render the individual frames of animation produced by the game during our 90-second test run. The uninitiated will probably want to read my article explaining our new testing methods in order to get a sense of what we’re doing. Just looking at these raw plots, though, one can see that the vast majority of the frames are rendered in under 30 milliseconds, which is quite good. A steady stream of frames at 30 ms would translate into an average frame rate of about 33 FPS. More importantly, the relatively small number of high-latency frames from these video cards means all of them should deliver a relatively smooth sense of motion in the game.

However, I saved one plot for last. Notice that the vertical axis stretches to higher values here, and there are lots of really rather long-latency frames. The GeForce GTX 560 Ti is the only card of the bunch with just 1GB of video RAM onboard, and its memory capacity is overwhelmed at this four-megapixel resolution and image quality level. Several of the other cards have 3GB of memory, which is probably overkill for most single-display configs these days, but having more than 1GB on tap can sometimes be very helpful.

The 7970 and 7950 take the top two spots in the traditional average-FPS sweeps. Notice how the GTX 560 Ti manages to pull off an average of 36 FPS. Just looking at that outcome, one might be tempted to think it performed reasonably well here, which is not the case. Truth is, averaging out frame rates over a full second doesn’t offer enough resolution to capture those frame time spikes that can make games feel laggy and sluggish.

Another way to quantify gaming performance that sidesteps some of the problems with FPS averages is to consider the graphics subsystem’s transaction latency, much like one might do in benchmarking a database server. Seems odd, perhaps, but I think consistent delivery of low frame times is what a graphics card should do. The chart above shows the 99th percentile frame time—that is, 99% of all the frames were returned within x milliseconds—during our five 90-second test runs.
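
For the curious, the percentile math itself is dead simple. Here’s a minimal sketch, in Python, of how such a figure could be computed from a list of per-frame render times; this is an illustration of the metric, not our actual analysis scripts.

```python
# Minimal sketch: the 99th-percentile frame time from a list of per-frame
# render times in milliseconds. Illustrative only, not our real tooling.

def percentile_frame_time(frame_times_ms, pct=99):
    ordered = sorted(frame_times_ms)
    # Index below which roughly pct% of all frames fall
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    return ordered[idx]

frame_times = [16.7, 18.2, 15.9, 42.0, 17.1, 16.5, 19.8, 55.3, 16.9, 17.4]
print(percentile_frame_time(frame_times))  # -> 55.3 with this tiny sample
```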

As you can see, the differences between the cards are really very minor here. Everything but the GeForce GTX 560 Ti delivers the vast majority of frames in well below 40 milliseconds. This measure tends to focus on the pain points, the most difficult spots during the test run. In this case, that would be the first few hundred frames of the test session, where you can see in the plot above that the 7950’s frame times are just slightly higher than those produced by the GTX 580 and the GTX 560 Ti 448.

This last graph looks at how much time each video card spent spinning its wheels on frames that simply took too long to render. The idea is to quantify the depth of the problem when a card’s performance begins to dip into unacceptable territory. Our threshold of 50 milliseconds would average out to a rate of 20 FPS. We figure anything slower than that probably isn’t getting the job done in terms of maintaining a fluid sense of motion. For more on our rationale for this one, please see here.
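
Again, a small sketch may help make the metric concrete. The hypothetical Python below counts only the portion of each slow frame’s render time that spills past the threshold, which is one reasonable way to formulate a “time spent beyond 50 ms” tally; our actual scripts may differ in the details.

```python
# Illustrative "time spent beyond 50 ms" tally: for every frame that takes
# longer than the threshold, accumulate the excess. One reasonable
# formulation of the metric, not necessarily our exact scripts.

def time_beyond(frame_times_ms, threshold_ms=50.0):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frame_times = [16.7, 18.2, 72.0, 17.1, 55.3, 16.9, 104.5]
print(time_beyond(frame_times))  # -> 81.8 ms spent past the threshold
```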

Only the GTX 560 Ti, with its memory size issues, spends any time at all beyond 50 ms, it turns out. The 560 Ti wastes nearly two seconds during a 90-second run working on frames above our threshold, which is pretty awful. This is not smooth animation by any stretch, and our seat-of-the-pants impressions back up that assessment. I don’t know if I’d say Skyrim is entirely unplayable at these settings, but it’s definitely not fun or even acceptable.

On another note, I think we can say with confidence that there’s almost no perceptible difference between the 7950 and the 7970 here.

Batman: Arkham City

We did a little Batman-style free running through the rooftops of Gotham for this one.

We tried testing this game in its DirectX 11 mode at 2560×1600 in our Radeon HD 7970 review, but even the fastest cards suffered quite a few long frame times. Here, we’ve stuck with DX11, but we’ve dialed back the resolution to 1920×1200, to see if that helps.

Testing in DX11 makes sense for benchmarking ultra high-end graphics cards, I think. I have to say, though, that the increase in image quality with DX11 tessellation, soft shadows, and ambient occlusion isn’t really worth the performance penalty you’ll pay. The image quality differences are hard to see; the performance differences are abundantly obvious. This game looks great and runs very smoothly at 2560×1600 in DX9 mode, even on a $250 graphics card.

Well, so much for eliminating those long-latency frames. Every card here produces quite a few.

Fortunately, our new tools allow us to put even these spiky, inconsistent frame times into perspective. The fact that the 99th percentile result is essentially the inverse of the FPS average tells us that all of the cards are afflicted pretty similarly by those long frame times. Also, the fact that the 7970 and 7950 turn in 99th percentile frame times of about 40 milliseconds tells us they both perform reasonably competently—99% of the frames are returned at the equivalent of 25 FPS or better. The Radeons do the best job of avoiding the worst cases, too. Even the Radeon HD 6970 spends less time working on frames that take longer than 50 milliseconds than the GeForce GTX 580 does.

Battlefield 3

We tested Battlefield 3 with all of its DX11 goodness cranked up, including the “Ultra” quality settings with both 4X MSAA and the high-quality version of the post-process FXAA. We tested in the “Operation Guillotine” level, for 60 seconds starting at the third checkpoint.

The frame times in this level of BF3 are much more consistent than what we saw in Skyrim and Arkham City. That fact has some intriguing implications. Although FPS averages aren’t very high here, the true performance of the four fastest video cards isn’t bad. Even the GeForce GTX 560 Ti 448, which averages a relatively pokey 26 FPS, spends very little time above 50 ms.

Once again, the Radeon HD 7950 and 7970 take the top two spots, and they remain fairly close together.

Serious Sam 3: BFE

Here’s a new addition to our test suite. Serious Sam 3 is, in many ways, an old-school PC game, right down to the exquisitely detailed graphics options menus. Since these are very fast graphics cards, we tweaked the game to be nearly as high quality as possible, with the exception of antialiasing, where we went with 8X multisampling rather than supersampling.

Our test run came from one of the first few levels in the game, where we did battle with a nasty boss character. In order to make our test feasible (since we did a lot of dying), we restricted our test runs to 60 seconds each.

Yikes. This one is a clean sweep for the Radeons, by any measure. The GeForce GTX 580 does perform acceptably here, but it’s no faster than the Radeon HD 6950. The Radeons deliver frames at more consistently low latencies, so even their 99th percentile frame times scale down nicely with the faster cards. There’s a clear difference between the top two, the 7950 and the 7970, although both are incredibly fast.

Crysis 2

Our cavalcade of punishing but pretty DirectX 11 games continues with Crysis 2, which we patched with both the DX11 and high-res texture updates.

Notice that we left object image quality at “extreme” rather than “ultra,” in order to avoid the insane over-tessellation of flat surfaces that somehow found its way into the DX11 patch. We tested 90 seconds of gameplay in the level pictured above, where we gunned down several bad guys, making our way up the railroad bridge.

Once more, the 7950 and 7970 lead the pack, and once more, the difference between the two doesn’t amount to much.

All of the drama happens among the older, slower cards. The Radeon HD 6950, for instance, manages a higher FPS average than the GeForce GTX 560 Ti, but it has a problem with high-latency frames, as expressed in its last-place finish in our two frame time-centric graphs.

Civilization V

We’ll round out our punishment of these GPUs with one more DX11-capable game. Here, we simply used the scripted benchmarks that come with Civilization V. One of those benchmarks tests DirectCompute performance with a texture compression workload.

The Radeon HD 7950 inherits the compute-focused improvements in the GCN architecture, obviously, and its new, more efficient shader scheduling scheme allows it to outperform the GTX 580 yet again.

The “Leader” test shows a series of scenes depicting the leaders one can play in this game. Those characters are nicely textured and lit, and we get the sense this test especially stresses the GPU’s pixel shader throughput. Radeons have long performed relatively well in this test, and the 7950 is no exception.

One final conquest here for the 7950 over the GTX 580, although this one-FPS difference isn’t exactly a huge margin.

Fun with clock speeds

Although our XFX 7950 Black Edition card came out of the box at 900MHz, 100MHz above AMD’s baseline clock frequency, it still had plenty of overclocking headroom waiting to be exploited. We were able to take it up to 1025MHz using AMD’s Overdrive control panel, simply by sliding the GPU core speed slider, at the default voltage of 1031 mV. (Ok, so we also raised the PowerTune TDP cap by the maximum 20% allowed in the control panel, to avoid any power-based frequency capping.) That’s pretty darned good by itself.

To go further, we fired up MSI’s Afterburner utility, which allows for voltage tweaking. After some experimentation, we got the core clock up to 1175MHz at 1162 mV, with GPU temperatures around 89-90° C. Higher clock speeds produced visual artifacts, even at higher voltages. Also, when pushing to higher voltages, we seemed to be surpassing the limits of the card’s cooler; temperatures quickly crept up to around 96° C, which is a bit uncomfortable. We were also able to take the card’s memory clock quite a bit higher. After further experimentation, we settled on a memory frequency of 1575MHz, well above the XFX card’s default of 1375MHz.

Bottom line: that is a frickin’ lot of clock speed headroom for a GPU.

While we were messing with clock speeds, I figured we’d take this opportunity to look at the performance of the 7950 and 7970 at their non-Black Edition clock speeds, as well.

When overclocked to 1175/1575MHz, the 7950 turns out to be even faster than the hot-clocked XFX 7970. Still, the difference between the various Tahiti-based products only adds up to a handful of frames per second, and all of them achieve a higher average than the GTX 580. When we turn to the 99th percentile frame times, the gaps between the solutions grow even smaller. In this case, frame latencies are probably limited primarily by other factors, such as CPU performance or driver execution speed.

Power consumption

Since the different versions of these cards have different coolers and clock speeds, we’ve tested power consumption, noise, and temperatures for both reference designs and the branded cards, as labeled below. We’ve also included the overclocked 7950 in the mix.

Uniquely, the Tahiti GPU is able to turn off power to most of the chip when the system has sat idle long enough for the display to go into power-saving mode. The fans on the cooler stop spinning, and the GPU requires almost no power.

The stock 7950 draws less power while running Skyrim than any other card we tested, and even the overclocked (and overvolted) XFX 7950 is relatively tame, requiring less power than a Radeon HD 6970. Note that the GeForce GTX 580-based system requires over 100W more power at the wall socket than a 7950-based system, even though the 7950-based config is faster overall.

Noise levels and GPU temperatures

Since the 7900-series cards turn off their fans when the display is off, they fare very well here. I believe some of the variance among the 7900-series cards on the dB meter comes from some electrical chatter coming from the PSU on our test system. I think it’s time to replace it with something a little newer, perhaps.

The most impressive combination of peak noise levels and GPU temperatures has got to be the XFX Radeon HD 7970 Black Edition’s. That card, with its dual-fan cooler, is among the quietest we’ve tested, and the corresponding GPU temperature is only 72° C. Strangely enough, the XFX 7950 is a little bit louder despite running just one degree cooler. Regardless, both cards perform well here, as does AMD’s reference-design 7950 card. We only wish the reference 7970 were a little quieter.

Conclusions

Let’s bust out some of our famous value plots, taken from the results of the five games we tested manually, in order to give us a sense of price and performance.

The XFX Black Edition versions of the Radeon HD 7950 and 7970 are a little more expensive and a little bit faster than the bone-stock versions of these cards. That’s accounted for both in the prices and the performance results shown above. Even so, the Radeon HD 7950 looks to be in an enviable position, closer to the top left corner of the plot than either the GeForce GTX 580 or the Radeon HD 7970.

Let’s try something a little different and bring our 99th percentile frame times into the mix. To keep things readable, we’ve converted those frame times into their FPS equivalents, so the top left corner of the plot remains the most desirable place to be.
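
The conversion is just the reciprocal of the frame time, as this one-liner (in the same illustrative Python as the earlier sketches) shows:

```python
# A sustained frame time of t milliseconds is equivalent to 1000/t FPS.
def fps_equivalent(frame_time_ms):
    return 1000.0 / frame_time_ms

print(fps_equivalent(40.0))  # -> 25.0 FPS, the figure quoted for Arkham City
```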

Using this metric squeezes all of the solutions together a little more closely, but it doesn’t hurt the 7950’s competitive position at all, which remains quite nice. The only major change is that the GeForce GTX 560 Ti 448 improves its position relative to the Radeon HD 6970.

Any way you slice it, the Radeon HD 7950 does what it set out to do: undercut the GeForce GTX 580 in price while trumping it in performance. Since the 7950’s power consumption is ridiculously, remarkably lower than the GTX 580’s, there’s little left to do but declare the 7950 the clear winner. That’s before one even figures in the new bells and whistles built into the Tahiti GPU, including a hardware H.264 video encoding engine and support for PCI Express 3.0, neither of which the GTX 580 can match.

We are kind of left wondering why anyone would pony up for a Radeon HD 7970 now that cards like this XFX 7950 Black Edition will be selling for $499. This hot-clocked 7950 card performs very much like a stock-clocked 7970 but costs substantially less. I suppose those folks who want the very best will pay the premium for an up-clocked version of the 7970 like XFX’s Black Edition, which really is the finest video card we’ve ever tested, with a much quieter cooling solution than AMD’s reference design. Still, with all of the overclocking headroom in the 7950, paying more for the 7970 seems… unnecessary.

Before we go, I should narrow my recommendations a little bit based on some important considerations. If you have “only” a two-megapixel display, something with a 1920×1200 or 1920×1080 resolution, then a Radeon HD 7950 is probably overkill, even for the very latest games. Gorgeous, fluid, seriously desirable overkill, but still—you could get away with a card like the GeForce GTX 560 Ti 448 and play nearly all of today’s games at very nice settings. The 7950 is probably best suited for four-megapixel displays or multi-monitor gaming setups.

That’s my take, at least. The rest of us with smaller monitors may want to wait for the next entry in the Radeon HD 7000 series, which may appeal to a broader audience.

Comments closed
    • paulsz28
    • 8 years ago

    Would it be possible to see a Frame Latencies by Percentile graph for the 7950, like was included in the GTX 680 review? Since the 7950 wasn’t a point of reference in the 680 review, I’d like to see how it fares at the other percentiles.

    Thanks!

    • focusedthirdeye
    • 8 years ago

    Thanks for the excellently written and very informative article! -dougrochford

    • Fighterpilot
    • 8 years ago

    LOL @ posters replying to Silicondoc.
    He’s been banned from just about every reputable tech site on the Net for his constant thread crapping and trolling on anything to do with AMD.
    Actually I thought he was already banned here some time ago?

    Anandtech banned him ages ago for his vicious, hate-filled posts there….

      • Silicondoc
      • 8 years ago

      For telling the truth and pointing out the big LIE about the over 11″ length of the AMD videocard the reviewer claimed was not a hair longer than 10.5″ – because the fanboy bias was so thick in his head he couldn’t believe the picture he took and showed, and claimed he was not allowed to post in the article, but shared with us in the replies.
      Once you catch the site admins red handed, and I did, they aren’t very friendly anymore.
      Very unfriendly, like you just were, in fact.

        • clone
        • 8 years ago

        the moment you type fanboy….. no one listens, from that word forward it only blows back on the author.

        as for the video cards length, no one cares, $550 video cards aren’t destined for Shuttle mini pc’s…. dare I say the only ones who worry about video card length are…. fanboys.

        performance
        power consumption
        noise
        warranty

        all good

        video card length concern regarding the high end… c’mon, beyond that I suspect it was a mistake because only an idiot would try to lie about the physical length of something so easily verified.

      • Silicondoc
      • 8 years ago

      PS – no one replied, but you and another made smart aleck rude comments. That’s not replying, that’s trolling.

    • Xenolith
    • 8 years ago

    Any leaked firmware to convert these into 7970s?

    • amythompson172
    • 8 years ago

    Fastest single chip video card.Runs cool and very quiet even at full load, gave me 40-45% extra performance than my 6970 HIS icecool,no crashes after hours of gaming!
    Best ever, pricey but you pay for performance I am very happy with my purchase.I had before sapphire 6990 and while performance was better it was little too much heat and noise from that card so I sold it and got one 6970 from his and it was very good but I missed my 6990 performance…not anymore!!!!!

    [url<]http://www.amazon.com/gp/product/B006P88VO8/ref=as_li_ss_tl?ie=UTF8&tag=emjay2d-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B006P88VO8[/url<]

      • Palek
      • 8 years ago

      Uhh… Why is the text of your post the same as the first review on the Amazon product page?

    • moose17145
    • 8 years ago

    By the sounds of it, you had two cards of each, and could have done a CF setup with these. Personally I would have liked to see this so that I could see the microstuttering graphed out, or to see which games suffer from it (or perhaps which games are more affected and which ones seem to be less affected). Since TR has started the new benchmarking system and moved away from raw FPS numbers, I do not think any CF / SLI setups have been tested using this new system. If I am wrong, and a CF/ SLI setup has been tested using the new benchmarking system please correct me and sorry for the mistake.

    Also, I am aware that time constraints probably did not give you enough time to do a CF/SLI test, and the article was excellent as always and please keep up the good work. I guess as the old saying goes, the more you have the more you want… 🙂

    • wingless
    • 8 years ago

    Where are the GPGPU/Folding@Home tests?

    FYI, the 7xxx series works with Folding@Home as is! If I spend this much on a GPU, gaming is not the ONLY thing I’m going to do with it…

      • Flying Fox
      • 8 years ago

      Yes it works, but how well does it compare to Nvidia’s GPUs, even current gen?

    • no51
    • 8 years ago

    How far we’ve gone. It used to be that SLI/CF was considered a gimmick at best but now it’s used to justify why this card is ‘bad’.

      • Krogoth
      • 8 years ago

      HD 7950 is superior to 6950 CF.

      It has more consistent performance, only needs two slots (6950 CF eats up four to five slots), generates less heat/noise and consumes less power. More importantly for the FPS junkies, it doesn’t suffer from micro-stuttering.

        • Airmantharp
        • 8 years ago

        This being true, it’s still considerably slower.

        In the case of BF3, a single HD79xx might be an improvement due to the rampant micro-stutter issues seen, something I’m also dealing with, but in many other games the HD6950 CFX pair will be faster and smoother overall.

        I’ve personally considered ditching my HD6950’s for the 79xx’s, but right now the cost is unjustifiable for just BF3, which I wind up running at all Medium without MSAA due to the overall performance constraints in multiplayer. Given that even a single HD7970 wouldn’t be as fast and thus would require a CFX (or SLi) solution, I’d still be concerned about micro-stutter.

        This is at 2560×1600 with a 2500k at 4.8GHz- below that resolution, a single high-end 28nm card of any make should be more than sufficient for high quality micro-stutter free framerates.

          • Krogoth
          • 8 years ago

          On paper, the 6950 CF has more resources at its disposal and can throw out higher FPS peaks. This is assuming that the game in question has nearly 80-100% scalability. In reality, this is rarely the case. In the worst case, it will be “slower” overall and with micro-stuttering on top of that.

          I have played around with the SLI/CF bandwagon and am painfully aware of the weaknesses. In most instances, it makes more fiscal sense to stick with a single high-end card than to slap in two mid-range cards that in theory can outperform it, but rarely do in practice.

            • Airmantharp
            • 8 years ago

            I agree that it really comes down to the game.

            In the fully-patched Skyrim, I’ll bet that my 6-series CFX will be both faster and just as smooth as a single 7-series, while in BF3 it would probably be faster but suffer from enough micro-stutter and other frame rate inconsistencies that the resulting performance could be deemed to be slower, as you said above.

            With the new 7-series cards, it’s more of a resolution conundrum than it is one of price/performance- at 4MP and greater I’ve found that not only does one need the fillrate but also the frame-buffer to match, and that severely limits your options.

            Due to switching up from a 1920×1200 (2MP) to a 2560×1600 (4MP) display, I swapped my slightly used (and awesome) GTX570 for an NIB HD6950 2GB, and then bought a second. It had nothing to do with one card vs. two or Green Team vs. Red, but that the HD6950 2GB CFX solution was literally the least expensive setup I could find that would probably handle BF3 at 4MP.

        • Bensam123
        • 8 years ago

        God I hate microstuttering. More sites should make an issue of this. Completely putting aside all the negatives, that alone is enough of a reason for me to never buy a multi-gpu setup.

          • Airmantharp
          • 8 years ago

          I’m with you- except that like many more enthusiasts/gamers these days, I’m trying to push a 4MP 30″ panel, and that both the frame-buffer and fillrate requirements limit your options considerably.

          I don’t deem a single HD79xx to be fast enough, particularly for BF3 multiplayer, and thus I’d need two cards of either generation or team.

            • BestJinjo
            • 8 years ago

            Airmantharp has this right!

            HD79xx series barely manages 30-35 fps in BF3 multiplayer at 2560×1600 with AA. When you turn off AA, the new cards are still barely faster than HD6970 and GTX570/580.

            [url<]http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7950-crossfire/26/#abschnitt_battlefield_3[/url<]

            Basically, they aren’t fast enough to drive modern games with AA at 30 inch resolutions.

        • BestJinjo
        • 8 years ago

        For the nth time, not even close!

        Even an overclocked HD7950 is far far slower than HD6990 (HD6950 CF unlocked/overclocked to 6970 speeds).

        [url<]http://hothardware.com/Reviews/AMD-Radeon-HD-7950-Tahiti-Pro-GPU-Review/?page=10[/url<]

        Even after a 1.05ghz overclock on the GPU, the 7950 was still 20% slower in Just Cause 2 and a whopping 32% slower in AvP (specifically important since it uses SSAO and Tessellation). Let’s try this again:

        [url<]http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7950-crossfire/5/#abschnitt_leistung_mit_aaaf[/url<]

        At 1920x1080 8AA — HD7950 is only 10% faster than HD6970 but loses to HD6990 by 44%
        At 2560x1600 8AA — HD7950 is only 12% faster than HD6970 but loses to HD6990 by 48%

        Considering HD6950 was available for < $260 for 12 months and unlocked/overclocked into an HD6970, HD7950 is only 20% faster than HD6950 @ HD6970 at best for $200 more!! But once you crank the level of detail in modern games, the HD7950 chokes (and so do all the 6970/GTX580/HD7970 cards). Basically, these cards are faster on paper, but not fast enough for 2560x1600 gaming and not enough to drive 3 monitors either by itself. Unless Kepler is way faster, this generation can be easily skipped by HD6970/GTX570/580 users. They would need at least 75-100% more performance to actually improve playability.

          • no51
          • 8 years ago

          I wish I had as much disposable income as those people who upgrade every gen.

          • Krogoth
          • 8 years ago

          LTR, I said more “consistent” performance, not faster.

          Performance on SLI/CF is heavily tied to the drivers and the game in question.

    • kamikaziechameleon
    • 8 years ago

    Ok, so I begin waiting for this to be price cut to 300 dollars so I can buy it.

      • cynan
      • 8 years ago

      I’d even settle for $350 (and $450 for the 7970). Come on Kepler!

    • TheMonkeyKing
    • 8 years ago

    I wonder if the first generation of 7950’s (using the AMD reference design) will end up like the 6950’s. Someone figures that since it uses the same silicon as the 7970’s, it should be able to mod the BIOS to unlock the shaders and speed to a 7970.

      • cynan
      • 8 years ago

      I think the 7950 has a slightly shorter PCB due to having somewhat less robust power supply circuitry compared to the 7970. Whereas, as I recall, the PCBs for the 6950 and 6970 (launch cards) were identical. While this may not preclude an unlock of the extra shaders, it sort of indicates that the overclocking potential of the 7970 should be superior, especially compared to a 7950 with extra shaders unlocked, regardless of the binnings of Tahiti chips.

    • General Blue
    • 8 years ago

    Excellent review!
    But why use TES5 to test the overclock gains? Something less CPU-limited would have been better, no?
    Good job anyway.

      • Damage
      • 8 years ago

      With small differences between the cards, we wanted a very closely repeatable test, and our Skyrim scenario was best on that front. I agree, though, that we may want to try something less CPU-limited next time–although I think maybe the latest Skyrim patch, which we didn’t use here, may change the dynamics some.

        • General Blue
        • 8 years ago

        Ok! Thank for the reply 🙂

    • cynan
    • 8 years ago

    A bit off topic, but I’d thought I’d mention, FYI, that XFX no longer offers lifetime warranties for non Black Edition cards. I think the HD 79XX are the first products to reflect this. I think the warranties for the “regular” cards are now only 2 years.

    • cynan
    • 8 years ago

    Am I the only one that is having a bit of trouble getting used to stock OC’ed AMD cards at launch? This really plays havoc with AMD’s original product segmentation. A 900mhz HD 7950 when the stock HD 7970 is 925 mhz? What a mess.

    Also, it is unclear to what degree XFX bins their Tahiti chips for their Black Edition cards. I wonder how reflective the overclocking results are of the HD 7950 in general…

    • dpaus
    • 8 years ago

    So many of you are talking about how important it is for Nvidia to deliver competing cards (to get prices down); allow me to offer a thought to ruin your day: with the demand for development resources for ARM chips and the relatively pedestrian GPU modules to go with them, and with the rapidly shrinking size of the market for discrete graphics cards, [i<]especially[/i<] ultra-high-end discrete graphics cards, at what point is it no longer worth it for Nvidia to even compete in this segment? Just askin'....

    • flip-mode
    • 8 years ago

    It would be real nice for Nvidia to show up to 28nm ASAP so we can get some price competition. As things are going, the pricing trend that AMD is setting is less than exciting.

      • HisDivineOrder
      • 8 years ago

      AMD is milking the market while they can because they have a bad feeling about what Kepler is going to show up and do to them. I can see why, too. 580 is hanging in the same neighborhood as 7970 and 7950, but the 79xx series smokes the 69xx series because of fab advantage. So imagine what the 28nm improvement to 580 is going to be like.

      AMD needs to get as many people as possible to pony up $450+ before nVidia shows up and rains on their parade. After that, they’ll have to price lower to get anyone to buy their cards, so they’ll once again have the razor thin profits of yesteryear.

        • ish718
        • 8 years ago

        lol Nvidia has a lot of catching up to do. By the time Kepler is released, there will already be an HD7990 on the market…

      • Xenolith
      • 8 years ago

      Let AMD have their margins. They have an excellent product here, they should be rewarded.

    • yogibbear
    • 8 years ago

    This card is too expensive. I could purchase 2 x 6870’s and still have enough change left over to buy some blu-rays, a coffee and new jeans.

      • geekl33tgamer
      • 8 years ago

      My favourite e-tail store in the UK has 6950’s for £190. Buy 2 in Crossfire for £380 and you will get better FPS on average in games over a single 7950 that’s currently going (cheapest) for £399.

      Some will argue the £20 extra is worth it for a single card solution, but IMO the Crossfire set-up is cheaper to buy at least, and will perform faster in *most* games.

        • Meadows
        • 8 years ago

        That £20 will also buy you a quieter, cooler PC case with a lot less power used, and support for upcoming standards and no pressure to wait for “crossfire patches” and the like.

        Never underestimate £20.

        • sschaem
        • 8 years ago

        Not true for Battlefield 3, and you will need a power supply with at least 100w extra power for the 6950 rig.

        This might be true for older games, but do you really want to deal with micro stutters and all the issues related to crossfire?

        I can see your point if it was half price and 90% the performance, but two 6950 vs a 7950 is a bad, bad idea.

          • BestJinjo
          • 8 years ago

          HD7950 giving 90% of the performance of HD6950 in CF in Battlefield 3?

          hahaha. That’s a joke if there ever was one.

          [url<]http://www.bit-tech.net/hardware/graphics/2012/01/31/amd-radeon-hd-7950-3gb-review/5[/url<]

          2x HD6950s unlocked ~ HD6990 and that is 46% faster at 1080P 4AA and 40% faster at 2560x1600. The 7950 in that review is an overclocked 925mhz version too.....

          If you scroll to the bottom at Bit-Tech for their Eyefinity benchmarks, a single 7950 is barely faster than an HD6970 (basically both are unplayable). The difference is, HD6950 was $240-250 12 months ago and could unlock into an HD6970. Here AMD is asking us $450-480 for just 20% higher performance 1 year later.....with questionable chances of unlocking. So basically 2x the price for 20% more performance and w/e extra you can squeeze with overclocking. Great!!!!

          This card is just as overpriced as GTX580 was, arguably even worse since it comes out almost 14 months later. Most of us who have been building computers for a long time understand why now is not really a good time to buy a new GPU, unless you can easily afford one. This is especially true since the entire 1st half of 2012 is all console ported games. So it doesn't particularly hurt to wait a bit. We might see a price war, or much higher performance from Kepler.

          Considering GPUs are often the highest priced component in a gaming rig, it's pretty difficult to blindly drop $450-550 on a GPU without seeing what the competition is bringing, especially since these cards are barely faster than the GTX580 (but that card was built on the old 40nm manufacturing process). HD7950/7970 only look great because people keep comparing their price/performance to the GTX580's pricing. The problem with that argument is the price of GTX580 was itself too high, and is esp. now since that card is 14 months old! GTX580's exorbitant $480-500 prices shouldn't even be taken into consideration when trying to assess the 'relative value' of HD7900 series. Overclocked GTX570's delivered GTX580 level of performance for $370 12-months ago, while HD6950 @ 6970 now gives 80% of the performance of the 7950 for half the price.....ouch.

          I think HD7950 is priced too closely to HD7970 imho. The entire series is pretty underwhelming at the moment until we see more competition to gauge what the 28nm generation brings in total.

            • sschaem
            • 8 years ago

            The link shows the 2GB 6950 to have a 23fps avg, vs 35fps for the 7950

            The 6950 is over 50% slower… 50% s l o w e r

            And no, two 6950 != 6990, not by a long shot… and we are not talking about hacked bios/unlocked parts.
            And the 6990 is $730, how does that make any sense!?

            [url<]http://www.amd.com/us/products/desktop/graphics/amd-radeon-hd-6000/hd-6950/Pages/amd-radeon-hd-6950-overview.aspx#2[/url<]
            [url<]http://www.amd.com/us/products/desktop/graphics/amd-radeon-hd-6000/hd-6990/pages/amd-radeon-hd-6990-overview.aspx#3[/url<]

            Also crossfire more often than not delivers non-linear scaling. Lets not forget that the 7950 OCed jumps from 35 to 42fps.

            I'm not saying nvidia will never release a card cheaper/better than the 7950 or 7970, you are welcome to wait. (I am)

            My main gripe: if you already have a 6950, a second 6950 makes sense... but not as a new purchase. And I'm fully aware that people can burn their money anyway they want.

            • BestJinjo
            • 8 years ago

            Of course cross-fire has inherent limitations. But it’s ridiculously easy to unlock and overclock an HD6950 is what I was saying. Therefore, what you are really comparing are 2x HD6970s vs. an overclocked HD7950.

            In the link provided, HD6990 has between 40-46% more performance over a 925mhz overclocked HD7950. Even if you overclock the HD7950 more, it won’t ever beat HD6990 in BF3. In other games, sure, but not in BF3.

            Also, contrary to your view, HD6950 CF is extremely close in performance to HD6990. Tested on this site:

            [url<]https://techreport.com/articles.x/20629/4[/url<]

            Unlocking HD6950s doesn't add that much performance. It has been shown that overclocking is what matters. Either way, HD7950 is barely 10% faster than GTX580 and less than 20% faster than an HD6970, but it's priced at $450+:

            [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/51076-amd-radeon-hd-7950-review-tahiti-pro-arrives-24.html[/url<]

            That seems ridiculous, regardless whether or not Kepler is still here. I mean really? More than a year later after unlocked HD6950 for 2x the price for 20% more performance? That's underwhelming. The fact that HD7970 has so much overclocking headroom on the table almost guarantees that AMD is waiting for Kepler to 'refresh' 7900 series. I mean, if you can't wait, sure upgrade. But right now is an awful time imho. Pricing is completely out of sync due to lack of competition.

            • derFunkenstein
            • 8 years ago

            You never buy in this price range because it’s a great value.

            • cosminmcm
            • 8 years ago

            Splendid comment

            • can-a-tuna
            • 8 years ago

            Why are you linking some nvidia-biased site's battlefield 3 graphs here when the game itself has been tested here? [url<]https://techreport.com/articles.x/22384/6.[/url<]

            And why are you arguing based on a single game? Check those Serious Sam 3 results if you dare. There is some serious oomph in the Tahiti lineup!

            It's pretty clear that HD7950 is more powerful than GTX580 while consuming substantially less energy, plus it has advanced features like DX11.1. Drivers of GTX580 are pretty much "maxed out" but there will be more performance improvements with the Tahiti lineup. And price is what it is. It's the second fastest GPU on the planet so I don't see a 400-500$ price tag as a problem. Don't expect to get a Ferrari at the price of a Toyota. If Krapler is better (when it ever gets out), then AMD sure as hell will drop those prices.

          • geekl33tgamer
          • 8 years ago

          Ok, consider this:

          1) Most people shopping for graphics cards like this are also the type most likely to have a power supply with a lot of overhead. For example, my PC’s PSU is 750W, but pulls nowhere near that under load.

          2) Power load is also not a running-cost concern. Crossfire-enabled systems completely turn off the 2nd (or 3rd + 4th if you’re rich/bonkers) cards when you’re at your desktop. The 1st card clocks down to use hardly any power at all, either. Peak load is usually only sustained for short periods of time, and will have a negligible impact on energy bills.

          3) Micro-stutter IMO has been blown completely out of proportion. My current PC is Crossfire-enabled with 2 6870’s, and before that I was rocking GTX 280’s in SLI, and before that, 8800GT’s in SLI. Other than some heat & noise issues from my single-slot 8800GT’s back in the day, I’ve never had an issue. All games will play nicely with Crossfire, and if they don’t by default then RadeonPro is on-hand to step in and force that game to play nicely before AMD get their driver profiles into gear.

          I’ve also never experienced micro stutter myself – I believe it’s more of an issue on Crossfire / SLI if you leave V-Sync on and the frame rate is hitting that 59/60 Hz sync rate every time.

          4) BF:3 benchmarks with CF 68xx and 69xx hardware beat even the 7970 by a good 30% in terms of FPS. Just look at the benchmarks you can find for these cards on Google.

          5) Finally, BF:3 is not the be-all-and-end-all of modern day PC game benchmarking. IMO, it has an impressive graphics engine behind it, but it does still have some small optimization issues that need to be ironed out (It’s too new!)

            • sschaem
            • 8 years ago

            [url<]http://www.bit-tech.net/hardware/graphics/2012/01/31/amd-radeon-hd-7950-3gb-review/9[/url<]

            I know I'm taking BF3, but this engine is the closest to a forward indicator.

            $730 - 6990 (cream of the crop crossfire?) 49fps (Its close to two 6970)
            $540 - 7970 occed 47fps

            $200 for an extra 2fps.

            BTW, I agree with most of what you say. Crossfire / sli is not dog poop, but if I have a choice to spend $480 for 2x 6950 or $450 for 1x 7950, I'm grabbing the single 7950. (and getting another one in 2 years on ebay for a crossfire setup 🙂

            So my main issue is with people recommending two 6950 or two 6870 as a new purchase over the 7950, it makes no sense to me. Specially if you plan to keep the card more than 18 months.

            • d0g_p00p
            • 8 years ago

            don’t bring my name into this.

        • Xaser04
        • 8 years ago

        The cheapest HD7950 is £350, not £399…

        For reference, my HD7970 OC gives me 60% more performance in BF3 than my old HD6970 at 3560×1920 (or, if you want, 70-75% faster than a HD6950). The HD7970's minimums are also higher than the HD6970's average.

        (3560×1920, All High bar shadows on medium, No HBAO, 5 minute FRAPS run through of Operation Swordbreaker – HD7970 – 40/58.6 | HD6970 – 24/36.5)

        Bumping the clocks a bit higher should push me towards 70-80% faster (80-90% faster than a HD6950). Given non-linear CF scaling, I should be knocking on the door of HD6990 performance quite easily.

        My HD7970 cost £416.

        Taking into account the advantages of running a single card over CF, I would take the HD7950/HD7970 any day (I did, as a matter of fact).

        Granted, 1080p and even 1600p performance will tilt more in favour of CF in outright average-FPS terms, but the HD79xx series really comes into its own at very high resolutions.

      • flip-mode
      • 8 years ago

      Yes, Nvidia needs to show up to the party.

      • sschaem
      • 8 years ago

      Are you talking about the 1GB version?

      Also, the 6870 seems to be in the 560 Ti class; check the BF3 scores.

      14 FPS for the 560 Ti vs. *36* for the 7950.

      And CrossFire rarely delivers; it's not like two cards give you 2x the performance…
      And when it's available, CrossFire comes with tons of problems vs. a single-card solution.

      And you will need a beefy power supply for multiple 6870s…

      Saving $50 gets you almost guaranteed slower performance, a dramatic drop when falling back to a single card, micro-stutter, higher heat and power consumption in all cases, fewer features, no video encode (if that matters), PCIe 2 vs. PCIe 3, 2GB vs. 3GB… etc.

      Bad, bad idea… If you are a gamer, this is not worth a movie and a box of Folgers.

        • yogibbear
        • 8 years ago

        Well, I don't game at 2560×1600. :/ So yes, two 6870s would easily max BF3 at 1920×1080. As someone else suggested, even going to two 6950s would still be cheaper than a 7950.

          • sschaem
          • 8 years ago

          Two 6950s would be ~$480 stateside, so ~10% higher than the 7950.

          For BF3 @ 2560×1600, the 7950 is cheaper and faster than two 6950s. (Not sure how much better the 6950s get at 1080p.)

          From the little I've read of the 7950, this 6950 or 6870 CrossFire setup only makes sense if you already own one of those cards and want to upgrade.

            • BestJinjo
            • 8 years ago

            Incorrect.

            HD7950 faster than HD6950 in CF in Battlefield 3?

            hahaha. That’s a joke if there ever was one.

            [url<]http://www.bit-tech.net/hardware/graphics/2012/01/31/amd-radeon-hd-7950-3gb-review/5[/url<]

            2x HD6950s unlocked ~ HD6990, and that is 46% faster at 1080p 4xAA and 40% faster at 2560×1600. The 7950 in that review is an overclocked 925MHz version, too… That's also not even considering that a GTX560 Ti overclocked ~ GTX570. GTX570s in SLI will stomp an overclocked HD7950 in BF3 like no one's business.

          • Kaleid
          • 8 years ago

          Even at 1680×1050, 1GB is close to not enough in games such as Skyrim. GPU-Z tells me that memory usage is up to about 1000MB, and I'm quite sure that at 19×10 it would not be enough.

          I'd still go with a 2GB card: no hassles with CrossFire drivers and no micro-stuttering.

        • l33t-g4m3r
        • 8 years ago

        Doesn't stop the 580 SLI buyers; then again, they don't have much sense.

          • Airmantharp
          • 8 years ago

          GTX580 3GB cards in SLI made perfect sense until the release of these AMD cards – and if Nvidia runs your stuff better, they still do.

          -pats box with a pair of HD6950 2GB cards

            • Silicondoc
            • 8 years ago

            Thanks for saying that, and… it still makes sense for those not wanting to deal with the near-constant driver problems and crashes from AMD/ATI cards.
            It still looks to be a 50% gamble if you go red – you've got one chance in two that driver problems will literally plague your gaming experience.
            Nvidia, not so much, not much at all.
            I really started hoping things were getting ironed out when the 6000 series hit; now, with the new architecture in the 7000 series, I have my fingers crossed again.
            I am really sick of GSODs. The green or yellow speckled screens are kinda pretty…
            It's not worth the hassles.
            WHEN will AMD/ATI get it together is my question…

            • Krogoth
            • 8 years ago

            Oh great, not this troll…..

            • Silicondoc
            • 8 years ago

            You fit your own description perfectly

            • Airmantharp
            • 8 years ago

            I haven’t played nearly as many games with my CFX setup (and I do have a backlog), but any real driver issues have also been game issues. You can place that blame anywhere you want.

            For me, Skyrim and then BF3 were pains; Skyrim with the negative CFX scaling, and BF3 with its micro-stutter. Thing is, once AMD got the profiles right, Skyrim was cool from a graphics perspective, and BF3, well, I'm pretty sure it's just their engine. Patches to the game have definitely helped more than drivers.

            Also, note that our home team has measured micro-stutter with both companies' stuff. So while I understand the dislike for AMD's driver-development lag, it looks like both companies have some driver issues to work out with regard to multi-GPU configs.

            • no51
            • 8 years ago

            Funny, 'cause I had those issues with my 8800GTSes back when I first got them. And they had drivers that bricked said cards as recently as a year ago (I think while playing StarCraft; it basically turned off overtemp protection). Problems exist on both sides. Just sayin'.

            • clone
            • 8 years ago

            Given that Nvidia released cards last year with a driver issue that allowed them to burn up, that their drivers let StarCraft II burn up their cards, and that there is a thread dedicated to longstanding driver issues with Nvidia cards, I'd say it's a wash.

            That you greatly exaggerate the issues is more bothersome, given that the stupid "50% gamble" comment is blown out of all proportion. I've been using Nvidia and ATI for almost 15 years now, and both companies are great and both companies have driver problems… although ATI hasn't bricked any cards due to drivers that I can remember, while Nvidia did it with at least two just last year.

    • ptsant
    • 8 years ago

    Great review, the 7950 is THE card to get and it absolutely pwnz the competition. Too bad I’ll have to wait…

    I suggest you use log(frame time) instead of frame time for the vertical axis, since fps = 1/frame time. This would make differences in the 15-25 ms range easier to see. Variance of frame time would be nice, too. I wonder whether the eye is more sensitive to fluctuations in frame time (i.e., variance) or to frame times beyond a certain cut-off (i.e., the 99th percentile).
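    Something along these lines is what I have in mind (a toy sketch with made-up frame times; the numbers and names are mine, not TR's):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical per-frame render times in milliseconds (not real data)
    frame_times_ms = np.array([16.7, 18.1, 15.9, 42.0, 17.2, 16.5, 33.4, 16.9])

    # The two candidate smoothness metrics mentioned above:
    print("variance:", np.var(frame_times_ms))                    # fluctuation
    print("99th percentile:", np.percentile(frame_times_ms, 99))  # cut-off

    # A log scale spaces ratios evenly, so the 15-25 ms band isn't
    # crushed flat by occasional spikes; fps = 1000 / frame time (ms).
    plt.plot(frame_times_ms)
    plt.yscale("log")
    plt.xlabel("frame number")
    plt.ylabel("frame time (ms, log scale)")
    plt.show()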

      • yogibbear
      • 8 years ago

      In relation to your second statement… I think it's probably multiple high-frame-time frames occurring close together that would result in a more noticeable hitch when gaming. Also, extremely poor single frame times would probably be noticeable if they start exceeding the equivalent of multiple frames in length.
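      One way to put numbers on that hunch (my own toy heuristic, not TR's method) is to flag frames longer than twice the median, then look for clusters of them:

      # Made-up frame times in ms; the 2x-median threshold is an assumption
      frame_times_ms = [16.7, 17.0, 16.8, 41.9, 44.0, 16.9, 90.1, 16.6]
      median = sorted(frame_times_ms)[len(frame_times_ms) // 2]
      hitches = [i for i, t in enumerate(frame_times_ms) if t > 2 * median]
      print("median:", median, "ms; hitch frames:", hitches)  # [3, 4, 6]
      # Consecutive indices (3 and 4 here) mark the clustered slowdowns that
      # should read as a visible hitch rather than a one-frame blip.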

      • MrJP
      • 8 years ago

      Or maybe just use instantaneous FPS for the scale rather than frame time?
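      The conversion is trivial – a minimal sketch (toy numbers of my own):

      # Instantaneous FPS is just the reciprocal of each frame's render time
      frame_times_ms = [16.7, 18.1, 42.0, 15.9]
      instantaneous_fps = [1000.0 / t for t in frame_times_ms]
      print([round(fps) for fps in instantaneous_fps])  # [60, 55, 24, 63]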

      It requires less mental adjustment on the part of the reader conditioned by years of looking at average FPS plots, and has the advantage of giving the highest-performing parts the biggest scores. You’d also save some writing time with all the “…this is equivalent to 20fps…” comments.

      In fact, the value plot at the end of the article has already been plotted with frame times converted to instantaneous FPS to make it easier to understand and compare with previous reviews. The same logic could (and IMHO should) be applied to all the bar-graph plots as well.

      I know this has been discussed several times before after previous articles, but hopefully the benefits of this approach can be appreciated without getting bogged down in irrelevant arguments about the most appropriate units to use depending on the duration of the test…

      I’d also like to be completely clear that I’m not complaining about TR’s approach. This new way of reviewing GPUs is clearly better than anything else that’s out there, and hopefully the analysis and presentation of results will continue to evolve in the future.

    • OneArmedScissor
    • 8 years ago

    I hope that by the time the Nvidia cards come out, you guys switch at least the power test to a different platform than X58, with a more recent power supply. Showing AC draw at the wall with those components in the year 2012 is a bit misleading for someone building a new computer and trying to gauge what PSU is needed.

    • Ryu Connor
    • 8 years ago

    Scott/Damage – Would you be willing to use GPU-Z 0.5.8 to show us the ASIC Quality of your 7950/7970?

    Curious about the quality of the chips you were sent.

      • Damage
      • 8 years ago

      I’m out of the office this week. Maybe when I get back…

        • Ryu Connor
        • 8 years ago

        Thanks!

      • merkill
      • 8 years ago

      My Sapphire 7950 OC has an ASIC quality of 55% and a VID of 1.080V under load.

      • Firestarter
      • 8 years ago

      My PowerColor 7950 (reference axial cooler) has 66.8% according to GPU-Z; VDDC (core voltage, I guess?) is ~1.045V. It seems pretty happy with the GPU at 1050MHz and RAM at 1550MHz, but I'm guessing I'll have to settle for 1000/1500 unless I fiddle with the voltages or install another cooler.

      edit: not so happy at 1050MHz, apparently 🙂

    • flip-mode
    • 8 years ago

    Anand has a Sapphire 7950 that looks to be a lot better than the XFX 7950 in terms of the heatsink – it runs quieter and with lower temps.

      • cegras
      • 8 years ago

      XFX always likes to cut down on heatsinks. I purchased a 4870 from them once that looked like it had the stock heatsink but was actually a cut-down aluminum part. Their warranty may be great, their customer service may be great, but I'm too wary of their cards.

      • sschaem
      • 8 years ago

      This card clocks the GPU below reference voltage, at under 1V *overclocked*.

      That's 10% below the XFX, which is HUGE in terms of heat produced.

    • chuckula
    • 8 years ago

    Looks like a winner. Nvidia is SOL until April at the earliest. If nothing else, hopefully the GK104 parts will force some price reductions on both sides.

    • ALiLPinkMonster
    • 8 years ago

    “An investment in overkill is an investment in your future.”

    – ALiLPinkMonster, 2012

    • axeman
    • 8 years ago

    If only the other division of AMD could execute this well….

      • cynan
      • 8 years ago

      I just hope that ATI more or less remains intact after AMD finishes going through its death throws.

        • srg86
        • 8 years ago

        If (and I do mean if) that happened to AMD and they ended up not making CPUs any more, then maybe they should just rename themselves ATI.

        • flip-mode
        • 8 years ago

        It's throes. And do you realize how long AMD has been "going through death throes"? At this rate, you'll be dead before AMD.

          • cynan
          • 8 years ago

          [i<]And do you realize how long AMD has been "going through death throes"? At this rate, you'll be dead before AMD.[/i<]

          Oh, I don't know. I don't remember AMD ever cutting as much as 10% of its workforce before (like it did in November). And I don't know how long any company can survive when actually turning a profit in a given quarter is met with surprise more than any other reaction from analysts. And then there is the recent introduction of Bulldozer. Who are they going to sell those to again?

          Just because AMD has persisted in finding funding in the past, whether through subsidies or selling off its resources, doesn't mean it will continue indefinitely. And on that note, I don't see how they will ever be able to truly compete with Intel again without their own fabs. I think as soon as Intel finally gets decent on-die graphics that trickle down into its budget and mobile chips - which seems to be shaping up finally - AMD will be in big trouble. Perhaps you are right and the Future really Is Fusion. But tell me this: would you buy AMD stock?

          [i<]It's throes[/i<]

          It most certainly is (as in severe pains or pangs). Thank you for the correction. I learned something.

    • wingless
    • 8 years ago

    Since I'm spending $500 already, I may as well spend the extra $50 and get a 7970. I understand why AMD priced it so close to the 7970: the difference in frame rates is negligible in a lot of instances, so a $400 or $450 7950 would cannibalize 7970 sales en masse.

    Hurry up with Kepler, Nvidia! I want AMD to lower their prices so I can buy a 7000 series.

      • sschaem
      • 8 years ago

      Why can't Nvidia price the GTX 580 3GB at $399 today against the $449 3GB 7950?
      It's slower, hotter, has fewer features, is PCIe 2, etc…

      The cheapest GTX 580 3GB on Newegg is $549, $100 more than the faster 7950.

      And if Nvidia already sells the GTX 580 for more than the 7950 today,
      what makes you think it will sell a faster card than the 580 for less?

      You and the other drones need to wake up; it's AMD that is putting huge price pressure on Nvidia with the 7950.

    • flip-mode
    • 8 years ago

    [quote<]Although our XFX 7970 Black Edition card came out of the box at 900MHz, 100MHz above AMD's baseline clock frequency[/quote<]

    Should that be 7950? AMD is competing with itself at this point. I'd expect 7970 sales to crater once people see what an overclocked 7950 can do.

    • codedivine
    • 8 years ago

    [quote<]The rest of us with smaller monitors may want to wait for the next entry in the Radeon HD 7000 series[/quote<] As you say Mr Wasson

      • Aussienerd
      • 8 years ago

      Yeah, doesn't he have a 30″ monitor…

        • Damage
        • 8 years ago

        Only two! I have way more 2MP displays. 😉

      • jensend
      • 8 years ago

      Always nice to see people give a nod to reality. Looking forward to Cape Verde reviews (which will be much more relevant to most of us) in about a month.

    • flip-mode
    • 8 years ago

    What's up with the one-sidedness of Serious Sam?

    And that XFX card is very nice looking, isn't it?

    • Firestarter
    • 8 years ago

    Welp, just ordered the Powercolor PCS+ version. I hope the post-purchase justification doesn’t turn me into a rabid fanboy 😀

    edit: Well, looks like I wasn't the only one; it was out of stock before I finished the order 🙁
    Still, the vanilla PowerColor looks like it's good to go. All the other ones are marked up beyond what I would pay for them.

      • Arclight
      • 8 years ago

      Please do not buy a model with the stock or a lame cooler. I did the same thing you're about to do, and I regret it to this day. Trust me, it's well worth waiting for a good custom-cooled one to come into stock.

      Looking back, I would gladly wait two months or more for a DirectCU II or XFX Double Dissipation. Just waiiiiiiiit.

        • merkill
        • 8 years ago

        Depends; the stock cooler is better in CrossFire most of the time 🙂

        But for a single card, a custom cooler is worth the wait.

        Here in Sweden there are plenty of cards with custom coolers in stock.

        I'm having a hard time deciding which one is quiet, though. So many choices: Asus DirectCU II, XFX Double Dissipation, Sapphire OC, Gigabyte Windforce, PowerColor PCS+…

        Any advice to help me choose is welcome; currently thinking the Asus DirectCU II or the Sapphire…

        EDIT: Well, I ordered the Sapphire OC 7950, since the Asus DirectCU II wasn't going to be in stock for a month and the others didn't seem to be as good.
        Sure, the card will likely drop a lot in price in a few months when stock is plentiful and Kepler is out, but I have a two-week vacation now, so…

      • flip-mode
      • 8 years ago

      Just wait a couple weeks, then, man. Patience is a virtue.

        • Firestarter
        • 8 years ago

        My PC is a laptop from 2006. My patience ran out today.

    • can-a-tuna
    • 8 years ago

    What? A TechReport review on time?

      • Vivaldi
      • 8 years ago

      I'm sure I speak for most when I say I'm willing to wait patiently for TR's reviews, even if that means waiting weeks. All other review sites pale in comparison to the quality found here – even more so recently, with the new "inside-the-frame" analysis.

        • flip-mode
        • 8 years ago

        If need be, I'd rather see something on time, even if it's less in-depth in some ways, and then a follow-up article that dives deep. Having articles launch late is probably not good for TechReport's revenue stream.

          • JustAnEngineer
          • 8 years ago

          This. Do you remember the days when TR reviews used to cause a Slashdot stampede?

            • flip-mode
            • 8 years ago

            I’d be over at Anand right now if TR didn’t have this up.

          • paulWTAMU
          • 8 years ago

          There’s a happy medium–2-3 days after release? Probably fine. 2-3 weeks? Not so fine.

            • flip-mode
            • 8 years ago

            Doubtful. Think about it from a revenue standpoint. 2-3 days after launch, people have already spent hours poring over reviews from their primary and secondary – and maybe tertiary – websites. They may actually be quite fatigued at that point. And you're also missing out on the casually curious who just wanted to look at one or two of the reviews and then probably won't read another – they generate income too. And you're not going to get Slashdotted when you're 2-3 days late.

            Be there the moment the NDA lifts, period. That’s when you’ll get the most page hits, unique visits, etc.

            Of course, I’m totally guessing on all of this. Scott could come in and say that’s not the way it works at all. But that’s my guess about how it works.

      • luisnhamue
      • 8 years ago

      I was not expecting a TechReport review on time either. I got the link to the Tom's review on Facebook, and it made me check TechReport too, just out of curiosity, and it's kinda surprising.

      • tfp
      • 8 years ago

      This was more consistent in years past; I'm not sure what has changed.

        • Damage
        • 8 years ago

        Strange how you guys perceive things. We were late on the 7970 (worth it, Christmas) and the GTX 560 Ti 448 (minor launch), but looking back, I believe we were ready at launch with the ones before: GTX 560, 6450, 6790, GTX 590, GTX 550 Ti, 6990, GTX 560 Ti, 6950/6970.

        I think those were all up immediately. Maybe not one or two, but if not, they were close.

        Of course, we get really small windows for a lot of these launches, and producing quality reviews takes time. We do struggle with the balance between doing good work and being there first, which is built into our DNA, given the crazy-short time frames for a lot of these reviews. This gets harder over time.

        Still, I think the perceptions being expressed here are kind of odd.

          • yogibbear
          • 8 years ago

          You are correct. People remember bad things really well and are quick to forget/ignore when things go smoothly.

          • l33t-g4m3r
          • 8 years ago

          We understand. IMO the criticism is good natured, not negative, and the review is appreciated. I for one love the new benchmark methods that show the frame time. Don’t get that anywhere else.

          • flip-mode
          • 8 years ago

          What's leading to the disparity between perception and reality? What about CPU launch timeliness – is the record just as good there? If it is, and it's really just 1 in 10 launches that don't unfold quite right, then some of us dear readers suffer some really bad withdrawal in those rare instances and should probably check into rehab.

          • clone
          • 8 years ago

          Kudos on finding a way to present the micro-stuttering data in a way that's quick to discern.

          I had complained early on, but TechReport has found a way to clean up the delivery while still keeping the meat… very nice job.

          • Yeats
          • 8 years ago

          I think you guys should take whatever time you need. TR consistently produces the best-written reviews on the web.

          • tfp
          • 8 years ago

          I think it's reviews in general, not just graphics cards, that have been late, and some of us have been around far longer than the last year or two. I don't remember it happening as often in the past, but over the years that gets hard to remember as well.

          At least from my point of view: I like to read the reviews here, but there are other sites that also produce quality reviews. If the review isn't ready here, then I can find the results there. Sure, I might read the review here once it's posted, but it will be more of a quick skim, and I might gloss over benchmarks of software I've already seen. I know there are people who won't believe the numbers unless they come from TR, but I don't think the entire fan base is like that.

          Is it that big a deal to me? No, not really, but I would hope the perception forming about TR reviews being generally late is a big deal to everyone who works at/runs TR.

      • PainIs4ThaWeak1
      • 8 years ago

      *Shrug* … “You can’t make everyone happy.”

    • odizzido
    • 8 years ago

    Nice review of a nice GPU. Kinda expensive, but I could certainly use it for Stalker.

    • Krogoth
    • 8 years ago

    It feels like the 5850 and 5870 all over again, except that the 7950 and 7970 command higher price points.

    I suspect they command such high prices because demand is high, yields on TSMC's 28nm process are probably low, and AMD doesn't want a repeat of the 5850/5870 shortage.

      • shank15217
      • 8 years ago

      Why do you say TSMC yields are low?

        • Krogoth
        • 8 years ago

        The 7970 selling out within days of release, despite its $550-600 price tag, doesn't build much confidence.

          • flip-mode
          • 8 years ago

          Too many variables to draw that conclusion. AMD could be dedicating much of its capacity at TSMC to producing stockpiles of Pitcairn and Cape Verde. Those will sell in huge volumes, so it would not be at all surprising if the majority of chips AMD is having produced are already dedicated to those.

          Also unknown is exactly how many Tahiti GPUs AMD is having produced and where they are going – there could be a bunch going to HPC clients. How many cards are going to OEMs? How many are going into the retail channel? How many rabid Radeon fanboys are out there who have been desperately longing for a Radeon alternative to the GTX 580? How many people are shifting from Nvidia? How many people are still gaming on first-generation 40nm parts – GTX 480s and HD 5870s – and have been waiting for 28nm?

          All of these variables make it pretty hard to draw any conclusions about TSMC's yields. We haven't heard AMD – or Nvidia, for that matter – complaining about them.

          So for now, your speculation stands on ground covered in fog too thick to see your feet.

            • Lazier_Said
            • 8 years ago

            So AMD would rather stockpile lower end, lower margin parts for tomorrow than sell higher end, higher margin parts today?

            Right.

            • flip-mode
            • 8 years ago

            Hey, maybe you're right and microprocessor production lines can switch from chip A to chip B in 8 hours or so. If that's the case, and AMD is allocating all available production resources to Tahiti at the moment yet can't keep a couple thousand units supplied, then there is no way in hell they're going to be able to ship tens of thousands of units 2-3 weeks from now.

            The much more likely scenario is that AMD is allocating production resources in line with expected unit shipments: thousands at the high end, tens of thousands at the mid level, and tens and tens of thousands at the mainstream.

            Running out of 7970s isn't actually much of a big deal. You don't piss off Dell or HP by running out of those. You don't piss off tens of thousands of people. You piss off a couple hundred, and even then it's only till the next batch.

            If you run out of your mainstream part and have to tell Dell and HP and all the other system builders that they'll have to delay assembly of some of their higher-volume systems, making tens of thousands of people wait, you just might have a serious problem – maybe even one that the competition can turn into an opportunity.

            Odds are that AMD began producing Pitcairn and Cape Verde a while ago – maybe several weeks by now – and part of the reason those products haven't launched yet is that it's building a big enough stockpile to meet initial demand.

      • tfp
      • 8 years ago

      Like they are selling to make a profit or something…

    • yogibbear
    • 8 years ago

    So across the board the XFX Black Edition 7950 trumps the Zotac AMP GTX 580 as follows:

    GPU core clock: +10%
    Peak pixel fill: -26%
    Peak bilinear filt int8: +94%
    Peak bilinear filt FP16: -4%
    Peak shader arithmetic: +88%
    Memory Transfer speed: +34%
    Memory bandwidth: +35%

    This translates to roughly a ~15% average improvement in FPS across the benchmarks…

    So… this would mean that, architecture differences aside (which are probably the more likely culprit), gaming doesn't seem to benefit from these increases in memory transfer speed, bandwidth, FP16 filtering, or shader arithmetic, OR the card is bottlenecked elsewhere, so these advantages don't translate across.

    Does this just mean I should only care about clock speed and clock-for-clock benchmarks to tell me which architecture wins?
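    To make the mismatch concrete (a toy script of my own, just restating the deltas listed above against the ~15% observed average):

    # Peak-spec deltas for the XFX 7950 BE vs. the Zotac AMP GTX 580,
    # copied from the list above; observed average FPS gain ~15%.
    spec_deltas = {
        "core clock": 0.10,
        "pixel fill": -0.26,
        "int8 filtering": 0.94,
        "FP16 filtering": -0.04,
        "shader arithmetic": 0.88,
        "memory transfer": 0.34,
        "memory bandwidth": 0.35,
    }
    observed_fps_gain = 0.15

    # If games scaled with any single spec, the FPS delta would track it;
    # nothing here lines up neatly with +15%, which suggests the binding
    # bottleneck shifts from game to game (or lies elsewhere entirely).
    for name, delta in spec_deltas.items():
        print(f"{name:>18}: {delta:+.0%} (observed: {observed_fps_gain:+.0%})")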

      • sweatshopking
      • 8 years ago

      YES

      • cosminmcm
      • 8 years ago

      Yes

      • ish718
      • 8 years ago

      Blame the software?

    • berry
    • 8 years ago

    Great review as always!

    One thing I'm still unclear about, though: how does this stack up in terms of power consumption in multi-monitor scenarios compared to Nvidia's offerings (particularly in 2D mode)?

    • CampinCarl
    • 8 years ago

    So can we expect (roughly) the same time frame from now for a release of the 7870, i.e., end of February/beginning of March? Anyone know? Bueller? Bueller?

      • JustAnEngineer
      • 8 years ago

      [url<]https://techreport.com/discussions.x/22397[/url<]

    • yogibbear
    • 8 years ago

    Time to hunker down and get a mug of coffee as I read another TR review extravaganza!

    EDIT: Stephen King reference by the third paragraph! WOW.

    EDIT #2: Okay, I'm finished. The AMD 6950 on those value plots looks like the best proposition at 2MP and below… Hmmm… tempting to get rid of my GTX 260 and find a nice price cut on a 6950.

      • stupido
      • 8 years ago

      I think we will have to wait for nVidia’s new offerings to see some price cutting on the AMD side…

        • derFunkenstein
        • 8 years ago

        Yeah, I think so. Based on the graph, AMD is doing well enough at each price point vs. Nvidia that they have no reason to drop prices.

      • vargis14
      • 8 years ago

      With 2MP 120Hz monitors available, I don't think either card is overkill – well, perhaps the 7970.
      Now, I know the 120Hz monitors are designed for 3D, but in 2D with vsync on, I bet that 27-inch Asus 120Hz monitor would give the most fluid gaming you would ever experience in FPS games. The 23-inch monitors would too, but bigger is more better and gooder 🙂 (BTW, those extra words at the end I did on purpose.)
      Great review!
      P.S. A friend of mine has 580s in SLI hooked to a 120Hz monitor and a 60Hz monitor… the difference is night and day; 120Hz is sooo silky smooth compared to 60Hz. We used HAWX 2 to see the difference – sorry guys, it was lunch hour and we didn't have time to test anything else.
      Sorry, yogi, I hit reply by accident instead of post comment.
      TR review on 120Hz 2D gaming in the future, perhaps?
