If you’re plugged into the PC hardware and gaming worlds at all, you probably already know about the Radeon HD 7970, AMD’s world-beating graphics card that debuted recently. As the world’s first GPU built with 28-nm fabrication tech, the 7970 asserted its dominance in our tests by humbling the prior king of the hill, the GeForce GTX 580, through higher performance and lower power consumption. Not only that, but the 7970 is based on a new GPU architecture—oh-so creatively named Graphics Core Next—that establishes rough parity with Nvidia in terms of GPU-computing capabilities, as well. What’s not to like about that?
Well, the price, for one thing.
As the most desirable single-GPU graphics card in the world, the 7970 commands a premium heftier than Khloé Kardashian—prices start at 550 bucks and go up from there, depending on the variant. Right now, street prices are hovering somewhere between $550 and $600 at online vendors, although we’d expect things to settle down a bit in the coming weeks. Any way you cut it, that’s a tremendous ransom for a graphics card, one that most folks—even most dedicated PC gamers—will be hesitant to fork out.
Happily, AMD is practicing the time-honored tradition of product segmentation, spinning off a new version of the product that’s slightly hobbled and somewhat cheaper, in order to service multiple points along the supply and demand curves. (Yes, sexy, I know. Economics has that undeniable allure.) That’s where the subject of our attention today, the Radeon HD 7950, comes into the picture. AMD has tied the Radeon HD 7970 to the bed, handed the axe to Kathy Bates, and told her to swing away. The result is a graphics card that may never quite live up to its former potential but is much easier to catch in the wild. And heck, once it’s healed up, you might never know about that harrowing experience in a secluded cabin. Its final specs are pretty darned good, after all.
| | Radeon HD 7970 | Radeon HD 7950 |
| --- | --- | --- |
| GPU clock | 925 MHz | 800 MHz |
| Peak pixel fill rate | 30 Gpixels/s | 26 Gpixels/s |
| Peak bilinear filtering (int8/fp16) | 118/59 Gtexels/s | 90/45 Gtexels/s |
| Peak shader arithmetic | 3.8 TFLOPS | 2.9 TFLOPS |
| Memory spec | 3GB GDDR5 | 3GB GDDR5 |
| Memory bus width | 384 bits | 384 bits |
| Memory transfer rate | 5500 MT/s | 5000 MT/s |
| Memory bandwidth | 264 GB/s | 240 GB/s |
The same GPU silicon, a chip code-named Tahiti, drives both of these graphics cards. In the 7970, it’s at the height of its massively parallel powers, with 32 “compute units” (CUs) and clock speeds approaching one gigahertz, quite a lot for a graphics chip. For the 7950, AMD has disabled four of Tahiti’s CUs, leaving 28 intact and operational. That means a minor reduction in a few key graphics capabilities, including FLOPS and textures filtered per clock cycle. AMD’s recommended clock speeds are also down, from 925 to 800 MHz, further tempering the 7950’s potential throughput. The consequences of these changes may look pretty big on paper. After all, the 7950 gives up nearly a teraflop of computing power, and a similar share of its texture filtering prowess, versus the 7970.
But Tahiti arguably has an abundance of those attributes. What hasn’t changed so much may be more consequential. All of the chip’s ROP units—used for pixel output and antialiasing—remain intact, as are all six of its memory controllers. Furthermore, memory clocks are down less than 10%, so the resulting drop in memory bandwidth is smaller than Rick Santorum’s share of a primary vote, from 264 to 240 GB/s. These things, memory bandwidth and ROP throughput, are perhaps more likely to be at a premium in a Tahiti-based graphics card running today’s games. In other words, the 7950 may not be giving up much in terms of real-world gaming performance.
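For the curious, the peak numbers in that table are simple arithmetic. Here’s a quick sketch in Python; the helper functions are our own shorthand, and we’re leaning on GCN’s 64 stream processors per CU and two FLOPS per stream processor per clock (a fused multiply-add):

```python
# Napkin math behind the Tahiti spec table.
# Assumes: 64 stream processors (SPs) per compute unit, 2 FLOPS per SP
# per clock, and the 384-bit memory bus both cards share.

def peak_tflops(cus, clock_mhz):
    sps = cus * 64                      # total stream processors
    return sps * 2 * clock_mhz / 1e6    # TFLOPS

def bandwidth_gbps(transfer_mts, bus_bits=384):
    return transfer_mts * bus_bits / 8 / 1000  # GB/s

print(round(peak_tflops(32, 925), 1))   # Radeon HD 7970 -> 3.8
print(round(peak_tflops(28, 800), 1))   # Radeon HD 7950 -> 2.9
print(round(bandwidth_gbps(5500)))      # 7970 -> 264
print(round(bandwidth_gbps(5000)))      # 7950 -> 240
```

Run the numbers yourself and you can see exactly how much of the gap comes from the disabled CUs versus the lower clock.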
The most notable thing it’s giving up? About a hundred bucks worth of sticker shock. AMD says prices for 7950 cards will start at $449 (by which it means $450 minus a penny). As if to underscore the relatively small reductions in this product’s real-world performance, AMD expects the 7950 to undercut the GeForce GTX 580 on price while offering superior performance. Since the 7970 isn’t that much faster than the GTX 580, well, you do the math.
Thanks to the drops in clock speed and the number of active transistors, the 7950’s max power requirement is 200W, or 50W lower than the 7970’s. That difference allows the 7950 to get away with only two six-pin auxiliary power inputs, cementing its man-of-the-people status. The 7970’s eight-pin power input will likely require a different, more expensive class of PSU.
Beyond that change, AMD’s reference version of the 7950 is superficially very similar to the 7970, with the same 10.5″ board length and the same glossy plastic cooling shroud. Most of the first wave of cards available will probably look like the reference model, until the various board makers have had time to develop custom variants. Then again, there are already exceptions to that rule.
Stacking the deck
XFX’s Radeon HD 7950 Black Edition
Even before we received a reference version of the Radeon HD 7950 from AMD, XFX’s custom Black Edition of the card touched down in Damage Labs. This card is based on the same circuit board design as the reference cards—board makers haven’t yet had a chance to build enhanced or cost-reduced versions of the PCB—but otherwise, it has been transformed with XFX’s own cooler and a swanky custom expansion slot cover with a red DVI port.
With dual fans and that lightweight aluminum shroud, the XFX cooler looks like an obvious upgrade from the stock unit, though we’ll have to test it to be sure. Believe it or not, the more important changes to XFX’s 7950 Black Edition aren’t visible to the naked eye.
You see, for several years now, AMD has set the default clock speeds for its GPUs relatively high, leaving little wiggle room for individual board makers to create hot-clocked Radeon cards. Meanwhile, Nvidia has left ample headroom in its lineup, and board makers have taken advantage by shipping a much broader variety of GeForce cards, some with pretty drastically increased default clocks. Although it wreaked havoc at times on our carefully laid plans for testing equivalent products, Nvidia’s looser business model created some nice options, especially for smart customers.
With the Radeon HD 7900 series, AMD has decided two can play that game. Tahiti seems to have quite a bit of headroom in it, and board makers are capitalizing on that fact. The default GPU core clock for XFX’s Radeon HD 7950 Black Edition is 900MHz, a full hundred megahertz beyond the 7950’s stock speed—and only 25MHz shy of the 7970’s. What’s more, the 7950 Black Edition’s memory runs at 5500 MT/s, exactly matching the stock 7970 both in transfer rate and total bandwidth. XFX says the 7950 Black Edition will fetch $499 at Newegg, so the performance gains won’t come for free, but this card should come extremely close to matching the 7970’s real-world performance for that price.
XFX’s Radeon HD 7970 Black Edition
That’s not to say the 7970 is entirely threatened by hot-clocked 7950s. AMD’s apparent policy change extends to the Radeon HD 7970, too, courtesy of the Tahiti chip’s ample clock speed headroom. Pictured above is XFX’s 7970 Black Edition, with the same dual-fan-and-aluminum cooler. This card’s default GPU core speed is a nice, round 1GHz; its memory clock runs a bit faster than stock, too, at 5700 MT/s. That’s enough to give the 7970 Black Edition a clear edge on the 7950 Black Edition, restoring balance to the Force.
The arrival of these hot-clocked Radeons in Damage Labs presented us with something of a dilemma. With fairly wide gaps between the different variants, we’d prefer to test both the stock-clocked and hopped-up versions of each, but time constraints made that impractical—as did our fancy new GPU testing methods, which we believe are the best in the industry, but which require a lot more manual intervention than a scripted test that spits out an FPS average. What to do?
Well, for some time now, our means of dealing with hot-clocked GeForces has been to ensure that we’re testing actual, shipping products and then factor both the higher performance and any price premium into the mix. Now that similarly hopped-up Radeons have finally arrived, gosh darn it, we’re going to keep that approach and go for broke, with hopped-up cards nearly across the board. After all, we’ve already tested the stock 7970 in our initial review. Also, we wanted to include a 3GB version of the GeForce GTX 580 in the mix, and the only one we had available was a hot-clocked board from Zotac. Yes, this review will be a little bit like a home-run derby involving Barry Bonds and Mark McGwire at the height of their, er, chemically enhanced powers. Or, you know, like the old SNL skit about the all-drug Olympics.
But at least most of the key participants will be on the juice.
Of course, even without all of these varied clock speeds, the big two GPU makers have a crazy number of fine gradations in their product lineups—especially Nvidia, at present. Tracking ’em all can be daunting, as the table below illustrates, and it’s limited to the cards we tested along with the stock reference clocks from the GPU makers.
| | GPU clock (MHz) | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic (TFLOPS) | Memory transfer rate (MT/s) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- | --- |
| GeForce GTX 560 Ti | 822 | 26 | 53/53 | 1.3 | 4000 | 128 |
| Asus GTX 560 Ti | 900 | 29 | 58/58 | 1.4 | 4200 | 134 |
| GeForce GTX 560 Ti 448 | 732 | 29 | 41/41 | 1.3 | 3800 | 152 |
| Zotac GTX 560 Ti 448 | 765 | 31 | 43/43 | 1.4 | 3800 | 152 |
| GeForce GTX 570 | 732 | 29 | 44/44 | 1.4 | 3800 | 152 |
| GeForce GTX 580 | 772 | 37 | 49/49 | 1.6 | 4000 | 192 |
| Zotac GTX 580 | | | | | | |
| Radeon HD 6950 | 800 | 26 | 70/35 | 2.3 | 5000 | 160 |
| Radeon HD 6970 | 880 | 28 | 84/42 | 2.7 | 5500 | 176 |
| Radeon HD 7950 | 800 | 26 | 90/45 | 2.9 | 5000 | 240 |
| XFX HD 7950 Black Edition | 900 | 29 | 101/50 | 3.2 | 5500 | 264 |
| Radeon HD 7970 | 925 | 30 | 118/59 | 3.8 | 5500 | 264 |
| XFX HD 7970 Black Edition | 1000 | 32 | 128/64 | 4.1 | 5700 | 274 |
Just think. Half of the folks who found this article via Google glanced at that table, their eyes glazed over, and they left to go find a video review on YouTube where some dude raves about frame rates for three minutes. Those of you still left will understand how minor the differences between the different models of cards can be.
We’ll admit the new Radeons have some of the biggest deltas we’ve seen between stock and hot-clocked versions of the same model, which is why we’ve taken a closer look at all of the different 7900-series variants in the later portions of this review, including some performance testing and the power and acoustic tests. In the earlier pages of this review, please note, the Radeon HD 7950 and 7970 and all of the GeForce models will be represented by hot-clocked cards.
One other dynamic we should point out is the very tight grouping of several Nvidia cards, including the GeForce GTX 560 Ti 448 and the GTX 570. As we said in our review, the GTX 560 Ti 448 amounts to a temporary price cut on the GTX 570. At 765MHz, the Zotac GTX 560 Ti 448 card performs almost identically to the stock GTX 570 but costs less, so it’s easily the better buy. Given those facts, we’ve chosen to exclude the GTX 570 from this contest.
Another look at geometry throughput
Since we’ve already covered Tahiti and the GCN architecture in some depth in our Radeon HD 7970 review, those who are unfamiliar with this chip owe it to themselves to read that article before finishing this one. Before we move on, though, let’s pause for an architecture-related note. One thing we saw in that review was relatively poor performance from the 7970 in TessMark, a problem we attributed to driver issues, since Tahiti is purportedly much improved for tessellation. Now, AMD claims to have resolved those driver problems. Here’s how TessMark looks with the new drivers.
Wow. In our original review, the 7970 scored so well in our Direct3D tessellation test with Unigine Heaven that we attributed its victory, potentially, to other factors like pixel shader throughput coming into play. Now, we have to reevaluate that sentiment. Looks like the 7970 is legitimately as fast as, or faster than, the GeForce GTX 580 with moderate to high levels of tessellation. As one might expect, the 7950 isn’t far behind the 7970 in TessMark, since the clock speeds of the two XFX cards are only 100MHz apart.
Now, on to the games…
Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.
Our test systems were configured like so:
|North bridge||X58 IOH|
|Memory size||12GB (6 DIMMs)|
|Memory type||Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1333MHz|
|Memory timings||9-9-9-24 2T|
|Chipset drivers||INF update, Rapid Storage Technology 10.8.0.1003|
|Audio||Integrated, with Realtek drivers|
|Hard drive||F240 240GB SATA|
|Power supply||PC Power & Cooling Silencer 750 Watt|
|OS||Windows 7 Ultimate x64 Edition, Service Pack 1, DirectX 11 June 2010 Update|
The graphics cards tested:
- Asus GeForce GTX 560 Ti DirectCU II TOP
- Zotac GeForce GTX 560 Ti 448
- Zotac GeForce GTX 580
- Radeon HD 6970 (Catalyst drivers)
- XFX Radeon HD 7950 Black Edition
- XFX Radeon HD 7970 Black Edition
Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.
We used the following test applications:
- Batman: Arkham City
- Battlefield 3
- Crysis 2
- Serious Sam 3: BFE
- The Elder Scrolls V: Skyrim
- TessMark 0.3.0
- Fraps 3.4.7
- GPU-Z 0.5.8
Some further notes on our methods:
- We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
- We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench. The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at its Ultra quality settings with FXAA enabled.
- We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card. You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.
- We used GPU-Z to log GPU temperatures during our load testing.
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
The Elder Scrolls V: Skyrim
Our test run for Skyrim was a lap around the town of Whiterun, starting up high at the castle entrance, descending down the stairs into the main part of town, and then doing a figure-eight around the main drag.
Since these are pretty capable graphics cards, we set the game to its “Ultra” presets, which turns on 4X multisampled antialiasing. We then layered on FXAA post-process anti-aliasing, as well, for the best possible image quality without editing an .ini file.
The plots above show the time required to render the individual frames of animation produced by the game during our 90-second test run. The uninitiated will probably want to read my article explaining our new testing methods in order to get a sense of what we’re doing. Just looking at these raw plots, though, one can see that the vast majority of the frames are rendered in under 30 milliseconds, which is quite good. A steady stream of frames at 30 ms each would translate into an average frame rate of 33 FPS. More importantly, the relatively small number of high-latency frames from these video cards means all of them should deliver a relatively smooth sense of motion in the game.
However, I saved one plot for last. Notice that the vertical axis stretches to higher values here, and there are lots of really rather long-latency frames. The GeForce GTX 560 Ti is the only card of the bunch with just 1GB of video RAM onboard, and its memory capacity is overwhelmed at this four-megapixel resolution and image quality level. Several of the other cards have 3GB of memory, which is probably overkill for most single-display configs these days, but having more than 1GB on tap can sometimes be very helpful.
The 7970 and 7950 take the top two spots in the traditional average-FPS sweeps. Notice how the GTX 560 Ti manages to pull off an average of 36 FPS. Just looking at that outcome, one might be tempted to think it performed reasonably well here, which is not the case. Truth is, averaging out frame rates over a full second doesn’t offer enough resolution to capture those frame time spikes that can make games feel laggy and sluggish.
Another way to quantify gaming performance that sidesteps some of the problems with FPS averages is to consider the graphics subsystem’s transaction latency, much like one might do in benchmarking a database server. Seems odd, perhaps, but I think consistent delivery of low frame times is what a graphics card should do. The chart above shows the 99th percentile frame time—that is, 99% of all the frames were returned within x milliseconds—during our five 90-second test runs.
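To make the contrast with FPS averages concrete, here’s a toy sketch in Python. The frame times below are made up for illustration; the real numbers come from our Fraps logs:

```python
# A healthy FPS average can hide latency spikes that the
# 99th-percentile frame time exposes.

def fps_average(times_ms):
    # Total frames divided by total seconds elapsed.
    return 1000 * len(times_ms) / sum(times_ms)

def percentile_99(times_ms):
    # 99% of frames were rendered in <= this many milliseconds.
    ordered = sorted(times_ms)
    return ordered[int(0.99 * len(ordered)) - 1]

frames = [16] * 98 + [45, 50]        # two nasty spikes among 100 frames
print(round(fps_average(frames)))    # -> 60: the average looks just fine
print(percentile_99(frames))         # -> 45: the spikes show up here
```

A card that averages 60 FPS but stumbles over occasional 50-ms frames feels worse in practice than the average suggests, which is exactly why we bother with the percentile.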
As you can see, the differences between the cards are really very minor here. Everything but the GeForce GTX 560 Ti delivers the vast majority of frames in well below 40 milliseconds. This measure tends to focus on the pain points, the most difficult spots during the test run. In this case, that would be the first few hundred frames of the test session, where you can see in the plot above that the 7950’s frame times are just slightly higher than those produced by the GTX 580 and the GTX 560 Ti 448.
This last graph looks at how much time each video card spent spinning its wheels on frames that simply took too long to render. The idea is to quantify the depth of the problem when a card’s performance begins to dip into unacceptable territory. Our threshold of 50 milliseconds would average out to a rate of 20 FPS. We figure anything slower than that probably isn’t getting the job done in terms of maintaining a fluid sense of motion. For more on our rationale for this one, please see here.
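That “time beyond 50 ms” measure can be sketched in a few lines, too. This is one plausible formulation, again with hypothetical frame times:

```python
# Accumulate "badness": the time by which frames overran a 50 ms budget.
# One plausible formulation of the metric; frame times are hypothetical.

def time_beyond(times_ms, threshold_ms=50):
    return sum(t - threshold_ms for t in times_ms if t > threshold_ms)

frames = [16, 16, 80, 16, 120, 16]
print(time_beyond(frames))   # -> 100 ms spent past the threshold
```

The appeal of this metric is that a single 120-ms hitch counts for more than a pair of 55-ms frames, which matches how jarring those hitches feel in practice.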
Only the GTX 560 Ti, with its memory size issues, spends any time at all beyond 50 ms, it turns out. The 560 Ti wastes nearly two seconds during a 90-second run working on frames above our threshold, which is pretty awful. This is not smooth animation by any stretch, and our seat-of-the-pants impressions back up that assessment. I don’t know if I’d say Skyrim is entirely unplayable at these settings, but it’s definitely not fun or even acceptable.
On another note, I think we can say with confidence that there’s almost no perceptible difference between the 7950 and the 7970 here.
Batman: Arkham City
We did a little Batman-style free running through the rooftops of Gotham for this one.
We tried testing this game in its DirectX 11 mode at 2560×1600 in our Radeon HD 7970 review, but even the fastest cards suffered quite a few long frame times. Here, we’ve stuck with DX11, but we’ve dialed back the resolution to 1920×1200, to see if that helps.
Testing in DX11 makes sense for benchmarking ultra high-end graphics cards, I think. I have to say, though, that the increase in image quality with DX11 tessellation, soft shadows, and ambient occlusion isn’t really worth the performance penalty you’ll pay. The image quality differences are hard to see; the performance differences are abundantly obvious. This game looks great and runs very smoothly at 2560×1600 in DX9 mode, even on a $250 graphics card.
Well, so much for eliminating those long-latency frames. Every card here produces quite a few.
Fortunately, our new tools allow us to put even these spiky, inconsistent frame times into perspective. The fact that the 99th percentile result is essentially the inverse of the FPS average tells us that all of the cards are afflicted pretty similarly by those long frame times. Also, the fact that the 7970 and 7950 turn in 99th percentile frame times of about 40 milliseconds tells us they both perform reasonably competently—99% of the frames are returned at the equivalent of 25 FPS or better. The Radeons do the best job of avoiding the worst cases, too. Even the Radeon HD 6970 spends less time working on frames that take longer than 50 milliseconds than the GeForce GTX 580 does.
Battlefield 3
We tested Battlefield 3 with all of its DX11 goodness cranked up, including the “Ultra” quality settings with both 4X MSAA and the high-quality version of the post-process FXAA. We tested in the “Operation Guillotine” level, for 60 seconds starting at the third checkpoint.
The frame times in this level of BF3 are much more consistent than what we saw in Skyrim and Arkham City. That fact has some intriguing implications. Although FPS averages aren’t very high here, the true performance of the four fastest video cards isn’t bad. Even the GeForce GTX 560 Ti 448, which averages a relatively pokey 26 FPS, spends very little time above 50 ms.
Once again, the Radeon HD 7950 and 7970 take the top two spots, and they remain fairly close together.
Serious Sam 3: BFE
Here’s a new addition to our test suite. Serious Sam 3 is, in many ways, an old-school PC game, right down to the exquisitely detailed graphics options menus. Since these are very fast graphics cards, we tweaked the game to be nearly as high quality as possible, with the exception of antialiasing, where we went with 8X multisampling rather than supersampling.
Our test run came from one of the first few levels in the game, where we did battle with a nasty boss character. In order to make our test feasible (since we did a lot of dying), we restricted our test runs to 60 seconds each.
Yikes. This one is clean sweep for the Radeons, by any measure. The GeForce GTX 580 does perform acceptably here, but it’s no faster than the Radeon HD 6950. The Radeons deliver frames at more consistently low latencies, so even their 99th percentile frame times scale down nicely with the faster cards. There’s a clear difference between the top two, the 7950 and the 7970, although both are incredibly fast.
Crysis 2
Our cavalcade of punishing but pretty DirectX 11 games continues with Crysis 2, which we patched with both the DX11 and high-res texture updates.
Notice that we left object image quality at “extreme” rather than “ultra,” in order to avoid the insane over-tessellation of flat surfaces that somehow found its way into the DX11 patch. We tested 90 seconds of gameplay in the level pictured above, where we gunned down several bad guys, making our way up the railroad bridge.
Once more, the 7950 and 7970 lead the pack, and once more, the difference between the two doesn’t amount to much.
All of the drama happens among the older, slower cards. The Radeon HD 6950, for instance, manages a higher FPS average than the GeForce GTX 560 Ti, but it has a problem with high-latency frames, as expressed in its last-place finish in our two frame time-centric graphs.
Civilization V
We’ll round out our punishment of these GPUs with one more DX11-capable game. Here, we simply used the scripted benchmarks that come with Civilization V. One of those benchmarks tests DirectCompute performance with a texture compression workload.
The Radeon HD 7950 inherits the compute-focused improvements in the GCN architecture, obviously, and its new, more efficient shader scheduling scheme allows it to outperform the GTX 580 yet again.
The “Leader” test shows a series of scenes depicting the leaders one can play in this game. Those characters are nicely textured and lit, and we get the sense this test especially stresses the GPU’s pixel shader throughput. Radeons have long performed relatively well in this test, and the 7950 is no exception.
One final conquest here for the 7950 over the GTX 580, although this one-FPS difference isn’t exactly a huge margin.
Fun with clock speeds
Although our XFX 7950 Black Edition card came out of the box at 900MHz, 100MHz above AMD’s baseline clock frequency, it still had plenty of overclocking headroom waiting to be exploited. We were able to take it up to 1025MHz using AMD’s Overdrive control panel, simply by sliding the GPU core speed slider, at the default voltage of 1031 mV. (OK, so we also raised the PowerTune TDP cap by the maximum 20% allowed in the control panel, to avoid any power-based frequency capping.) That’s pretty darned good by itself.
To go further, we fired up MSI’s Afterburner utility, which allows for voltage tweaking. After some experimentation, we got the core clock up to 1175MHz at 1162 mV, with GPU temperatures around 89-90° C. Higher clock speeds produced visual artifacts, even at higher voltages. Also, when pushing to higher voltages, we seemed to be surpassing the limits of the card’s cooler; temperatures quickly crept up to around 96° C, which is a bit uncomfortable. We were also able to take the card’s memory clock quite a bit higher. After some experimentation, we settled on a memory frequency of 1575MHz, well above the XFX card’s default of 1375MHz.
Bottom line: that is a frickin’ lot of clock speed headroom for a GPU.
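If you want to translate those Overdrive numbers into bandwidth, the math is straightforward. GDDR5 moves four transfers per memory-clock cycle, which is how the XFX card’s 1375MHz default maps to the 5500 MT/s cited earlier:

```python
# GDDR5 transfers four times per memory-clock cycle, and Tahiti
# has a 384-bit memory bus.

def effective_mts(mem_clock_mhz):
    return mem_clock_mhz * 4            # GDDR5 is quad-pumped

def bandwidth_gbs(mts, bus_bits=384):
    return mts * bus_bits / 8 / 1000    # GB/s

print(effective_mts(1375))              # -> 5500 MT/s (Black Edition stock)
print(effective_mts(1575))              # -> 6300 MT/s (our overclock)
print(bandwidth_gbs(effective_mts(1575)))  # -> 302.4 GB/s
```

In other words, our overclocked card has well over 300 GB/s of memory bandwidth on tap, up from the stock 7970’s 264.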
While we were messing with clock speeds, I figured we’d take this opportunity to look at the performance of the 7950 and 7970 at their non-Black Edition clock speeds, as well.
When overclocked to 1175/1575MHz, the 7950 turns out to be even faster than the hot-clocked XFX 7970. Still, the difference between the various Tahiti-based products only adds up to a handful of frames per second, and all of them achieve a higher average than the GTX 580. When we turn to the 99th percentile frame times, the gaps between the solutions grow even smaller. In this case, frame latencies are probably limited primarily by other factors, such as CPU performance or driver execution speed.
Since the different versions of these cards have different coolers and clock speeds, we’ve tested power consumption, noise, and temperatures for both reference designs and the branded cards, as labeled below. We’ve also included the overclocked 7950 in the mix.
Uniquely, the Tahiti GPU is able to turn off power to most of the chip when the system has sat idle long enough for the display to go into power-saving mode. The fans on the cooler stop spinning, and the GPU requires almost no power.
The stock 7950 draws less power while running Skyrim than any other card we tested, and even the overclocked (and overvolted) XFX 7950 is relatively tame, requiring less power than a Radeon HD 6970. Note that the GeForce GTX 580-based system requires over 100W more power at the wall socket than a 7950-based system, even though the 7950-based config is faster overall.
Noise levels and GPU temperatures
Since the 7900-series cards turn off their fans when the display is off, they fare very well here. I believe some of the variance among the 7900-series cards on the dB meter comes from electrical chatter from the PSU on our test system. I think it’s time to replace it with something a little newer, perhaps.
The most impressive combination of peak noise levels and GPU temperatures has got to be the XFX Radeon HD 7970 Black Edition’s. That card, with its dual-fan cooler, is among the quietest we’ve tested, and the corresponding GPU temperature is only 72° C. Strangely enough, the XFX 7950 is a little bit louder despite running just one degree cooler. Regardless, both cards perform well here, as does AMD’s reference-design 7950 card. We only wish the reference 7970 were a little quieter.
Let’s bust out some of our famous value plots, taken from the results of the five games we tested manually, in order to give us a sense of price and performance.
The XFX Black Edition versions of the Radeon HD 7950 and 7970 are a little more expensive and a little bit faster than the bone-stock versions of these cards. That’s accounted for both in the prices and the performance results shown above. Even so, the Radeon HD 7950 looks to be in an enviable position, closer to the top left corner of the plot than either the GeForce GTX 580 or the Radeon HD 7970.
Let’s try something a little different and bring our 99th percentile frame times into the mix. To keep things readable, we’ve converted those frame times into their FPS equivalents, so the top left corner of the plot remains the most desirable place to be.
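The conversion itself is trivial; a frame time in milliseconds maps to its FPS equivalent like so:

```python
# Convert a 99th-percentile frame time into the frame rate a steady
# stream of such frames would produce: FPS = 1000 / frame_time_ms.

def fps_equivalent(frame_time_ms):
    return 1000 / frame_time_ms

print(fps_equivalent(40))   # 40 ms -> 25.0 FPS
print(fps_equivalent(20))   # 20 ms -> 50.0 FPS
```

Keeping everything in FPS-equivalent terms just makes the scatter plot easier to read at a glance.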
Using this metric squeezes all of the solutions together a little more closely, but it doesn’t hurt the 7950’s competitive position at all, which remains quite nice. The only major change is that the GeForce GTX 560 Ti 448 improves its position relative to the Radeon HD 6970.
Any way you slice it, the Radeon HD 7950 does what it set out to do: undercut the GeForce GTX 580 in price while trumping it in performance. Since the 7950’s power consumption is dramatically lower than the GTX 580’s, there’s little left to do but declare the 7950 the clear winner. That’s before one even figures in the new bells and whistles built into the Tahiti GPU, including a hardware H.264 video encoding engine and support for PCI Express 3.0, neither of which the GTX 580 can match.
We are kind of left wondering why anyone would pony up for a Radeon HD 7970 now that cards like this XFX 7950 Black Edition will be selling for $499. This hot-clocked 7950 card performs very much like a stock-clocked 7970 but costs substantially less. I suppose those folks who want the very best will pay the premium for an up-clocked version of the 7970 like XFX’s Black Edition, which really is the finest video card we’ve ever tested, with a much quieter cooling solution than AMD’s reference design. Still, with all of the overclocking headroom in the 7950, paying more for the 7970 seems… unnecessary.
Before we go, I should narrow my recommendations a little bit based on some important considerations. If you have “only” a two-megapixel display, something with a 1920×1200 or 1920×1080 resolution, then a Radeon HD 7950 is probably overkill, even for the very latest games. Gorgeous, fluid, seriously desirable overkill, but still—you could get away with a card like the GeForce GTX 560 Ti 448 and play nearly all of today’s games at very nice settings. The 7950 is probably best suited for four-megapixel displays or multi-monitor gaming setups.
That’s my take, at least. The rest of us with smaller monitors may want to wait for the next entry in the Radeon HD 7000 series, which may appeal to a broader audience.