Paired up: The latest GPUs

Usually, when we’re putting together an article like this one, it has an overriding theme. Either we’re reviewing a particular video card, or we’re conducting a comparative review of similar products. We do have elements of the latter in this article, but we don’t have a single, underlying theme other than this: like high-school seniors in summer camp, the graphics cards in question here have paired up. We have CrossFire setups for the latest Radeons, including the HD 6850 and 6870, and we have SLI configs for the latest GeForces, including the brand-spanking-new GeForce GTX 580.

If that’s not a coherent enough theme for you, well, our apologies. We’ve watched too much cable TV to make sense consistently any more. We simply had a few questions we wanted to see answered in the wake of our GeForce GTX 580 review. We wanted to know how that card performs in SLI, because after all, the fastest single GPU money can buy has tremendous potential as a building block of a multi-GPU setup. We also wondered whether a pair of relatively inexpensive video cards, like the Radeon HD 6850 or the GeForce GTX 460 768MB, might not be a viable alternative to a single high-end card like the GTX 580. In order to make sense of those comparisons, we were itching to do one of our patented value analyses on the various single- and multi-GPU offerings.

Undaunted by the multi-faceted nature of our task, we gathered up a bundle of intriguing new video cards and set to testing. We’ve included everything from a single Radeon HD 6850 to dual GeForce GTX 580s in SLI, along with many notable configs in between, for a total of 16 different combinations. The result is, for better or worse, what you’ll see below and on the following pages. Perhaps it will clarify some things for you, in spite of our best efforts.

Some new cards in the deck

In order to answer the questions we posed above, we naturally had to add some new video cards to our stable. That included, of course, a pair of GeForce GTX 580s.

We decided to go asymmetrical for our GTX 580 SLI config, kinda like the chick I dated in high school who had different colored eyes.

The first card in the pair comes from Zotac, and it’s a nicely bestickered version of Nvidia’s reference design, right down to the GTX 580’s standard core and memory clock speeds. This card is currently selling for $529.99 on Newegg, or 30 bucks over Nvidia’s suggested e-tail price, as are many other GTX 580 cards. Zotac covers its GTX 580 with a two-year warranty by default, but the term extends to “lifetime” if you register. Beyond that, this card’s primary distinguishing feature is a bundled copy of Prince of Persia: The Forgotten Sands. We’ve not yet played the game, but it has a semi-decent 75 rating on Metacritic, along with a less promising user score of 5.3. Still, the bundled game might be worthy of note, since otherwise, most GTX 580 cards are essentially the same.

Not that there’s anything wrong with that. In fact, we quite like the GTX 580 reference design, and we’ve found Nvidia’s stock cooler to be relatively quiet and effective.

Our second GTX 580 card comes from Asus, whose approach to the whole branding thing involves admirable restraint. You’ll find no atomic frogs or rocket-propelled fairy godmothers with magic swords on the Asus card itself, just a spare, bold sticker bearing the card maker’s name. Under the hood, though, this GTX 580 has been tweaked slightly: the core clock is 782MHz, up 10 whole megahertz from Nvidia’s default. In spite of the extra juice, Asus’ GTX 580 is going for $524.99 at Newegg, five bucks less than the Zotac, though without the bundled game. Asus covers its video cards with a three-year warranty, and happily, no registration is required to get the full warranty term.

Together, these wonder twins make a slightly asymmetrical $1054.98 graphics powerhouse. By rights, we should have tested this monster SLI setup against a similarly beastly config from AMD involving a pair of Radeon HD 5970 cards. However, we opted to test a pair of Radeon HD 5870s instead, as a sort of consolation prize, for various reasons—mainly because AMD is achingly close to rolling out its brand-new 6900-series GPUs, and we’ll be testing those shortly. Those will surely be the GTX 580’s real competition.

Ever since we published our review of the Radeon HD 6800 series, we’ve been hearing from a certain segment of the AMD fan base who desperately wants us to test a version of the Radeon HD 6850 that runs at higher clock speeds than AMD’s defaults. That’s only fair, they reason, since we also tested some of the many GeForce GTX 460 cards that run at higher-than-base clock frequencies. We were determined to accommodate these requests, a task we found was easier said than done. The Radeon HD 6850 has apparently been quite the hot product, as evidenced by rising prices and spotty availability.

Fortunately, Asus stepped in with a pair of its EAH6850 DirectCU cards for us to test. These have a 790MHz GPU core clock—15MHz higher than stock—and are presently listed for $199.99 at Newegg. That’s not a huge bump in clock speed, but it’s about par for the course among Radeon HD 6850s. You will find a few with 820MHz core clocks, but those run $10-20 more than the Asus model we’ve tested—and all are quite a bit above AMD’s original $179 suggested price.

At its initial suggested price, the Radeon HD 6850 was set to do battle against the 768MB version of the GeForce GTX 460. However, as the Radeon’s price has risen, the GTX 460 768MB’s price has remained steady at $169.99. The Asus card in question runs at a 700MHz core clock with 3680 MT/s memory, up from reference speeds of 675MHz and 3600 MT/s. The 6850 is now more direct competition for the 1GB version of the GeForce GTX 460, but we have included the 768MB cards for comparison nonetheless. Both the 6850 and the GTX 460 768MB are intriguing candidates for affordable multi-GPU mayhem, and the 768MB card is the more affordable of the two.

The Asus 6850 and GTX 460 768MB cards share another attribute, as well: Asus’ fancy DirectCU cooler that, per its name, routes a pair of copper heatpipes directly over the surface of the GPU. I don’t want to give away too much, but this little innovation may be something more than marketing hype, as we’ll soon see.

We should set the table for our testing by reminding you that we’ve been rather down on the recent trend toward fan-based GPU coolers that don’t pipe warm air directly out of an expansion slot opening. Such coolers are typically quiet in single-card configs, but they don’t fare well when another video card (or any other expansion card) is nestled up against the fan, blocking its airflow. If this Asus cooler performs well in SLI and CrossFire, it will be an exception to the recent trend.

Our testing methods

Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
Motherboard: Gigabyte EX58-UD5
North bridge: X58 IOH
South bridge: ICH10R
Memory size: 12GB (6 DIMMs)
Memory type: Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings: 8-8-8-24 2T
Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio: Integrated ICH10R/ALC889A with Realtek R2.51 drivers

Graphics:

  • Asus Radeon HD 5870 1GB with Catalyst 10.10c drivers
  • Asus Radeon HD 5870 1GB + Radeon HD 5870 1GB with Catalyst 10.10c drivers
  • Asus ROG Matrix Radeon HD 5870 2GB with Catalyst 10.10c drivers
  • Radeon HD 5970 2GB with Catalyst 10.10c drivers
  • Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
  • Dual Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
  • XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
  • Sapphire Radeon HD 6870 1GB + XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
  • Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
  • Dual Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
  • MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz with ForceWare 260.99 drivers
  • MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz + EVGA GeForce GTX 460 FTW 1GB 850MHz with ForceWare 260.99 drivers
  • Galaxy GeForce GTX 470 1280MB GC with ForceWare 260.99 drivers
  • GeForce GTX 480 1536MB with ForceWare 260.99 drivers
  • GeForce GTX 580 1536MB with ForceWare 262.99 drivers
  • Zotac GeForce GTX 580 1536MB + Asus GeForce GTX 580 1536MB with ForceWare 262.99 drivers

Hard drive: WD RE3 WD1002FBYS 1TB SATA
Power supply: PC Power & Cooling Silencer 750W
OS: Windows 7 Ultimate x64 Edition with the June 2010 DirectX runtime update

Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead 2 at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead 2 because we’ve found that the Source engine’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Power consumption

We’ll kick off our testing with power and noise. Notice that the cards marked with asterisks in the results below have custom cooling solutions that may perform differently than the GPU maker’s reference solution.

Turn your attention first to our comparison between the pairs of sub-$200 video cards, the 6850 and GTX 460 768MB, versus a single GeForce GTX 580. The GTX 580 consumes a little less power at idle, but when running a game, the 580 pulls more juice than either the 6850 CrossFireX config or the GTX 460 768MB SLI. The multi-GPU solutions don’t look too bad on that front, at least.

As for the GTX 580 SLI rig, well, that requires considerably more power than anything else we tested, a hefty 575W at the wall socket under load. That’s a lot, but as we’ll soon see, such power draw may be appropriate to the performance achieved.

Noise levels and GPU temperatures

Let’s focus first on the Asus 6850 and GTX 460 768MB cards with DirectCU coolers. In single-card configs, they achieve among the lowest noise levels and GPU temperatures we measured. They have substantially less power to dissipate in the form of heat than some of the higher-end solutions, but still—this is an impressive performance from that DirectCU cooler. I prefer the tuning of the 6850 to the GTX 460 768MB; the 6850’s GPU temperatures are a little higher—though still lower than most—and it’s measurably quieter under load. That difference comes to the fore when we add a second video card next door. The GTX 460 768MB works hard to keep the primary GPU’s temperature relatively low, at 73°C, and that shows on the decibel meter, where only the GTX 460 1GB cards are louder. The 6850 CrossFireX pairing, meanwhile, maintains decent temperatures but registers substantially lower noise levels.

Happily, the bottom line here is that Asus’ DirectCU cooler is excellent when the adjacent slot is unobstructed and, crucially, still adequate in multi-GPU configurations. With similar thermal loads, though, the 6850’s fan control tuning is quite a bit more acoustically friendly.

Meanwhile, the GeForce GTX 580 has a very good cooler of its own, and as a result, it’s quieter than both the GTX 460 768MB SLI and 6850 CrossFireX setups. Adding a second GTX 580 results in slightly higher noise levels and GPU temperatures, but heck, the 580 SLI setup isn’t much louder than a single Radeon HD 6870.

Pixel fill and texturing performance

I’ll own up to the fact that the next few pages are graphics egghead type stuff, and some of you may want to skip directly to the game tests. There’s no shame in doing so, mainly because no one will know.

                            Peak pixel   Peak bilinear    Peak bilinear    Peak
                            fill rate    integer texel    FP16 texel       memory
                                         filtering rate   filtering rate   bandwidth
                            (Gpixels/s)  (Gtexels/s)      (Gtexels/s)      (GB/s)
GeForce GTX 460 768MB       16.8         39.2             39.2             88.3
GeForce GTX 460 1GB 810MHz  25.9         47.6             47.6             124.8
GeForce GTX 470 GC          25.0         35.0             17.5             133.9
GeForce GTX 480             33.6         42.0             21.0             177.4
GeForce GTX 580             37.1         49.4             49.4             192.0
Radeon HD 5870              27.2         68.0             34.0             153.6
Radeon HD 6850              25.3         37.9             19.0             128.0
Radeon HD 6870              28.8         50.4             25.2             134.4
Radeon HD 5970              46.4         116.0            58.0             256.0

These figures aren’t destiny for a video card. Different GPU architectures will deliver on their potential in different ways, with various levels of efficiency. However, these numbers do matter, especially among chips with similar architectural DNA.

I’ve not included numbers in the table above for multi-GPU configurations, but you can essentially double the figures for dual-GPU solutions, at least in theory. A single GTX 580 has roughly twice the potential of the GTX 460 768MB in both pixel fill rate (including antialiasing power) and memory bandwidth, so those cases would be an even match. In the other categories, both related to texture filtering, two GTX 460 768MB cards should theoretically be superior. Against the 6850, a single GTX 580 doesn’t come close to twice the peak capacity in any category except FP16 texture filtering rates, where the GF110 GPU’s new secret sauce gives it well over double the potential. Two 6850s should be substantially ahead of a single GTX 580 in every other category.

Apply that doubling principle to dual GTX 580s, and you’ll see they should soundly eclipse any other solution we’ve tested in terms of pixel fill rate, FP16 filtering rate, and memory bandwidth.
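To make that doubling arithmetic concrete, here’s a quick sketch in Python using the single-card figures from the table above. The doubling is a theoretical best case, of course, and assumes perfect multi-GPU scaling:

```python
# Single-card theoretical peaks, taken from the table above:
# (pixel fill Gpixels/s, integer filtering Gtexels/s,
#  FP16 filtering Gtexels/s, memory bandwidth GB/s)
peaks = {
    "GeForce GTX 460 768MB": (16.8, 39.2, 39.2, 88.3),
    "Radeon HD 6850":        (25.3, 37.9, 19.0, 128.0),
    "GeForce GTX 580":       (37.1, 49.4, 49.4, 192.0),
}
labels = ("pixel fill", "integer filtering", "FP16 filtering", "bandwidth")

def doubled(card):
    """Best-case theoretical peaks for two of these cards in SLI/CrossFireX."""
    return tuple(2 * x for x in peaks[card])

for card in ("GeForce GTX 460 768MB", "Radeon HD 6850"):
    wins = [label for label, pair, single
            in zip(labels, doubled(card), peaks["GeForce GTX 580"])
            if pair > single]
    print(f"Dual {card} should beat a single GTX 580 in: {', '.join(wins)}")
```

Run it, and the dual-460 config comes out ahead only in the two texture filtering categories, while the dual-6850 config leads everywhere except FP16 filtering—exactly the relationships described above.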

This color fill rate test tends to be limited by memory bandwidth more than anything else, and so the dual GTX 580s in SLI take the top spot, more or less as expected. Notice, also, how well the 6850 CrossFireX setup fares, thanks to its copious memory bandwidth. The GTX 460 768MB, with one of the GF104 chip’s three memory controllers and ROP partitions disabled, simply lacks the bandwidth to keep pace.

We’ve shunned 3DMark’s texture fill test recently because it doesn’t involve any sort of texture filtering. That’s tragic and sad, since texture filtering rates are almost certainly more important than sampling rates in the grand scheme of things. Still, this is a decent test of FP16 texture sampling rates, so we’ll use it to consider that aspect of GPU performance. Unfiltered texture sampling is, after all, essentially a specialized form of memory access, and those access speeds will matter to routines that store data in textures and retrieve it without filtering.

Considering that Nvidia’s most powerful GPUs are much larger chips, AMD’s unfiltered sampling rates are relatively high—witness the much cheaper and smaller Radeon HD 6870 matching the GTX 580. However, the gap between the GTX 460 768MB and the 6850 is quite small.

Here’s a more proper test of texture filtering, although it’s focused entirely on integer texture formats, not FP16. Texture formats like these are still widely used in games.

This test isn’t compatible with SLI, so we’ve left out the multi-GPU results for everything but the dual-GPU, single-card Radeon HD 5970.

Happily, after struggling in the dark for a while, we finally have a proper test of FP16 filtering rates, courtesy of the guys at Beyond3D.

The contest between the GTX 460 768MB and the Radeon HD 6850 in the three sets of results above is instructive. For simple integer, bilinear filtering, the 6850 has a slight edge. However, as the filtering challenge steps up to higher anisotropy levels and then to FP16 texture formats, the GTX 460 768MB pulls ahead. That’s generally the way of things with the current GPU architectures from AMD and Nvidia.

That trend is much more dramatic in the case of the GTX 580, whose substantial memory bandwidth allows it to take good advantage of its prodigious capacity for FP16 filtering. If you doubled the FP16 filtering scores for the GTX 460 768MB and the 6850, as one might do for SLI and CrossFireX, the GTX 580 would still be faster.

Shader and geometry processing performance

                            Peak shader   Peak            Peak
                            arithmetic    rasterization   memory
                                          rate            bandwidth
                            (GFLOPS)      (Mtris/s)       (GB/s)
GeForce GTX 460 768MB       941           1400            88.3
GeForce GTX 460 1GB 810MHz  1089          1620            124.8
GeForce GTX 470 GC          1120          2500            133.9
GeForce GTX 480             1345          2800            177.4
GeForce GTX 580             1581          3088            192.0
Radeon HD 5870              2720          850             153.6
Radeon HD 6850              1517          790             128.0
Radeon HD 6870              2016          900             134.4
Radeon HD 5970              4640          1450            256.0

The second stage of our graphics-egghead challenge involves a couple of related aspects of GPU performance, computational power and geometry throughput. As you can see in the numbers above, we have some wide divergences between the two major GPU architectures in this generation. AMD’s Radeons have much higher peak theoretical shader throughput, and Nvidia’s GeForces can, in theory, achieve substantially higher rates of triangle rasterization. To take things even further down the egghead hole (wait, what?), current Nvidia GPUs should be capable of higher geometry processing rates at other stages in the DirectX 11 pipeline, too, but those are a bit harder to quantify than the rasterization rate.

As with the stage-one egghead stuff, the peak rates here are only a small part of the story. We’ll want to measure the chips’ actual throughput in various ways to get a clearer sense of their true performance.

The first tool we can use to do so is ShaderToyMark, a pixel shader test based on six different effects taken from the nifty ShaderToy utility. The pixel shaders used are fascinating abstract effects created by demoscene participants, all of whom are credited on the ShaderToyMark homepage. Running all six of these pixel shaders simultaneously easily stresses today’s fastest GPUs, even at the benchmark’s relatively low 960×540 default resolution.

Sadly, this test isn’t multi-GPU compatible, but we don’t need to guess whether a pair of sub-$200 video cards would beat out a GTX 580 here: the 580 is more than twice as fast as either the 6850 or the GTX 460 768MB, so its lead is secure. The GTX 580’s total package of shader execution efficiency, memory bandwidth, and cache size gives it an uncontested win. The GTX 460 768MB is also quicker than the 6850 here, despite having a much lower theoretical FLOPS peak.

Up next is a compute shader benchmark built into Civilization V. This test measures the GPU’s ability to decompress textures used for the graphically detailed leader characters depicted in the game. The decompression routine is based on a DirectX 11 compute shader. The benchmark reports individual results for a long list of leaders; we’ve averaged those scores to give you the results you see below.

This test does work with multiple GPUs, but adding a second graphics chip only provides a minor boost in performance here.

Finally, we have the shader tests from 3DMark Vantage.

Clockwise from top left: Parallax occlusion mapping, Perlin noise, GPU cloth, and GPU particles

The first two tests measure pixel shader performance, and the Radeons excel. The Perlin noise test looks to be a case where the Radeons actually achieve something close to their peak arithmetic rates, as the 6850 beats out the much larger GeForce GTX 480.

These two tests involve simulations of physical phenomena using vertex shaders and the DirectX 10-style stream output capabilities of the GPUs. The tables turn in dramatic fashion here, as the GeForces dominate. Adding a second video card isn’t much help in this test, and may be a hindrance, as it is in the case of the GeForce GTX 580.

Geometry processing throughput

The most obvious area of divergence between the current GPU architectures from AMD and Nvidia is geometry processing, which has become a point of emphasis with the advent of DirectX 11’s tessellation feature. We can measure geometry processing speeds pretty straightforwardly with a couple of tools. The first is the Unigine Heaven demo. This demo doesn’t really make good use of additional polygons to increase image quality at its highest tessellation levels, but it does push enough polys to serve as a decent synthetic benchmark.

We can push into even higher degrees of tessellation using TessMark’s multiple detail levels.

These results offer us two clear outcomes. One, doubling up on GPUs can nearly double your geometry processing throughput, so long as the load is properly balanced between the GPUs, as it is here. Two, the Radeons are a bit slower than the competing GeForces at lower tessellation levels, and they fall far behind once the polygon detail gets too high. That’s an architectural difference that may or may not matter, depending on how game developers make use of tessellation.

HAWX 2

We already commented pretty extensively on the controversy surrounding tessellation and polygon use in HAWX 2, so we won’t go into that again. I’d encourage you to read what we wrote earlier, if you haven’t yet, in order to better understand the issues. We have included scores from the HAWX 2 benchmark in our tests below for your information, but be aware that this test’s results are the subject of some dispute.

Clearly, the large number of polygons HAWX 2 is pumping out plays to the GeForces’ current architectural strengths. With that said, dual Radeon HD 6850s achieve very nice frame rates here at the highest display resolution you’re likely to encounter on a single display.

Lost Planet 2

Our next stop is another game with a built-in benchmark that makes extensive use of tessellation, believe it or not. We figured this and HAWX 2 would make a nice bridge from our synthetic tessellation benchmark and the rest of our game tests. This one isn’t quite so controversial, thank goodness.

This benchmark emphasizes the game’s DX11 effects, as the camera spends nearly all of its time locked onto the tessellated giant slug. We tested at two different tessellation levels to see whether it made any notable difference in performance. The difference in image quality between the two is, well, subtle.

The Radeon HD 6850 CrossFireX pair performs quite well, shadowing the GeForce GTX 580. Meanwhile, the GTX 460 768MB SLI setup runs into problems, likely because it’s running low on local memory. The effective memory size of a multi-GPU pairing is actually a little bit less than that of a single card, because memory can’t be shared between the two cards and multi-GPU synchronization requires some additional overhead. At very high resolutions like this one—the sort, you know, for which you’d want a multi-GPU rig—768MB of memory appears to be rather cramped.

Civilization V

In addition to the compute shader test we’ve already covered, Civ V has several other built-in benchmarking modes, including two we think are useful for testing video cards. One of them concentrates on the world leaders presented in the game, which is interesting because the game’s developers have spent quite a bit of effort on generating very high quality images in those scenes, complete with some rather convincing material shaders to accent the hair, clothes, and skin of the characters. This benchmark isn’t necessarily representative of Civ V‘s core gameplay, but it does measure performance in one of the most graphically striking parts of the game. As with the earlier compute shader test, we chose to average the results from the individual leaders.

This may be the clearest case in a real game where the formidable peak arithmetic rates of today’s Radeons actually pay off.

Another benchmark in Civ V focuses, rightly, on the most taxing part of the core gameplay, when you’re deep into a map and have hundreds of units and structures populating the space. This is when an underpowered GPU can slow down and cause the game to run poorly. This test outputs a generic score that can be a little hard to interpret, so we’ve converted the results into frames per second to make them more readable.

The GTX 580 SLI setup struggles a bit here, only delivering a couple more frames per second than a single GTX 580. Meanwhile, the GTX 460 768MB SLI config looks to be running low on memory again. The Radeons have no similar issue with multi-GPU scaling, and the dual 6850 setup proves nearly as fast as one GTX 580.

StarCraft II

Up next is a little game you may have heard of called StarCraft II. We tested SC2 by playing back a match from a recent tournament using the game’s replay feature. This particular match was about 10 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps.

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

Generally speaking, the Radeons tend to outperform similarly-priced GeForces in SC2. The GTX 460 768MB card, in both single and dual configs, really struggles once again.

Battlefield: Bad Company 2
BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Here’s a case where the GTX 460 768MB doesn’t appear to suffer too much as a result of its lower video memory size, but the relative performance results look familiar: dual GTX 460 768MB cards in SLI aren’t quite as fast as a single GTX 580, but dual 6850s are. And dual GTX 580s in SLI are the fastest of all the solutions tested.

Metro 2033

We decided to test Metro 2033 at multiple image quality levels rather than multiple resolutions, because there’s quite a bit of opportunity to burden these GPUs simply using this game’s more complex shader effects. We used three different quality presets built into the game’s benchmark utility, with the performance-destroying advanced depth-of-field shader disabled and tessellation enabled in each case.

The Radeons become a little stronger as the detail level increases in this game. In the end, the GTX 580 and the 6850 CrossFireX solutions are essentially tied, while the GTX 460 768MB SLI setup once more lags behind. The GTX 580 SLI rig remains in a class of its own.

Aliens vs. Predator
AvP uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards.

At this point in the review, you’re growing veeery sleepy, as you realize the results from one game to the next all follow the same basic trends. Sip some coffee and try to pay attention, slacker.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2‘s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

DiRT 2: DX11

Once more, the GTX 460 768MB SLI setup performs quite well at 1920×1080 but runs out of room at 2560×1600, especially with DirectX 11.

The value proposition

By now, we’ve craftily lulled you into a benchmark results trance. We’ll try to awaken you with a look at the value propositions presented by each of these solutions. We’ll start by compiling an overall average performance index, based on the highest quality settings and resolutions tested for each of our games, with the notable exception of the disputed HAWX 2. We’ve excluded directed performance tests from this index, and for Civ V, we included only the “late game view” results.
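For the curious, the basic mechanics of such an index can be sketched in a few lines of Python. The numbers below are made-up placeholders rather than our measured results, and our exact normalization may differ; this just illustrates the general idea of normalizing each config’s result to a reference config per game, then averaging:

```python
# Illustrative sketch of an overall performance index. The fps figures
# here are hypothetical placeholders, not measured results.
results = {
    "GeForce GTX 580":   {"Civ V": 60.0, "Metro 2033": 35.0, "AvP": 50.0},
    "Radeon HD 6850 CF": {"Civ V": 55.0, "Metro 2033": 34.0, "AvP": 52.0},
}

def perf_index(config, reference="GeForce GTX 580"):
    """Average of per-game fps ratios versus a reference config,
    scaled so the reference scores 100."""
    games = results[reference]
    ratios = [results[config][g] / results[reference][g] for g in games]
    return 100 * sum(ratios) / len(ratios)

print(round(perf_index("Radeon HD 6850 CF"), 1))
```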

The GTX 460 768MB SLI’s struggles at higher resolutions really hurt it in our overall index, an unavoidable consequence of our decision to emphasize high resolutions. Keep in mind that, at lower resolutions, the 768MB cards often perform relatively well.

With this performance index established, we can consider overall performance per dollar by factoring price into the mix. Rather than relying on list prices, we grabbed our prices off of Newegg. Because of the holiday sales going on, we decided to grab our prices last week, right before the Black Friday/Cyber Monday sales took hold and warped things a bit. Generally, for graphics cards with reference clock speeds, we simply picked the lowest priced variant of a particular card available. For instance, that’s what we did for the GTX 580. For the cards with custom speeds, such as the Asus GTX 460 768MB and 6850, we used the price of that exact model as our reference.

AMD card             Price      Nvidia card
                     $169.99    GeForce GTX 460 768MB
Radeon HD 6850       $189.99
                     $214.99    GeForce GTX 460 1GB 810MHz
Radeon HD 6870       $249.99
Radeon HD 5870       $279.99
                     $299.99    GeForce GTX 470
                     $399.99    GeForce GTX 480
                     $499.99    GeForce GTX 580
Radeon HD 5870 2GB   $499.99
Radeon HD 5970       $499.99

A simple mash-up of price and performance produces results like these:

Generally, the cheaper solutions tend to do best in terms of raw performance per dollar, but the 6850 CrossFireX config lands in third place, surprisingly enough.
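The math behind such a mash-up is straightforward: divide the performance index by the price and rank the results. A quick Python sketch of that calculation, with hypothetical index values rather than our actual numbers:

```python
# Hypothetical performance-per-dollar ranking. "perf" is a normalized
# performance index (higher is better); the values here are placeholders
# for illustration, not our measured results.
cards = {
    "GeForce GTX 460 768MB":     {"price": 169.99,     "perf": 100},
    "Radeon HD 6850 CrossFireX": {"price": 2 * 189.99, "perf": 185},
    "GeForce GTX 580":           {"price": 499.99,     "perf": 150},
}

ranked = sorted(cards.items(),
                key=lambda kv: kv[1]["perf"] / kv[1]["price"],
                reverse=True)

for name, c in ranked:
    print(f"{name}: {100 * c['perf'] / c['price']:.1f} perf per $100")
```

Note how this metric inherently favors cheap cards: a config with modest performance but a low price can outrank a much faster, pricier one.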

We can get a better sense of the overall picture by plotting price and performance on a scatter plot. On this plot, the better values will be closer to the top left corner, where performance is high and price is low. Worse values will gravitate toward the bottom right, where low frame rates meet high prices.

The various solutions tend to cluster along a line extending from the bottom left to the top right, which suggests prices are usually pretty rational and track fairly well with performance. We do have some stand-outs among the crowd, including, most prominently, our Radeon HD 6850 CrossFireX setup. The 6870 CrossFireX setup looks strong, too, as does the GeForce GTX 460 1GB SLI config. The weakest values of the group include the single and dual-GPU versions of the GeForce GTX 460 768MB, for reasons we’ve already discussed, and the pricey, older Asus ROG Matrix Radeon HD 5870 2GB card. The GTX 580 SLI config isn’t a great value, but it is truly in a class by itself, in terms of both performance and price, among the contenders here.

Another way we can consider GPU value is in the context of a larger system purchase, which may shed a different light on what it makes sense to buy. These multi-GPU solutions are relatively expensive, so we’ve paired them with a proposed system config that’s very similar to the hardware in our testbed system.

CPU          Intel Core i7-960                        $579.99
Cooler       Thermaltake V1                           $51.99
Motherboard  Gigabyte GA-X58A-UD5                     $289.99
Memory       6GB Kingston HyperX DDR3-1600            $104.99
Storage      Western Digital Caviar Black 1TB         $89.99
             Asus DRW-24B1ST                          $19.99
Audio        Asus Xonar DG                            $29.99
PSU          PC Power & Cooling Silencer Mk II 750W   $119.99
Enclosure    Corsair Graphite Series 600T             $159.99
Total                                                 $1,446.91

That system price will be our base. We’ve added the cost of the video cards to the total, factored in performance, and voila:

Pairing a nice system with a powerful graphics solution is, in fact, quite rational from a price-performance perspective. We can even justify the GTX 580 SLI setup, since it vastly improves performance without doubling the system cost. The Radeon HD 6850 CrossFireX solution again looks quite good here, although dual GTX 460 1GB cards do, too.

Of course, the results would look different with a less expensive system, so your mileage may vary.

Conclusions

We came into this review with multiple purposes, and so we leave it confused and bewildered. Or maybe not entirely. I believe we’ve established several things, so let’s consider them one by one.

First, the easiest call we can make is that the Radeon HD 6850 CrossFireX setup does indeed offer performance comparable to a single GeForce GTX 580 at a lower price. That combo adds up to a very nice value proposition, provided you are willing to sacrifice four expansion slots’ worth of space in your PC for graphics cards, and provided you’re willing to live with the occasional compatibility and performance pitfalls one will inevitably encounter with a multi-GPU solution.

Not that we’d buy a pair of 6850s right now, honestly. The Radeon HD 6900 series should be hitting the market any day now, and we’re not entirely comfortable recommending a dual-6850 purchase with higher-end single cards so close.

Fickle, aren’t we? But the market’s changing, folks.

Next, we’ve pretty much established that the GeForce GTX 460 768MB’s smaller memory size can be a hindrance, especially in SLI where one would expect to be able to run games at higher resolutions and detail levels. With that said, our extensive use of the 2560×1600 resolution really hurt the GTX 460 768MB. In the cases where we tested at multiple resolutions, dual GTX 460 768MB cards in SLI proved to be just slightly slower than a GeForce GTX 580—for substantially less money. If you know for certain you want to drive a two-megapixel display and nothing more, a couple of GTX 460 768MB cards might not be a bad proposition. Even so, you may find your performance hindered by the smaller amount of video RAM at higher quality levels, as we did at 1920×1080 in Metro 2033.

Finally, the performance of two GeForce GTX 580 cards in SLI is well and truly bitchin’. The power draw and noise levels are at least reasonable for this class of setup, too. There are few substitutes for a GTX 580 SLI rig at this very moment, although as we’ve noted, some potential competition from AMD is imminent. Beyond that, an intriguing mystery package arrived unexpectedly in Damage Labs just today. Watch this space for even more of the same, hopefully with a tighter focus, in the coming weeks.

Comments closed
    • rUmX
    • 9 years ago

    Surprised the 460 SE SLI was not tested. Too bad. I would have really enjoyed the beating it would have taken. But it wouldn’t take last place since that’s reserved for the pathetic <1GB card from team green.

    • can-a-tuna
    • 9 years ago
      • PixelArmy
      • 9 years ago

      Or… just do some /[

    • Silus
    • 9 years ago

    Nice review Scott!

    Although I would never run SLI or Crossfire setups (I prefer single card setups), those dual 6850s and GTX 460s 1 GB do indeed look awesome from a price/performance perspective.

    And that mystery package is no doubt the GTX 570 🙂

    • muyuubyou
    • 9 years ago

    TL;DR: no

    • Chrispy_
    • 9 years ago

    I really like the SLI tests hammering the GTX460 768 here.

    You wouldn’t /[

    • KoolAidMan
    • 9 years ago

    I would like to know what sort of performance increase you generally see in Starcraft 2 with all settings being the same but with FSAA disabled.

    Thanks

    • Jambe
    • 9 years ago

    Gosh darn reply. Borked.

    • can-a-tuna
    • 9 years ago

    l[

    • phez
    • 9 years ago

    I’m not really sure how useful this article is if you only test just a uber high resolution that most people don’t run at.

    The 768mb 460, while on its way out, can still hold its own at lower resolutions; considering its price point. But this article makes it look totally worthless even in SLI.

      • rUmX
      • 9 years ago

      Is there any point in buying a gaming card with 768mb of VRAM in 2010 (soon to be 2011)?

    • Arag0n
    • 9 years ago

    From my point of view the CPU used in the system build it’s out of place.

    1st, you won’t buy the best CPU for not the best GPU. So, if someone buys a radeon 6850 will surely buy some CPU in the range of 180-260$.

    2nd, if you buy a 580 SLI system, you need better CPU, adding that difference to the system total price and not just the GPU price.

    3th, as far as more constrained is the GPU in the system (extremly high resolutions), less important becomes the CPU. So if the main point of a SLI 580 it’s to get acceptable frame-rates at 4MP displays, and not as much frames as you can at 2MP or less resolutions, maybe the CPU doesn’t needs to be so good neither.

    So, in conclusion, you need to pair a system that it’s properly fitted for every GPU. Where the CPU doesn’t become a big constraint for the card performance, but you aren’t expending a lot of money into CPU horsepower that goes god-knows where. This should be the proper way to compare systems and not just setting up an standart high-end, high-priced system and fit every card inside to see what’s the result.

      • Voldenuit
      • 9 years ago

      I would like to see midrange card reviews paired with slower CPUs* to see what the scaling and (more importantly) real world performance will be like.

      How many people run a GTX 460 768MB on a Corei7 965?

      Thought so.

      * When reviewed separately, not as part of a comparison article such as today’s.

        • green
        • 9 years ago

        l[

          • StuG
          • 9 years ago

          Actually alot of people I know have extremely high end i5’s with the middle ground GPU’s. They can justify a strong CPU as it lasts them 3-5 years easy, where as a GPU will only last them 2ish and they can get really good details with a middle ground GPU that they upgrade every year or so.

            • Arag0n
            • 9 years ago

            But from my point of view that it’s not the reason to pair a high-end CPU with almost any GPU. I would agree with your friends point of view because I usually do the same but I never buy the high end but I also sell on eBay and buy again my graphic card every 1-2 years. I get subsidized the 60-75% of the price of my next card and I can keep the specs up without breaking my bank account.

            But the point is, that with the proper build, you can do the same with the CPU. That’s why I choosed AMD, because I was sure that by the time beeing I was going to be able to upgrade my system to at the next generation, and then the same things come. You can have a middle-pack CPU that you can sell later to buy a newer one with discount without having to change the cooler, the motherboard or anything.

            Every 4-5 years I need to do the big step and sell motherboard-ram-cpu anyways. My worst choice was when I paired a Pentium D socket 775 with DDR2 long ago and by god knows why, Intel breaked compatibility inside the same socket to newer CPU’s, and I couldn’t upgrade from my Pentium D to a Q6600. That’s why I moved to AM2+ with a Phenom 9950 that should be updated in this year or next to a newer Phenom II x6 or similar.

      • Meadows
      • 9 years ago

      3th?

    • kilkennycat
    • 9 years ago

    Dual GTX560’s with 1GB (or greater) should prove a very interesting future comparison…. but I’m getting ahead of myself………..

      • Voldenuit
      • 9 years ago

      Can I borrow your DeLorean for the weekend? ^_~

    • Voldenuit
    • 9 years ago

    If you’re testing an overclocked card, it should be named as such in the graphs. So that ASUS EAH6850 DirectCU card should be labelled as ‘Radeon HD 6850 790 MHz’ in the graphs.

    Also, what’s with all the asterixes? They only mean something if it’s easy to figure out – say, with a legend? You should have a legend with each graph saying ‘* – custom cooler’ for comprehension’s sake.

      • flip-mode
      • 9 years ago

      page 3, paragraph 1, sentence 2

        • Voldenuit
        • 9 years ago

        Yes, I know it’s there (otherwise I wouldn’t have been able to say what legend I recommended, right?).

        My point was that having to comb the text to find the meaning of the graphs is counter-productive to their informative nature. It would have been trivial to add a single line of information to a handful of graphs, and made them more legible as a result.

    • flip-mode
    • 9 years ago

    q[

    • ClickClick5
    • 9 years ago

    The question is where will the 69xx series lay?

    • ssidbroadcast
    • 9 years ago

    “kinda like the chick I dated in high school who had different colored eyes.”

    Really? I’m jealous. Nothing is cooler than having different colored iris’.

      • xtremevarun
      • 9 years ago

      So when the zombie virus apocalypse happens, keep this girl safe. She might be humanity’s only hope for a vaccine ;D

        • sweatshopking
        • 9 years ago

        a vaccine, and “THE BUSINESS”!!!

      • Meadows
      • 9 years ago

      It’s spelt “ires”.

        • poulpy
        • 9 years ago

        In which language exactly?
        Unless you tried, and probably failed, to “make a funny” in good old English it’s “irises” or “irides”..

          • ssidbroadcast
          • 9 years ago

          Irises or irides, it’s super cool. No higher barrier-of-entry when it comes to individuality. (Note: people who wear fake-colored contact lens are total tools.)

            • Meadows
            • 9 years ago

            Differently coloured eyes are bad either way.

    • Krogoth
    • 9 years ago

    Damage, thanks for the review.

    Basically, single-card solutions for the most part make more fiscal sense if you are running 2 megapixels. You need to amp up with 4 Megapixel along with heavy doses of AA/AF to justify getting some kind of SLI/CF setup. 6870, 6850 and 460 1GiB CF/SLI seemed to do the job for the most part without depleting the savings account.

    580 SLI is only good for bragging rights if anything else. A single 580 is practically good enough. Unless you want high levels of AA/AF along with 4 megapixels.

    • moshpit
    • 9 years ago

    Hints on mystery package? I’m betting on an OC flavor of 580.

      • mczak
      • 9 years ago

      Nah, that should be GTX 570. Apparently other sites have received them now too, launch date is Dec 7.
      Could be quite nice – 320bit memory, 15 SMs, and clocks a bit higher than GTX 480. Should perform quite close to a GTX 480, though I haven’t seen any pricing information yet.

      • d0g_p00p
      • 9 years ago

      Think higher man… What’s coming out from AMD in the next month or so?

    • elmopuddy
    • 9 years ago

    I almost bought a second 460 1gb when I upgraded my monitor to a Dell 27″, but decided on a 580.. very happy, and no worries about SLI-Xfire issues.

      • Deanjo
      • 9 years ago

      Same here, plus the performance carries over to other OS’s where multicard/multigpu setups are largely useless. Not to mention that a single GTX-580 uses less power then any of the multi-card setups and is a quieter solution.

        • mczak
        • 9 years ago

        Actually, a 6850CF is so close in power draw (both idle and load) that it probably depends on sample variance which one draws less. Noise also seems to be quite similar. So you basically trade off the multi-gpu issues of a HD 6850CF setup with the higher price of the GTX 580.

    • Mithent
    • 9 years ago

    Two Radeon 68xx cards are a pretty good value option at the moment, but I’m concerned about whether they’re the most future-resistant*. Will tessellation become widely-used?

    Will be interesting to see how the Radeon 69xx cards fit into the mix.

    (* My Q6600 and 8800-series card have served me well for 3 years now, yet I have no intention of changing the CPU, and only now am I getting tempted by graphics cards. That’s pretty future-resistant.)

    • sweatshopking
    • 9 years ago

    bad link….?

    • cegras
    • 9 years ago

    Is there any justification for noise measurements from 10 feet? Normally a desktop will at most be within 5″ or less, whether it’s under the table or not.

    Can you compare these to results taken from a case in a ‘typical’ position?

      • Damage
      • 9 years ago

      You may want to revisit the age-old ” vs. ‘ abbreviation question.

        • cegras
        • 9 years ago

        Ah, sorry! I don’t normally think in imperial, but that was a bit of a careless oversight.

        I’m still surprised at the overall noise levels you are measuring though, considering it’s an open test bench.

    • Majiir Paktu
    • 9 years ago

    Am I behind the times with my 1680×1050 LCD, or are you guys being a bit absurd with resolutions higher than 1920×1200?

      • bthylafh
      • 9 years ago

      Probably you need insanely high resolutions to really show the differences between the higher-end cards.

      This is especially true considering they were concentrating on SLI/Crossfire, which are a waste if you /[

        • LawrenceofArabia
        • 9 years ago

        Pretty much, notice how Dirt 2 with DX11 at 1920×1080 graph’s lowest fps value was 47 on the 768mb GTX460. 47 frames is still an excellent framerate by most standards. Dirt 2 isn’t the most demanding game tested but it is hardly disappointing in graphical quality. I doubt any of the cards tested would drop below 20fps at a 2 megapixel resolution, maybe even 30.

        For the vast majority, basically everyone that doesn’t own a 4MP display, a single reasonably high end gpu will take care of all their gaming needs, keeping framerates at an average of over 60 fps and the minimum above 40. But multi gpu setups are for enthusiasts, and you really need and enthusiast display to let them shine.

      • JustAnEngineer
      • 9 years ago

      1920×1080 is the target resolution for almost everything these days. If you’re running with less than this, you probably are slightly behind the times. 1920×1200 is a great resolution that provides more vertical space for text, but it’s becoming less common as 1920×1080 displays fill the shelves.

      Those of us with 2560×1600 displays really appreciate that Damage includes this resolution in nearly every review. Considering that 2560×1600 is the maximum resolution supported by the past several generations of graphics cards (Radeon x1### and newer), it’s a good top-end resolution to consider, too.

      Because he was testing SLI/Crossfire configurations, the high resolutions were needed to show a performance benefit. It’s exactly the sort of person who already dropped $1012 +tax on an UltraSharp U3011 monitor who would consider spending another $400 on graphics cards, so the high resolution matches up well with the configurations that were tested.

    • paulWTAMU
    • 9 years ago

    Those Civ scores…since when should a g-d TBS require 2 uber-cards to get 50 FPS??? That’s nucking futz

      • derFunkenstein
      • 9 years ago

      Look at the settings – 2560×1600, 8x MSAA. And while I don’t own the game (hoping for a Christmas present) it’s BEAUTIFUL.

        • paulWTAMU
        • 9 years ago

        yeah but it still has lower frames per seconds than some FPS’s.

        And the graphics…eh, they’re not good enough ot justify the performance hit.

          • Voldenuit
          • 9 years ago

          Do you need 60 fps in a (slow-paced) strategy game? I’m sure it’s perfectly playable in the 20s to 30s.

            • paulWTAMU
            • 9 years ago

            It isn’t. It hangs like mad. I had to upgrade to a 460 1 gig (on sale) to get playable frames at 1680×1050.

            • Meadows
            • 9 years ago

            It’s not the pace, it’s the responsiveness.
