
SLI vs. CrossFireX: The DX11 generation

Scott Wasson

As I learned from a trip to KFC this summer, doubling down can have its risks and its rewards. Sadly, the Colonel’s new sandwich wasn’t exactly the rewarding explosion of bacon-flavored goodness for which I’d hoped. Eating it mostly involved a lot of chewing and thinking about my health, which got tiresome. Still, I had to give it a shot, because the concept held such promise for meat-based confections.

If there’s one thing I enjoy as much as dining on cooked meats, it’s consuming the eye candy produced by a quality GPU. (Yes, I’m doing this.) Happily, doubling down on a good graphics card can be much tastier than anything the Colonel has managed to serve in the past 15 years, and thermal grease isn’t nearly as nasty as the stuff soaking through the bottom of that red-and-white cardboard bucket. The latest GPUs support DirectX 11’s secret blend of herbs and spices, and the recently introduced GeForce GTX 460 has set a new standard for price and performance among them.

In fact, at around 200 bucks, the GTX 460 is a good enough value to raise an intriguing question: Is there any reason to plunk down the cash for an expensive high-end graphics card when two of these can be had for less?

With this and many other questions in mind, we fired up the test rigs in Damage Labs and set to work, testing a ridiculous 23 different configurations of one, two, and, yes, three graphics cards against one another for performance, power draw, noise, and value. Could it be that doubling down on mid-range graphics cards is a better path to gaming enjoyment? How does, well, nearly everything else perform in single and multi-GPU configs? Let’s see what we can find out.

The case for multiple GPUs
Multi-GPU schemes have been around for quite a while now, simply because they’re an effective way to achieve higher performance. The inherently parallel nature of graphics as a computing problem means two GPUs have the potential to deliver nearly twice the speed of a single chip a pretty high percentage of the time. These schemes have their drawbacks when, for one reason or another, performance doesn’t scale well, but both of the major graphics players are strongly committed to multi-GPU technology.

Heck, AMD has replaced its largest graphics processor with a multi-chip solution; its high-end graphics card is the Radeon HD 5970, prodigiously powered by dual GPUs. Multiple Radeon cards can gang up via CrossFireX technology into teams of two, three, or four GPUs, as well.

Nvidia’s SLI tops out at three GPUs and is limited to fewer, more expensive cards, but then Nvidia is still making much larger chips. A duo of GeForce GTX 480s is nothing to sneeze at—the mist would instantly vaporize due to the heat of hundreds of watts being dissipated. Also, they’re pretty fast. The green team hasn’t yet introduced a dual-GPU video card in the current generation, but it has a long history of such products stretching from the GeForce GTX 295 back to the GeForce 7950 GX2, which essentially doubled up on PlayStation 3-class GPUs way back in 2006. (Yeah, the whole PCs versus next-gen consoles hardware debate kinda ended around that time.)

Nvidia arguably kicked off the modern era of multi-GPU goodness by resurrecting the letters “SLI”, which it saw sewn into a jacket it took off the corpse of graphics chip pioneer 3dfx. Those letters originally stood for “scan-line interleave” back in the day, which was how 3dfx Voodoo chips divvied up the work between them. Nvidia re-christened the term “scalable link interface,” so named for the bridge connection between two cards, and turned SLI into a feature of multiple generations of GeForces. Since then, Nvidia has expended considerable effort working with game developers to ensure smooth compatibility and solid performance scaling for SLI configurations. These days, Nvidia often adds support for new games to its drivers weeks before the game itself ships to consumers.

AMD’s answer to SLI was originally named CrossFire, but it was later updated to “CrossFireX” in order to confuse people like me. Mission accomplished! AMD hasn’t always been as vigilant about providing CrossFire support for brand-new games prior to their release, but it has recently ratcheted up its efforts by breaking out CrossFire application profiles into a separate download. Those profiles can be updated more quickly and frequently than its monthly Catalyst driver drops, if needed.

Game developers are more aware of multi-GPU solutions than ever, too, and they generally have tweaked their game engines to work properly with SLI and CrossFireX. As a result, the state of multi-GPU support is pretty decent at present, particularly for games that really need the additional graphics horsepower.

Multi-card graphics solutions can make more sense inside of a desktop gaming rig than you might first think. For instance, a pair of graphics cards can use twice the area of a single, larger card for heat dissipation, making them potentially quieter, other things being equal. Two mid-range graphics cards will draw power from two different PCIe slots, which may save you the trouble of having to accommodate a card with one of those annoying eight-pin auxiliary power connectors. And these days, the second graphics card in a pair is generally pretty good about shutting down and not requiring much power or making much noise when it’s not in use. Add up all of the considerations, and going with dual graphics cards might be less trouble than some of the pricey single-card alternatives.

The value equation can tilt in the direction of multiple cards, too, in certain cases. Let’s have a look at some of the options we’re faced with in the current market, and then we’ll consider more specifically what combinations of cards might be the best candidates for a pairing.

The cards
We’ve gathered quite a collection of the latest graphics cards in order to make this article possible. The GeForce GTX 400 series is still relatively new, so this endeavor has given us the chance to look at a number of new cards that deviate from the reference designs established by Nvidia. There’s plenty of creativity in terms of custom coolers, higher clock speeds, and the like in some of these cards, sweetening the pot a bit for potential buyers.


MSI’s GeForce GTX 460 Cyclone 1GB

MSI’s GeForce GTX 460 Cyclone 1GB is emblematic of the variety in this first crop of GTX 460 cards. Although it retains the compact length of the reference cards, this puppy’s heatpipe-infused, dual-slot cooler is nothing like the usual shrouded number. The thing looks amazing, and without giving away too much, it’s pretty darned effective, too. The caveat here is that the cooler protrudes above the top edge of the card by roughly an inch, which could make it a tight fit in quite a few shallow enclosures. Your case will need some extra room above the expansion slot area in order for this card to fit comfortably.

With the upgraded cooling comes some additional clock speed headroom, and MSI capitalizes by raising clock speeds from the GTX 460’s stock 675MHz to 725MHz. (Because GPUs based on the Fermi architecture run their shader cores at twice the GPUs’ base clock frequency, that raises the shader clock from 1350MHz to 1450MHz, too.) MSI holds the line on the memory clock, though, at 900MHz—or 3.6 GT/s, since GDDR5 memory transfers data four times per clock cycle. If you’d like some additional speed beyond that, MSI’s Afterburner software allows for GPU overclocking and overvolting from within Windows.
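The clock relationships described above can be sketched in a few lines (the helper function below is ours, purely for illustration):

```python
# A quick sketch of the Fermi-era clock math described above (the
# function name is ours, not MSI's or Nvidia's).
def derived_clocks(base_mhz, mem_mhz):
    """Return the shader clock (MHz) and memory transfer rate (GT/s).

    Fermi-based GPUs run their shader cores at twice the base clock,
    and GDDR5 moves data four times per memory clock cycle.
    """
    return base_mhz * 2, mem_mhz * 4 / 1000

print(derived_clocks(725, 900))  # MSI's Cyclone: (1450, 3.6)
print(derived_clocks(675, 900))  # stock GTX 460: (1350, 3.6)
```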

Right now, the Cyclone is selling for $234.99 at Newegg with free shipping.

Because the Cyclone is such a departure from the reference design, we’ve tested it in a single-card config in the following pages. We’ve also paired it in SLI with a stock-clocked GTX 460 1GB from Zotac. In that case, the pair should perform just like two stock-clocked cards, since the slower of the two cards will constrain their performance.


Gigabyte’s GV-N460OC-768I sports twin fans

The more affordable variant of the GeForce GTX 460 has 768MB of memory, less memory bandwidth, less pixel filling and antialiasing capacity, and a smaller L2 cache. Its shader core and base clock speeds are the same as the 1GB version’s, though, and the price is lower. End result: the GTX 460 768MB may be a pretty good value, particularly if you’re buying two.

Gigabyte has provided some additional incentive at the GTX 460 768MB’s standard $199.99 price by throwing in a custom cooler with two fans, two heatpipes, and loads of surface area. In a pinch, it could double as a means of propulsion through the Everglades. This cooler doesn’t stick up beyond the top edge of the card as much as MSI’s Cyclone, but the heatpipes do poke up a quarter of an inch or so. The cooler also protrudes about three-quarters of an inch beyond the rear edge of the board, although that only makes the card about 9″ long in total.

Gigabyte has raised clock speeds to 715/1430MHz, with the standard 900MHz memory. Once again, we’ve paired this card with a reference one for SLI testing. The Gigabyte card was a late arrival to Damage Labs, so we didn’t have time to test it fully as a single card. We have measured its power draw, noise levels, and GPU temperatures individually, as well as in SLI.


Zotac’s GTX 465 cards form a pair

These Zotac GeForce GTX 465 cards are essentially based on Nvidia’s reference design, with no notable deviations. That’s no bad thing, for several reasons. For one, the GTX 465 is the cheapest GTX 400-series product with dual SLI connectors up top, making it the most affordable entry into three-way SLI. For another, although I like the look and single-card performance of custom coolers like those above, I worry about how well they’ll perform with another card sandwiched right up next to them, potentially starving them for air.

We’ve had problems in the past with similar fan-based coolers from Asus overheating in multi-GPU configurations—and even in single-GPU configs where another sort of expansion card was installed in the adjacent slot. Nvidia’s cooler designs carefully leave room for air intake next to the blower, so they better tolerate cramped quarters. Furthermore, the shroud-and-blower reference coolers from both AMD and Nvidia exhaust essentially all of the hot air they move across the heatsink out the back of the case. The custom coolers with fans push air down, toward the board itself, and don’t direct much heat out of the expansion slot opening.

Beyond that, the GTX 465’s advantages over the GTX 460 are few. Since the GTX 460’s introduction, the GTX 465 has spent a lot of time alone in its bedroom, listening to Pearl Jam and writing in its journal, stopping at meal times to yell at its parents. We expect it to go into counseling soon. Perhaps that’s why the Zotac GTX 465’s price has dropped to $250 at Newegg, along with a $30 rebate. You could get it for less than the MSI Cyclone, if the rebate pays out—a big “if” in the consumer rebate biz, we must remind you.


Asus’ GTX 470 keeps it classy

For its GTX 470 offering, Asus has seen the wisdom of sticking with Nvidia’s stock cooler, and the result is a card with nicely understated looks. The one, surgical addition Asus makes to the GTX 470’s stock formula is its SmartDoctor overvolting and overclocking utility—just what you’d want to see added. The GTX 470 came out looking surprisingly good in our recent GPU value roundup, and prices have dropped since then. Asus’ version is down to $289.99 at Newegg. There’s free shipping and a $20 mail-in rebate attached, too.


Zotac’s GTX 480 AMP! wins the girth contest

This monster is Zotac’s GeForce GTX 480 AMP! edition, complete with unnecessary punctuation. When the folks at Zotac told us they’d be producing a GTX 480 with a cooler that performs better than the incredibly beefy stock unit, we responded with skepticism. “What’s it gonna be, triple slot?” After a brief pause, the answer came back: “Heh, yep.” And so it is. With two fans and more surface area than the pages of the federal budget, this thing looks to have a good chance of outperforming the dual-slot default cooler.

The two large fans are generally pretty quiet, but we ran into an annoying problem with ours. Apparently, the shroud may have been bent slightly in shipping; it was making contact with the fan blades somewhere, resulting in a constant clicking noise while the card was powered up. We removed the shroud, which is fairly light and thin, and bent it slightly to keep it out of the way of the fan. That sort of worked. Eventually, we just decided to remove one of the four screws holding the shroud because that was the best way of preventing any contact with the fan. That’s not the sort of problem one wants to encounter with a graphics card this expensive—we’re talking $509.99 at Newegg as of right now—but we’d hope our experience wasn’t typical. Zotac’s packaging for the thing is form-fitting and protective, and ours didn’t look to have been damaged in shipping.

If there’s a silver lining to the apparent fragility of the cooler, it’s the fact that this three-slot-wide contraption doesn’t weigh the card down much. The extra volume is dedicated to a handful of copper heatpipes and oodles of aluminum fins. We’ll check its performance shortly, of course.

Speaking of which, the GTX 480 AMP! is clocked at 756/1512MHz, with its GDDR5 memory at 950MHz (3.8 GT/s). That’s up slightly on all fronts from the GTX 480’s stock speeds of 700/1400/924MHz, so this card should be a little faster than the standard-issue version.

Because this thing is triple-slot, we didn’t even bother with attempting SLI. We just tested the AMP! by itself and left the SLI work to the reference cards, which were able to fit into the two main PCIe x16 slots on our test motherboard and take advantage of all 16 lanes of PCIe bandwidth per slot.

Naturally, we’ve tested against a broad range of DirectX 11 Radeons in single- and dual-GPU CrossFireX configurations. There are some inherent rivalries here at certain price points—such as the Radeon HD 5830 versus the GeForce GTX 460 768MB at around 200 bucks, or the GTX 470 versus the Radeon HD 5850 at just under $300—but not all of the match-ups are so direct. Nvidia currently has no analog for the $150-ish Radeon HD 5770, for instance, and AMD doesn’t have an answer for the GTX 460 1GB or GTX 465 in the mid-$200 range. The mismatches grow more obvious at the high end, where the Radeon HD 5870 (~$400) and the 5970 (~$650+!) straddle the GTX 480’s ~$450-510 asking prices.

Some scenarios worth considering
Now that we’ve reviewed the host of DX11 graphics cards currently available, we can look at some intriguing potential comparisons between single- and multi-card setups. We’ll start with the most obvious one, perhaps, between dual GeForce GTX 460 cards and a single GTX 480. These two solutions are priced very similarly, since a single GTX 460 768MB is about $200, a single GTX 460 1GB runs about $235, and a GTX 480 will set you back at least $450. We’ll want to watch the capabilities, performance, power draw, and noise levels of these competing alternatives in the following pages.

Similarly, AMD’s Radeon HD 5870 will have to work hard to outdo a pair of Radeon HD 5770 or 5830 cards. The Juniper GPU in the 5770 is, in most respects, just a sawed-in-half version of the Cypress chip in the 5870, so the 5770 CrossFireX pair ought to perform comparably to a 5870. Dual 5830s, based on a cut-down Cypress, should be even faster.

The 5770 is selling for about $150 right now, so an unusual option opens up to us: a triple-team of 5770s in a three-way CrossFireX config. For only ~$450, they won’t run you much more than a 5870, and since each card only requires a single six-pin PCIe power connector, they won’t tax your PSU like most triple-CrossFire setups would. Yeah, they’re going to chew up six expansion slots, but many high-end boards have nearly everything else integrated these days. Our test system’s X58 motherboard is PCIe-rich enough to support a three-way CrossFireX config reasonably well. The primary slot retains all 16 PCIe Gen2 lanes, while the second and third slots each get eight PCIe Gen2 lanes. That ought to be enough bandwidth, but keep in mind that your run-of-the-mill P55 motherboard—or, indeed, anything based on a Lynnfield processor with integrated PCI Express—will only have 16 full-speed PCIe Gen2 lanes in total.
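For a rough sense of what those lane counts mean, here’s the back-of-the-envelope bandwidth math, assuming PCIe Gen2’s 5 GT/s per lane and 8b/10b encoding (8 data bits per 10 bits transferred), and ignoring packet overhead:

```python
# Approximate PCIe Gen2 slot bandwidth: 5 GT/s per lane, 8b/10b
# encoding (8 data bits per 10 transferred), packet overhead ignored.
def pcie2_gbps(lanes):
    per_lane_mb = 5000 * (8 / 10) / 8  # MT/s -> usable MB/s per lane
    return lanes * per_lane_mb / 1000  # GB/s in each direction

print(pcie2_gbps(16))  # 8.0 -- our board's primary slot
print(pcie2_gbps(8))   # 4.0 -- its second and third slots
```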

Of course, there’s plenty of potential for additional comparisons here. A triple-5770 array might be the perfect foil for a GeForce GTX 480, for example, and if two GTX 460s work out well, they could give the Radeon HD 5870 more than it can handle. We’ll try to keep an eye on some of the key match-ups we’ve identified above, but the scope of our test results makes many more comparisons possible, up to very expensive dual-card configs.

Test notes
Beyond the higher-than-stock clocks on the cards we’ve already mentioned, our Asus ENGTX260 TOP SP216 card’s core and shader clocks are 650 and 1400MHz, respectively, and its memory speed is 2300 MT/s. The GTX 260 displayed uncommon range during its lifespan, adding an additional SP cluster and getting de facto higher clock speeds on shipping products over time. The Asus card we’ve included represents the GTX 260’s highest point, near the end of its run.

Similarly, the Radeon HD 4870 we’ve tested is the later version with 1GB of memory.

Many of our performance tests are scripted and repeatable, but for a couple of games, Battlefield: Bad Company 2 and Metro 2033, we used the FRAPS utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each FRAPS sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from FRAPS for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.
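The averaging step itself is nothing fancy; a sketch of the idea, with figures invented purely for illustration:

```python
# Combining five FRAPS passes per card: averaging the runs' mean frame
# rates damps down run-to-run variability. These numbers are made up
# for illustration, not measured results.
def combined_fps(run_averages):
    return sum(run_averages) / len(run_averages)

runs = [58.2, 61.0, 59.4, 60.1, 58.8]
print(round(combined_fps(runs), 1))  # 59.5
```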

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
Motherboard: Gigabyte EX58-UD5
North bridge: X58 IOH
South bridge: ICH10R
Memory size: 12GB (6 DIMMs)
Memory type: Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings: 8-8-8-24 2T
Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio: Integrated ICH10R/ALC889A with Realtek R2.49 drivers
Graphics:
- Radeon HD 4870 1GB with Catalyst 10.6 drivers
- Gigabyte Radeon HD 5770 1GB with Catalyst 10.6 drivers
- Gigabyte Radeon HD 5770 1GB + Radeon HD 5770 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
- Gigabyte Radeon HD 5770 1GB + Radeon HD 5770 1GB + Radeon HD 5770 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
- XFX Radeon HD 5830 1GB with Catalyst 10.6 drivers
- XFX Radeon HD 5830 1GB + Radeon HD 5830 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
- Radeon HD 5850 1GB with Catalyst 10.6 drivers
- Dual Radeon HD 5850 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
- Asus Radeon HD 5870 1GB with Catalyst 10.6 drivers
- Asus Radeon HD 5870 1GB + Radeon HD 5870 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
- Radeon HD 5970 2GB with Catalyst 10.6 drivers & 6/23/10 application profiles
- Asus ENGTX260 TOP SP216 GeForce GTX 260 896MB with ForceWare 258.80 drivers
- GeForce GTX 460 768MB with ForceWare 258.80 drivers
- Dual GeForce GTX 460 768MB with ForceWare 258.80 drivers
- Zotac GeForce GTX 460 1GB with ForceWare 258.80 drivers
- MSI N460GTX Cyclone GeForce GTX 460 1GB with ForceWare 258.80 drivers
- MSI N460GTX Cyclone GeForce GTX 460 1GB + Zotac GeForce GTX 460 1GB with ForceWare 258.80 drivers
- Zotac GeForce GTX 465 1GB with ForceWare 258.80 drivers
- Dual Zotac GeForce GTX 465 1GB with ForceWare 258.80 drivers
- GeForce GTX 470 1280MB with ForceWare 258.80 drivers
- Asus ENGTX470 GeForce GTX 470 1280MB + GeForce GTX 470 1280MB with ForceWare 258.80 drivers
- Zotac GeForce GTX 480 AMP! 1536MB with ForceWare 258.80 drivers
- Dual GeForce GTX 480 1536MB with ForceWare 258.80 drivers
Hard drive: WD Caviar SE16 320GB SATA
Power supply: PC Power & Cooling Silencer 750 Watt
OS: Windows 7 Ultimate x64 Edition
DirectX runtime update: June 2010

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Sapphire, Zotac, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear INT8 texel filtering rate* (Gtexels/s) | Peak memory bandwidth (GB/s) | Peak shader arithmetic (GFLOPS) | Peak rasterization rate (Mtris/s) |
|---|---|---|---|---|---|
| GeForce GTX 260 (216 SPs) | 18.2 | 46.8 | 128.8 | 605 | 650 |
| GeForce GTX 460 768MB | 16.2 | 37.8 | 86.4 | 907 | 1350 |
| GeForce GTX 460 1GB | 21.6 | 37.8 | 115.2 | 907 | 1350 |
| MSI GeForce GTX 460 1GB Cyclone | 23.2 | 40.6 | 115.2 | 974 | 1450 |
| GeForce GTX 465 | 19.4 | 26.7 | 102.6 | 855 | 1821 |
| GeForce GTX 470 | 24.3 | 34.0 | 133.9 | 1089 | 2428 |
| GeForce GTX 480 | 33.6 | 42.0 | 177.4 | 1345 | 2800 |
| GeForce GTX 460 768MB x2 | 32.4 | 75.6 | 172.8 | 1814 | 2700 |
| GeForce GTX 460 1GB x2 | 43.2 | 75.6 | 230.4 | 1814 | 2700 |
| Zotac GeForce GTX 480 AMP! | 36.3 | 45.4 | 182.4 | 1452 | 3024 |
| Radeon HD 4870 | 12.0 | 30.0 | 115.2 | 1200 | 750 |
| Radeon HD 5770 | 13.6 | 34.0 | 76.8 | 1360 | 850 |
| Radeon HD 5830 | 12.8 | 44.8 | 128.0 | 1792 | 800 |
| Radeon HD 5850 | 23.2 | 52.2 | 128.0 | 2088 | 725 |
| Radeon HD 5870 | 27.2 | 68.0 | 153.6 | 2720 | 850 |
| Radeon HD 5770 x2 | 27.2 | 68.0 | 153.6 | 2720 | 1700 |
| Radeon HD 5830 x2 | 25.6 | 89.6 | 256.0 | 3584 | 1600 |
| Radeon HD 5770 x3 | 40.8 | 102.0 | 230.4 | 4080 | 2550 |
| Radeon HD 5970 | 46.4 | 116.0 | 256.0 | 4640 | 1450 |

*FP16 filtering is half rate.

The figures above represent theoretical peaks for the GPUs in question. Delivered performance, as we’ll see, is often lower. These numbers are still a useful indicator, especially when comparing cards and chips based on the same basic architecture. I’ve included entries for some of our key multi-GPU setups, but not all of them, in the interests of manageability.
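To show where these peaks come from, here’s the arithmetic for a stock GeForce GTX 460 1GB, using Nvidia’s published unit counts (32 ROPs, 56 texture units, 336 shader ALUs, a 256-bit memory bus); the other rows fall out of the same formulas.

```python
# Theoretical peaks for a stock GeForce GTX 460 1GB, derived from its
# clocks and unit counts: 32 ROPs, 56 texture units, 336 shader ALUs,
# and a 256-bit memory bus (per Nvidia's published specs).
core_mhz, shader_mhz, mem_gtps, bus_bits = 675, 1350, 3.6, 256
rops, tmus, alus = 32, 56, 336

pixel_fill = rops * core_mhz / 1000        # Gpixels/s
texel_rate = tmus * core_mhz / 1000        # Gtexels/s, bilinear INT8
bandwidth  = mem_gtps * bus_bits / 8       # GB/s
gflops     = alus * 2 * shader_mhz / 1000  # two flops per ALU per clock

# 21.6, 37.8, 115.2, and 907.2 (the table rounds the last to 907)
print(pixel_fill, texel_rate, bandwidth, gflops)
```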

You can see that a pair of GTX 460 1GB cards has substantially higher peak throughput potential than a single GTX 480 in nearly every respect, particularly in texture filtering capacity, since just one GTX 460 isn’t far off the peak for a GTX 480.

A new addition to our table this time is the rasterization rate in millions of triangles per second, and that’s one place where the formidable internal parallelism of the GF100 GPU pays off. A single GTX 480 can theoretically reach rasterization rates higher than two GTX 460s. AMD’s “Evergreen” GPUs can rasterize only a single triangle per clock cycle, so going multi-GPU is the best way to scale up that capability. Thus, the dual 5770s are an exact match for a single 5870 in every category except rasterization, where the 5770s have twice the peak rate. Our rasterization champ among the Radeons is the three-way 5770 setup. In fact, the trio of 5770s is 50% faster than a single 5870 in every respect except rasterization, where it has triple the peak rate.

The very high triangle throughput rates for some of these solutions—anything over the single-Radeon peak of about 850 Mtris/s—aren’t likely to make a difference in today’s games, but they could become important if future DX11 titles make extensive use of tessellation and ramp up the geometric complexity.

We’ve grown increasingly dissatisfied with the texture fill rate tool in 3DMark Vantage, so we’ve reached back into the cupboard and pulled out an old favorite, D3D RightMark, to test texture filtering performance.

Unlike 3DMark, this tool lets us test a range of filtering types, not just texture sampling rates. Unfortunately, D3D RightMark won’t test FP16 texture formats, but integer texture formats are still pretty widely used in games. I’ve plotted a range of results below, and to make things more readable, I’ve broken out a couple of filtering types into bar charts, as well. Since this test isn’t compatible with SLI, we’ve omitted those results. We’ve also left the CrossFire configs out of the line plot for the sake of readability.

The Radeons perform pretty well here. The individual cards aren’t too far from their theoretical peaks in the bilinear filtering test, but interestingly enough, the multi-GPU solutions are even closer to theirs. Thus, dual Radeon HD 5770s outperform a single Radeon HD 5870, even though they’re evenly matched in theory. Our three-way 5770 setup is fairly efficient, too.

The GeForces become relatively stronger as we transition from simple bilinear filtering to nice, strong aniso—the sort of filtering you’ll use if you want your games to look good. Still, Nvidia has taken a bit of a step backwards from the GTX 260 to the GTX 470, even though the Fermi architecture reaches closer to its theoretical max. Nvidia’s true strength here is the GTX 460 and its GF104 GPU, whose mix of internal resources is more biased toward texture throughput than the GF100’s.

As I’ve noted before, the Unigine Heaven demo’s “extreme” tessellation mode isn’t a very smart use of DirectX 11 tessellation, with too many triangles and little corresponding improvement in image quality. I think that makes it a poor representation of graphics workloads in future games and thus a poor benchmark of overall GPU performance.

Pushing through all of those polygons does have its uses, though. This demo should help us tease out the differences in triangle throughput between these GPUs. To do so, we’ve tested at the relatively low resolution of 1680×1050, with 4X anisotropic filtering and no antialiasing. Shaders were set to “high” and tessellation to “extreme.”

We’ve not talked much in the past about multi-GPU tech’s benefits for polygon throughput, but they are quite real, as you can see. If the application works well with everybody’s preferred load-balancing method, alternate frame rendering, then GPU 1 will handle one frame and GPU 2 the next. Each chip contributes its full triangle throughput to the task, and performance should scale just as well as it does with pixel shading or texture filtering.
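The scheme is simple enough to sketch as a toy model (this is an illustration of round-robin frame assignment, not Nvidia’s or AMD’s actual driver logic):

```python
# Toy model of alternate frame rendering's load balancing: frames are
# dealt to GPUs round-robin, and each GPU renders its frames in their
# entirety, so every per-chip resource (rasterizer included)
# contributes to total throughput.
def assign_frames_afr(num_frames, num_gpus):
    """Map each frame index to the GPU that renders it."""
    return [frame % num_gpus for frame in range(num_frames)]

print(assign_frames_afr(6, 2))  # [0, 1, 0, 1, 0, 1]
print(assign_frames_afr(6, 3))  # [0, 1, 2, 0, 1, 2]
```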

Nvidia’s architectural advantage is very much on display here, too, as the GTX 480 AMP! proves faster than dual Radeon HD 5870s. That edge is blunted if you pile up lots of Radeons; the trio of 5770s matches the GTX 480 AMP! exactly.

Aliens vs. Predator
The new AvP game uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards. The use of DX11 effects ruled out the use of older, DX10-class video cards, so we’ve excluded them here.

Dual Radeon HD 5830s in CrossFire are a very close match for dual GTX 460 768MB cards in SLI—and both perform very similarly to the GTX 480 AMP! (Do I put a period after that? Hmm.) The two GTX 460 1GB cards are consistently quicker than the GTX 480. All of these mid-range dual-GPU options are solidly faster than the Radeon HD 5870, too.

Our triple-5770 array stumbles a bit in this case. Poor performance scaling like we’re seeing here is something you risk with a multi-GPU setup, and the risk is more acute once you venture beyond two GPUs.

Just Cause 2
I’ve already sunk more hours than I’d care to admit into this open-world adventure, and I feel another bout coming on soon. JC2 has some flashy visuals courtesy of DirectX 10, and the sheer scope of the game world is breathtaking, as are the resulting view distances.

Although JC2 includes a couple of visual effects generated by Nvidia’s CUDA GPU-computing API, we’ve left those disabled for our testing. The CUDA effects are only used sparingly in the game, anyhow, and we’d like to keep things even between the different GPU brands. I do think the water simulation looks gorgeous, but I’m not so impressed by the Bokeh filter used for depth-of-field effects.

We tested performance with JC2‘s built-in benchmark, using the “Dark Tower” sequence.

Chalk up another win for the mid-range multi-GPU setups against similarly priced high-end graphics cards. The dual Radeon HD 5770s can’t quite keep pace with 5870, and two 5830s can’t match the GTX 480 AMP!, but the dual GTX 460s and triple 5770s are faster than any single-GPU solution.

DiRT 2: DX9
This excellent racer packs a scriptable performance test. We tested at DiRT 2‘s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode adds tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

In this game’s DirectX 9 mode, the multi-GPU solutions have the distinction of not being strictly, well, necessary. The lowest frame rate we’re seeing on any card is 29 FPS at the four-megapixel resolution of 2560×1600—and that’s with 8X antialiasing.

The dual-GPU solutions do achieve higher average and minimum frame rates, for what it’s worth. The only one that struggles is, again, the triple-CrossFire setup.

DiRT 2: DX11

The three-way 5770 config is back in the saddle in DirectX 11, delivering frame rates better than even the Radeon HD 5970 or dual GTX 470s, amazingly enough. That’s… unexpectedly excellent. The GTX 460 1GB SLI setup outruns the GTX 480 AMP! again, too, as do the dual Radeon HD 5830s.

However, the 768MB version of the GeForce GTX 460 struggles mightily at 2560×1600, both with one card and two. It looks to be running out of video memory, leading to a big performance drop. As you may know, doubling up on video cards won’t double your effective video RAM, because each GPU keeps its own copy of every texture and buffer; the effective memory size is unchanged. In fact, SLI and CrossFireX require a little bit of memory overhead of their own.
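The memory math behind that caveat can be sketched like so (a simplification, and the overhead parameter is a placeholder of ours, not a measured figure):

```python
# Why two 768MB cards don't act like 1536MB: with AFR-style multi-GPU
# rendering, every texture and buffer is duplicated on each card, so
# usable memory is bounded by the smallest card, minus some
# bookkeeping overhead (a placeholder here, not a measured number).
def effective_vram_mb(card_sizes_mb, overhead_mb=0):
    return min(card_sizes_mb) - overhead_mb

print(effective_vram_mb([768, 768]))  # 768 -- not 1536
```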

Battlefield: Bad Company 2
BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

Since these are all relatively fast graphics cards, we turned up all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Granted, testing at this resolution makes things easy on the high-end multi-GPU solutions, but since we’re playing through the game manually, we wanted to keep frame rates playable on the slowest single-card configs.

Notice that for some of the Radeons in the middle plot above—including the 5770 CrossFireX x3, the 5970, and the 5830 CrossFireX—there’s a big frame rate drop at about 7-9 seconds into the test session. Although the FPS counter only drops to 60-70, the seat-of-your-pants impact of this problem is very noticeable as you’re playing the game. For a fraction of a second, screen updates freeze and stutter. This hiccup seems more likely to occur with multiple Radeons than with just one, but I don’t think single cards are entirely immune. Fortunately, this is a relatively uncommon and intermittent problem with a specific spot in this level, but it’s still annoying—more so than the impact on the Radeons’ average frame rates really indicates.

Other than that big, green, shiny fly in the ointment, the mid-range SLI and CrossFire setups continue to show up the high-end graphics cards.

Metro 2033
If Bad Company 2 has a rival for the title of best-looking game, it’s gotta be Metro 2033. This game uses DX10 and DX11 to create some of the best visuals on the PC today. You can get essentially the same visuals using either version of DirectX, but with DirectX 11, Metro 2033 offers a couple of additional options: tessellation and a DirectCompute-based depth of field shader. If you have a GeForce card, Metro 2033 will use it to accelerate some types of in-game physics calculations, since it uses the PhysX API. We didn’t enable advanced PhysX effects in our tests, though, since we wanted to do a direct comparison to the new Radeons. See here for more on this game’s exhaustively state-of-the-art technology.

Yes, Virginia, there is a game other than Crysis that requires you to turn down the image quality in order to achieve playable frame rates on a $200 graphics card. Metro 2033 is it. We had to dial back the presets two notches from the top settings and disable the performance-assassinating advanced depth-of-field effect, too.

We did leave tessellation enabled on the DX11 cards. In fact, we considered leaving out the DX10 cards entirely here, since they don’t produce exactly the same visuals. However, tessellation in this game is only used in a few specific ways, and you’ll be hard pressed to see the differences during regular gameplay. Thus, we’ve provisionally included the DX10 cards for comparison, in spite of the fact that they can’t do DX11 tessellation.

The story here, it seems to me, is the strong showing from the GeForce side of the house. When a GeForce GTX 460 1GB is faster than a Radeon HD 5870, you can probably expect that two GTX 460s will be faster than two 5870s. And that’s just what happens.

Borderlands
We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We didn’t enable antialiasing, because the game’s Unreal Engine doesn’t natively support it.

Borderlands is our lone representative from the massive contingent of titles that use the Unreal Engine, and the GeForce cards take to this game like a congressman to graft. Like Lindsay Lohan to recreational drugs. Like a mouthy wide receiver to Twitter. You can play this one at home, folks. Just one GTX 460 is about as fast as dual 5830s in CrossFire, for goshsakes.

AMD did include a performance tweak in its slightly newer Catalyst 10.7 drivers aimed at Borderlands, but the change was targeted at cases where anti-aliasing is enabled via the driver control panel. Just to be sure it didn’t affect the outcome of our tests, we installed the Cat 10.7 drivers and re-tested the Radeon HD 5850. Performance was unchanged.

SLI performance scales well here, and dual GTX 460 768MB cards slightly outrun the GTX 480 AMP! once more. CrossFire is another story. Dual 5830s produce lower frame rates than one 5870.

Power consumption
Since we have a number of non-reference GeForce cards among the field, we decided to test them individually against Nvidia’s reference cards in this portion of the review, so we could see how custom coolers and clock speeds affect power draw, noise, and operating temperatures. The results should give us a sense of whether these changes really add value.

We measured total system power consumption at the wall socket using our fancy new Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 1920×1200 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead because we’ve found that this game’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

You will pay a bit of a penalty in the form of additional power consumption if you go the multi-GPU route, generally speaking. However, in the contest between a single GeForce GTX 480 and two GTX 460s, that penalty is really quite slim. Dual GTX 460s, both in 768MB and 1GB form, pull a little less juice at idle than the GTX 480 reference card and a little more than Zotac’s GTX 480 AMP!. (Yeah, I’m going with a period there.) Fire up a game, and the GTX 460s draw more power, especially the 1GB cards, but the difference at most versus the GTX 480 AMP! is about 50W.

Because they’re based on the lightweight Juniper chip, two Radeon HD 5770 cards in CrossFireX don’t draw too much more power than a single 5870, but the dual 5830s are another story. This particular product’s combination of a Cypress chip with portions disabled and relatively high clock speeds makes it a bit of a power hog among current Radeons. Still, two 5830s draw less power under load than our GTX 460 1GB SLI setup, because the Radeons tend to draw less power at any point on the price spectrum.

The question of power efficiency, though, is more complex. For example, the Radeon HD 5870 uses slightly less power than the GTX 470, but which card is faster depends on the game. The 5870 costs about $100 more than the GTX 470, too, and that gap will cover an awful lot of the GTX 470’s extra impact on your electric bill. Another example: dual GTX 460s in SLI are still fairly power-efficient compared to, say, a GTX 480 or two 5830s, since they’re faster overall.
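The efficiency trade-off boils down to performance per watt. Here’s a sketch of that arithmetic; the frame rates and wattages are hypothetical placeholders, not our measured results:

```python
def perf_per_watt(fps, system_watts):
    """Frames per second delivered per watt of total system power draw."""
    return fps / system_watts

# Hypothetical illustration: a dual-card setup can be the more efficient
# option when its performance lead outweighs its extra power draw.
single = perf_per_watt(60, 320)   # one high-end card (made-up numbers)
dual   = perf_per_watt(85, 380)   # two mid-range cards in SLI (made-up numbers)
print(dual > single)  # True in this invented case
```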

Among the custom GeForce cards, the tweaked clock speeds of the MSI and Gigabyte GTX 460s haven’t caused power use to rise much. In fact, the Gigabyte consumes 5W less under load than its reference counterpart. The biggest surprise, though, is the Zotac GTX 480 AMP!, which magically needs less power than the stock-clock reference board. Although one could attribute this minor miracle to a more efficient board design or better chip binning, I suspect the Zotac’s lower power use comes courtesy of its lower operating temperatures. Cooler chips draw less power, and, well… keep reading.

Noise levels
We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.
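One wrinkle worth remembering when reading the charts: decibels are logarithmic, so noise sources don’t add the way watts do. A quick sketch, assuming ideal uncorrelated sources:

```python
import math

def combine_spl(levels_db):
    """Combine uncorrelated noise sources: convert each level to relative
    power, sum the powers, and convert back to decibels."""
    total_power = sum(10 ** (db / 10) for db in levels_db)
    return 10 * math.log10(total_power)

# Two identical 40 dB fans yield about 43 dB, not 80 dB -- one reason
# adding a second card raises the meter reading only modestly.
print(round(combine_spl([40, 40]), 1))  # 43.0
```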

We should start here by talking about the GeForce cards with custom coolers, since they’re also used in the SLI pairings. Our best individual performer is the Gigabyte GTX 460 768MB, which is admirably quiet. MSI’s Cyclone also produces less noise than its corresponding reference card at idle, but it’s louder when running a game. The custom blower on Zotac’s GTX 460 1GB has the dubious distinction of being the worst of the lot; it has some sort of rattle that may or may not be a unique quirk of our review unit.

Zotac finds redemption, though, thanks to the triple-slot wonder that is the GTX 480 AMP!. The thing makes less noise than a Radeon HD 5850 while running a game, believe it or not. I’d say that’s worth another expansion slot, if you have the room.

Unfortunately, teaming up the relatively loud Zotac GTX 460 1GB and the MSI Cyclone isn’t exactly an acoustic win. Not wanting to go easy on the fan-based coolers, I made the Cyclone the primary, interior card in the pair, so that it had to perform with another card situated right next to its fan. That caused the Cyclone to work especially hard while the Zotac clattered away, and the results on the decibel meter mirror my own impressions. The dual GTX 460 1GB cards are even louder than a couple of GTX 465s or 470s with their stock coolers, much noisier than one would expect given the amount of power being dissipated. I suspect a pair of reference coolers would generate quite a bit less noise.

Before you give up on the potential acoustic benefits of multi-GPU configs, compare the GTX 460 768MB SLI setup to the single GTX 480 reference card. These two solutions have comparable performance and comparable amounts of power to dissipate, yet the dual-card setup is markedly quieter. All of the CrossFireX setups are louder than their single-card variants, but not necessarily by much. Some of the pairings, such as the 5850s and 5830s, are pretty good acoustic citizens.

GPU temperatures
We used GPU-Z to log temperatures during our load testing. For the multi-GPU options, we’ve reported the temperature from the primary GPU, which is generally the warmest. We had to leave out the GeForce GTX 260, because it was reporting some obviously incorrect values.

Sandwiching a pair of cards together in a multi-GPU team almost inescapably leads to higher operating temperatures for the one whose cooler intake is partially obscured. A couple of the mid-range products with fan-based custom coolers, the XFX 5830 and the Gigabyte GTX 460 768MB, end up in the 90° C range, otherwise the domain of the GF100-based cards.

The single-GPU results are a rather different story. Say this for MSI’s Cyclone: it’s pretty aggressively tuned for keeping temperatures low. That helps explain why it’s not particularly quiet under load, and those modest temperatures could produce some additional overclocking headroom. Both the Gigabyte GTX 460 and the Zotac GTX 480 AMP! are keeping things cool, as well.

Conclusions
We can summarize our results with the help of one of our infamous price-performance scatter plots. The plot below was created via the same formula used in our recent GPU value roundup, though updated with more configs and current pricing. As ever, the best combinations of price and performance will gravitate toward the upper left corner of the plot, while the worst will be closer to the lower right.
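For readers curious about the mechanics, here’s a toy sketch of how such a ranking might be assembled from price and performance data. The card names and numbers below are generic placeholders, not our measured results, and the actual formula is the one described in our value roundup:

```python
# Illustrative placeholders only -- not measured data.
configs = {
    "Card A":    {"price": 500, "perf": 100},
    "Card B x2": {"price": 400, "perf": 105},
    "Card C x2": {"price": 440, "perf": 110},
}

def value_score(cfg):
    """Performance per dollar; higher scores land toward the
    upper-left corner of a price-performance scatter plot."""
    return cfg["perf"] / cfg["price"]

ranked = sorted(configs, key=lambda name: value_score(configs[name]),
                reverse=True)
print(ranked[0])  # "Card B x2" -- the best value in this made-up field
```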

One impression hasn’t changed since our value roundup: the latest GeForces tend to be better values at present than AMD’s 5000-series Radeons. That’s a clear reversal of fortunes since the GeForce GTX 400 series’ somewhat underwhelming debut, when the GTX 480 was really no faster than the Radeon HD 5870. The Fermi architecture is still quite new, and Nvidia has extracted enough additional performance through driver tuning to put the GTX 470 on equal footing with the 5870—and the GTX 480 well ahead. That’s true across a range of games, not just those where the GeForces have an apparent advantage, like Metro 2033 and Borderlands. The addition of the cheaper and more architecturally efficient GeForce GTX 460 has further solidified Nvidia’s value leadership.

As for the question of the hour, yes, a multi-GPU solution can provide better value than a higher-end single card. Dual GeForce GTX 460s of either flavor, 768MB or 1GB, will set you back less cash and deliver higher frame rates than the GeForce GTX 480 AMP!, as the scatter plot shows. In fact, the 460 SLI options are among the best values we tested. Also, we learned on the previous page that these SLI rigs have comparable power draw to a GTX 480, and their noise levels can be similar with the right cooling solution.

The GTX 460 SLI setups have exceptional performance, too. They should run most games competently at four megapixels with robust image quality settings, and they’ll likely handle a six-megapixel array of three 1080p monitors quite nicely, too—which a lone GTX 480 can’t do, with its limit of two simultaneous display outputs. One caveat: I’m concerned about the 768MB cards running out of video memory at higher resolutions in some cases, as they did in DiRT 2‘s DX11 mode at 2560×1600. If I were spending the money, I’d spring for the 1GB cards.

Among the Radeons, the multi-GPU value question is murkier. Going with a couple of Radeon HD 5830s in CrossFire instead of a 5870 will net you superior performance for a little more money, but it will come with substantially higher power draw and a bit more noise. Dual Radeon HD 5770s in CrossFire will get you near-5870 frame rates for less dough, but a lone GeForce GTX 470 would probably be a smarter choice. The GTX 470 is better in terms of performance, price, power consumption, noise levels, and expansion space occupied. The only drawback to that plan: any of these Radeons will drive three displays at once via Eyefinity, and the GTX 470 tops out at two.

Strangely enough, one of the best values in the Radeon camp is our three-way Radeon HD 5770 team, which performs quite well overall in spite of a few obvious stumbles in certain games. The three-way 5770 setup lands on our price-performance scatter plot in a position clearly preferable to the GTX 480 AMP!, though it’s not in quite as nice a place as the GTX 460 SLI options. This trio’s acoustics and power consumption are reasonable given its performance, also. I’m not sure I’d want to deal with the hassle of a six-slot solution that doesn’t always perform as well as it should, but there is a case to be made for it, nonetheless.
