SLI vs. CrossFireX: The DX11 generation

As I learned from a trip to KFC this summer, doubling down can have its risks and its rewards. Sadly, the Colonel’s new sandwich wasn’t exactly the rewarding explosion of bacon-flavored goodness for which I’d hoped. Eating it mostly involved a lot of chewing and thinking about my health, which got tiresome. Still, I had to give it a shot, because the concept held such promise for meat-based confections.

If there’s one thing I enjoy as much as dining on cooked meats, it’s consuming the eye candy produced by a quality GPU. (Yes, I’m doing this.) Happily, doubling down on a good graphics card can be much tastier than anything the Colonel has managed to serve in the past 15 years, and thermal grease isn’t nearly as nasty as the stuff soaking through the bottom of that red-and-white cardboard bucket. The latest GPUs support DirectX 11’s secret blend of herbs and spices, and the recently introduced GeForce GTX 460 has set a new standard for price and performance among them.

In fact, at around 200 bucks, the GTX 460 is a good enough value to raise an intriguing question: Is there any reason to plunk down the cash for an expensive high-end graphics card when two of these can be had for less?

With this and many other questions in mind, we fired up the test rigs in Damage Labs and set to work, testing a ridiculous 23 different configurations of one, two, and, yes, three graphics cards against one another for performance, power draw, noise, and value. Could it be that doubling down on mid-range graphics cards is a better path to gaming enjoyment? How does, well, nearly everything else perform in single and multi-GPU configs? Let’s see what we can find out.

The case for multiple GPUs

Multi-GPU schemes have been around for quite a while now, simply because they’re an effective way to achieve higher performance. The very parallelizable nature of graphics as a computing problem means two GPUs have the potential to deliver nearly twice the speed of a single chip a pretty high percentage of the time. These schemes can have their drawbacks, when for one reason or another performance doesn’t scale well, but both of the major graphics players are very strongly committed to multi-GPU technology.

Heck, AMD has replaced its largest graphics processor with a multi-chip solution; its high-end graphics card is the Radeon HD 5970, prodigiously powered by dual GPUs. Multiple Radeon cards can gang up via CrossFireX technology into teams of two, three, or four GPUs, as well.

Nvidia’s SLI tops out at three GPUs and is limited to fewer, more expensive cards, but then Nvidia is still making much larger chips. A duo of GeForce GTX 480s is nothing to sneeze at—the mist would instantly vaporize due to the heat of hundreds of watts being dissipated. Also, they’re pretty fast. The green team hasn’t yet introduced a dual-GPU video card in the current generation, but it has a long history of such products stretching from the GeForce GTX 295 back to the GeForce 7950 GX2, which essentially doubled up on PlayStation 3-class GPUs way back in 2006. (Yeah, the whole PCs versus next-gen consoles hardware debate kinda ended around that time.)

Nvidia arguably kicked off the modern era of multi-GPU goodness by resurrecting the letters “SLI”, which it saw sewn into a jacket it took off the corpse of graphics chip pioneer 3dfx. Those letters originally stood for “scanline interleave” back in the day, which was how 3dfx Voodoo chips divvied up the work between them. Nvidia re-christened the term “scalable link interface,” so named for the bridge connection between two cards, and turned SLI into a feature of multiple generations of GeForces. Since then, Nvidia has expended considerable effort working with game developers to ensure smooth compatibility and solid performance scaling for SLI configurations. These days, Nvidia often adds support for new games to its drivers weeks before the game itself ships to consumers.

AMD’s answer to SLI was originally named CrossFire, but it was later updated to “CrossFireX” in order to confuse people like me. Mission accomplished! AMD hasn’t always been as vigilant about providing CrossFire support for brand-new games prior to their release, but it has recently ratcheted up its efforts by breaking out CrossFire application profiles into a separate download. Those profiles can be updated more quickly and frequently than its monthly Catalyst driver drops, if needed.

Game developers are more aware of multi-GPU solutions than ever, too, and they generally have tweaked their game engines to work properly with SLI and CrossFireX. As a result, the state of multi-GPU support is pretty decent at present, particularly for games that really need the additional graphics horsepower.

Multi-card graphics solutions can make more sense inside of a desktop gaming rig than you might first think. For instance, a pair of graphics cards can use twice the area of a single, larger card for heat dissipation, making them potentially quieter, other things being equal. Two mid-range graphics cards will draw power from two different PCIe slots, which may save you the trouble of having to accommodate a card with one of those annoying eight-pin auxiliary power connectors. And these days, the second graphics card in a pair is generally pretty good about shutting down and not requiring much power or making much noise when it’s not in use. Add up all of the considerations, and going with dual graphics cards might be less trouble than some of the pricey single-card alternatives.

The value equation can tilt in the direction of multiple cards, too, in certain cases. Let’s have a look at some of the options we’re faced with in the current market, and then we’ll consider more specifically what combinations of cards might be the best candidates for a pairing.

The cards

We’ve gathered together quite a collection of the latest graphics cards in order to make this article possible. The GeForce GTX 400 series is still relatively new, so this endeavor has given us the chance to look at quite a few new graphics cards that deviate from the reference designs established by Nvidia. There’s quite a bit of creativity in terms of custom coolers, higher clock speeds, and the like in some of these cards, sweetening the pot a bit for potential buyers.

MSI’s GeForce GTX 460 Cyclone 1GB

MSI’s GeForce GTX 460 Cyclone 1GB is emblematic of the variety in this first crop of GTX 460 cards. Although it retains the compact length of the reference cards, this puppy’s heatpipe-infused, dual-slot cooler is nothing like the usual shrouded number. The thing looks amazing, and without giving away too much, it’s pretty darned effective, too. The caveat here is that the cooler protrudes above the top edge of the card by roughly an inch, which could make it a tight fit in quite a few shallow enclosures. Your case will need some extra room above the expansion slot area in order for this card to fit comfortably.

With the upgraded cooling comes some additional clock speed headroom, and MSI capitalizes by raising clock speeds from the GTX 460’s stock 675MHz to 725MHz. (Because GPUs based on the Fermi architecture run their shader cores at twice the GPUs’ base clock frequency, that raises the shader clock from 1350MHz to 1450MHz, too.) MSI holds the line on the memory clock, though, at 900MHz—or 3.6 GT/s, since GDDR5 memory transfers data four times per clock cycle. If you’d like some additional speed beyond that, MSI’s Afterburner software allows for GPU overclocking and overvolting from within Windows.
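If you want to check that math yourself, it's simple enough. Here's a quick sketch (Python, with the clocks taken from the paragraph above) that works out the Cyclone's shader clock and effective memory transfer rate; the only thing I've added is the unit conversion.

```python
# Quick sanity check of the clock math above. The inputs come straight
# from the paragraph; nothing here is measured or vendor-supplied.

base_clock_mhz = 725                      # MSI Cyclone's raised core clock
shader_clock_mhz = 2 * base_clock_mhz     # Fermi shader domain runs at 2x base
print(f"Shader clock: {shader_clock_mhz} MHz")             # 1450 MHz

mem_clock_mhz = 900                       # GDDR5 command clock
transfer_rate_gtps = mem_clock_mhz * 4 / 1000              # GDDR5: 4 transfers/clock
print(f"Memory transfer rate: {transfer_rate_gtps} GT/s")  # 3.6 GT/s
```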

Right now, the Cyclone is selling for $234.99 at Newegg with free shipping.

Because the Cyclone is such a departure from the reference design, we’ve tested it in a single-card config in the following pages. We’ve also paired it in SLI with a stock-clocked GTX 460 1GB from Zotac. In that case, the pair should perform just like two stock-clocked cards, since the slower of the two cards will constrain their performance.

Gigabyte’s GV-N460OC-768I sports twin fans

The more affordable variant of the GeForce GTX 460 has 768MB of memory, less memory bandwidth, less pixel filling and antialiasing capacity, and a smaller L2 cache. Its shader core and base clock speeds are the same as the 1GB version’s, though, and the price is lower. End result: the GTX 460 768MB may be a pretty good value, particularly if you’re buying two.

Gigabyte has provided some additional incentive at the GTX 460 768MB’s standard $199.99 price by throwing in a custom cooler with two fans, two heat pipes, and loads of surface area. In a pinch, it could double as a means of propulsion through the Everglades. This cooler doesn’t stick up beyond the top edge of the card as much as MSI’s Cyclone, but the heatpipes do poke up a quarter of an inch or so. The cooler protrudes about three-quarters of an inch beyond the rear edge of the board, too, although that only makes the card about 9″ long in total.

Gigabyte has raised clock speeds to 715/1430MHz, with the standard 900MHz memory. Once again, we’ve paired this card with a reference one for SLI testing. The Gigabyte card was a late arrival to Damage Labs, so we didn’t have time to test it fully as a single card. We have measured its power draw, noise levels, and GPU temperatures individually, as well as in SLI.

Zotac’s GTX 465 cards form a pair

These Zotac GeForce GTX 465 cards are essentially based on Nvidia’s reference design, with no notable deviations. That’s no bad thing, for several reasons. For one, the GTX 465 is the cheapest GTX 400-series product with dual SLI connectors up top, making it the most affordable entry into three-way SLI. For another, although I like the look and single-card performance of custom coolers like those above, I worry about how well they’ll perform with another card sandwiched right up next to them, potentially starving them for air.

We’ve had problems in the past with similar, fan-based coolers from Asus overheating in multi-GPU configurations—and even in single-GPU configs where another sort of expansion card was installed in the adjacent slot. Nvidia’s cooler designs carefully leave room for air intake next to the blower, so they better tolerate cramped quarters. Furthermore, the shroud-and-blower reference coolers from both AMD and Nvidia exhaust essentially all of the hot air they move across the heatsink out the back of the case. The custom coolers with fans push air down, toward the board itself, and don’t direct much heat out of the expansion slot opening.

Beyond that, the GTX 465’s advantages over the GTX 460 are few. Since the GTX 460’s introduction, the GTX 465 has spent a lot of time alone in its bedroom, listening to Pearl Jam and writing in its journal, stopping at meal times to yell at its parents. We expect it to go into counseling soon. Perhaps that’s why the Zotac GTX 465’s price has dropped to $250 at Newegg, along with a $30 rebate. You could get it for less than the MSI Cyclone, if the rebate pays out—a big “if” in the consumer rebate biz, we must remind you.

Asus’ GTX 470 keeps it classy

For its GTX 470 offering, Asus has seen the wisdom of sticking with Nvidia’s stock cooler, and the result is a card with nicely understated looks. The one, surgical addition Asus makes to the GTX 470’s stock formula is its SmartDoctor overvolting and overclocking utility—just what you’d want to see added. The GTX 470 came out looking surprisingly good in our recent GPU value roundup, and prices have dropped since then. Asus’ version is down to $289.99 at Newegg. There’s free shipping and a $20 mail-in rebate attached, too.

Zotac’s GTX 480 AMP! wins the girth contest

This monster is Zotac’s GeForce GTX 480 AMP! edition, complete with unnecessary punctuation. When the folks at Zotac told us they’d be producing a GTX 480 with a cooler that performs better than the incredibly beefy stock unit, we responded with skepticism. “What’s it gonna be, triple slot?” After a brief pause, the answer came back: “Heh, yep.” And so it is. With two fans and more surface area than the pages of the federal budget, this thing looks to have a good chance of outperforming the dual-slot default cooler.

The two large fans are generally pretty quiet, but we ran into an annoying problem with ours. Apparently, the shroud may have been bent slightly in shipping; it was making contact with the fan blades somewhere, resulting in a constant clicking noise while the card was powered up. We removed the shroud, which is fairly light and thin, and bent it slightly to keep it out of the way of the fan. That sort of worked. Eventually, we just decided to remove one of the four screws holding the shroud because that was the best way of preventing any contact with the fan. That’s not the sort of problem one wants to encounter with a graphics card this expensive—we’re talking $509.99 at Newegg as of right now—but we’d hope our experience wasn’t typical. Zotac’s packaging for the thing is form-fitting and protective, and ours didn’t look to have been damaged in shipping.

If there’s a silver lining to the apparent fragility of the cooler, it’s the fact that this three-slot-wide contraption doesn’t weigh the card down much. The extra volume is dedicated to a handful of copper heatpipes and oodles of aluminum fins. We’ll check its performance shortly, of course.

Speaking of which, the GTX 480 AMP! is clocked at 756/1512MHz, with its GDDR5 memory at 950MHz (3.8 GT/s). That’s up slightly on all fronts from the GTX 480’s stock speeds of 700/1400/924MHz, so this card should be a little faster than the standard-issue version.

Because this thing is triple-slot, we didn’t even bother with attempting SLI. We just tested the AMP! by itself and left the SLI work to the reference cards, which were able to fit into the two main PCIe x16 slots on our test motherboard and take advantage of all 16 lanes of PCIe bandwidth per slot.

Naturally, we’ve tested against a broad range of DirectX 11 Radeons in single- and dual-GPU CrossFireX configurations. There are some inherent rivalries here at certain price points—such as the Radeon HD 5830 versus the GeForce GTX 460 768MB at around 200 bucks, or the GTX 470 versus the Radeon HD 5850 at just under $300—but not all of the match-ups are so direct. Nvidia currently has no analog for the $150-ish Radeon HD 5770, for instance, and AMD doesn’t have an answer for the GTX 460 1GB or GTX 465 in the mid-$200 range. The mismatches grow more obvious at the high end, where the Radeon HD 5870 (~$400) and the 5970 (~$650+!) straddle the GTX 480’s ~$450-510 asking prices.

Some scenarios worth considering

Now that we’ve reviewed the host of DX11 graphics cards currently available, we can look at some intriguing potential comparisons between single- and multi-card setups. We’ll start with the most obvious one, perhaps, between dual GeForce GTX 460 cards and a single GTX 480. These two solutions are priced very similarly, since a single GTX 460 768MB is about $200, a single GTX 460 1GB runs about $235, and a GTX 480 will set you back at least $450. We’ll want to watch the capabilities, performance, power draw, and noise levels of these competing alternatives in the following pages.

Similarly, AMD’s Radeon HD 5870 will have to work hard to outdo a pair of Radeon HD 5770 or 5830 cards. The Juniper GPU in the 5770 is, in most respects, just a sawed-in-half version of the Cypress chip in the 5870, so the 5770 CrossFireX pair ought to perform comparably to a 5870. Dual 5830s, based on a cut-down Cypress, should be even faster.

The 5770 is selling for about $150 right now, so an unusual option opens up to us: a triple-team of 5770s in a three-way CrossFireX config. For only ~$450, they won’t run you much more than a 5870, and since each card only requires a single six-pin PCIe power connector, they won’t tax your PSU like most triple-CrossFire setups would. Yeah, they’re going to chew up six expansion slots, but many high-end boards have nearly everything else integrated these days. Our test system’s X58 motherboard is PCIe-rich enough to support a three-way CrossFireX config reasonably well. The primary slot retains all 16 PCIe Gen2 lanes, while the second and third slots each get eight PCIe Gen2 lanes. That ought to be enough bandwidth, but keep in mind that your run-of-the-mill P55 motherboard—or, indeed, anything based on a Lynnfield processor with integrated PCI Express—will only have 16 full-speed PCIe Gen2 lanes in total.

Of course, there’s plenty of potential for additional comparisons here. A triple-5770 array might be the perfect foil for a GeForce GTX 480, for example, and if two GTX 460s work out well, they could give the Radeon HD 5870 more than it can handle. We’ll try to keep an eye on some of the key match-ups we’ve identified above, but the scope of our test results makes many more comparisons possible, up to very expensive dual-card configs.

Test notes

Beyond the higher-than-stock clocks on the cards we’ve already mentioned, our Asus ENGTX260 TOP SP216 card’s core and shader clocks are 650 and 1400MHz, respectively, and its memory speed is 2300 MT/s. The GTX 260 displayed uncommon range during its lifespan, adding an additional SP cluster and getting de facto higher clock speeds on shipping products over time. The Asus card we’ve included represents the GTX 260’s highest point, near the end of its run.

Similarly, the Radeon HD 4870 we’ve tested is the later version with 1GB of memory.

Many of our performance tests are scripted and repeatable, but for a couple of games, Battlefield: Bad Company 2 and Metro 2033, we used the FRAPS utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each FRAPS sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from FRAPS for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.
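There's nothing exotic about how we boil down those repeated FRAPS passes. The sketch below illustrates the idea with invented numbers standing in for real runs, and it shows both a mean and a median, since either works for damping run-to-run noise.

```python
# Toy illustration of summarizing repeated FRAPS runs. The frame rates
# below are invented for the example, not actual test data.
from statistics import mean, median

# Average FPS from five passes through the same 60-second sequence
runs_fps = [61.2, 58.9, 60.4, 62.0, 59.5]

# Repeating the run and summarizing damps the variability inherent
# in manually played test sessions.
print(f"Mean of {len(runs_fps)} runs:   {mean(runs_fps):.1f} FPS")
print(f"Median of {len(runs_fps)} runs: {median(runs_fps):.1f} FPS")
```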

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor           Core i7-965 Extreme 3.2GHz
Motherboard         Gigabyte EX58-UD5
North bridge        X58 IOH
South bridge        ICH10R
Memory size         12GB (6 DIMMs)
Memory type         Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings      8-8-8-24 2T
Chipset drivers     INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio               Integrated ICH10R/ALC889A with Realtek R2.49 drivers
Graphics            Radeon HD 4870 1GB with Catalyst 10.6 drivers
                    Gigabyte Radeon HD 5770 1GB with Catalyst 10.6 drivers
                    Gigabyte Radeon HD 5770 1GB + Radeon HD 5770 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
                    Gigabyte Radeon HD 5770 1GB + Radeon HD 5770 1GB + Radeon HD 5770 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
                    XFX Radeon HD 5830 1GB with Catalyst 10.6 drivers
                    XFX Radeon HD 5830 1GB + Radeon HD 5830 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
                    Radeon HD 5850 1GB with Catalyst 10.6 drivers
                    Dual Radeon HD 5850 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
                    Asus Radeon HD 5870 1GB with Catalyst 10.6 drivers
                    Asus Radeon HD 5870 1GB + Radeon HD 5870 1GB with Catalyst 10.6 drivers & 6/23/10 application profiles
                    Radeon HD 5970 2GB with Catalyst 10.6 drivers & 6/23/10 application profiles
                    Asus ENGTX260 TOP SP216 GeForce GTX 260 896MB with ForceWare 258.80 drivers
                    GeForce GTX 460 768MB with ForceWare 258.80 drivers
                    Dual GeForce GTX 460 768MB with ForceWare 258.80 drivers
                    Zotac GeForce GTX 460 1GB with ForceWare 258.80 drivers
                    MSI N460GTX Cyclone GeForce GTX 460 1GB with ForceWare 258.80 drivers
                    MSI N460GTX Cyclone GeForce GTX 460 1GB + Zotac GeForce GTX 460 1GB with ForceWare 258.80 drivers
                    Zotac GeForce GTX 465 1GB with ForceWare 258.80 drivers
                    Dual Zotac GeForce GTX 465 1GB with ForceWare 258.80 drivers
                    GeForce GTX 470 1280MB with ForceWare 258.80 drivers
                    Asus ENGTX470 GeForce GTX 470 1280MB + GeForce GTX 470 1280MB with ForceWare 258.80 drivers
                    Zotac GeForce GTX 480 AMP! 1536MB with ForceWare 258.80 drivers
                    Dual GeForce GTX 480 1536MB with ForceWare 258.80 drivers
Hard drive          WD Caviar SE16 320GB SATA
Power supply        PC Power & Cooling Silencer 750 Watt
OS                  Windows 7 Ultimate x64 Edition with the June 2010 DirectX runtime update

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Sapphire, Zotac, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

                                   Peak pixel    Peak bilinear    Peak memory   Peak shader   Peak
                                   fill rate     INT8 texel       bandwidth     arithmetic    rasterization
                                   (Gpixels/s)   filtering rate*  (GB/s)        (GFLOPS)      rate (Mtris/s)
                                                 (Gtexels/s)
GeForce GTX 260 (216 SPs)          18.2          46.8             128.8         605           650
GeForce GTX 460 768MB              16.2          37.8             86.4          907           1350
GeForce GTX 460 1GB                21.6          37.8             115.2         907           1350
MSI GeForce GTX 460 1GB Cyclone    23.2          40.6             115.2         974           1450
GeForce GTX 465                    19.4          26.7             102.6         855           1821
GeForce GTX 470                    24.3          34.0             133.9         1089          2428
GeForce GTX 480                    33.6          42.0             177.4         1345          2800
GeForce GTX 460 768MB x2           32.4          75.6             172.8         1814          2700
GeForce GTX 460 1GB x2             43.2          75.6             230.4         1814          2700
Zotac GeForce GTX 480 AMP!         36.3          45.4             182.4         1452          3024
Radeon HD 4870                     12.0          30.0             115.2         1200          750
Radeon HD 5770                     13.6          34.0             76.8          1360          850
Radeon HD 5830                     12.8          44.8             128.0         1792          800
Radeon HD 5850                     23.2          52.2             128.0         2088          725
Radeon HD 5870                     27.2          68.0             153.6         2720          850
Radeon HD 5770 x2                  27.2          68.0             153.6         2720          1700
Radeon HD 5830 x2                  25.6          89.6             256.0         3584          1600
Radeon HD 5770 x3                  40.8          102.0            230.4         4080          2550
Radeon HD 5970                     46.4          116.0            256.0         4640          1450

*FP16 texel filtering is half the INT8 rate.

The figures above represent theoretical peaks for the GPUs in question. Delivered performance, as we’ll see, is often lower. These numbers are still a useful indicator, especially when comparing cards and chips based on the same basic architecture. I’ve included entries for some of our key multi-GPU setups, but not all of them, in the interests of manageability.
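If you're curious where numbers like these come from, they're just unit counts multiplied by clock speeds. The sketch below reproduces the GeForce GTX 460 1GB's row from its commonly published specs; the unit counts (32 ROPs, 56 texture units, 336 ALUs, two rasterizers, a 256-bit memory bus) are my assumptions here, not figures pulled from the table itself.

```python
# Rough reconstruction of the GeForce GTX 460 1GB's row in the table above.
# The unit counts are commonly published GF104 specs and are assumptions
# on my part, not data from the table.

core_mhz, shader_mhz, mem_gtps = 675, 1350, 3.6

rops = 32            # pixels written per clock
texture_units = 56   # bilinear INT8 texels filtered per clock
alus = 336           # shader ALUs; 2 FLOPs each per clock (multiply-add)
rasterizers = 2      # triangles rasterized per clock, one per GPC
bus_bits = 256       # memory interface width

print("Pixel fill:    %.1f Gpixels/s" % (rops * core_mhz / 1000))           # 21.6
print("Texel filter:  %.1f Gtexels/s" % (texture_units * core_mhz / 1000))  # 37.8
print("Bandwidth:     %.1f GB/s"      % (mem_gtps * bus_bits / 8))          # 115.2
print("Shader math:   %.0f GFLOPS"    % (alus * 2 * shader_mhz / 1000))     # 907
print("Rasterization: %.0f Mtris/s"   % (rasterizers * core_mhz))           # 1350
```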

You can see that a pair of GTX 460 1GB cards has substantially higher peak throughput potential than a single GTX 480 in nearly every respect, particularly in texture filtering capacity, since just one GTX 460 isn’t far off the peak for a GTX 480.

A new addition to our table this time is the rasterization rate in millions of triangles per second, and that’s one place where the formidable internal parallelism of the GF100 GPU pays off. A single GTX 480 can theoretically reach rasterization rates higher than two GTX 460s. AMD’s “Evergreen” GPUs can rasterize only a single triangle per clock cycle, so going multi-GPU is the best way to scale up that capability. Thus, the dual 5770s are an exact match for a single 5870 in every category except rasterization, where the 5770s have twice the peak rate. Our rasterization champ among the Radeons is the three-way 5770 setup. In fact, the trio of 5770s is 50% faster than a single 5870 in every respect.

The very high triangle throughput rates for some of these solutions—anything over the single-Radeon peak of about 850 Mtris/s—aren’t likely to make a difference in today’s games, but they could become important if future DX11 titles make extensive use of tessellation and ramp up the geometric complexity.

We’ve grown increasingly dissatisfied with the texture fill rate tool in 3DMark Vantage, so we’ve reached back into the cupboard and pulled out an old favorite, D3D RightMark, to test texture filtering performance.

Unlike 3DMark, this tool lets us test a range of filtering types, not just texture sampling rates. Unfortunately, D3D RightMark won’t test FP16 texture formats, but integer texture formats are still pretty widely used in games. I’ve plotted a range of results below, and to make things more readable, I’ve broken out a couple of filtering types into bar charts, as well. Since this test isn’t compatible with SLI, we’ve omitted those results. We’ve also left the CrossFire configs out of the line plot for the sake of readability.

The Radeons perform pretty well here. The individual cards aren’t too far from their theoretical peaks in the bilinear filtering test, but interestingly enough, the multi-GPU solutions are even closer to theirs. Thus, dual Radeon HD 5770s outperform a single Radeon HD 5870, even though they’re evenly matched in theory. Our three-way 5770 setup is fairly efficient, too.

The GeForces become relatively stronger as we transition from simple bilinear filtering to nice, strong aniso—the sort of filtering you’ll use if you want your games to look good. Still, Nvidia has taken a bit of a step backwards from the GTX 260 to the GTX 470, even though the Fermi architecture reaches closer to its theoretical max. Nvidia’s true strength here is the GTX 460 and its GF104 GPU, whose mix of internal resources is more biased toward texture throughput than the GF100’s.

As I’ve noted before, the Unigine Heaven demo’s “extreme” tessellation mode isn’t a very smart use of DirectX 11 tessellation, with too many triangles and little corresponding improvement in image quality. I think that makes it a poor representation of graphics workloads in future games and thus a poor benchmark of overall GPU performance.

Pushing through all of those polygons does have its uses, though. This demo should help us tease out the differences in triangle throughput between these GPUs. To do so, we’ve tested at the relatively low resolution of 1680×1050, with 4X anisotropic filtering and no antialiasing. Shaders were set to “high” and tessellation to “extreme.”

We’ve not talked much in the past about multi-GPU tech’s benefits for polygon throughput, but they are quite real, as you can see. If the application works well with everybody’s preferred load-balancing method, alternate frame rendering, then GPU 1 will handle one frame and GPU 2 the next. Each chip contributes its full triangle throughput to the task, and performance should scale just as well as it does with pixel shading or texture filtering.
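To make the alternate frame rendering idea concrete, here's a bare-bones sketch of how the frames get divvied up between chips. This is purely an illustration of the round-robin concept, not anyone's actual driver code.

```python
# Toy illustration of alternate frame rendering (AFR) load balancing.
# Not real driver code -- just the round-robin scheme described above.

def assign_frames_afr(num_frames, num_gpus):
    """Hand out frame indices to GPUs in round-robin fashion."""
    schedule = {gpu: [] for gpu in range(num_gpus)}
    for frame in range(num_frames):
        schedule[frame % num_gpus].append(frame)
    return schedule

# With two GPUs, GPU 0 renders frames 0, 2, 4... and GPU 1 renders 1, 3, 5...
# Each chip brings its full triangle and shader throughput to bear, so the
# ideal case approaches 2x scaling.
for gpu, frames in assign_frames_afr(8, 2).items():
    print(f"GPU {gpu}: frames {frames}")
```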

Nvidia’s architectural advantage is very much on display here, too, as the GTX 480 AMP! proves faster than dual Radeon HD 5870s. That edge is blunted if you pile up lots of Radeons; the trio of 5770s matches the GTX 480 AMP! exactly.

Aliens vs. Predator

The new AvP game uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards. The use of DX11 effects ruled out the use of older, DX10-class video cards, so we’ve excluded them here.

Dual Radeon HD 5830s in CrossFire are a very close match for dual GTX 460 768MB cards in SLI—and both perform very similarly to the GTX 480 AMP! (Do I put a period after that? Hmm.) The two GTX 460 1GB cards are consistently quicker than the GTX 480. All of these mid-range dual-GPU options are solidly faster than the Radeon HD 5870, too.

Our triple-5770 array stumbles a bit in this case. Poor performance scaling like we’re seeing here is something you risk with a multi-GPU setup, and the risk is more acute once you venture beyond two GPUs.

Just Cause 2

I’ve already sunk more hours than I’d care to admit into this open-world adventure, and I feel another bout coming on soon. JC2 has some flashy visuals courtesy of DirectX 10, and the sheer scope of the game world is breathtaking, as are the resulting view distances.

Although JC2 includes a couple of visual effects generated by Nvidia’s CUDA GPU-computing API, we’ve left those disabled for our testing. The CUDA effects are only used sparingly in the game, anyhow, and we’d like to keep things even between the different GPU brands. I do think the water simulation looks gorgeous, but I’m not so impressed by the Bokeh filter used for depth-of-field effects.

We tested performance with JC2‘s built-in benchmark, using the “Dark Tower” sequence.

Chalk up another win for the mid-range multi-GPU setups against similarly priced high-end graphics cards. The dual Radeon HD 5770s can’t quite keep pace with the 5870, and two 5830s can’t match the GTX 480 AMP!, but the dual GTX 460s and triple 5770s are faster than any single-GPU solution.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2‘s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

In this game’s DirectX 9 mode, the multi-GPU solutions have the distinction of not being strictly, well, necessary. The lowest frame rate we’re seeing on any card is 29 FPS at the four-megapixel resolution of 2560×1600—and that’s with 8X antialiasing.

The dual-GPU solutions do achieve higher average and minimum frame rates, for what it’s worth. The only one that struggles is, again, the triple-CrossFire setup.

DiRT 2: DX11

The three-way 5770 config is back in the saddle in DirectX 11, delivering frame rates better than even the Radeon HD 5970 or dual GTX 470s, amazingly enough. That’s… unexpectedly excellent. The GTX 460 1GB SLI setup outruns the GTX 480 AMP! again, too, as do the dual Radeon HD 5830s.

However, the 768MB version of the GeForce GTX 460 struggles mightily, both with one card and two, at 2560×1600. Looks like it’s running out of video memory, leading to a big performance drop. As you may know, doubling up on video cards won’t double your effective video RAM; each GPU must keep its own copy of the working data, so the effective memory size is unchanged. In fact, SLI and CrossFireX require a little bit of memory overhead.

Battlefield: Bad Company 2

BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

Since these are all relatively fast graphics cards, we turned up all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Granted, testing at this resolution makes things easy on the high-end multi-GPU solutions, but since we’re playing through the game manually, we wanted to keep frame rates playable on the slowest single-card configs.

Notice that for some of the Radeons in the middle plot above—including the 5770 CrossFireX x3, the 5970, and the 5830 CrossFireX—there’s a big frame rate drop at about 7-9 seconds into the test session. Although the FPS counter only drops to 60-70, the seat-of-your-pants impact of this problem is very noticeable as you’re playing the game. For a fraction of a second, screen updates freeze and stutter. This hiccup seems more likely to occur with multiple Radeons than with just one, but I don’t think single cards are entirely immune. Fortunately, this is a relatively uncommon and intermittent problem with a specific spot in this level, but it’s still annoying—more so than the impact on the Radeons’ average frame rates really indicates.

Other than that big, green, shiny fly in the ointment, the mid-range SLI and CrossFire setups continue to show up the high-end graphics cards.

Metro 2033

If Bad Company 2 has a rival for the title of best-looking game, it’s gotta be Metro 2033. This game uses DX10 and DX11 to create some of the best visuals on the PC today. You can get essentially the same visuals using either version of DirectX, but with DirectX 11, Metro 2033 offers a couple of additional options: tessellation and a DirectCompute-based depth of field shader. If you have a GeForce card, Metro 2033 will use it to accelerate some types of in-game physics calculations, since it uses the PhysX API. We didn’t enable advanced PhysX effects in our tests, though, since we wanted to do a direct comparison to the new Radeons. See here for more on this game’s exhaustively state-of-the-art technology.

Yes, Virginia, there is a game other than Crysis that requires you to turn down the image quality in order to achieve playable frame rates on a $200 graphics card. Metro 2033 is it. We had to dial back the presets two notches from the top settings and disable the performance-assassinating advanced depth-of-field effect, too.

We did leave tessellation enabled on the DX11 cards. In fact, we considered leaving out the DX10 cards entirely here, since they don’t produce exactly the same visuals. However, tessellation in this game is only used in a few specific ways, and you’ll be hard pressed to see the differences during regular gameplay. Thus, we’ve provisionally included the DX10 cards for comparison, in spite of the fact that they can’t do DX11 tessellation.

The story here, it seems to me, is the strong showing from the GeForce side of the house. When a GeForce GTX 460 1GB is faster than a Radeon HD 5870, you can probably expect that two GTX 460s will be faster than two 5870s. And that’s just what happens.

Borderlands

We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We didn’t enable antialiasing, because the game’s Unreal Engine doesn’t natively support it.

Borderlands is our lone representative from the massive contingent of titles that use the Unreal Engine, and the GeForce cards take to this game like a congressman to graft. Like Lindsay Lohan to recreational drugs. Like a mouthy wide receiver to Twitter. You can play this one at home, folks. Just one GTX 460 is about as fast as dual 5830s in CrossFire, for goshsakes.

AMD did include a performance tweak in its slightly newer Catalyst 10.7 drivers aimed at Borderlands, but the change was targeted at cases where anti-aliasing is enabled via the driver control panel. Just to be sure it didn’t affect the outcome of our tests, we installed the Cat 10.7 drivers and re-tested the Radeon HD 5850. Performance was unchanged.

SLI performance scales well here, and dual GTX 460 768MB cards slightly outrun the GTX 480 AMP! once more. CrossFire is another story. Dual 5830s produce lower frame rates than one 5870.

Power consumption

Since we have a number of non-reference GeForce cards among the field, we decided to test them individually against Nvidia’s reference cards in this portion of the review, so we could see how custom coolers and clock speeds affect power draw, noise, and operating temperatures. The results should give us a sense of whether these changes really add value.

We measured total system power consumption at the wall socket using our fancy new Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 1920×1200 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead because we’ve found that this game’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

You will pay a bit of a penalty in the form of additional power consumption if you go the multi-GPU route, generally speaking. However, in the contest between a single GeForce GTX 480 and two GTX 460s, that penalty is really quite slim. Dual GTX 460s, both in 768MB and 1GB form, pull a little less juice at idle than the GTX 480 reference card and a little more than Zotac’s GTX 480 AMP!. (Yeah, I’m going with a period there.) Fire up a game, and the GTX 460s draw more power, especially the 1GB cards, but the difference at most versus the GTX 480 AMP! is about 50W.

Because they’re based on the lightweight Juniper chip, two Radeon HD 5770 cards in CrossFireX don’t draw too much more power than a single 5870, but the dual 5830s are another story. This particular product’s combination of a Cypress chip with portions disabled and relatively high clock speeds makes it a bit of a power hog among current Radeons. Still, two 5830s draw less power under load than our GTX 460 1GB SLI setup, because the Radeons tend to draw less power at any point on the price spectrum.

The question of power efficiency, though, is more complex. For example, the Radeon HD 5870 uses slightly less power than the GTX 470, but which card is faster depends on the game. The 5870 costs about $100 more than the GTX 470, too, and that gap will cover an awful lot of the GTX 470’s extra impact on your electric bill. Another example: dual GTX 460s in SLI are still fairly power-efficient compared to, say, a GTX 480 or two 5830s, since they’re faster overall.
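To put rough numbers on that claim, consider a back-of-the-envelope estimate. The inputs below (50W of extra draw while gaming, 20 hours of gaming per week, and electricity at 11 cents per kWh) are assumptions of mine for illustration, not measured figures.

```python
# Back-of-the-envelope cost of a card's extra power draw.
# All three inputs are illustrative assumptions, not measurements.

extra_watts = 50               # additional draw under load vs. the competing card
gaming_hours_per_week = 20
cost_per_kwh = 0.11            # dollars

kwh_per_year = extra_watts / 1000 * gaming_hours_per_week * 52
print(f"Extra energy: {kwh_per_year:.0f} kWh/year")               # 52 kWh
print(f"Extra cost:   ${kwh_per_year * cost_per_kwh:.2f}/year")   # $5.72
```

At that rate, a $100 price gap takes many years of gaming to close, which is why the sticker price tends to dominate the value math.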

Among the custom GeForce cards, the tweaked clock speeds of the MSI and Gigabyte GTX 460s haven’t caused power use to rise much. In fact, the Gigabyte consumes 5W less under load than its reference counterpart. The biggest surprise, though, is the Zotac GTX 480 AMP!, which magically needs less power than the stock-clock reference board. Although one could attribute this minor miracle to a more efficient board design or better chip binning, I suspect the Zotac’s lower power use comes courtesy of its lower operating temperatures. Cooler chips draw less power, and, well… keep reading.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

We should start here by talking about the GeForce cards with custom coolers, since they’re also used in the SLI pairings. Our best individual performer is the Gigabyte GTX 460 768MB, which is admirably quiet. MSI’s Cyclone also produces less noise than its corresponding reference card at idle, but it’s louder when running a game. The custom blower on Zotac’s GTX 460 1GB has the dubious distinction of being the worst of the lot; it has some sort of rattle that may or may not be a unique quirk of our review unit.

Zotac finds redemption, though, thanks to the triple-slot wonder that is the GTX 480 AMP!. The thing makes less noise than a Radeon HD 5850 while running a game, believe it or not. I’d say that’s worth another expansion slot, if you have the room.

Unfortunately, teaming up the relatively loud Zotac GTX 460 1GB and the MSI Cyclone isn’t exactly an acoustic win. Not wanting to go easy on the fan-based coolers, I made the Cyclone the primary, interior card in the pair, so that it had to perform with another card situated right next to its fan. That caused the Cyclone to work especially hard while the Zotac clattered away, and the results on the decibel meter mirror my own impressions. The dual GTX 460 1GB cards are even louder than a couple of GTX 465 or 470s with the stock coolers, much noisier than one would expect given the amount of power being dissipated. I suspect a pair of reference coolers would generate quite a bit less noise.

Before you give up on the potential acoustic benefits of multi-GPU configs, compare the GTX 460 768MB SLI setup to the single GTX 480 reference card. These two solutions have comparable performance and comparable amounts of power to dissipate, yet the dual-card setup is markedly quieter. All of the CrossFireX setups are louder than their single-card variants, but not necessarily by much. Some of the pairings, such as the 5850s and 5830s, are pretty good acoustic citizens.

GPU temperatures

We used GPU-Z to log temperatures during our load testing. For the multi-GPU options, we’ve reported the temperature from the primary GPU, which is generally the warmest. We had to leave out the GeForce GTX 260, because it was reporting some obviously incorrect values.

Sandwiching a pair of cards together in a multi-GPU team almost inescapably leads to higher operating temperatures for the one whose cooler intake is partially obscured. A couple of the mid-range products with fan-based custom coolers, the XFX 5830 and the Gigabyte GTX 460 768MB, end up in the 90° C range, otherwise the domain of the GF100-based cards.

The single-GPU results are a rather different story. Say this for MSI’s Cyclone: it’s pretty aggressively tuned for keeping temperatures low. That helps explain why it’s not particularly quiet under load, and those modest temperatures could produce some additional overclocking headroom. Both the Gigabyte GTX 460 and the Zotac GTX 480 AMP! are keeping things cool, as well.

Conclusions

We can summarize our results with the help of one of our infamous price-performance scatter plots. The plot below was created via the same formula used in our recent GPU value roundup, though updated with more configs and current pricing. As ever, the best combinations of price and performance will gravitate toward the upper left corner of the plot, while the worst will be closer to the lower right.
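We won't rehash the formula here, but the general approach is straightforward: compute an overall performance index across the test suite and plot it against current street price. The sketch below illustrates that idea with invented numbers; the geometric-mean index is an assumed stand-in for the method, not the exact formula from the value roundup.

```python
# Sketch of the general price-vs-performance approach, with placeholder
# numbers. The geometric-mean index is an assumption about the method, and
# the prices and FPS figures are invented, not our test results.
from math import prod

def perf_index(fps_results):
    """Geometric mean of per-game average frame rates."""
    return prod(fps_results) ** (1 / len(fps_results))

configs = {
    # name: (street price in dollars, [average FPS per game])
    "Card A":     (235, [60, 45, 80]),
    "Card A x2":  (470, [105, 80, 140]),
    "Card B":     (480, [95, 70, 125]),
}

for name, (price, fps) in configs.items():
    print(f"{name:12s} ${price:4d}  performance index {perf_index(fps):6.1f}")
# Plot the index (y) against price (x); better values land toward the upper left.
```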

One impression hasn’t changed since our value roundup: the latest GeForces tend to be better values at present than AMD’s 5000-series Radeons. That’s a clear reversal of fortunes since the GeForce GTX 400 series’ somewhat underwhelming debut, when the GTX 480 was really no faster than the Radeon HD 5870. The Fermi architecture is still quite new, and Nvidia has extracted enough additional performance through driver tuning to put the GTX 470 on equal footing with the 5870—and the GTX 480 well ahead. That’s true across a range of games, not just those where the GeForces have an apparent advantage, like Metro 2033 and Borderlands. The addition of the cheaper and more architecturally efficient GeForce GTX 460 has further solidified Nvidia’s value leadership.

As for the question of the hour, yes, a multi-GPU solution can provide better value than a higher-end single card. Dual GeForce GTX 460s of either flavor, 768MB or 1GB, will set you back less cash and deliver higher frame rates than the GeForce GTX 480 AMP!, as the scatter plot shows. In fact, the 460 SLI options are among the best values we tested. Also, we learned on the previous page that these SLI rigs have comparable power draw to a GTX 480, and their noise levels can be similar with the right cooling solution.

The GTX 460 SLI setups have exceptional performance, too. They should run most games competently at four megapixels with robust image quality settings, and they’ll likely handle a six-megapixel array of three 1080p monitors quite nicely, too—which a lone GTX 480 can’t do, with its limit of two simultaneous display outputs. One caveat: I’m concerned about the 768MB cards running out of video memory at higher resolutions in some cases, as they did in DiRT 2‘s DX11 mode at 2560×1600. If I were spending the money, I’d spring for the 1GB cards.

Among the Radeons, the multi-GPU value question is murkier. Going with a couple of Radeon HD 5830s in CrossFire instead of a 5870 will net you superior performance for a little more money, but it will come with substantially higher power draw and a bit more noise. Dual Radeon HD 5770s in CrossFire will get you near-5870 frame rates for less dough, but a lone GeForce GTX 470 would probably be a smarter choice. The GTX 470 is better in terms of performance, price, power consumption, noise levels, and expansion space occupied. The only drawback to that plan: any of these Radeons will drive three displays at once via Eyefinity, and the GTX 470 tops out at two.

Strangely enough, one of the best values in the Radeon camp is our three-way Radeon HD 5770 team, which performs quite well overall in spite of a few obvious stumbles in certain games. The three-way 5770 setup lands on our price-performance scatter plot in a position clearly preferable to the GTX 480 AMP!, though it’s not in quite as nice a place as the GTX 460 SLI options. This trio’s acoustics and power consumption are reasonable given its performance, also. I’m not sure I’d want to deal with the hassle of a six-slot solution that doesn’t always perform as well as it should, but there is a case to be made for it, nonetheless.

Comments closed
    • thebeadfairy
    • 9 years ago

    Purchased the GTX480 SLI no clocking feature based upon the results of these tests. Thank you Scott

    • fuego
    • 9 years ago

    The performance/price scaling for the 460 768MB is incredible, beating all other cards by a large margin. If 3-way SLI with this card was possible it would provide 480 SLI performance at $300+ lower cost. Nvidia is running a very dirty game with its customers.
    It was fun to see 5770 3xCF included alongside the choice of 460 SLI at same price/higher performance or lower price/same performance 🙂

    • urbain
    • 9 years ago

    So overclocked GTX460 consumes less power than stock clocked 5770 eh?
    and why include GTX480 AMP edition while including a stock clocked 5870?
    this bench is not only fake, it’s screaming of it. I think AMD should stop sending any hardware to this crappy site..

      • Damage
      • 9 years ago

      If you look closer, you’ll find that the GTX 460 only drew slightly less power at idle than the 5770–and quite a bit more power than the 5770 under load. The GTX 460 is a larger, more power-hungry chip, but Nvidia appears to have more effective clock gating and other power-saving measures at idle, by a small amount.

      You’ll find that we tested power and noise for both the GTX 480 AMP! (a new product we’re testing in this context) and the reference GTX 480, so you’re free to compare both.

    • designerfx
    • 9 years ago

    I don’t doubt the results currently as much as I enjoy my ATI product, but wasn’t there a lot of hubbub about how the latest nvidia drivers have been really good while the latest 2 or 3 ati drivers have screwed crossfire bigtime?

      • travbrad
      • 9 years ago

      And whose fault is that?

    • Synchromesh
    • 9 years ago

    I remember in a recent thread when somebody asked about SLI tons of people came in assuring everyone that SLI and Crossfire sucked, still suck, will continue to suck indefinitely and that a single card is the only way to go. I wonder why I didn’t see all this in comments to this article. Oh wait…

      • mboza
      • 9 years ago

      A pair of GTX 460s beat a 5870 or a GTX 480 for price performance. Single card wins at all cheaper price levels. How much extra is a system that runs 120W cooler and 11 dB quieter worth to you?

      How noticeable is the increase in performance for anyone gaming on a 1920×1200 monitor or smaller compared to just a single 460?

      And the main arguments against SLI/CF on the forums are against picking a SLI/CF motherboard, and hoping to get a second card when the first one feels slow. No benches of last generation cards, but would you really recommend pairing up some DX9 cards today?

        • Lans
        • 9 years ago

        Looks like SLI and, to a lesser degree, Crossfire scaling (or the lack of it) is less of an issue compared to a few years ago.

        The motherboard problem is also becoming less of a price premium but still there.

        Waiting for last gen prices to drop doesn’t make much sense to me because I generally find original price (1st card) + 50% of original price (2nd card) to work out to be about the same price as new card (maybe 3 months old or so) with similar performance. Of course, I would go for middle to upper middle cards in stack which doesn’t have such a steep premium.

        Lastly (but most importantly to me), 11 dB is the breaker for me if it means the difference between 40 dB and 51 dB. I have never actually measured, but I have a good estimate from playing around with fan speed control in a very controlled fashion and knowing the noise rating for each and every spinning fan (thanks to sites like SPCR). Having said that, my numbers aren’t too precise, but I can say a system of around 40 dB is about my limit (sure, I can’t hear it when there is lots of sound from the speakers, but most of the time I can still hear it when gameplay is not so intense for the games I play)… 11 dB more means nothing so long as it is below your limit, but it certainly is not for me.

        Then again, TR’s idle noise levels are lot higher than my current system… But I can say a single card at load/max is probably above my limit already (at least for reference designs) even for my machines.

        The 120W diff is only meaningful to me in terms of noise (how much more cooling/air flow/fans I need to add to maintain acceptable level of heat for system to function properly). At least not for desktop which isn’t on 24/7 (I used IGP on such machines).

    • mboza
    • 9 years ago

    Would have been interesting to see a Lucid Hydra combination in there too. Maybe next time.

    • anotherengineer
    • 9 years ago

    “SLI vs. CrossFireX: The DX11 generation
    We test nearly everything”

    EXCEPT GAMES using the source engine, sigh, the only few games I play all run on the source engine so none of these benchmarks do anything for me.

    Oh well, my 4850 runs everything fine anyway.

      • khands
      • 9 years ago

      I was going to say, Source games are all DX9 aren’t they? Kinda old school stuff you could max out with a GTX 260 216 on 2560×1600

      • Xaser04
      • 9 years ago

      If source engine games run perfectly fine on your HD4850 what benefit would there be including these in the review? All it would prove is that they run even better on faster graphics cards.

      • d0g_p00p
      • 9 years ago

      unless your computer and video card are 5 years old anything source based will run fine. did you post just to whine, i am sure TR has tested the 4850 using a source game so you know how it will run using new cards.

        • cegras
        • 9 years ago

        Sorry, that’s lies. A 4850 will choke on 1440 x 900, TF2 online multiplayer, busy scenes.

          This is why nowadays I’m suspicious of all video card benchmarks. In practice you need to buy the next level above what you think is good enough just to be safe (at least if you want online gaming).

          • Krogoth
          • 9 years ago

          Nah, TF2 runs fine on modern GPUs. The problem is that the game is CPU-bound with 32 player matches especially if they are bots.

          In that case, it never goes below 40-50FPS on my Q6600@ 3.0Ghz.

            • cegras
            • 9 years ago

            And that’s choppy.

            I don’t know what sort of maps you play, but a 24 person payload @ 1440 x 900, 2xAA 8xAF can get to the point where the choppiness is very noticeable. I demand 60+ FPS no matter what, it’s the only way to play multiplayer.

            • ultima_trev
            • 9 years ago

            Well, thanks to network latency (unless you mean bots instead of live players), you’ll get choppiness regardless of framerate.

            50 fps is hardly choppy though, you might experience a microstutter here and there, but it’s definitely good enough. Back in the day with Q3A LAN games at college, I probably only averaged 20-40fps, but I still managed to win every time.

        • anotherengineer
        • 9 years ago

        No I didn’t post to whine and I do realize the source engine is not dx11. It has been updated however in LFD2

        It would just be nice to see a source engine benchmark so it can help me decide my next video card purchase (since the only games I play are source based). Would a 5770 be good or would a gtx460 be better for the dollar?? Without benchmarks I have no idea.

        And #100 yes, very true, the server and amount of people on it can have a large variance in fps, which is why an HL2 and/or LFD or LFD2 benchmark would be nice to get an idea of how newer cards perform.

        Edit: http://www.anandtech.com/bench/GPU/102 excellent :)

    • Chrispy_
    • 9 years ago

    Another excellent article, thanks Scott.

    Random request:
    Could you please put a diagonal line in from 0/0 through the median result or a significant result on your scatter plots? I think the scatter plots are great, but it’s sometimes hard to see which card offers better bang for your buck when they’re not that close together in price.

    • thermistor
    • 9 years ago

    If you only match frame/frame, then yes, nVidia comes out on top, but when you factor in size (3-slot card, you gotta be kidding me!), heat, noise, and power consumption, nVidia looks much less compelling.

    Again, I think the market knows that, and that’s why nVidia has done price cuts and ATI hasn’t.

    • sammorris
    • 9 years ago

    Still a shame there aren’t any truly unbiased tests here. Better than HardOCP’s set which frankly whiffs of bribery, but there isn’t really a neutral game here. Most of the important modern titles have a bias towards nvidia through development, but that’s not really to say that the geforce cards are ‘just faster’ like articles like this dictate. Catalyst 10.7’s badness isn’t helping, but neither is testing websites’ refusal to incorporate unbiased games. They do exist. AvP is clearly biased in several parts, when you look at review sites. Here it doesn’t fare so bad, but it depends where you test it. ATI also never really implemented crossfire for it – on my HD4870X2 QCF setup, the frames rendered by the extra GPUs aren’t actually displayed, so ATI are partly to blame for that. Just Cause 2 obviously includes PhysX bribery, DiRT 2 is another anomalous title for the GTX400 series, and Bad Company 2 is broken for AA with ATI cards – I notice all review sites refuse to test it with AA disabled. Funny that.
    And finally Metro 2033 even has a quality modifier on geforces in the drivers, along with Borderlands being renowned for being the most geforce-biased game to date (though that’ll probably change with the Arma 2 expansion).
    There are a few unbiased games out there, Starcraft 2 appears to be one of them. Blur, Split/Second Velocity and Need for Speed World all appear to be relatively unbiased, and then there are games like the STALKER series which are actually slightly ATI-biased, but see minimal coverage in reviews. The number of games coded in favour of geforces now means that to truly analyse GPU performance you have to use a vast sample of games, which limits you to the smaller review sites that don’t have a strict test regime.

      • Deanjo
      • 9 years ago

      So what you are saying is that TR selection of commonly played titles all show nVidia kicking ass and taking names but they don’t include the rare exception where ATI is almost equal. Gotcha.

        • sammorris
        • 9 years ago

        There are commonly played titles that don’t show such bias. That was the point of my post.

          • hapyman
          • 9 years ago

          Almost (if not) all of your examples are due to ATI’s lack of support or driver failures. Isn’t that part of the product you buy? This is not TR’s fault, it is AMD’s. I love AMD CPUs and them as a company, but you can’t pass the buck to review sites that use commonly played games that people actually play.

      • PixelArmy
      • 9 years ago

      When you say biased do you mean the games were coded specifically to benefit one or the other? Or do you mean a game simply favors one over the other?

      Who makes this determination of bias? Whether they have a TWIMTBP logo? How about AvP and DiRT 2? Those were done with a close working relationship with AMD, according to AMD's own press releases.

      What about Stalker? It's on AnandTech: http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-king/12 The GTX 400s beat their supposed rivals, the 5800 models (even with an ATI bias, as you say). And SC2 seems to favor the GTXs: http://www.xbitlabs.com/articles/video/display/starcraft2-wings-of-liberty.html Though SC2 doesn't really matter too much, as most people don't have trouble running it. At some point, you have to accept what the current landscape is, "bias" or no. These games are there because they're either popular, their (base) engines are used a lot, or they stress particular DX11 features, not because they are supposed to be "fair".
      *EDIT* FYI, this review benches 4 of the 6 currently available DX11 games, the other 2 being Stalker, as addressed above, and BattleForge (which is in the nVidia camp as well, according to the same AnandTech article...)

        • sammorris
        • 9 years ago

        I stopped trusting Anand after they said a Freezer 7 Pro cooled better than an Ultra-120 Extreme.

          • flip-mode
          • 9 years ago

          Then why are you here? You don’t trust Anand and you don’t trust TR. Why don’t you go back to billybobsbasementhardwarereviews.com and hang out there? Oh, right, because you’re really just here to troll.

        • SiliconSlick
        • 9 years ago

        Below, IN OTHER WORDS, after the ENDLESS 6-8 months of the quacking red roosters spewing forth their ear-breaking psychotic shrieks about DX11 and the 5000 series, NOW THE ONLY 8 GAMES KNOWN TO HUMANITY, 8 MONTHS OR A YEAR LATER, FAVOR THE NVIDIA CARDS, so we will be treated to ANOTHER long, outstandingly disruptive, full-of-BS, shrieking, crybaby, crashing, loser fountain of wailing and moaning from the red roosters… in between the outrageous lies they always lay down in favor of their fanboy freak crap cards…

        That's the bottom line – these 6 "favor" Nvidia, and the other two "favor Nvidia" – PhysX "favors Nvidia", CUDA "favors Nvidia", OpenCL is supposed to be the crybaby whine peak for the ATI red roosters, but even that is IMPLEMENTED IN SUPERIOR AND MORE COMPLETE FASHION by Nvidia and so "favors Nvidia" (unless the crybabies can destroy PhysX by DEMANDING Nvidia drop it and make nice with some freebie OpenGL crud ATI can latch on to for zero dollars)… yes, we know, it favors Nvidia, AA favors Nvidia, so we must turn it off for ATI to keep up and do highest top-end card tests with ZERO AA that looks like CRAP!

        I’m really SICK of the crybabies, and the lies.

        ” #62, When you say biased do you mean the games were coded specifically to benefit one or the other? Or do you mean a game simply favors one over the other?

        Who makes this biased determination? If they have a TWIMTBP logo? How about AvP and Dirt 2? Those were done with a close working relationship with AMD according to AMD’s own press releases.

        What about Stalker? It's on AnandTech… http://www.anandtech.com/show/3809/nvidias-geforce-gtx-460-the-200-… The GTX 400s beat their supposed rivals 5800 models (even with an ATI bias as you say). And SC2 seems to favor the GTXs. http://www.xbitlabs.com/articles/video/display/starcraft2-wings-of-… Though SC2 doesn't really matter too much as most people don't have trouble running it. At some point, you have to accept what the current landscape is, "bias" or no. These games are there because they're either popular, their (base) engines are used a lot or they stress particular DX11 features, not because they are supposed to be "fair". *EDIT* FYI, this review benches 4 of 6 currently available DX11 games, the other 2 being Stalker as addressed above and Battleforge (which is in the nVidia camp as well according to the same AnandTech article...) "

      • SiliconSlick
      • 9 years ago

      So what you've said amounts to this: nary a game developer gives a crap about making their game run on ATI cards, but they sure do love coding and testing on Nvidia cards.
      So, you have your "theoretically superior if only someone would give them a chance!" broken-driver, crap-performance, crashing ATI card, and the rest of the population, the sane portion, will buy the cards that games actually work with.
      Further, after the non-crashing gaming is done, with PhysX, water tessellation on high, the bokeh filter rocking, ambient occlusion from the driver in lots of new and older games, then the real fun starts with the free Warmonger 3D destructive FPS, Design Garage ray tracing that dream car, and if it's your thing, folding that spanks ATI's pathetic architecture again and again.
      We are always told ATI is "more efficient", but what that really means is crappier eye candy, fuzzier output, AA turned off or not functional half the time, and all the disappearing mouse cursors, GSODs, green and white and red dots, 2D deceleration, and driver upgrade problems and incompatibilities…
      KEEP whining, because the game developers are obviously as sick as everyone else of ATI's card problems. I mean, who wants to try to develop on a piece of crashing crap for a card? Sure, lots of fanboy reds never have a problem; then there's the other 75% of ATI users.

        • Questors
        • 9 years ago

        Wow! You definitely couldn't be a fanboy of NVIDIA, could you?

        I buy the best product for the budget I have to spend, whether it be Intel, AMD, ATI, Nvidia, or whoever is the next competitor in the product market from which I have to choose.

        Saying ATI video cards are all crashing junk or that every set of drivers ATI creates is sub-par is plainly and simply untrue. You are letting your personal preference get in the way of fair judgement and facts concluded from accepted testing practices. Get a grip.

    • DaAwesomeWalrusPrime
    • 9 years ago

    Awesome review. I think the 10.6 drivers used in your review were the worst of the 2010 drivers. My CrossFire HD 5770 setup with the 10.8 beta drivers scales much better than your review indicates. I game at 1920x1200 with maxed settings and I'm getting 90 fps in Bad Company 2, 110 fps in Aliens vs. Predator, and 70-plus in Borderlands. I plan on adding an additional HD 5770 to my setup; I'm glad some games work really well in 3 x HD 5770 CFX.

    • Thresher
    • 9 years ago

    I would really like to see these tests on a P55 board to see what kind of effect having a single 16x channel has on performance. Most people aren’t sporting X58s because we can get most of the performance without paying a premium. The one thing we give up is full speed SLI, but I’m not really sure how much of a penalty we pay.

      • Damage
      • 9 years ago
        • Thresher
        • 9 years ago

        Thanks Damage. I didn’t realize it would be that close, but I guess it makes sense.

        May have to go that route!

      • Krogoth
      • 9 years ago

      16x PCIe is still bloody overkill; 8x PCIe is still quite sufficient for video cards. You start to see some bandwidth starvation in 4x mode. I suspect it is a moot point with 4x PCIe 2.0 slots, since they have the same bandwidth as an 8x PCIe 1.0 slot.
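
      For reference, a quick back-of-the-envelope check of that last point, using the usual per-lane figures of roughly 250 MB/s for PCIe 1.x and 500 MB/s for PCIe 2.0 in each direction (the numbers and helper below are my own sketch, not from the article):

        # Approximate one-direction PCIe slot bandwidth by generation and lane count.
        PER_LANE_MBPS = {1: 250, 2: 500}  # usable MB/s per lane after 8b/10b encoding

        def slot_bandwidth_gbps(gen, lanes):
            """Rough one-direction slot bandwidth in GB/s."""
            return PER_LANE_MBPS[gen] * lanes / 1000.0

        print(slot_bandwidth_gbps(2, 4))   # x4 PCIe 2.0 -> 2.0 GB/s
        print(slot_bandwidth_gbps(1, 8))   # x8 PCIe 1.0 -> 2.0 GB/s (same)
        print(slot_bandwidth_gbps(2, 16))  # x16 PCIe 2.0 -> 8.0 GB/s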

    • Jambe
    • 9 years ago

    So… when you factor the cost of a motherboard with three x16 slots on it into the picture, how would the three 5770s stack up?

    I’m all for ATI ably competing, but… eh.

    • darryl
    • 9 years ago

    Does either camp have a solution whereby two 1GB cards function with the combined power of 2GB of memory? As far as I understand it, neither SLI nor CrossFire runs memory in "parallel" but only in "series". Do I have this right?
    thanks,
    darryl

      • Dagwood
      • 9 years ago

      Memory on a video card is a buffer, and it is only available to the GPU sitting next to it. Much of what is in memory on one card has to be duplicated in memory on the other card. Therefore, two cards in SLI or CrossFire are like having one card with twice the number of shaders but the same amount of memory as a single card.

      If you know you are going to use your cards in pairs, it is best to get ones with as much memory as you can.
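
      A tiny illustration of that point (my own sketch, not anything from either vendor's documentation): with the usual alternate-frame-rendering schemes, the usable pool is roughly one card's worth of memory, limited by the smallest card, rather than the sum across cards.

        # Effective memory for an AFR-style SLI/CrossFire team (illustrative only).
        def effective_vram_mb(cards_mb):
            """Usable memory is bounded by the smallest card, since frame data is mirrored."""
            return min(cards_mb)

        print(effective_vram_mb([1024, 1024]))  # two 1GB cards -> ~1GB usable
        print(sum([1024, 1024]))                # 2GB purchased, but not pooled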

    • flip-mode
    • 9 years ago

    A single GTX 470 is looking pretty darn good! I don’t remember it looking this good at launch.

    Power consumption is just about the same as a 5870, overall performance is slightly higher! Cost is lower. Shoot, that’s good.

      • Krogoth
      • 9 years ago

      It is because Nvidia was forced to make aggressive price cuts on it.

      The 470 is now a 5870 killer, but that will likely only hold until Southern Islands gets here.

      • sammorris
      • 9 years ago

      Power consumption of a GTX470 the same as an HD5870? lol

        • flip-mode
        • 9 years ago

        RTFA dude.

          • sammorris
          • 9 years ago

          Read it. Then read other articles, which don’t agree.

            • flip-mode
            • 9 years ago

            No links? Fail. You smell like a troll.

      • w00tstock
      • 9 years ago

      To me you seem like the troll. So here's the 470 using 47 more watts playing Crysis.

      http://www.anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/19

        • flip-mode
        • 9 years ago

        We’re all trolls!

        To each his own, but to me, it's ALWAYS idle power consumption that matters more. While I'm using something, let it use power; at least it's doing something. But while it's sitting there idling, I want it sipping as little power as possible. The 470's idle consumption is pretty good.

        Now, I wonder if it is possible that newer drivers have helped bring down load power consumption? Dunno.

        Either way, new drivers have *[

          • poulpy
          • 9 years ago

          If you're only talking about idle power, you might want to make that clear in the first place, as we spend days discussing various TDPs on these pages, mate 🙂

          Also, while I agree idle consumption is important, load consumption is too, as I don't want to:
          – buy a high-end power supply
          – deal with more noise
          – deal with more heat

          The linked article seems to have a different take on the subject, with the 470 quoted at 33W idle power and the 5870 at 24W (a 9W difference).
          That's over a third more than the 5870, not negligible at all IMO.

          On that same page the 470 loses by decent margins (~15 to ~35%, I'd say) on every idle/load power consumption and temperature measure.

            • flip-mode
            • 9 years ago

            In my OP (#52) I was not just talking about idle; I was talking about the results that TR published in this article.

            The link that w00tstock posted is from the initial launch. Since that time, Nvidia has released driver updates that have done good things for the performance of the 470, taking it from being overall slower than the 5870 to being overall faster.

            Fan noise while gaming doesn't much bother me. Fan noise while idle is CRITICAL. According to TR's results, which I trust, the 470 is about 10 watts over the 5870 at idle. That's not very alarming.

            As for load power consumption, whether we use TR's figures from the present day showing a ~10 watt difference or Anand's figures from months ago showing a ~50 watt difference is almost irrelevant:
            - both are under 400 watts, so a 500 watt PSU should drive the 470
            - I'm guessing anyone with a 5870 has a PSU that'll do more than 500 watts anyway, which moots the point yet again
            - the 5870 and the 470 both require the same PSU leads (two 6-pin PCIe connectors), FWIW
            - additional heat goes out the back of the case

            I hate Nvidia as a company more than anyone, but I'm not going to let that bias what I say. The fact is that the 470 looks more attractive than the 5870 right now in a couple of ways:
            - price
            - overall performance
            - it doesn't suffer in particular games the way ATI's 5xxx architecture seems to

            I love how both sides think I'm a fanboy! Tells me I'm doing something right! Go read my comments on TR's Fermi launch article. I was completely unkind to both the 470 and the 480. Well, something has changed! Prices have dropped and performance has risen! That's a magic combo.

            Peace.

            • poulpy
            • 9 years ago

            Some are quick to call others fanboys, and the ones doing it daily tend to be the most biased posters themselves, so I wouldn't worry too much if such a thing happens.

            Outside of this 470 vs. 5870 debate, and on a more general note, I for one like to:
            – lower my overall electricity bill
            – reduce the heat output (as it ultimately goes into my room)
            – lower the dBs (you may not hear them with headphones on, but unless you live alone, others will)

            Therefore 20 or 30W at idle makes a difference, as would +50W at load, +10°C, or 3-4 dB. Then I guess it's down to personal preferences.

            Regarding Nvidia as a company, they have put me off buying their products for a while, so unless they release something that blows ATI completely out of the water, they won't see my dough for some time.
            You need, to a certain extent, to vote with your money IMO.

            • flip-mode
            • 9 years ago

            If I ever buy Nvidia, it will be grudgingly and with anger toward ATI for falling behind. I'm not at all in the market for a card right now. If I were, I would be strongly compelled to buy a GTX 460 768MB (I game at 1600x1200). If I had more money to spend, I'd grab a GTX 470. Those words should carry some weight, because I strongly dislike Nvidia as a company; I've hated on them nearly every time I've had the opportunity. But here are the cold hard facts, and they're bad news for ATI:

            $389 HD 5870:
            http://www.newegg.com/Product/Product.aspx?Item=N82E16814150476&cm_re=5870-_-14-150-476-_-Product

            $289 - $20 = $269 GTX 470:
            http://www.newegg.com/Product/Product.aspx?Item=N82E16814121372&cm_re=gtx_470-_-14-121-372-_-Product

            So *[

            • ThorAxe
            • 9 years ago

            I have a 4870 X2 and a 4870 in CrossFire, and I am seriously considering going to a GTX 470 SLI solution.

            This is partly because no driver after the 10.5 beta has worked properly for me in BFBC2 due to CrossFire issues. It is a widespread issue with the 4000 series.

            • Krogoth
            • 9 years ago

            10.7s have been working great with my 4850 CF in BF:BC2.

            No more delays in level loading.

            • ThorAxe
            • 9 years ago

            Must be a 4870 X2 issue then, as many with this card have problems, but loading is not one of them.

            10.7 will work for a while, but extended play results in a black, HUD-like screen.

            Annoyingly, 10.7 is extremely fast in Crysis and Warhead.

    • PainIs4ThaWeak
    • 9 years ago

    Any chance, TR, that we could get a couple of Gainward GeForce GTX 460 1GB GLHs thrown into this mix at some point?

    • samurai1999
    • 9 years ago

    Interesting article
    – I think the scatter plots are a great way to show price/performance trends quickly!

    A couple of questions:
    – why not use the latest NV drivers (258.96)? They have been out a while and are bound to give a few percent better performance
    – also, 10.7 for ATI – there are rumours it has bad CF scaling

    And also, why not include some of the factory-overclocked versions of the GTX 460?
    – the Galaxy has about a 20% OC for little cost increase
    – i.e., good value!

    • flip-mode
    • 9 years ago

    GREATEST… OPENING… LINE… EVER!

    OMG I laughed at that. I haven’t even read past that line yet. That is some funny stuff. Had me doing a full laugh at 5:23 AM. Love it.

    • esterhasz
    • 9 years ago

    Price drops do not come because of benchmark results; they are a function of inventory and order lists. Fermi was a PR disaster for NV, and as long as ATI can sell their cards at current prices, they will. The fact that prices are where they are tells you that they can…

      • WaltC
      • 9 years ago

      True, but don’t forget the middleman reseller. As long as they can keep on selling their ATi cards at price premiums compared to selling nV’s, they’ll do it. Higher demand generally equals higher price; lower demand, lower price, etc. When you buy a card from NewEgg, NewEgg gets its cut on every one, and it’s NewEgg who ultimately decides what the products it sells will cost the consumer–not nVidia or AMD.

    • travbrad
    • 9 years ago

    Great article! There is so much information to soak up here. The multi-GPU solutions did a lot better than I expected actually. I’m still not convinced they are worth it, considering all the downsides to multi-GPU setups, but at least it’s close.

    I’m still willing to trade that bit of extra performance for guaranteed performance (and more simplicity). I just don’t have a lot of confidence that there will be driver updates 3 years from now to support some obscure game, and I generally hold on to a card for a few years.

    P.S. First page, 3rd paragraph it says “high-end graphics card when two of these can had for less?” I think there should be a “be” in there.

    • can-a-tuna
    • 9 years ago

    You have all the special AMP!s and the like from Nvidia, but ATI has to settle for reference designs.

      • Damage
      • 9 years ago

      Yeah, it’s a shame. Nvidia has long worked with its board partners to enable and encourage higher-clocked variants of its products. When the GTX 400 series came out, board makers sent us these cards for testing, so we used them here.

      No such thing happened when the Radeon HD 5000 series was introduced. Instead, if you’ll recall, shortages and price hikes were the order of the day, and many 5000-series cards still cost more than their initial suggested prices. Even today, I don’t see any Radeon HD 5830s on Newegg with a GPU clock above the stock 800MHz.

      It’s not our intention to slight AMD here, but we have to work with the realities of the market. If AMD and its board partners want to send us higher-clocked variants of their cards to test–and if they are real products available to consumers–we’d be happy to test them next time around.

        • Usethis2
        • 9 years ago

        I hate to sound like a troll here, but your attitude is kind of eye-opening. There are tons of third-party-design 5800 cards available today, yet you're talking about six months ago... and you won't test something unless a manufacturer sends it to you? ("for free" obviously comes to mind)

        I thought TR was a bit better than that.

          • pmonti80
          • 9 years ago

          The conclusion of the review would still be the same: ATI has to reduce prices a bit if they want price-performance parity now that the GTX 460 has arrived on the market.
          But TR has a long tradition of mixing custom and reference boards in its tests. That's nothing new; you either like it or you don't, but in recent years they have done it often.

            • Usethis2
            • 9 years ago

            I have nothing against the review's conclusion or testing results. My surprise only pertains to post #45. Obviously, one of the reasons a custom 5800-series card wasn't tested is that no one sent one to TR. But as Damage notes, he could have paid a little more attention to "market reality," where many non-reference 5800-series cards are selling all over the place. And it's not like there are many custom 480s available; Zotac's AMP! seems more like an exceptional case to me than the norm.

    • paulsz28
    • 9 years ago

    Why are there no 5970 CFX or 5870 CFX setups in the review?

      • potatochobit
      • 9 years ago

      That would be over $1000, I think.

      • Damage
      • 9 years ago

      The 5870 CrossFireX config is all over this review. Look again!

      We only tested a single 5970, and although testing two of them would be fun, it would be a different class of solution than anything we tested–a ~$1300 video subsystem–something more along the lines of three-way GTX 470/480 SLI, which we also didn’t test. Doing justice to solutions of that class would require a larger PSU and consistently higher display resolutions than we’d already committed to for this comparison.

        • Voldenuit
        • 9 years ago

        Good point, Damage!

        Although I do wonder if you have any x8/x8 SLI/CFX benchmarks for the 460/5770, since someone who buys an X58 mobo and a Core i7 CPU probably has the budget for a 470 or 5870 rather than a 460 (or two).

        Will folks on a P55 see the same scaling for 460 SLI as on an X58? How big is the performance penalty (if any)? What about H55 users, who are mostly stuck in x16/x4 configs? Are there even any SLI-certified H55 boards?

    • HurgyMcGurgyGurg
    • 9 years ago

    Stop making my HD 4870 feel inadequate… it's been through enough already!

    • Proxicon
    • 9 years ago

    AMD! WHERE ARE THE PRICE DROPS!?!?!?

      • Voldenuit
      • 9 years ago

      Yeah. Since nvidia is $50-100 cheaper than AMD for the same performance, I propose AMD chop $100 off every GPU they sell ;).

        • NeelyCam
        • 9 years ago

        They won't. You have to factor in the expected lifetime of Nvidia cards… you know, ATI chips don't fall off the board after 10 heat/cool cycles, so ATI cards tend to have a bit more intrinsic value.

      • HisDivineShadow
      • 9 years ago

      As soon as AMD unplugs their fingers from their ears and stops yelling, “La la la,” they may hear you telling them that their cards don’t have the performance to command a premium anymore.

      In the meantime, they’re still of the mindset that they dominate the charts without peer.

        • StuG
        • 9 years ago

        Or the 6800 series will come out, and there will be no need for a price drop |roll|

    • bdwilcox
    • 9 years ago

    The non-quantitative reason for only running one card is reduced complexity which ultimately aids in troubleshooting those inevitable game glitches. SLI / CFX setups really make troubleshooting a monster compared to a single card solution.

    • internetsandman
    • 9 years ago

    Hmm… I know it's only a small variance in the numbers, but looking at the temperatures for the GTX 470 on its own, and then in SLI, it was weird seeing that the SLI config was actually a few degrees cooler than the lone card. Any explanations for this?

      • Damage
      • 9 years ago

      The single GTX 470 card is an Nvidia reference unit. The primary card in the GTX 470 SLI pair (i.e., the one we measured) is a newer card from Asus–with the same reference cooler, but quite possibly different fan speed control tuning in the video BIOS.

    • TheEmrys
    • 9 years ago

    I’m very surprised by the performance of 3-way 5770. I hadn’t expected it to do nearly so well. Looks like drivers are being tweaked in such a way to really make use of more cards. We may not be “there” yet, but this is very encouraging.

    • StuG
    • 9 years ago

    I'll have to admit here that, though the Nvidia cards are winning in these charts, I don't regret owning my 5870… especially since September 😛

    This was a great article though. Can’t wait for the next gen, really feel like things are going to heat up and we’ll see some awesome on-time competition from both camps.

      • Krogoth
      • 9 years ago

      The real question is: is your 5870 still sufficient to handle modern games at your desired settings? 😉

        • StuG
        • 9 years ago

        Not at all. That's why I have two XD

    • Bensam123
    • 9 years ago

    Despite the upfront cost of purchasing the cards, I'd look more at the long term, namely the extra power the cards will use compared to a single-card solution. How long will it be until that pays off?

    Other factors worth factoring in are heat (along with the hardware failures associated with overheating) and noise. On top of that, new games are still glitchy until hardware profiles are released, and even after that strange things happen, such as microstuttering (which can drive someone like me up the wall, just like refresh rates and response times). I know you have graphs for heat and power, but you don't seem to factor them into your conclusion beyond talking briefly about the 5770 x3.
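
    For what it's worth, here is one way to ballpark that payoff question; all the numbers (watt difference, hours per day, electricity price) are placeholder assumptions, not anything measured in the article:

      # Years of gaming before the extra electricity cost of the hungrier config
      # eats up the up-front price difference. Numbers are illustrative only.
      def payback_years(price_delta_usd, extra_watts, hours_per_day=3.0, usd_per_kwh=0.12):
          extra_kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
          extra_cost_per_year = extra_kwh_per_year * usd_per_kwh
          return price_delta_usd / extra_cost_per_year

      # e.g. a config that costs $50 less up front but draws 60W more under load:
      print(round(payback_years(50, 60), 1))  # roughly 6.3 years at 3 h/day and $0.12/kWh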

      • Krogoth
      • 9 years ago

      Micro-stuttering is an issue with all multi-card rendering solutions. It is just the nature of the beast; otherwise you would run into artifacting issues.

      Micro-stuttering is only a significant issue if your CF/SLI solution is incapable of rendering more than 30 FPS. It slowly wanes as the FPS moves up, until you hit the 60-75 FPS range, where it becomes unnoticeable.

      It does present itself as a long-term issue in cases where two mid-range cards meet or exceed one high-end card of the same generation in their heyday. Later on, when games become more demanding, the single high-end card will still be able to provide a playable framerate (30-50 FPS) while the two mid-range cards in SLI/CF begin to suffer from the effects of micro-stuttering.
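
      One rough way to put a number on micro-stuttering (a sketch of mine, not a standard metric or anything from the article): compare consecutive frame intervals, since alternate-frame rendering tends to alternate short and long frames even when the average FPS looks fine.

        # Average ratio of the longer to the shorter of each consecutive pair of
        # frame intervals; 1.0 means perfectly even pacing. Frame times are made up.
        def stutter_ratio(frame_times_ms):
            pairs = zip(frame_times_ms[:-1], frame_times_ms[1:])
            ratios = [max(a, b) / min(a, b) for a, b in pairs]
            return sum(ratios) / len(ratios)

        even  = [16.7] * 8       # smooth ~60 FPS
        jerky = [8.0, 25.0] * 4  # same average FPS, alternating pacing
        print(stutter_ratio(even))   # ~1.0
        print(stutter_ratio(jerky))  # ~3.1, i.e. visible micro-stutter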

        • Deanjo
        • 9 years ago

        The other advantage that one big card offers over an SLI/CF rig is not having to worry about whether the drivers have good multi-GPU profiles for your particular game.

      • flip-mode
      • 9 years ago

      A 4870 uses more power at idle than every config tested here, with the exception of the 480 SLI.

      At load, the 5770 CrossFire setup is a marvel of power-efficient dual-card performance.

      Power consumption just does not appear to be an issue worth worrying about.

    • Krogoth
    • 9 years ago

    Damage, there are a few hardware differences between CrossFire and CrossFireX. Think of them as generation 1 and generation 2, respectively.

    CrossFire = proprietary external cabling along with a proprietary port that required special-edition cards = rare and expensive (X800 XT, X850 XT, X1800 XT and X1900 XT).

    CrossFireX = PCIe bridges and fingers similar to Nvidia's SLI. Like SLI, they are found on mid-range cards or better.

    With that aside, the article demonstrates what I have known for a while: SLI/CF only makes sense in the few instances where two mid-range or second-fastest GPUs meet or exceed the high end for less $$$$$ at 1920x1080 gaming or greater. The 5770 CF and 460 SLI fit nicely into this niche. Going beyond two cards is just a waste, as diminishing returns rears its ugly head.

    Otherwise, stick with single cards. They have more consistent performance, fewer software issues, and generate less heat/noise.

      • Damage
      • 9 years ago

      Actually, that's not quite accurate. The name CrossFire was retained well after the original FPGA-and-cabling-based scheme was replaced by dedicated logic on the GPU and the familiar bridge connector. That transition happened with the introduction of the Radeon X1950 Pro and the RV570 GPU:

      https://techreport.com/articles.x/11032/2

      "CrossFire X" was introduced a couple of years ago, when AMD introduced drivers capable of supporting 3- and 4-way GPU teaming. See here:

      https://techreport.com/articles.x/14284

        • Krogoth
        • 9 years ago

        I stand corrected, but most of it pretty much holds.

        Again, marketing brands were never meant to make any real sense. 😉

    • UltimateImperative
    • 9 years ago

    A question: in some cases, the lower-priced Gigabyte and Asus motherboards have "CrossFireX" on the box, but not "SLI". For instance, the $150 GA-P55A-UD3 lists only CrossFireX, while the $175 GA-P55A-UD4 has both SLI and CrossFireX "stickers".

    Does this mean that these motherboards cannot be used for SLI, or just that Nvidia won't endorse their use? Given that cheaper LGA1156 motherboards will often only have one PCIe x16 slot, has anyone tested them to see if using them for multi-GPU setups is limiting?

      • MadManOriginal
      • 9 years ago

      This is the closest I could find without trying hard at all; it appears to be single-card testing:

      http://www.techpowerup.com/reviews/NVIDIA/GTX_480_PCI-Express_Scaling/24.html

      As long as it's PCIe 2.0, x8 has very little effect, and even x4 isn't a truly massive drop. Most SLI/CrossFire communication goes over the bridge connector rather than the PCIe slots, so while it might scale worse, it probably wouldn't be too bad. As far as motherboards not being 'SLI certified', I can't really say; I pretty much never cared enough about multi-GPU to look that deeply into it.

      • Krogoth
      • 9 years ago

      Theoretically, modern flavors of SLI/CF just need two cards with an SLI/CF bridge, which means you only need two physical 16x PCIe slots.

      Licensing BS puts a huge wrench into that. In the past, Nvidia tried to use this to justify its chipset division. ATI/AMD were far more relaxed, as they were willing to offer CF support on all non-Nvidia chipsets.

      When Nvidia gave up on its chipset division, it was forced to license SLI on Intel chipsets. This required BIOS-level "certificates," as the drivers would normally check the chipset to permit SLI support. A few clever hackers successfully extracted these certificates and imported them to a wider range of Intel chipsets. They found that they still worked.

      My P45-DS4P board has already been flashed with these certificates, which means I have "unofficial" SLI support. I haven't tested it myself, but other users with P45 boards have had great success with it.

      • sluggo
      • 9 years ago

      My AMD-based motherboard does not officially support SLI, but a registry hack convinces the Nvidia drivers that they're talking to a licensed chipset. My dual 8800 GTs work great in SLI on this AMD 790/750 setup.

    • Deanjo
    • 9 years ago

    Wow, did ATI ever get smacked around in these multi-GPU tests.

      • StuG
      • 9 years ago

      It should; they are just catching up... what, 10 months later now? They are fighting old-as-dirt Radeons with brand-new cards. They SHOULD win.

        • Deanjo
        • 9 years ago

        With Southern Islands being a minor tweak to the existing Evergreen offerings (and rumored to be delayed), they are in for some rough times, I think, until they can get Northern Islands rolling along. Nvidia still has a lot of headroom in its current offerings, whereas AMD is nearing the full potential of theirs. Should be fun to watch unfold. Let's just hope it's not going to be like ATI's R600 years, when they were dominated for over two years by the same old G80/G92s.

          • Fighterpilot
          • 9 years ago

          l[

            • Deanjo
            • 9 years ago

            Ya it is a minor tweak. The big changes won’t happen until Northern Islands come about. But go ahead and bookmark it if you wish to end up with egg on your face.

            • Fighterpilot
            • 9 years ago

            So a minor tweak…say 5% or so quicker than Cypress then.
            Cool….see you for dinner.

      • Voldenuit
      • 9 years ago

      There are some issues with the 10.6 (and 10.7) Catalysts that are hampering CFX performance. According to the forums at various tech sites, ATI CFX users are complaining that their setups are slower with the new drivers than they were on 10.5.

      The CFX setups did very badly on 10.7 at [H], with even the 5870 CFX losing out to the 460 SLI. This is very likely a driver issue, as a single 5870 demolishes a single 460.

      In any case, the TR figures are more in line with what I would have expected out of CFX and SLI performance. There is no doubt that the 470 and 480 cards are super fast; their main criticisms have been excessive heat and high prices.

      ATI really needs to lower their prices, though. The 460 is still the best bang-for-the-buck card around, and it gets even sweeter in SLI. Too bad many mainstream users running AM3/H55/P55 (the target market for the 460) will be scratching their heads trying to figure out whether their systems will support SLI, as only the X58 does x16/x16, with a mish-mash of x8/x8 and x16/x4 boards in the mid segment. Also, AMD users are left out in the cold unless they bought an nForce board, which is not an attractive option for that platform.

        • Deanjo
        • 9 years ago

        r[

          • tay
          • 9 years ago

          nForce-based boards are more problematic in general than the AMD/ATI boards. I think Voldenuit is correct on all the points he makes.

            • Deanjo
            • 9 years ago

            Well, I've got a few computer labs and my personal system that would beg to differ. Most of nForce's woes were on the Intel versions.

          • Voldenuit
          • 9 years ago

          My point is that Nvidia would do themselves a favour if they removed the silly restriction on SLI-compatible chipsets. It's not like they have a chipset business to support anymore: they have no QPI license, Intel IGPs are now on-package, and the AMD nForce boards are out of date. With Llano coming, they will lose the AMD chipset business by default as well.

          The only thing keeping SLI from working on AMD systems is a single bit in the drivers. It's about time they flipped it, because anyone with an AM3 system would probably prefer to buy 2x 460s to 3x 5770s or 2x 5830s (or even 2x 5850s).

            • MadManOriginal
            • 9 years ago

            Yeah, if NV's chipset business is going to die out almost completely, and it looks like it's on its last legs already, they really ought to extend licensing to all chipsets. It's pretty easy revenue with little overhead; if I were an NV stockholder, I'd be hoping for that versus Jen-Hsun acting like a bratty child by throwing a selfish but ultimately detrimental hissy fit.

            • Krogoth
            • 9 years ago

            SLI being exclusive to Nvidia's chipset platform never made any real sense to begin with.

            Nvidia makes far more on extra graphics card sales from SLI than it does on chipset sales.

    • Atradeimos
    • 9 years ago

    In the part about the price-performance scatter plots, I think you mean “upper left” and “lower right” of the graph. Not that any regular TR reader will be fooled for long…

    • phez
    • 9 years ago

    Amazing. I always wondered where the SLI/CF numbers were … and here they are, in grand fashion.

    • BoBzeBuilder
    • 9 years ago

    SECOND. DAMN.

      • NeelyCam
      • 9 years ago

      Try harder next time.

    • MadManOriginal
    • 9 years ago

    Frist pots!
