AMD’s Radeon HD 3870 X2 video card

So here we are, knee-deep in yet another chapter of the ongoing struggle for graphics supremacy between the Radeon camp and the GeForce camp, between the red team and the green team, between ATI—now AMD—and Nvidia. In the past few months, the primary battlefront in this war has shifted away from the traditional slug-fest between ridiculously expensive video cards and into the range between $200 and $300—an unusual development, but a welcome one. Nvidia has captured the middle to upper end of that price range with the GeForce 8800 GT, a mighty performer whose sole liability was the initial difficulty many encountered in finding one in stock at their favorite online store. Meanwhile, AMD offers a capable substitute in the form of the Radeon HD 3870, which isn’t quite as quick as an 8800 GT but is still a solid value.

In short, everybody’s a winner. And we can’t have that now, can we?

Nvidia has held the overall graphics performance crown uninterrupted since the fall of 2006, when it introduced the GeForce 8800 GTX, and the only notable change since then came when the green team added a few additional jewels to its crown and called it the GeForce 8800 Ultra. The folks at AMD have no doubt been fidgeting nervously over the past year-plus—chewing on the tips of their pens, tapping their fingers on their desks, checking and re-checking the value of their stock options—waiting for the chance to recapture the performance lead. Frustratingly, no single Radeon HD GPU would do it, not the 2900 XT, and not the 3870.

But who says you need a single GPU? This is where the Radeon HD 3870 X2 comes into the picture. On the X2, two Radeon HD 3870 GPUs gang up together to take on any GeForce 8800 available. Will they succeed? Let’s find out.

The X2 up close

The idea behind the Radeon HD 3870 X2 is simple: to harness the power of two GPUs via a multi-GPU scheme like CrossFire or SLI in order to make a faster single-card solution than would otherwise be possible. AMD did this same thing in its last generation with the Radeon HD 2600 X2, but the card never found its way into our labs and was quickly superseded by the Radeon HD 3870. Nvidia was somewhat more successful with the GeForce 7950 GX2, an odd dual-PCB card that was a key component of its Quad SLI scheme.

AMD’s pitch for the 3870 X2 sounds pretty good. The X2 should possess many of the Radeon HD 3870’s virtues—including DirectX 10.1 support and HD video decode acceleration—while packing a heckuva punch. In fact, AMD claims the X2 offers higher performance, superior acoustics, and lower power draw than two Radeon HD 3870 cards in a CrossFire pairing. That should be good enough, they say, for gaming in better-than-HD resolutions.



The Radeon HD 3870 X2



The X2 (left) next to the GeForce 8800 Ultra (right)

At 10.5″, the X2 card is every bit as long as a GeForce 8800 Ultra and, in fact, is longer than most motherboards are deep. A card this long may create clearance problems in some systems, especially if the mobo has poorly placed SATA ports. Unlike regular Radeon HD 3870 cards, the X2 has only a single CrossFire connector onboard, presumably because attaching more than two of these puppies together would be borderline crazy.

The X2 has two auxiliary power plugs onboard, one of the older six-pin variety and another of the newer eight-pin sort. The card will work fine with a six-pin plug in that eight-pin port, but you’ll need to use an eight-pin connector in order to unlock the Overdrive overclocking utility in AMD’s control panel.



The back of the card shows telltale signs of two GPUs



The X2’s cooler. ’tain’t small.

Remove about 19 screws, and you can pull off the X2’s massive cooler for a look beneath, where things really start to get interesting. The first thing you’ll notice, no doubt, is the fact that the two GPU heatsinks are made from different materials: one copper and one aluminum. Reminds me of a girl I dated briefly in high school whose eyes were different colors.

Hey, I said “briefly.”

I think it’s a good thing both heatsinks aren’t copper, because the X2’s massive cooler is heavy enough as it is.

And the X2 has much to cool. AMD rates the X2’s peak power draw at 196W. However, the Radeon HD 3870 GPUs on this beast do have some power-saving measures, including a “light gaming” mode for less intensive 3D applications. With that mode in use, the X2’s rated power draw is 110W.

With all of the talk lately about hybrid graphics schemes, the X2 might seem to have a built-in opportunity for power savings in the form of simply turning off one of its two GPUs at idle. I asked AMD about this possibility, though, and they said the latency involved in powering down and up a whole GPU was too great to make such a scheme feasible here. Ah, well.

Under the hood

As I said, the X2 becomes more interesting under the hood. Here’s what you’ll find there.



Dual GPUs flank the X2’s PCI Express switch chip

The two chips on the left and the right are Radeon HD 3870 chips, also known as RV670 GPUs. The X2’s two GPUs flip bits at a very healthy frequency of 825MHz, which is up 50MHz from the Radeon HD 3870. Each GPU has a 256-bit interface to a 512MB bank of memory. That gives the X2 a total of 1GB of RAM on the card, but since the memory is split between two GPUs, the card’s effective memory size is still 512MB. The X2’s GDDR3 memory runs at 900MHz, somewhat slower than the 1125MHz GDDR4 memory on the single 3870. All in all, this arrangement should endow the X2 with more GPU power than a pair of 3870 cards in CrossFire but less total memory bandwidth. AMD says its partners are free to use GDDR4 memory on an X2 board if they so choose, but we’re not aware of any board makers who plan to do so.
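To put rough numbers on that bandwidth trade-off, here’s a quick back-of-the-envelope sketch in Python. The formula is the standard one (bus width times memory clock times two transfers per clock); the clock and bus-width figures are the ones quoted above.

```python
# Back-of-the-envelope memory bandwidth math for the X2 versus a 3870 CrossFire pair.
def gpu_bandwidth_gbps(bus_width_bits, mem_clock_mhz):
    """Peak bandwidth for one GPU in GB/s: bytes per transfer * clock * 2 (DDR)."""
    return (bus_width_bits / 8) * (mem_clock_mhz * 1e6) * 2 / 1e9

x2_per_gpu  = gpu_bandwidth_gbps(256, 900)    # GDDR3 at 900MHz
hd3870_card = gpu_bandwidth_gbps(256, 1125)   # GDDR4 at 1125MHz

print(f"X2, per GPU:           {x2_per_gpu:.1f} GB/s")        # 57.6 GB/s
print(f"X2, both GPUs:         {2 * x2_per_gpu:.1f} GB/s")    # 115.2 GB/s
print(f"3870 CrossFire, total: {2 * hd3870_card:.1f} GB/s")   # 144.0 GB/s

# Each GPU mirrors the same working set in its own 512MB bank, which is why the
# card's 1GB of physical memory still behaves like 512MB in practice.
```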

By the way, in a confusing bit of roadmappery, the X2’s code-name is R680, although it’s really just a couple of RV670s glued together.



The PLX PEX 8547 PCIe switch

The “glue,” in this case, is a PCI Express switch chip: PLX’s PEX 8547. Here’s how PLX’s website describes it:

The ExpressLane™ PEX 8547 device offers 48 PCI Express lanes, capable of configuring up to 3 flexible ports. The switch conforms to the PCI Express Base Specification, rev 1.1. This device enables users to add scalable high bandwidth, non-blocking interconnects to high-end graphics applications. The PEX 8547 is designed to support graphics or data aggregation while supporting peer-to-peer traffic for high-resolution graphics applications. The architecture supports packet cut-thru with the industry’s lowest latency of 110ns (x16 to x16). This, combined with large packet memory (1024 byte maximum payload size) and non-blocking internal switch architecture, provide full line-rate on all ports for performance-hungry graphics applications. The PEX 8547 is offered in a 37.5 x 37.5mm² 736-ball PBGA package. This device is available in lead-free packaging.

Order yours today!

This switch chip sounds like it’s pretty much tailor-made for the X2, where it sits between the two GPUs and the PCIe x16 graphics slot. Two of its ports attach to the X2’s two GPUs, with 16 lanes of connectivity each. The remaining 16 lanes connect to the rest of the system via the PCIe x16 slot. Since PCI Express employs a packet-based data transmission scheme, both GPUs ought to have reasonably good access to the rest of the system when connecting through a switch like this one, even though neither GPU has a dedicated 16-lane link to the rest of the system. That said, the X2’s two GPUs are really no more closely integrated than a pair of Radeon HD 3870 cards in CrossFire.
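For a sense of the bandwidth involved, here’s a rough sketch. It assumes the usual PCIe 1.1 figure of about 250MB/s per lane in each direction after 8b/10b encoding overhead; the lane counts are the ones described above.

```python
# Rough look at the PEX 8547 arrangement: 48 PCIe 1.1 lanes split into three x16
# ports -- one per GPU, plus a single shared uplink to the motherboard's x16 slot.
PCIE1_GBPS_PER_LANE = 0.25   # ~250MB/s per lane, per direction, after 8b/10b encoding

def link_bandwidth_gbps(lanes):
    """Peak one-way bandwidth of a PCIe 1.1 link in GB/s."""
    return lanes * PCIE1_GBPS_PER_LANE

print(f"Each GPU's link to the switch: {link_bandwidth_gbps(16):.1f} GB/s per direction")
print(f"Shared uplink to the host:     {link_bandwidth_gbps(16):.1f} GB/s per direction")

# If both GPUs hit system memory at once, they split that one 4 GB/s uplink between
# them; a PCIe 2.0 switch would have doubled it, which is the trade-off AMD accepted
# to get the X2 out the door.
```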

Also, as you may know, the Radeon HD 3870 GPU itself is capable of supporting PCIe version 2.0, the revved-up version of the standard that offers roughly twice the bandwidth of PCIe 1.1. The PLX switch chip, however, doesn’t support PCIe 2.0. AMD says it chose to go with PCIe 1.1 in order to get the X2 to market quickly. I don’t imagine 48-lane PCIe 2.0 switch chips are simply falling from the sky quite yet, and the PCIe 1.1 arrangement was already familiar technology, since AMD used it in the Radeon HD 2600 X2.

Incidentally, the PLX bridge chip adds about 10-12W of power draw to the X2.

AMD plans to make Radeon HD 3870 X2 cards available via online resellers starting today, with an initial price tag of $449. At that rate, the X2 should cost slightly less than two Radeon HD 3870s at current street prices.

The dual GPU issue

Multi-GPU video cards aren’t exactly a new thing, of course. They’ve been around almost since the beginning of consumer 3D graphics cards. But multi-GPU cards and schemes have a storied history of driver issues and support problems that we can’t ignore when assessing the X2. Take Nvidia’s quad SLI, for instance: it’s still not supported in Windows Vista. Lest you think we’re counting Nvidia’s problems against AMD, though, consider the original dual-GPU orphan: the Rage Fury MAXX, a pre-Radeon dual-GPU card from ATI that never did work right in Windows 2000. The prophets were right: All of this has happened before, and will again.

The truth is that multi-GPU video cards are weird, and that creates problems. At best, they suffer from all of the same problems as any multi-card SLI or CrossFire configuration (with the notable exception that they don’t require a specific core-logic chipset on the motherboard in order to work). Those problems are pretty well documented at this point, starting with performance scaling issues.

Dual-GPU schemes work best when they can distribute the load by assigning one GPU to handle odd frames and the other to handle the even ones, a technique known as alternate-frame rendering, or AFR. AFR provides the best performance scaling of any load-balancing technique, up to nearly twice the performance of a single GPU when all goes well, but it doesn’t always work correctly. Some games simply break when AFR is enabled, so the graphics subsystem may have to fall back to split-frame rendering (which is just what it sounds like: one GPU takes the top half of the screen and the other takes the bottom half) or some other method in order to maintain compatibility. These other load-balancing methods deliver sometimes underwhelming results.
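For the sake of illustration, here’s a tiny sketch of the two load-balancing modes just described. This is not AMD’s or Nvidia’s driver code, just the frame- and screen-splitting logic in its simplest form.

```python
# Alternate-frame rendering: whole frames alternate between the two GPUs.
def assign_afr(frame_number):
    return frame_number % 2   # even frames -> GPU 0, odd frames -> GPU 1

# Split-frame rendering: each GPU takes a horizontal slice of every frame.
def assign_sfr(screen_height):
    split = screen_height // 2
    return {0: (0, split), 1: (split, screen_height)}

for frame in range(4):
    print(f"frame {frame} -> GPU {assign_afr(frame)}")
print(assign_sfr(1600))   # {0: (0, 800), 1: (800, 1600)}

# AFR scales best because each GPU renders complete frames independently, but it
# falls apart when one frame depends on data generated while rendering the last one.
```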

Things grow even more complicated from there, but the basic reality is this: GPU makers have to keep a database of games in their drivers and provide hints about what load-balancing method—and perhaps what workarounds for scaling or compatibility problems—should be used for each application. In other words, before the X2’s two GPUs can activate their Wonder Twin powers, AMD’s drivers may have to be updated with a profile for the game you’re playing. The same is true for a CrossFire rig. The game may have to be patched to support multi-GPU rendering, as well, as Crysis was recently. (And heck, Vista needs several patches in order to support multi-GPU rigs properly.) Getting those updates tends to take time—weeks, if not months.
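The profile database amounts to a lookup table keyed on the game, with a conservative fallback when no entry exists yet. The sketch below is purely hypothetical; the entries and mode names are invented for illustration, not taken from AMD’s drivers.

```python
# Hypothetical per-game profile table; real drivers ship something far more detailed.
MULTI_GPU_PROFILES = {
    "ut3.exe":    "AFR",         # scales well with alternate-frame rendering
    "crysis.exe": "AFR",         # only after the game itself was patched
    "legacy.exe": "SINGLE_GPU",  # compatibility fallback: second GPU sits idle
}

def pick_render_mode(executable):
    """Fall back to a safe single-GPU mode when no profile exists yet."""
    return MULTI_GPU_PROFILES.get(executable.lower(), "SINGLE_GPU")

print(pick_render_mode("UT3.exe"))       # AFR
print(pick_render_mode("BrandNew.exe"))  # SINGLE_GPU, until a driver update adds a profile
```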

One would think that game developers and GPU makers would work together more diligently to ensure a good out-of-the-box experience for gamers with SLI or CrossFire, but the reality is that multi-GPU support is almost inescapably a lower priority for both parties than just getting the single-GPU functionality right—and getting the game shipped. The upshot of this situation is that avid gamers may find they’ve already finished playing through the hottest new games before they’re able to bring a second GPU to bear on the situation. Such was my experience with Crysis.

Generally speaking, Nvidia has done a better job on this front lately than AMD. You’ll see the Nvidia logo and freaky theme-jingle-thingy during the startup phase of many of the latest games, and usually, that means the game developer has worked with Nvidia to optimize for GeForce GPUs and hopefully SLI, as well. That said, AMD has been better about supporting its multi-GPU scheme during OS transitions, such as the move to Vista or to 64-bit versions of Windows.

This one-upsmanship reached new heights of unintentional comedy this past week when Nvidia sent us a note talking down the Radeon HD 3870 X2 since multi-GPU cards often have compatibility problems. It ended with this sentiment: besides, wait until you see our upcoming GX2! Comedy gold.

The problem is, neither company has made multi-GPU support as seamless as it should be, for different reasons. Both continue to pursue this technology direction, but I remain skeptical about the value of a card like the X2 given the track record here. Going with a multi-GPU card to serve the high end of the market is something of a risk for AMD, but one that probably makes sense given the relative size of the market. Whether it makes sense for you to buy one, however, is another question.

How the X2 handles

I am pleased to report that AMD has made a breakthrough in removing one of the long-standing limitations of multi-GPU configs: poor multi-monitor support. Traditionally, with SLI or CrossFire enabled, a PC could only drive a single display—a crazy limitation that seems counter-intuitive on its face. If you had two displays, you’d have to go into the control panel and enable CrossFire, thus disabling one monitor, before starting up the game. You’d then have to reverse the process when finished playing. Total PITA.

Happily, though, AMD’s newer drivers pretty much blow that limitation away. They’ve delivered the “seamless” multi-monitor/multi-GPU support AMD promised at the time of the Radeon HD 3870 launch. Here’s what I was able to do as a result. I connected a pair of monitors—a 30″ LCD with a dual-link DVI port and an analog CRT—to a Radeon HD 3870 CrossFire rig and enabled CrossFire. Both displays continued to show the Vista desktop with CrossFire enabled. Then I ran UT3 and played full-screen on the LCD. The CRT continued to show the Vista desktop just fine. Next, I switched UT3 into windowed mode, without exiting the game, and the transition went smoothly, with both displays active. Finally, I dragged the UT3 window to span both displays, and it continued to render everything perfectly. Here’s a screenshot, wildly shrunken, of my desktop session. Its original dimensions were 2048×1536 plus 2560×1600, with the UT3 window at 2560×1600.



UT3 spans two displays with CrossFire enabled

That, friends, is what I’m talking about. I tried the same thing on the 3870 X2, and it worked just as well. By contrast, enabling SLI simply caused the secondary display to be disabled immediately.

This multi-display seamlessness does have an unhappy consequence, though. Since multi-monitor configs work seamlessly, AMD seems to think it can get away with essentially “hiding” the X2’s CrossFire-based nature. The Catalyst Control Center offers no indication that CrossFire is enabled on an X2 card, and there’s no switch to disable CrossFire. That stings. I’ve worked with multi-GPU setups quite a bit, and I wouldn’t want to own one without having the option of turning it off if it caused problems.



Super AA 16X is available on the X2

The one indication you’ll get on the X2 that CrossFire is involved is the extension of the antialiasing slider up to 16X. This 16X setting triggers CrossFire’s Super AA mode, a method of multi-GPU load distribution via cooperative AA sampling. This is a nice option to have, but in my book, Super AA isn’t the best option for load sharing or for high-quality antialiasing. I’d much rather use 4X multisampling combined with one of the Radeon HD 3870’s custom tent filters.

AMD is running behind on another of its promises from the Radeon HD 3870 launch, unfortunately, and that affects the X2 directly. At the time, the firm said it would be providing drivers for “CrossFire X” in January (that’s now) to enable CrossFire with three or four GPUs. However, AMD has delayed those drivers until March as it works to get adequate performance scaling out of more than two GPUs. That’s no small challenge, so I wouldn’t be surprised to see the schedule for the drivers slip further. Eventually, AMD expects the 3870 X2 to be able to participate in a couple of different CrossFire X configurations: dual X2 cards forming a quad-GPU phalanx, or one X2 card hooking up with one regular 3870 for some three-way action. I’m curious to see how well this will work once the drivers arrive.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core 2 Extreme X6800 2.93GHz (both systems)
System bus: 1066MHz (266MHz quad-pumped)
Motherboards: Gigabyte GA-X38-DQ6 (BIOS revision F7) and XFX nForce 680i SLI (BIOS revision P31)
North bridge: X38 MCH / nForce 680i SLI SPP
South bridge: ICH9R / nForce 680i SLI MCP
Chipset drivers: INF update 8.3.1.1009 with Matrix Storage Manager 7.8 (Intel) / ForceWare 15.08 (Nvidia)
Memory size: 4GB (4 DIMMs)
Memory type: 2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz
Memory timings: CAS latency (CL) 4, RAS to CAS delay (tRCD) 4, RAS precharge (tRP) 4, cycle time (tRAS) 18, 2T command rate
Audio: integrated ICH9R/ALC889A / nForce 680i SLI/ALC850, both with RealTek 6.0.1.5497 drivers

Graphics:
- Radeon HD 3870 512MB PCIe with 8.451.2-080116a-057935E drivers
- Dual Radeon HD 3870 512MB PCIe (CrossFire) with 8.451.2-080116a-057935E drivers
- Radeon HD 3870 X2 1GB PCIe with 8.451.2-080116a-057935E drivers
- GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers
- Dual GeForce 8800 GT 512MB PCIe (SLI, on the nForce 680i SLI system) with ForceWare 169.28 drivers
- EVGA GeForce 8800 GTS 512MB PCIe with ForceWare 169.28 drivers
- GeForce 8800 Ultra 768MB PCIe with ForceWare 169.28 drivers

Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x86 Edition
OS updates: KB936710, KB938194, KB938979, KB940105, KB945149, DirectX November 2007 Update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The basic metrics

I’ll admit I’ve been holding back on you about just how powerful the X2 really is. If you consider it as a single card, it has substantially more theoretical peak throughput, in nearly every major category, than any of its competitors. Here’s a look at the basic math.

                       Peak pixel    Peak bilinear     Peak bilinear FP16   Peak memory   Peak shader
                       fill rate     texel filtering   texel filtering      bandwidth     arithmetic
                       (Gpixels/s)   rate (Gtexels/s)  rate (Gtexels/s)     (GB/s)        (GFLOPS)
GeForce 8800 GT        9.6           33.6              16.8                 57.6          504
GeForce 8800 GTS       10.0          12.0              12.0                 64.0          346
GeForce 8800 GTS 512   10.4          41.6              20.8                 62.1          624
GeForce 8800 GTX       13.8          18.4              18.4                 86.4          518
GeForce 8800 Ultra     14.7          19.6              19.6                 103.7         576
Radeon HD 2900 XT      11.9          11.9              11.9                 105.6         475
Radeon HD 3850         10.7          10.7              10.7                 53.1          429
Radeon HD 3870         12.4          12.4              12.4                 72.0          496
Radeon HD 3870 X2      26.4          26.4              26.4                 115.2         1056

In terms of pixel fill rate, filtering of FP16-format textures, memory bandwidth, and shader arithmetic, the Radeon HD 3870 X2 leads all other “single” graphics cards on the market today. Most notably, perhaps, it’s the first graphics card to pack more than a teraflop of shader power. (And I’m using the more optimistic set of shader arithmetic numbers for the GeForce 8800 cards; another way of counting would cut their numbers by a third.) So long as the X2 can scale well to two GPUs and use the power available to it, it should be the fastest card of the bunch.
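For the curious, those peaks fall straight out of the unit counts and clock speeds. The quick check below assumes the commonly cited configuration of 320 stream processors, 16 texture units, and 16 ROPs per RV670, and 112 stream processors at a 1.5GHz shader clock for the 8800 GT.

```python
# Sanity-checking a few of the table's peak numbers from unit counts and clocks.
def gflops(alus, flops_per_alu_per_clock, clock_ghz):
    return alus * flops_per_alu_per_clock * clock_ghz

# Radeon HD 3870 X2: two RV670s at 825MHz, each ALU counted as one multiply-add (2 flops).
x2_gflops     = 2 * gflops(320, 2, 0.825)   # 1056.0 GFLOPS
x2_pixel_fill = 2 * 16 * 0.825              # ROPs * clock = 26.4 Gpixels/s
x2_texel_rate = 2 * 16 * 0.825              # texture units * clock = 26.4 Gtexels/s

# GeForce 8800 GT, counted optimistically as MAD + MUL (3 flops per clock)...
gt_optimistic = gflops(112, 3, 1.5)         # 504.0 GFLOPS
# ...or conservatively as MAD only, which cuts the number by a third.
gt_mad_only   = gflops(112, 2, 1.5)         # 336.0 GFLOPS

print(x2_gflops, x2_pixel_fill, x2_texel_rate, gt_optimistic, gt_mad_only)
```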

I’ve chosen to test the X2 against a range of different graphics options, as you’ll see in our first set of synthetic benchmark results below. Perhaps the most direct competition for the X2 is the GeForce 8800 Ultra. Ultras typically cost quite a bit more than the X2’s $449 price tag, but board makers offer a number of “overclocked in the box” GeForce 8800 GTX cards like this one with near-Ultra clock speeds. The stock-clocked 8800 GTX, meanwhile, has essentially been rendered obsolete by the GeForce 8800 GTS 512, which offers very similar performance at a much lower price, as we found in our review. As a result, we’ve allowed a stock-clocked Ultra to be our sole G80-based representative.

The single-textured fill rate test is essentially limited by memory bandwidth, and the only solution that can top the X2 on that count among our group is the Radeon HD 3870 in CrossFire. If you’ll recall, the 3870 has a slower GPU clock but a higher memory clock thanks to its use of GDDR4 memory.

When we get to multitextured fill rate, the X2 looks to be limited by the texturing capacity of its GPUs. The cards based on Nvidia’s G92 don’t reach anything close to their (rather staggering) theoretical peak filtering capacities here, but the GeForce 8800 GTS 512 and the 8800 GT in SLI manage to outdo the X2.

The X2 takes the top spot in three of the four simple shader tests in 3DMark06. I have no idea how the Radeon HD 3870 in CrossFire is able to produce much more than double the vertex throughput of a single GPU, but the voodoo magic seems to rub off on the X2, as well.

Anyhow, those synthetic tests give us a sense of the lay of the land, but they’re by no means the final word. Let’s look at game performance.

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. Since this is a high-end graphics card we’re testing, we enabled 4X antialiasing and 16X anisotropic filtering and turned up the game’s texture and image quality settings to their limits.

Thanks to a recent change to Nvidia’s drivers, we were (finally!) able to test at 1680×1050, even though that’s not our display’s native resolution. Consequently, we’ve chosen to test at 1680×1050, 1920×1200, and 2560×1600—resolutions of roughly 1.8, 2.3, and 4.1 megapixels—to see how performance scales.
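A quick pixel-count check on those three resolutions:

```python
# Megapixel counts behind the three test resolutions.
for w, h in [(1680, 1050), (1920, 1200), (2560, 1600)]:
    print(f"{w}x{h}: {w * h / 1e6:.1f} megapixels")
# 1680x1050: 1.8 MP, 1920x1200: 2.3 MP, 2560x1600: 4.1 MP
```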

The X2 handily outperforms the GeForce 8800 Ultra at 1680×1050 and 1920×1200, but the contest tightens up at 2560×1600, where the multi-GPU configs seem to run into a wall—probably with available memory, since multi-GPU operation does bring some overhead. The GeForce 8800 GT in SLI trounces the X2 at the two lower resolutions, but stumbles badly at four megapixels.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested Episode Two with the in-game image quality options cranked, with 4X AA and 16X anisotropic filtering. HDR lighting and motion blur were both enabled.

The X2 again takes it to the Ultra, proving itself the fastest single-card solution around. However, the GeForce 8800 GT in SLI runs away and hides here, and it’s the only solution we tested capable of hitting 60 FPS at our peak resolution. If you’re willing to run an Nvidia chipset and sacrifice an extra PCIe x16 slot—both big “ifs,” especially the chipset thing—a couple of 8800 GTs in SLI can put the X2 to shame.

Enemy Territory: Quake Wars

We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though. Shadows, soft particles, and smooth foliage were enabled. Again, we used a custom timedemo recorded for use in this review.

This one’s close, but the Ultra edges out the X2 overall in Quake Wars. True to its billing, the X2 again outperforms a couple of Radeon HD 3870s in CrossFire, at least.

Crysis

Rather than test a range of resolutions in Crysis, we tried several different settings, all using the game’s “high” image quality options. Crysis has another set of “very high” quality options, but those pretty much bog down any GPU combination we’ve yet thrown at them. In fact, “high” is pretty demanding, so we’ve scaled back the resolution some, too. We tested performance using the benchmark script Crytek supplies with the game.

Also, the results you see below for the Radeons come from a newer graphics driver, version 8.451-2-080123a, than the ones we used for the rest of our tests. This newer driver improved Crysis performance noticeably over the older one, both in benchmarks and when playing the game.

The X2 tops all single-card solutions at 1680×1050, but it’s not quite as quick as the Ultra at 1280×800 with 4X AA (nor is anything else). Interestingly enough, this is one application where the Radeon HD 3870 CrossFire config’s superior memory bandwidth seems to give it an edge over the X2.

Unreal Tournament 3

We tested UT3 by playing a deathmatch against some bots and recording frame rates during 60-second gameplay sessions using FRAPS. This method has the advantage of duplicating real gameplay, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
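Here’s a small sketch of how the five FRAPS sessions boil down to the two numbers we report per card; the session figures are made up purely for the example.

```python
# Reducing five FRAPS sessions to a reported average FPS and a median low FPS.
from statistics import mean, median

sessions = [  # invented example data: (average FPS, lowest FPS) per 60-second session
    (74.2, 41), (71.8, 38), (76.5, 44), (73.1, 40), (75.0, 43),
]

reported_avg = mean(avg for avg, _ in sessions)
reported_low = median(low for _, low in sessions)   # median damps one-off outlier lows

print(f"average FPS: {reported_avg:.1f}, median low FPS: {reported_low}")
```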

Because UT3 doesn’t support multisampled antialiasing, we tested without AA. Instead, we just cranked up the resolution to 2560×1600 and turned up the game’s quality sliders to the max. I also disabled the game’s frame rate cap before testing.

Here, the X2 produces a higher average frame rate than the Ultra, but its median low frame rate is almost the same. Amazingly enough, nearly all of these cards can run UT3 acceptably at this crazy resolution. Bottom line: UT3 performance shouldn’t be a problem for you with the X2.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running UT3 at 2560×1600 resolution, using the same settings we did for performance testing.

Note that the GeForce 8800 GT SLI config was, by necessity, tested on a different motherboard, as noted on our Testing Methods page.

Personally, I think the X2’s power consumption at idle is impressively low. Although it has two GPUs and two 512MB banks of memory, our X2-based system draws only 12W more at idle than the GeForce 8800 GTS 512-based one. The 8800 Ultra-based system draws over 20W more. Under load, the situation changes, and the X2-based system draws quite a bit of power—more than the Ultra-based system and more than the 8800 GT SLI system, which is easily the better performer.

Notice, also, that the X2 doesn’t quite live up to AMD’s claims. It does draw a little more power than two Radeon HD 3870s in CrossFire, both at idle and under load.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the stock Intel cooler we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Unfortunately—or, rather, quite fortunately—I wasn’t able to reliably measure noise levels for any of these cards at idle. Our test systems keep getting quieter with the addition of new power supply units and new motherboards with passive cooling and the like, as do the video cards themselves. I decided this time around that our test rigs at idle are too close to the sensitivity floor for our sound level meter, so I only measured noise levels under load.

What you should take from these results is that, at least on our open test bench, the X2 was perceptibly louder than any of the other graphics solutions we tested. That’s definitely what my ears told me. And yet, I really can’t complain too much about the X2’s noise levels. The cooler can be loud when the system is first powered on, but that’s it. During normal gaming, like the scenario we tested here, the card really isn’t very loud. It’s at least not into the “annoying” range for me. We’re talking loud-whisper level here.

GPU temperatures

Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. In the case of multi-GPU configs, well, I only got one number out of CCC. I used the higher of the two numbers from the Nvidia monitoring app. These temperatures were recorded while running UT3 in a window.

There you have ’em. Kind of an interesting complement to the noise and power numbers, I think. I could live with a little more noise and a little less heat from the 8800 GT and the Radeon HD 3870s in CrossFire.

Conclusions

Well, you’ve seen the results, and you’ve heard my thoughts about the perils of multi-GPU performance scaling, driver updates, and the like. You can probably draw your own conclusions at this point, so I’ll keep it brief.

Obviously, AMD has captured the title of “fastest single graphics card” with the Radeon HD 3870 X2. There is much to like here. The X2 is generally faster than the GeForce 8800 Ultra, and it has HD video decode acceleration that Nvidia’s older G80 GPU lacks. In all, the X2 looks to be a pretty good value in a single-card, high-end solution at $449. The X2 does draw more power and generate more noise under load than the GeForce 8800 Ultra, but it’s not unacceptable on either front for a card in this class. And the X2’s seamless multi-monitor support is the icing on the cake. I’m really pleased to see that working so well.

That said, the X2’s title is by no means undisputed. Nvidia’s best alternatives to the X2 are based on the newer G92 GPU and have support for H.264 decode acceleration themselves. For just a little more money, a pair of GeForce 8800 GT cards in SLI will outperform the X2, usually by a healthy margin. Those 8800 GTs will come with many of the same multi-GPU caveats as the X2, however, plus additional ones about requiring two PCIe x16 slots and an Nvidia chipset. Anyone hoping to sidestep those worries can hardly afford to ignore the solid value presented by the GeForce 8800 GTS 512 at around $300. The GTS 512’s performance isn’t far from the X2’s in many cases. Unless you really are planning on driving a four-megapixel display, a card like the GTS 512 will probably feed your appetite for eye candy quite well in the vast majority of today’s games.

We do have a new champ today, though, and it’s from AMD. Nice to see the title changing hands again, even if it took a dually to do it.
