AMD’s Radeon HD 3870 X2 video card

So here we are, knee-deep in yet another chapter of the ongoing struggle for graphics supremacy between the Radeon camp and the GeForce camp, between the red team and the green team, between ATI—now AMD—and Nvidia. In the past few months, the primary battlefront in this war has shifted away from the traditional slug-fest between ridiculously expensive video cards and into the range between $200 and $300—an unusual development, but a welcome one. Nvidia has captured the middle to upper end of that price range with the GeForce 8800 GT, a mighty performer whose sole liability was the initial difficulty many encountered in finding one in stock at their favorite online store. Meanwhile, AMD offers a capable substitute in the form of the Radeon HD 3870, which isn’t quite as quick as an 8800 GT but is still a solid value.

In short, everybody’s a winner. And we can’t have that now, can we?

Nvidia has held the overall graphics performance crown uninterrupted since the fall of 2006, when it introduced the GeForce 8800 GTX, and the only notable change since then was when the green team added a few additional jewels to its crown and called it the GeForce 8800 Ultra. The folks at AMD have no doubt been fidgeting nervously over the past year-plus—chewing on the tips of their pens, tapping their fingers on their desks, checking and re-checking the value of their stock options—waiting for the chance to recapture the performance lead. Frustratingly, no single Radeon HD GPU would do it, not the 2900 XT, and not the 3870.

But who says you need a single GPU? This is where the Radeon HD 3870 X2 comes into the picture. On the X2, two Radeon HD 3870 GPUs gang up together to take on any GeForce 8800 available. Will they succeed? Let’s find out.

The X2 up close

The idea behind the Radeon HD 3870 X2 is simple: to harness the power of two GPUs via a multi-GPU scheme like CrossFire or SLI in order to make a faster single-card solution than would otherwise be possible. AMD did this same thing in its last generation with the Radeon HD 2600 X2, but the card never found its way into our labs and was quickly usurped by the Radeon HD 3870. Nvidia was somewhat more successful with the GeForce 7950 GX2, an odd dual-PCB card that was a key component of its Quad SLI scheme.

AMD’s pitch for the 3870 X2 sounds pretty good. The X2 should possess many of the Radeon HD 3870’s virtues—including DirectX 10.1 support and HD video decode acceleration—while packing a heckuva punch. In fact, AMD claims the X2 offers higher performance, superior acoustics, and lower power draw than two Radeon HD 3870 cards in a CrossFire pairing. That should be good enough, they say, for gaming in better-than-HD resolutions.



The Radeon HD 3870 X2



The X2 (left) next to the GeForce 8800 Ultra (right)

At 10.5″, the X2 card is every bit as long as a GeForce 8800 Ultra and, in fact, is longer than most motherboards are deep. A card this long may create clearance problems in some systems, especially if the mobo has poorly placed SATA ports. Unlike regular Radeon HD 3870 cards, the X2 has only a single CrossFire connector onboard, presumably because attaching more than two of these puppies together would be borderline crazy.

The X2 has two auxiliary power plugs onboard, one of the older six-pin variety and another of the newer eight-pin sort. The card will work fine with a six-pin plug in that eight-pin port, but you’ll need to use an eight-pin connector in order to unlock the Overdrive overclocking utility in AMD’s control panel.



The back of the card shows telltale signs of two GPUs



The X2’s cooler. ’tain’t small.

Remove about 19 screws, and you can pull off the X2’s massive cooler for a look beneath, where things really start to get interesting. The first thing you’ll notice, no doubt, is the fact that the two GPU heatsinks are made from different materials: one copper and one aluminum. Reminds me of a girl I dated briefly in high school whose eyes were different colors.

Hey, I said “briefly.”

I think it’s a good thing both heatsinks aren’t copper, because the X2’s massive cooler is heavy enough as it is.

And the X2 has much to cool. AMD rates the X2’s peak power draw at 196W. However, the Radeon HD 3870 GPUs on this beast do have some power-saving measures, including a “light gaming” mode for less intensive 3D applications. With that mode in use, the X2’s rated power draw is 110W.

With all of the talk lately about hybrid graphics schemes, the X2 might seem to have a built-in opportunity for power savings in the form of simply turning off one of its two GPUs at idle. I asked AMD about this possibility, though, and they said the latency involved in powering down and up a whole GPU was too great to make such a scheme feasible here. Ah, well.

Under the hood

As I said, the X2 becomes more interesting under the hood. Here’s what you’ll find there.



Dual GPUs flank the X2’s PCI Express switch chip

The two chips on the left and the right are Radeon HD 3870 chips, also known as RV670 GPUs. The X2’s two GPUs flip bits at a very healthy frequency of 825MHz, which is up 50MHz from the Radeon HD 3870. Each GPU has a 256-bit interface to a 512MB bank of memory. That gives the X2 a total of 1GB of RAM on the card, but since the memory is split between two GPUs, the card’s effective memory size is still 512MB. The X2’s GDDR3 memory runs at 900MHz, somewhat slower than the 1125MHz GDDR4 memory on the single 3870. All in all, this arrangement should endow the X2 with more GPU power than a pair of 3870 cards in CrossFire but less total memory bandwidth. AMD says its partners are free to use GDDR4 memory on an X2 board if they so choose, but we’re not aware of any board makers who plan to do so.
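
If you’d like to check the math on those bandwidth figures, here’s the arithmetic in a few lines of Python. This is my own back-of-the-envelope sketch worked from the clocks and bus widths above, not anything out of AMD’s reviewer’s guide.

```python
# Peak memory bandwidth: GDDR moves data on both clock edges, and a
# 256-bit bus transfers 32 bytes per beat.
def bandwidth_gbps(mem_clock_mhz, bus_width_bits=256):
    return mem_clock_mhz * 1e6 * 2 * (bus_width_bits / 8) / 1e9

per_gpu_x2     = bandwidth_gbps(900)    # 57.6 GB/s per RV670 on the X2
total_x2       = 2 * per_gpu_x2         # 115.2 GB/s across both GPUs
single_3870    = bandwidth_gbps(1125)   # 72.0 GB/s for a GDDR4-equipped 3870
crossfire_3870 = 2 * single_3870        # 144.0 GB/s for a CrossFire pair

print(per_gpu_x2, total_x2, single_3870, crossfire_3870)
```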

By the way, in a confusing bit of roadmappery, the X2’s code-name is R680, although it’s really just a couple of RV670s glued together.



The PLX PEX 8547 PCIe switch

The “glue,” in this case, is a PCI Express switch chip: PLX’s PEX 8547. Here’s how PLX’s website describes the chip:

The ExpressLane™ PEX 8547 device offers 48 PCI Express lanes, capable of configuring up to 3 flexible ports. The switch conforms to the PCI Express Base Specification, rev 1.1. This device enables users to add scalable high bandwidth, non-blocking interconnects to high-end graphics applications. The PEX 8547 is designed to support graphics or data aggregation while supporting peer-to-peer traffic for high-resolution graphics applications. The architecture supports packet cut-thru with the industry’s lowest latency of 110ns (x16 to x16). This, combined with large packet memory (1024 byte maximum payload size) and non-blocking internal switch architecture, provide full line-rate on all ports for performance-hungry graphics applications. The PEX 8547 is offered in a 37.5 x 37.5mm² 736-ball PBGA package. This device is available in lead-free packaging.

Order yours today!

This switch chip sounds like it’s pretty much tailor-made for the X2, where it sits between the two GPUs and the PCIe x16 graphics slot. Two of its ports attach to the X2’s two GPUs, with 16 lanes of connectivity each. The remaining 16 lanes connect to the rest of the system via the PCIe x16 slot. Since PCI Express employs a packet-based data transmission scheme, both GPUs ought to have reasonably good access to the rest of the system when connecting through a switch like this one, even though neither GPU has a dedicated 16-lane link to the rest of the system. That said, the X2’s two GPUs are really no more closely integrated than a pair of Radeon HD 3870 cards in CrossFire.

Also, as you may know, the Radeon HD 3870 GPU itself is capable of supporting PCIe version 2.0, the revved-up version of the standard that offers roughly twice the bandwidth of PCIe 1.1. The PLX switch chip, however, doesn’t support PCIe 2.0. AMD says it chose to go with PCIe 1.1 in order to get the X2 to market quickly. I don’t imagine 48-lane PCIe 2.0 switch chips are simply falling from the sky quite yet, and the PCIe 1.1 arrangement was already familiar technology, since AMD used it in the Radeon HD 2600 X2.
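
For a rough sense of what the older switch gives up, the standard per-lane rates work out like so. This is generic PCIe arithmetic, not a measurement of the X2 itself.

```python
# PCIe 1.1 signals at 2.5GT/s per lane and PCIe 2.0 at 5GT/s; 8b/10b
# encoding eats 20% of that, leaving 250MB/s and 500MB/s per lane,
# per direction, respectively.
def pcie_x16_gbps(gt_per_s, lanes=16, encoding_efficiency=0.8):
    return gt_per_s * encoding_efficiency / 8 * lanes

gen1 = pcie_x16_gbps(2.5)   # ~4 GB/s each way, shared by the X2's two GPUs
gen2 = pcie_x16_gbps(5.0)   # ~8 GB/s each way, had the switch been PCIe 2.0

print(gen1, gen2)
```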

Incidentally, the PLX bridge chip adds about 10-12W of power draw to the X2.

AMD plans to make Radeon HD 3870 X2 cards available via online resellers starting today, with an initial price tag of $449. At that rate, the X2 should cost slightly less than two Radeon HD 3870s at current street prices.

The dual GPU issue

Multi-GPU video cards aren’t exactly a new thing, of course. They’ve been around almost since the beginning of consumer 3D graphics cards. But multi-GPU cards and schemes have a storied history of driver issues and support problems that we can’t ignore when assessing the X2. Take Nvidia’s quad SLI, for instance: it’s still not supported in Windows Vista. Lest you think we’re counting Nvidia’s problems against AMD, though, consider the original dual-GPU orphan: the Rage Fury MAXX, a pre-Radeon dual-GPU card from ATI that never did work right in Windows 2000. The prophets were right: All of this has happened before, and will again.

The truth is that multi-GPU video cards are weird, and that creates problems. At best, they suffer from all of the same problems as any multi-card SLI or CrossFire configuration (with the notable exception that they don’t require a specific core-logic chipset on the motherboard in order to work). Those problems are pretty well documented at this point, starting with performance scaling issues.

Dual-GPU schemes work best when they can distribute the load by assigning one GPU to handle odd frames and the other to handle the even ones, a technique known as alternate-frame rendering, or AFR. AFR provides the best performance scaling of any load-balancing technique, up to nearly twice the performance of a single GPU when all goes well, but it doesn’t always work correctly. Some games simply break when AFR is enabled, so the graphics subsystem may have to fall back to split-frame rendering (which is just what it sounds like: one GPU takes the top half of the screen and the other takes the bottom half) or some other method in order to maintain compatibility. These other load-balancing methods sometimes deliver underwhelming results.
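
For illustration, here’s a minimal sketch of how those two load-balancing schemes divvy up the work. The function names are mine; this isn’t AMD’s or Nvidia’s actual driver logic.

```python
def afr_assign(frame_number, gpu_count=2):
    """Alternate-frame rendering: whole frames alternate between GPUs."""
    return frame_number % gpu_count

def sfr_assign(scanline, screen_height, gpu_count=2):
    """Split-frame rendering: each GPU owns one horizontal slice of a frame."""
    return min(scanline * gpu_count // screen_height, gpu_count - 1)

# AFR sends frames 0, 2, 4... to GPU 0 and frames 1, 3, 5... to GPU 1, which
# is why it can approach 2x scaling when frames don't depend on one another.
print([afr_assign(f) for f in range(6)])              # [0, 1, 0, 1, 0, 1]
print(sfr_assign(100, 1600), sfr_assign(1500, 1600))  # 0 1
```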

Things grow even more complicated from there, but the basic reality is this: GPU makers have to keep a database of games in their drivers and provide hints about what load-balancing method—and perhaps what workarounds for scaling or compatibility problems—should be used for each application. In other words, before the X2’s two GPUs can activate their Wonder Twin powers, AMD’s drivers may have to be updated with a profile for the game you’re playing. The same is true for a CrossFire rig. The game may have to be patched to support multi-GPU rendering, as well, as Crysis was recently. (And heck, Vista needs several patches in order to support multi-GPU rigs properly.) Getting those updates tends to take time—weeks, if not months.
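
Conceptually, that profile database amounts to something like the lookup below. The game names and profile fields here are hypothetical, purely for illustration; the real driver formats are proprietary.

```python
# Hypothetical per-game profile table: no entry means the driver plays it
# safe and leaves the second GPU idle for that title.
GAME_PROFILES = {
    "ut3.exe":     {"mode": "AFR"},                            # scales well
    "crysis.exe":  {"mode": "AFR", "needs_game_patch": True},  # patched for multi-GPU
    "oldgame.exe": {"mode": "SFR"},                            # AFR breaks here
}

def pick_mode(executable):
    profile = GAME_PROFILES.get(executable)
    return profile["mode"] if profile else "single-GPU"

print(pick_mode("crysis.exe"))   # AFR
print(pick_mode("newgame.exe"))  # single-GPU until a driver update adds a profile
```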

One would think that game developers and GPU makers would work together more diligently to ensure a good out-of-the-box experience for gamers with SLI or CrossFire, but the reality is that multi-GPU support is almost inescapably a lower priority for both parties than just getting the single-GPU functionality right—and getting the game shipped. The upshot of this situation is that avid gamers may find they’ve already finished playing through the hottest new games before they’re able to bring a second GPU to bear on the situation. Such was my experience with Crysis.

Generally speaking, Nvidia has done a better job on this front lately than AMD. You’ll see the Nvidia logo and freaky theme-jingle-thingy during the startup phase of many of the latest games, and usually, that means the game developer has worked with Nvidia to optimize for GeForce GPUs and hopefully SLI, as well. That said, AMD has been better about supporting its multi-GPU scheme during OS transitions, such as the move to Vista or to 64-bit versions of Windows.

This one-upmanship reached new heights of unintentional comedy this past week when Nvidia sent us a note talking down the Radeon HD 3870 X2 because multi-GPU cards often have compatibility problems. It ended with this sentiment: besides, wait until you see our upcoming GX2! Comedy gold.

The problem is, neither company has made multi-GPU support as seamless as it should be, for different reasons. Both continue to pursue this technology direction, but I remain skeptical about the value of a card like the X2 given the track record here. Going with a multi-GPU card to serve the high end of the market is something of a risk for AMD, but one that probably makes sense given the relative size of the market. Whether it makes sense for you to buy one, however, is another question.

How the X2 handles

I am pleased to report that AMD has made a breakthrough in removing one of the long-standing problems with multi-GPU configs: multi-monitor support. Traditionally, with SLI or CrossFire enabled, a PC could only drive a single display—a crazy limitation that seems counter-intuitive on its face. If you had two displays, you’d have to go into the control panel and enable CrossFire, thus disabling one monitor, before starting up the game. You’d then have to reverse the process when finished playing. Total PITA.

Happily, though, AMD’s newer drivers pretty much blow that limitation away. They’ve delivered the “seamless” multi-monitor/multi-GPU support AMD promised at the time of the Radeon HD 3870 launch. Here’s what I was able to do as a result. I connected a pair of monitors—a 30″ LCD with a dual-link DVI port and an analog CRT—to a Radeon HD 3870 in a CrossFire pairing and enabled CrossFire. Both displays continued to show the Vista desktop with CrossFire enabled. Then I ran UT3 and played full-screen on the LCD. The CRT continued to show the Vista desktop just fine. Next, I switched UT3 into windowed mode, without exiting the game, and the transition went smoothly, with both displays active. Finally, I dragged the UT3 window to span both displays, and it continued to render everything perfectly. Here’s a screenshot, wildly shrunken, of my desktop session. Its original dimensions were 2048×1536 plus 2560×1600, with the UT3 window at 2560×1600.



UT3 spans two displays with CrossFire enabled

That, friends, is what I’m talking about. I tried the same thing on the 3870 X2, and it worked just as well. By contrast, enabling SLI simply caused the secondary display to be disabled immediately.

This multi-display seamlessness does have an unhappy consequence, though. Since multi-monitor configs work seamlessly, AMD seems to think it can get away with essentially “hiding” the X2’s CrossFire-based nature. The Catalyst Control Center offers no indication that CrossFire is enabled on an X2 card, and there’s no switch to disable CrossFire. That stings. I’ve worked with multi-GPU setups quite a bit, and I wouldn’t want to own one without having the option of turning it off if it caused problems.



Super AA 16X is available on the X2

The one indication you’ll get on the X2 that CrossFire is involved is the extension of the antialiasing slider up to 16X. This 16X setting triggers CrossFire’s Super AA mode, a method of multi-GPU load distribution via cooperative AA sampling. This is a nice option to have, but in my book, Super AA isn’t the best option for load sharing or for high-quality antialiasing. I’d much rather use 4X multisampling combined with one of the Radeon HD 3870’s custom tent filters.
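
Conceptually, Super AA has each GPU render the frame with a different set of sample positions and then blends the two results. Here’s a toy sketch of that blending step, an illustration of the idea rather than AMD’s actual implementation.

```python
# Toy cooperative-AA blend: each GPU contributes its own jittered sample set
# for a pixel, and the final color is the average of everything. On the X2,
# two 8X patterns combine into the effective 16X mode.
def super_aa_pixel(samples_gpu0, samples_gpu1):
    all_samples = samples_gpu0 + samples_gpu1
    return sum(all_samples) / len(all_samples)

# Example with made-up grayscale values from an edge pixel (four samples per
# GPU here, just to keep it short):
print(super_aa_pixel([0.2, 0.4, 0.6, 0.4], [0.3, 0.5, 0.5, 0.3]))  # 0.4
```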

AMD is running behind on another of its promises from the Radeon HD 3870 launch, unfortunately, and that affects the X2 directly. At the time, the firm said it would be providing drivers for “CrossFire X” in January (that’s now) to enable CrossFire with three or four GPUs. However, AMD has delayed those drivers until March as it works to get adequate performance scaling out of more than two GPUs. That’s no small challenge, so I wouldn’t be surprised to see the schedule for the drivers slip further. Eventually, AMD expects the 3870 X2 to be able to participate in a couple of different CrossFire X configurations: dual X2 cards forming a quad-GPU phalanx, or one X2 card hooking up with one regular 3870 for some three-way action. I’m curious to see how well this will work once the drivers arrive.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

(Where two values are given, the first applies to the Gigabyte X38 system and the second to the XFX nForce 680i SLI system, which hosted the GeForce 8800 GT SLI config.)

Processor: Core 2 Extreme X6800 2.93GHz (both systems)
System bus: 1066MHz (266MHz quad-pumped)
Motherboard: Gigabyte GA-X38-DQ6 / XFX nForce 680i SLI
BIOS revision: F7 / P31
North bridge: X38 MCH / nForce 680i SLI SPP
South bridge: ICH9R / nForce 680i SLI MCP
Chipset drivers: INF update 8.3.1.1009 and Matrix Storage Manager 7.8 / ForceWare 15.08
Memory size: 4GB (4 DIMMs) in both systems
Memory type: 2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 18
Command rate: 2T
Audio: Integrated ICH9R/ALC889A / Integrated nForce 680i SLI/ALC850, both with RealTek 6.0.1.5497 drivers
Graphics:
  Radeon HD 3870 512MB PCIe with 8.451.2-080116a-057935E drivers
  Dual Radeon HD 3870 512MB PCIe (CrossFire) with 8.451.2-080116a-057935E drivers
  Radeon HD 3870 X2 1GB PCIe with 8.451.2-080116a-057935E drivers
  GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers
  Dual GeForce 8800 GT 512MB PCIe (SLI, on the nForce 680i SLI system) with ForceWare 169.28 drivers
  EVGA GeForce 8800 GTS 512MB PCIe with ForceWare 169.28 drivers
  GeForce 8800 Ultra 768MB PCIe with ForceWare 169.28 drivers
Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x86 Edition
OS updates: KB936710, KB938194, KB938979, KB940105, KB945149, DirectX November 2007 Update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The basic metrics

I’ll admit I’ve been holding back on you about just how powerful the X2 really is. If you consider it as a single card, it has substantially more theoretical peak throughput, in nearly every major category, than any of its competitors. Here’s a look at the basic math.

                      Peak pixel   Peak bilinear     Peak bilinear      Peak memory   Peak shader
                      fill rate    texel filtering   FP16 texel         bandwidth     arithmetic
                      (Gpixels/s)  rate (Gtexels/s)  filtering rate     (GB/s)        (GFLOPS)
                                                     (Gtexels/s)
GeForce 8800 GT       9.6          33.6              16.8               57.6          504
GeForce 8800 GTS      10.0         12.0              12.0               64.0          346
GeForce 8800 GTS 512  10.4         41.6              20.8               62.1          624
GeForce 8800 GTX      13.8         18.4              18.4               86.4          518
GeForce 8800 Ultra    14.7         19.6              19.6               103.7         576
Radeon HD 2900 XT     11.9         11.9              11.9               105.6         475
Radeon HD 3850        10.7         10.7              10.7               53.1          429
Radeon HD 3870        12.4         12.4              12.4               72.0          496
Radeon HD 3870 X2     26.4         26.4              26.4               115.2         1056

In terms of pixel fill rate, filtering of FP16-format textures, memory bandwidth, and shader arithmetic, the Radeon HD 3870 X2 leads all other “single” graphics cards on the market today. Most notably, perhaps, it’s the first graphics card to pack more than a teraflop of shader power. (And I’m using the more optimistic set of shader arithmetic numbers for the GeForce 8800 cards; another way of counting would cut their numbers by a third.) So long as the X2 can scale well to two GPUs and use the power available to it, it should be the fastest card of the bunch.
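
Here’s where the X2’s row in the table comes from, if you’d like to follow the arithmetic. Each RV670 has 16 ROPs, 16 texture filtering units, and 320 stream processors, and I’m counting a multiply-add as two FLOPS, same as in the table. My math, cross-checked against the figures above.

```python
gpus, clk_ghz = 2, 0.825          # two RV670s at 825MHz
rops, tex_units, sps = 16, 16, 320

pixel_fill   = gpus * rops * clk_ghz        # 26.4 Gpixels/s
texel_rate   = gpus * tex_units * clk_ghz   # 26.4 Gtexels/s
shader_flops = gpus * sps * 2 * clk_ghz     # 1056 GFLOPS (MAD = 2 FLOPS)
mem_bw       = gpus * 0.900 * 2 * 256 / 8   # 115.2 GB/s (900MHz GDDR3, 256-bit)

print(pixel_fill, texel_rate, shader_flops, mem_bw)
```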

I’ve chosen to test the X2 against a range of different graphics options, as you’ll see in our first set of synthetic benchmark results below. Perhaps the most direct competition for the X2 is the GeForce 8800 Ultra. Ultras typically cost quite a bit more than the X2’s $449 price tag, but board makers offer a number of “overclocked in the box” GeForce 8800 GTX cards like this one with near-Ultra clock speeds. The stock-clocked 8800 GTX, meanwhile, has essentially been rendered obsolete by the GeForce 8800 GTS 512, which offers very similar performance at a much lower price, as we found in our review. As a result, we’ve allowed a stock-clocked Ultra to be our sole G80-based representative.

The single-textured fill rate test is essentially limited by memory bandwidth, and the only solution that can top the X2 on that count among our group is the Radeon HD 3870 in CrossFire. If you’ll recall, the 3870 has a slower GPU clock but a higher memory clock thanks to its use of GDDR4 memory.

When we get to multitextured fill rate, the X2 looks to be limited by the texturing capacity of its GPUs. The cards based on Nvidia’s G92 don’t reach anything close to their (rather staggering) theoretical peak filtering capacities here, but the GeForce 8800 GTS 512 and the 8800 GT in SLI manage to outdo the X2.

The X2 takes the top spot in three of the four simple shader tests in 3DMark06. I have no idea how the Radeon HD 3870 is able to produce much more than double its vertex throughput in CrossFire than with a single GPU, but the voodoo magic seems to rub off on the X2, as well.

Anyhow, those synthetic tests give us a sense of the lay of the land, but they’re by no means the final word. Let’s look at game performance.

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. Since this is a high-end graphics card we’re testing, we enabled 4X antialiasing and 16X anisotropic filtering and turned up the game’s texture and image quality settings to their limits.

Thanks to a recent change to Nvidia’s drivers, we were (finally!) able to test at 1680×1050, even though that’s not our display’s native resolution. Consequently, we’ve chosen to test at 1680×1050, 1920×1200, and 2560×1600—resolutions of roughly 1.8, 2.3, and 4.1 megapixels—to see how performance scales.

The X2 handily outperforms the GeForce 8800 Ultra at 1680×1050 and 1920×1200, but the contest tightens up at 2560×1600, where the multi-GPU configs seem to run into a wall—probably with available memory, since multi-GPU operation does bring some overhead. The GeForce 8800 GT in SLI trounces the X2 at the two lower resolutions, but stumbles badly at four megapixels.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested Episode Two with the in-game image quality options cranked, with 4X AA and 16X anisotropic filtering. HDR lighting and motion blur were both enabled.

The X2 again takes it to the Ultra, proving itself the fastest single-card solution around. However, the GeForce 8800 GT in SLI runs away and hides here, and it’s the only solution we tested capable of hitting 60 FPS at our peak resolution. If you’re willing to run an Nvidia chipset and sacrifice an extra PCIe x16 slot—both big “ifs,” especially the chipset thing—a couple of 8800 GTs in SLI can put the X2 to shame.

Enemy Territory: Quake Wars

We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though. Shadows, soft particles, and smooth foliage were enabled. Again, we used a custom timedemo recorded for use in this review.

This one’s close, but the Ultra edges out the X2 overall in Quake Wars. True to its billing, the X2 again outperforms a couple of Radeon HD 3870s in CrossFire, at least.

Crysis

Rather than test a range of resolutions in Crysis, we tried several different settings, all using the game’s “high” image quality options. Crysis has another set of “very high” quality options, but those pretty much bog down any GPU combination we’ve yet thrown at it. In fact, “high” is pretty demanding, so we’ve scaled back the resolution some, too. We tested performance using the benchmark script Crytek supplies with the game.

Also, the results you see below for the Radeons come from a newer graphics driver, version 8.451-2-080123a, than the ones we used for the rest of our tests. This newer driver improved Crysis performance noticeably over the older one, both in benchmarks and when playing the game.

The X2 tops all single-card solutions at 1680×1050, but it’s not quite as quick as the Ultra at 1280×800 with 4X AA (nor is anything else). Interestingly enough, this is one application where the Radeon HD 3870 CrossFire config’s superior memory bandwidth seems to give it an edge over the X2.

Unreal Tournament 3

We tested UT3 by playing a deathmatch against some bots and recording frame rates during 60-second gameplay sessions using FRAPS. This method has the advantage of duplicating real gameplay, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
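
In code terms, the per-card summary works out to something like the helper below. This is a sketch of the bookkeeping, not our actual test harness.

```python
from statistics import mean, median

def summarize_sessions(avg_fps_per_session, low_fps_per_session):
    """Average the five per-session averages, but take the median of the
    five per-session lows so a single outlier run can't skew the result."""
    return mean(avg_fps_per_session), median(low_fps_per_session)
```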

Because UT3 doesn’t support multisampled antialiasing, we tested without AA. Instead, we just cranked up the resolution to 2560×1600 and turned up the game’s quality sliders to the max. I also disabled the game’s frame rate cap before testing.

Here, the X2 produces a higher average frame rate than the Ultra, but its median low frame rate is almost the same. Amazingly enough, nearly all of these cards can run UT3 acceptably at this crazy resolution. Bottom line: UT3 performance shouldn’t be a problem for you with the X2.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running UT3 at 2560×1600 resolution, using the same settings we did for performance testing.

Note that the GeForce 8800 GT SLI config was, by necessity, tested on a different motherboard, as noted on our Testing Methods page.

Personally, I think the X2’s power consumption at idle is impressively low. Although it has two GPUs and two banks of 512MB of memory, our X2-based system draws only 12W more at idle than the GeForce GTS 512-based one. The 8800 Ultra-based system draws over 20W more. Under load, the situation changes, and the X2-based system draws quite a bit of power—more than the Ultra-based system and more than the 8800 GT SLI system, which is easily the better performer.

Notice, also, that the X2 doesn’t quite live up to AMD’s claims. It does draw a little more power than two Radeon HD 3870s in CrossFire, both at idle and under load.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the stock Intel cooler we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Unfortunately—or, rather, quite fortunately—I wasn’t able to reliably measure noise levels for any of these cards at idle. Our test systems keep getting quieter with the addition of new power supply units and new motherboards with passive cooling and the like, as do the video cards themselves. I decided this time around that our test rigs at idle are too close to the sensitivity floor for our sound level meter, so I only measured noise levels under load.

What you should take from these results is that, at least on our open test bench, the X2 was perceptibly louder than any of the other graphics solutions we tested. That’s definitely what my ears told me. And yet, I really can’t complain too much about the X2’s noise levels. The cooler can be loud when the system is first powered on, but that’s it. During normal gaming, like the scenario we tested here, the card really isn’t very loud. It’s at least not into the “annoying” range for me. We’re talking loud-whisper level here.

GPU temperatures

Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. In the case of multi-GPU configs, well, I only got one number out of CCC. I used the higher of the two numbers from the Nvidia monitoring app. These temperatures were recorded while running UT3 in a window.

There you have ’em. Kind of an interesting complement to the noise and power numbers, I think. I could live with a little more noise and a little less heat from the 8800 GT and the Radeon HD 3870s in CrossFire.

Conclusions

Well, you’ve seen the results, and you’ve heard my thoughts about the perils of multi-GPU performance scaling, driver updates, and the like. You can probably draw your own conclusions at this point, so I’ll keep it brief.

Obviously, AMD has captured the title of “fastest single graphics card” with the Radeon HD 3870 X2. There is much to like here. The X2 is generally faster than the GeForce 8800 Ultra, and it has HD video decode acceleration that Nvidia’s older G80 GPU lacks. In all, the X2 looks to be a pretty good value in a single-card, high-end solution at $449. The X2 does draw more power and generate more noise under load than the GeForce 8800 Ultra, but it’s not unacceptable on either front for a card in this class. And the X2’s seamless multi-monitor support is the icing on the cake. I’m really pleased to see that working so well.

That said, the X2’s title is by no means undisputed. Nvidia’s best alternatives to the X2 are based on the newer G92 GPU and have support for H.264 decode acceleration themselves. For just a little more money, a pair of GeForce 8800 GT cards in SLI will outperform the X2, usually by a healthy margin. Those 8800 GTs will come with many of the same multi-GPU caveats as the X2, however, plus additional ones about requiring two PCIe x16 slots and an Nvidia chipset. If you’re hoping to sidestep those worries, you can hardly afford to ignore the solid value presented by the GeForce 8800 GTS 512 at around $300. The GTS 512’s performance isn’t far from the X2’s in many cases. Unless you really are planning on driving a four-megapixel display, a card like the GTS 512 will probably feed your appetite for eye candy quite well in the vast majority of today’s games.

We do have a new champ today, though, and it’s from AMD. Nice to see the title changing hands again, even if it took a dually to do it.

Comments closed
    • Blacklash
    • 12 years ago

    I really wish you fellows had bothered with testing higher than 4xAA
    in titles that support it.

      • RambodasCordas
      • 12 years ago

      Yes, I completely agree with you.
      Besides, most people I know have 17″/19″ CRT/LCD displays, whose resolutions vary from 1024×768/1280×1024/1440×900.
      At those sizes, 8X AA (or more) tests are more appealing than high-resolution tests.

      • 2457
      • 12 years ago

      I was wondering if things would change if 8GB of RAM was used.
      8GB seems like.. a reasonable amount for any quad-core CPU.
      I know the test is about VGA cards, but I would really like it if at least one test were repeated with 8GB of RAM, to see how it scales.

      It’s just my interest.
      I own a 6000+ dual-core AMD, an MSI motherboard, a simple 8600 GT from MSI, and 8GB of cheapass RAM. I noticed that with 4GB things get WAY slower.
      My system is not one of the “ultramassive” systems, and that extra memory does have a big impact.
      I just.. thought that if my weakling VGA card suffers from it, I bet those
      “muscle” VGA cards do so too.

      (DON’T FLAME! I’ve used MSI for quite a long time now; Asus and Abit failed me numerous times, MSI never did.)

    • Fighterpilot
    • 12 years ago

    I never knew Kyle had his own fanboi club.

      • Silus
      • 12 years ago

      First it’s not Kyle who has a fan club, but rather [H].
      Second, it’s funny you didn’t know about it, because I always knew TR and Anandtech had one…

    • Bensam123
    • 12 years ago

    “I have no idea how the Radeon HD 3870 is able to produce much more than double its vertex throughput in CrossFire than with a single GPU, but the voodoo magic seems to rub off on the X2, as well.”

    If I was to theorize, I would say they were made to function better in pairs than alone and actually cover holes for each other, perhaps even have ways of teaming up for processes. I don’t know any techno details for it, but I’m sure that’s where multi-GPU/CPUs will eventually have to tread: where they don’t just double performance by 100%, but actually know that there is another processor there, shift capabilities between the processors, and use the other one as a real-time buffer when the workloads are badly off balance.

    • clone
    • 12 years ago

    the main annoyance today with benchmark testing, in the case of Kyle and DriverHeaven and Xbit, is that they arbitrarily choose their own personal settings.

    Kyle likes to flower it up that he’s testing playable settings but in reality it’s a mess and impossible to find any other website using the same settings.

    Xbit is terrible for testing 4X AA at what used to be standard resolutions and leaving giant gaps. Additionally, Xbit is terrible for using outdated drivers and claiming there was no difference; then they flood their tests with synthetic benchmarks that have me snoring after the 5th graph……

    everyone at least seems to use the same testing platforms but generalises the damned settings, or, as is the case with TechReport, is good enough to have a healthy pool of video cards to do direct comparisons with…. Xbit has been weak in this regard, and HardOCP’s benches and graphs are moderate at best…… I’ve walked away from his tests simply because it’s a challenge just to make sense of them, sitting there picking settings between the cards tested to make sense of the review.

    • pluscard
    • 12 years ago

    It appears AMD has a master strategy:

    Deliver a $449 dual-GPU card that slaughters everything below it, and a value $189 quad core that performs as well as Intel’s best in gaming at medium & high resolutions. The total of the two is $638, which is about the cost of the 8800 Ultra alone. INTC’s cheapest quad is still $249.

    Seems like AMD is delivering a value/performance combo, driving the competition’s prices down.

      • Silus
      • 12 years ago

      That’s true, in terms of prices, but the X2 is not slaughtering anything…

        • pluscard
        • 12 years ago

        Don’t like the word “slaughter?” How about “consistently faster”:

        —“It has been far too long since AMD/ATI have been at the top of the performance charts; the crown had been lost on both CPU and GPU fronts, but today’s Radeon HD 3870 X2 introduction begins to change that. The Radeon HD 3870 X2 is the most elegant single-card, multi-GPU design we’ve seen to date and the performance is indeed higher than any single-card NVIDIA solution out today.”

        “AMD is also promising the X2 at a fairly attractive price point; at $449 it is more expensive than NVIDIA’s GeForce 8800 GTS 512, but it’s also consistently faster in the majority of titles we tested. If you’re looking for something in between the performance of an 8800 GTS 512 and a 8800 GT 512 SLI setup, the Radeon HD 3870 X2 is perfect.”

        “Even more appealing is the fact that the 3870 X2 will work in all motherboards: CrossFire support is not required. In fact, during our testing it was very easy to forget that we were dealing with a multi-GPU board since we didn’t run into any CrossFire scaling or driver issues. We’re hoping that this is a sign of things to come, but we can’t help but worry about the future of these multi-GPU cards.”
        §[<http://www.anandtech.com/video/showdoc.aspx?i=3209&p=13<]§

          • Silus
          • 12 years ago

          I would rather have a quote from a site that doesn’t use cut-scenes, to test a graphics card…

          Also, the X2 is not a bad card. Far from it. At an appealing price, it’s certainly a good buy, but it disappoints, because it doesn’t “slaughter” NVIDIA’s over one year old high-end cards and sometimes, even loses.

            • grantmeaname
            • 12 years ago

            Is the 8800 Ultra’s age relevant? nVidia’s new cards don’t beat it either.

            • Silus
            • 12 years ago

            I wasn’t even speaking about the Ultra, since the Ultra is younger than the GTX. And NVIDIA was not trying to beat their own ultra high-end card. They just replaced the 8800 GTS 320 and 8800 GTS 640, with the 8800 GT and 8800 GTS 512 respectively.

      • flip-mode
      • 12 years ago

      I’d pay the $249.

        • provoko
        • 12 years ago

        It’s still $277. And I’d rather pay $189 or way less any day.

          • flip-mode
          • 12 years ago

          Yeah, truth be told I will prolly never spend more than $200 for a CPU. So far I’ve never spent more than $130. Also, I’d easily rather have a faster dual core than a slower quad core. Quad core would be a complete waste in my hands.

      • Flying Fox
      • 12 years ago

      That’s about the only thing AMD can do at this moment: drive prices down. This can create volume, but at the expense of falling ASPs, especially in the CPU area. They cannot survive with low ASPs, as their history has demonstrated. So while this may bring them revenue and market share, it is not known at this point if they can continue like this.

    • Chrispy_
    • 12 years ago

    The single most important thing to take out of that review is that AMD have fixed multi-monitor support for multi-GPU solutions.

    Statistically, and demographically, none of you reading this right now will ever buy this card, but some of you will probably have CrossFire or SLI setups. Now that AMD have cracked multi-GPU/multi-display with this card, there is at least a hope that future and possibly existing dual-GPU setups will get the same treatment.

    • Jigar
    • 12 years ago

    What happens if this thing folds?

      • Chrispy_
      • 12 years ago

      We’ll have cancer cured by Thursday.

        • Sikthskies
        • 12 years ago

        Haha, that made me lol!

        • Jigar
        • 12 years ago

        But today is Tuesday… Will it take 2 days ?

    • fpsduck
    • 12 years ago

    Never knew that Ruby has blue eyes.
    Would she be sexier if she had green eyes?
    But sadly, green on a graphics card is associated with Nvidia.

    Great article as always.

    • Fighterpilot
    • 12 years ago

    Ah yes… “Clever Kyle”… if you want to see how smart he and his testing ways are, check out his thoughts and conclusions about how the newly released Conroe C2D made little to no difference… he’s an idiot.
    It was a great GPU test tho… ugh.

      • Cript
      • 12 years ago

      “The fact of the matter is that real-world gaming performance today greatly lies at the feet of your video card. Almost none of today’s games are performance limited by your CPU. Maybe that will change, but given the trends, it is not likely. You simply do not need a $1000 CPU to get great gaming performance as we proved months ago in our CPU Scaling article.”
      §[<http://www.hardocp.com/article.html?art=MTEwOCwxMSwsaGVudGh1c2lhc3Q=<]§

      He’s an idiot for saying that? Not quite sure what you’re upset about. Note that his statement is about games and the settings people use (not 640x480 / lowest graphics).

      "We feel kind of silly even entertaining this question, but yes, if you want to build a system with three 8800 Ultras, you don't need to spend $1000 on a CPU. You can get by with a 2.66GHz chip just fine."
      §[<http://www.anandtech.com/video/showdoc.aspx?i=3183&p=4<]§

      If you play Farcry at low quality / resolution and feel 400fps (Intel) versus 200fps (AMD) is night and day for gameplay, then I've clearly wasted my time.
      §[<http://www.tomshardware.com/2006/06/05/first_benchmarks_conroe_vs_fx-62/page5.html<]§

        • Fighterpilot
        • 12 years ago

        Cript, there are exactly 2 websites that I trust to give the definitive numbers for PC hardware.
        When all the reviews are all over the place and the mud is flying at [H], just sit back and see what the Tech Report and Anandtech have concluded.
        Then you will know.

          • flip-mode
          • 12 years ago

          But TR has said the same thing. The CPU doesn’t make much difference for gaming.

          • Silus
          • 12 years ago

          As for TR, I use it as one of my info sources and for a wider list of games and cards used.
          But Anandtech? For what? To see them benchmark a cut-scene (CoD4)? Now that’s very useful, since I just spent $450-500 on a video card, to see cut-scenes and not actually PLAY games…

          Games are dynamic and are not bound to predefined events, which are found in built-in benchmarks and cut-scenes…

          I visit [H] for real-world gameplay numbers, which is what I’ll get when I do buy the card they reviewed. I will PLAY the game and not use some predefined set of situations found in built-in benchmark tools. If you want 3DMark scores and generic numbers for comparison, use other sites. If you want the closest experience to what you’ll have when you buy a graphics card, then you visit [H]. It’s that simple.

    • matnath1
    • 12 years ago

    Look at what Kyle at the Hard/OCP Wrote after his conclusion:

    (Editor’s Note: You will see here today that our evaluation of the gaming performance produced by this video card does not track with some other big sites on the Web, and the simple fact is that those sites did not measure “gaming performance.” Those sites measured frames per second performance in canned benchmarks and even some of them went as far as to use cut scenes from games to pull their data from. I have been part of this industry for years now and we are seeing now more than ever where real world gaming performance and gaming “benchmarks” are not coming to the same conclusions. Remember that when we evaluate video cards, we use them exactly the way you would use them. We play games on them and collect our data.

    Another thing to think about is this. Do game developers want to provide built in benchmarks that show their games running slow? Or would the game developers rather put a game “benchmark” in that shows their game hauling ass? Do you think that slow benchmarks equal more sales?

    The “3dfx way” of evaluating video cards is DEAD. It did have its time and place, but we are beyond that now. Any person using those methods to influence your video card purchase is likely irresponsible in doing so. You might even consider them liable. And I think that is going to come bubbling to the surface more and more as the industry matures. )

      • DrDillyBar
      • 12 years ago

      I read that as well. I also noted that he did not link to the AnandTech nor the TechReport reviews. The last time I recall Kyle chiming in was when Scott got his 2001FP monitor.

        • matnath1
        • 12 years ago

          Kyle’s results all seem to show the X2 as being slower than almost every other review site shows it. Is everyone else really using obsolete methods, or is Kyle just pounding his chest and lookin for a fight again?

          • danny e.
          • 12 years ago

          Kyle is an idiot.

        • Tarx
        • 12 years ago

        DriverHeaven also did the tests by playing through but got somewhat different results… I guess they will check to see what settings each other used. Also, it seems like it depends somewhat on the level being checked…

      • indeego
      • 12 years ago

      I’d check it out but I 127.0.0.1’d HOCP years ago.

        • DrDillyBar
        • 12 years ago

        LMAO

      • Damage
      • 12 years ago

      Kyle’s problem is that in order for him to be right, everyone else has to be wrong. That’s unfortunate, because that problem is entirely tangential to this issue, yet it’s affected how the debate has played out. I think that’s especially unfortunate in this case because both approaches to testing have their strengths and weaknesses.

      Playing games and using FRAPS to record frame rates lets us get a feel for how each video card or multi-GPU rig handles during the game itself, and it gives us the chance to more closely evaluate image quality, any quirks the cards may have while playing, and any big slowdowns that may not show up in an average FPS score. I like those things. The weaknesses of this approach are: one, it’s time-consuming enough to limit the range of games, resolutions, and quality settings we can test. Two, it inevitably involves a subjective component. Subjective assessments can be very valuable, but they’ll also include the unavoidable fallibility and quirkiness that any individual will bring to the process.

      Using a more automated testing method, meanwhile, isn’t exactly the same thing as playing the game itself–an unavoidable reality and an important one. But it’s a powerful tool for testing a broad range of resolutions and/or settings, using the game engine itself, and it delivers reliable, repeatable results. Those results often track more closely with in-game performance than some folks would have you believe, as well. (Of course, automated testing can’t entirely replace actual gameplay, which is why we do play games on the cards we review, regardless of how we gather numbers.)

      I think both approaches have their place, which is why we’ve used a mix of the two methods in our reviews for a long, long time now–a fact that our regular readers will instantly recognize as true. Yet to hear some folks tell it, only they are testing the “gaming experience,” as if they’d found the One True Way to test a video card and no one else had access to it. Hogwash.

      Funny thing: recently, our approach to reviews had shifted more in the direction of FRAPS and manual testing, like in our GeForce 8800 GTS 512 review, where we tested four out of five games with FRAPS. As a result, we started hearing complaints from you guys about the range of resolutions we used. So this time around, we relied more on custom-recorded timedemos and tested across multiple resolutions. The result? Fewer complaints on this review than anything we’ve posted in ages and lots of positive responses.

      My point is that, to some degree, we have to listen to our readers and take into account what you all want. We do have our own ideas, of course, and they will help guide us, but you all matter, as well. We’re happy to have your input, and our current mix of tests is a result of that input. What the editor of a competing site on an unhinged benchmarking jihad thinks, however, doesn’t matter to us at all.

        • provoko
        • 12 years ago

        But you did use FRAPS in one of your tests, UT3, and the 3870×2 still came out on top.

        Anandtech used FRAPS in Bioshock, 3870×2 won again.

        Anandtech did bench cutscenes which I agree with Hardocp on.

        Canned benchmarks have their place, and so does manual testing with FRAPS. I don’t know how Hardocp came to their results when manual testing did show the 3870×2 is better. Maybe they should test more games.

        • Silus
        • 12 years ago

        I don’t think that’s fair, because that’s not how Kyle acts at all. Yes, he has his moments of “you’re either with me or against me”, but that’s not the stance I see when talking about graphics card review methodology.
        [H] has always pointed out that they review graphics cards in a different manner than most other sites. They emphasize that in every review and still get loads of people asking for the so-called “canned benchmarks” for comparison. It’s then that Kyle forwards the people complaining to all the other sites that usually use built-in benchmark tools.

        I’ve been a [H]ardForum member for a while and I’ve witnessed this behavior in several other reviews, the most noticeable and recent being the HD 2900 XT, where [H]’s methodology was questioned to the bone, but in the end, [H] was right.

        It’s all a matter of preference. I use Tech-Report for more generic data and a wider array of games and cards tested, which is good for comparison. However, real-world gameplay is the decisive factor for me and I go to [H] for that.

          • danny e.
          • 12 years ago

          if by “real-world gameplay” you mean the preference altered (biased) opinion of the reviewer.

            • Silus
            • 12 years ago

            How is it biased? Because it doesn’t go with your own speculation of how a certain graphics card should perform, given its specs?
            Then I point you to the “award winning” HD 2900 XT, which was praised by almost every site that said, for the price, it was worth every penny.
            Except [H] and DriverHeaven, which actually called it what it was: a flop.
            In fact, all the reviews that praised it did mention all its shortcomings in terms of power consumption, heat generation, and noise levels, but their conclusion was always “thumbs up”. So who exactly was biased?
            [H]’s methods have been proven over and over, so I fail to see any bias, unless of course you are an ATI fan. Then obviously you’ll think it’s biased.

          • DrDillyBar
          • 12 years ago

          Well, everyone gets an opinion. I just happen to disagree almost completely with yours. *shrug*

            • Silus
            • 12 years ago

            And you are entitled to it. In the end, it’s your money and not mine. I choose very carefully who I truly trust, when it comes to the decisive purchasing factor.
            If you want to base your opinion of a given graphics card solely on built-in benchmarks, cut scenes and timedemos, then go ahead…

            In my view, timedemos are useful for comparison, but are NOT what shows me what a graphics card can do, given the extremely dynamic nature of games. A predefined set of situations in a built-in benchmark will not show anything other than that the card is very good at running that timedemo, and that’s all. The most stressful situations exist in the game itself, due to the dynamic nature I already mentioned.

        • matnath1
        • 12 years ago

        The only thing drier than Kyle’s reviews is his sense of humor and writing style. Scott’s reviews are popcorn-eating, movie-watching-level enjoyable. Kyle’s reviews are dry, plodding, and boring; however, they do offer value and almost always take the LONE VOICE SHOUTING into the Dark approach.

        Style, content, educational value, and entertainment value are all benchmarked heavily in favor of The Tech Report! Please don’t change a thing.

      • willyolio
      • 12 years ago

      see: qualitative vs quantitative data.

      • mattthemuppet
      • 12 years ago

      this guy obviously doesn’t understand experimental design.

      1st – different games are going to perform better on one GPU architecture compared with another – that’s why you test several games.

      2nd – variance between cards (intrasample variation) for a particular game is therefore more informative than comparing between games, e.g. “x card is faster than y card in game 1” is more robust than “x card in game 1 is better than y card in game 2.” So if all the cards are tested the same way in one game, then you should get a definitive answer about which is better in that game.

      3rd – the higher the variance of your replicates (e.g. FPS scores), the harder it is to pull out statistically significant differences for any given test. Therefore, if you use FRAPS and just wander around randomly, the FPS score will vary markedly for the same card in the same test. So, to account for that, you have to increase the number of replicates to pick out the winners, which equals more time for testing. When you have >5 cards in ~5 games and 3 different resolutions, repeating every test 5 times would take an enormous amount of time (and be incredibly boring as well).

      So, obviously you want some “real-world testing” – this would be the qualitative bit that willyolio so succinctly put, but if you want quantitative testing, then you’ll have to accept some experimental compromises. Dismissing one or the other is to miss the merits of both.

    • kilkennycat
    • 12 years ago

    As of 8:00PM Pacific Time, 6 variants of HD3870 X2 are listed on Newegg:-

    §[<http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Description=HD3870%20X2<]§

    ALL at $449. 4 variants are currently available; 2 are already out of stock. Newegg seems to only get trial quantities of new products when first released, so expect these to run out very quickly. The real test of availability from the manufacturers is the speed with which the items are re-stocked.

    However, you might just want to hold on to your wallet tightly for a while. Whatever about AMD/ATi, nVidia is churning away at the next-gen development. Also, please be aware that if a game is not ATi-profiled for the HD3870X2 dual-GPU configuration, the benefit of the second GPU is lost. See this page of the Tom’s Hardware review of the HD3870X2:-

    §[<http://www.tomshardware.com/2008/01/28/ati_r680_the_rage_fury_maxx_2/page19.html<]§

    There is no real replacement for a monolithic GPU implementation. SLI or Crossfire, whether implemented dual-card or single-card dual-GPU, is always a limited-functionality substitute, requiring application-configuration updates and dual-GPU driver tweaks by the GPU manufacturers. Support of previous incarnations of dual-GPU “cards” tends to get forgotten very quickly once the next-generation GPUs are on hand. The next-gen single GPUs in the same price range as the dual-GPUs of a previous generation always exceed the previous generation in both performance and significantly enhanced capability. Compare, for example, the 7950GX2 and the 8800GTX.

    • ssidbroadcast
    • 12 years ago

    I’m actually quite impressed by the CF/SLi scaling in these tests. In the past it was common to see a rig with only one video card perform slightly BETTER than one with two of the same card. So finally it’s “safe” to go multi-gpu again… for now.

    • Rza79
    • 12 years ago

    One important reason, aside from the increased GPU frequency, is the PLX chip’s ability to do ‘peer-to-peer’ writes, which makes this solution faster. This essentially means the two RV670 chips can talk to each other without going through the chipset.
    I just wonder how much more performance they could’ve had if they had used a PCIE 2.0 switch. nVidia’s NF200 PCIE switch also supports PtP as well as PCIE 2.0… I wonder if it will matter much.
    Concerning the sound level: I find it just too loud. I’ve heard a 2900XT and that’s just too loud. Luckily GeCube and Asus will have models with different and hopefully more silent cooling. The GeCube looks promising since it’s using two ZEROtherm GX810’s. I guess using two Zalman VF900’s will also be possible.

    §[<http://www.overclockers.ru/hardnews/28051.shtml<]§

      • Harry
      • 12 years ago

      I had similar thoughts about the switch, but now I wonder if the two GPUs do most of their communication with each other through the CrossFire bus rather than the PCIe bus. I also wonder if the fact that each GPU has an effective bandwidth of 8 lanes on the shared 16-lane slot will slow them down. However, Tom’s site did say that “the HD 3870 X2 is systematically 5% faster on average than the CrossFire solution”, so something is working out rather well for the X2. I assume the “CrossFire solution” was running at gen-2 speeds, so apparently the way the GPUs talk to each other is more important than the shared bandwidth bottleneck to the motherboard’s PCIe bus.

      BTW – PLX began *[

        • Furen
        • 12 years ago

        Each GPU does not have an effective bus width of 8 lanes unless both GPUs are drawing full-tilt from the rest of the system (and even then, there could be some type of arbitration to feed the most performance-sensitive threads). A better way to look at it is that each GPU has 16 lanes minus whatever the other GPU is using at the time. Also, I’m not sure how CrossFire works internally, but even if there is a bandwidth deficiency due to the PCIe switch being a bottleneck, a certain amount of data could also be shared through the CrossFire bridge.

        It would have been nice if ATI had found a way to daisy-chain GPUs without having to connect each one through the PCIe bus, basically just have one GPU hanging off the first one. For one, it would have made for a much neater PCB design. Secondly, it would have made the PCIe switch superfluous (so no need to buy it) and would have lowered power draw (10-12W is quite a bit, IMO). But, of course, latency is probably the limiting factor here. Speaking of latency, I wonder just how feasible a NUMA architecture on GPUs is. If R700 really is all about having multiple GPU “cores” being part of the same package, then having multiple independent memory controllers is probably going to be part of it, but having data replicated on each memory controller will limit scalability massively and ultimately drive up the cost of whatever effective memory capacity you end up having.
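
        For what it’s worth, here is a rough back-of-the-envelope sketch (assuming PCIe 1.1 lane rates, a single x16 uplink feeding the PLX switch, and ignoring protocol overhead, so take the exact numbers loosely):

        # Rough bandwidth sketch for the X2's internal PCIe switch.
        # Assumes PCIe 1.1 signaling (~250 MB/s per lane per direction)
        # and ignores packet/protocol overhead.
        LANE_GBPS = 0.25       # GB/s per lane, each direction
        slot_lanes = 16        # host slot feeding the PLX switch
        per_gpu_lanes = 16     # assume each RV670 links to the switch at x16

        slot_bw = slot_lanes * LANE_GBPS          # ~4 GB/s card <-> motherboard
        gpu_link_bw = per_gpu_lanes * LANE_GBPS   # ~4 GB/s GPU <-> switch

        # If both GPUs pull from system memory at the same time, they split
        # the uplink, so each sees roughly x8-equivalent bandwidth:
        worst_case_per_gpu = slot_bw / 2          # ~2 GB/s

        # Peer-to-peer writes stay inside the switch and never touch the
        # uplink, which is where the X2 can gain over two cards talking
        # through the chipset.
        print("uplink %.1f GB/s, per-GPU link %.1f GB/s, worst-case share %.1f GB/s"
              % (slot_bw, gpu_link_bw, worst_case_per_gpu))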

    • Convert
    • 12 years ago

    I would still buy an 8800 series card..

    • albundy
    • 12 years ago

    looks like a giant heart pump!

    • provoko
    • 12 years ago

    Great news. Hope AMD can take crowns more often and keep them longer.

    • danny e.
    • 12 years ago

    Despite its flaws.. the thing that makes this unlike previous super-crazy-high-end cards is that the price is reasonable.

    They aren’t asking $600 for this.

    I could almost overlook the noise and power usage for the price.. hmm.
    Only problem is.. I want a card that can play Crysis at 1920×1200.

    It’s nice to see that all the early “benchmarks” were wrong.. and ATI seems to be fairly consistent at bettering the 3870 & GTS 512 across the board, significantly in some cases. Crysis seems to be the weak point.

    • leor
    • 12 years ago

    pretty good card for what it is. if it works that well with most games, I think AMD has a winner here.

    you can talk about what’s coming out next week all you want, i’ll address that when it gets here and if it’s available.

    • aleckermit
    • 12 years ago

    Good for AMD. It’s good to see some competition again! This will cause faster technology advances (more GPU releases) and lower prices.

    Lack of competition for the top spot is why Nvidia never released an 8900.
    🙁

    I’m sure Nvidia will retaliate with extreme force now though.

      • UberGerbil
      • 12 years ago

      I’d rather see slower advances with more mature drivers. Getting a new GPU design every six months is of little benefit if the drivers crash, don’t fully exploit new hardware features, and don’t unlock the full performance of the silicon.

    • thebeadfairy
    • 12 years ago

    Radeon HD 3870 X2 with 1GB of DDR3 memory is ready to be released and tested in France. Question is, why not use DDR4 memory instead? I found this link:
    matbe.com
    I used AltaVista Babel Fish to translate.

    • swaaye
    • 12 years ago

    This is showing some decent Crossfire efficiency, but you still have RV670’s general deficiencies compared to G92-112. I like that idle power management for sure.

    It looks like 8800U’s 768MB comes into play at the crazy high resolutions, meaning that 512MB could be nearing the end of its “idealness” at lower resolutions too.

    • kilkennycat
    • 12 years ago

    Maybe too little, too late.

    The next-gen high-end GPU from Nvidia is in full development and will be on retail shelves by mid-2008. And there may also be an 8800 GT dual-GPU variant any day now. Like the 7950GX2, the HD3870X2 (and any dual-8800 GPU card) will have a very short shelf life, to be instantly replaced by a next-generation single GPU with superior performance. Certainly this release might help milk more sales of RV670 GPUs. AMD is crimped for cash for next-gen silicon development, so the more they can milk from variants of current technology, the better for their short-term bottom line. Hopefully, AMD can break the mold of their recent product shipments and have a hard launch of the HD3870X2 with no product shortages. The opportunity window for the HD3870X2 is likely to be very narrow indeed.

    • Tarx
    • 12 years ago

    I had read the HardOCP review before this one (for once).
    Their reviews are … different and somewhat limited (e.g. only a few games tested, but then those are heavily tested for gameplay).
    I noticed that they strongly suggested (especially in the comments for that article) that the canned benchmarks showed a major difference from actual gameplay with this card. Assuming their results are correct (I would expect so), that seems to point to something heavily bottlenecking the HD3870X2 that isn’t apparent in standard benchmark tests.
    Perhaps they are looking at minimum usable performance; maybe it is heavy firefights or something similar that has such a major impact.
    Given where they noticed major performance hits for the HD3870X2, should that be verified to see if there is a real issue with the card?

    • sigher
    • 12 years ago

    I’m happy with this release since it will push Nvidia to try hard on their next GPU, and to release it as soon as possible.
    As for buying it.. I don’t think so.

    • thebeadfairy
    • 12 years ago

    I would like to see a platform where both cards can be used. Intel seems to be supporting ATI with their new 9000-series CPUs. I am confused. No matter which one you choose, elated or disappointed, you’re still stuck with both a motherboard and a graphics card that either lived up to your expectations or totally dissatisfied you. Or am I wrong? Can you use an 8800 GT with a board built to utilize CrossFire? Bumbling novice over here.

      • Dposcorp
      • 12 years ago

      You can use any *[

    • Jigar
    • 12 years ago

    Sorry double post

    • nstuff
    • 12 years ago

    Just noticed this review is using an older set of drivers. When TechReport’s site was down earlier this morning, I went over to Anandtech to read their review. They specifically mentioned that this card was delayed a week for AMD to fix a bunch of issues in the driver.

    According to TR’s system config, they used the older driver: “with 8.451.2-080116a-057935E drivers”

    According to Anandtech, they used the newer driver: “ATI: 8-451-2-080123a”

    The changes in the newer driver are quoted from Anandtech:
    <<
    • Company of Heroes DX10 – AA now working on R680. Up to 70% faster at 2560×1600 4xAA
    • Crysis DX10 – Improves up to ~60% on R680 and up to ~9% on RV670 on Island GPU test up to 1920×1200.
    • Lost Planet DX10 – 16xAF scores on R680 improved ~20% and more. AF scores were horribly low before and should have been very close to no AF scores
    • Oblivion – fixed random texture flashing
    • COJ – no longer randomly goes to blackscreen after the DX10 benchmark run
    • World in Conflict – 2560x1600x32 0xAA 16xAF quality=high we get 77% increase
    • Fixed random WIC random crashing to desktop
    • Fixed CF scaling for Colin McRae Dirt, Tiger Woods 08, and Blazing Angels2
    • Fixed WIC DX9 having smearable text
    >>

      • Damage
      • 12 years ago

      Not quite correct. As I said in the review, I used the newer driver set for testing Crysis. None of the other games we used were on that list.

        • sigher
        • 12 years ago

        I don’t think a list of claims means anything. Every time ATI releases a driver they claim 50%-80% FPS increases, but when people test it, it’s never more than 7% tops and often no discernible increase at all.

          • lethal
          • 12 years ago

          They only report best-case gains, which would likely be with “performance” texture filtering, Catalyst AI enabled, no AA, and a resolution low enough to avoid any kind of CPU bottleneck. Those are hardly the settings people would use on any $200+ GPU, so you have to take any numbers they release with a big grain of salt.

    • alex666
    • 12 years ago

    This card gets hot like the 8800 GT. I’ve got one of the latter, and with an after-market cooler over the single GPU it’s running nice and cool, e.g., mid-50s while running Crysis. I wonder what people will use for after-market cooling on this card, given that it has two GPUs.

    • Usacomp2k3
    • 12 years ago

    4gb of ram on a 32-bit OS?

    • flip-mode
    • 12 years ago

    GJ AMD, congrats on finally winning something! Too bad the dual GPUs aren’t seen as one. That would be a neat trick. Too bad it couldn’t just be a single GPU.

    Too bad it costs as much as a console and will be outdated in 1/4 the time.

      • Dposcorp
      • 12 years ago

      That is what I forgot in my original post, #16… a 3rd and 4th option. Now for the complete list.

      The lovers will say congrats and good job, AMD, for taking the crown back.
      The haters will say it doesn’t count ’cause it’s two GPUs. 🙂
      The haters who will hate higher-priced video cards.
      The haters who will hate on PC gaming.

      Congrats on fitting into #3 & #4.

      It reminds me I need to post the opposite in console reviews when new consoles come out.

      Long live PC gaming. lol

        • BoBzeBuilder
        • 12 years ago

        Enjoying your little parade?

        Long live [the wii]

        • flip-mode
        • 12 years ago

        The only problem with your theory is that I’m not a hater. I’d actually call myself a lover, ’cause I’d love to see one of these on the other side of my case window, or one of Nvidia’s 8800 GTS cards, but it just doesn’t make sense for me.

        • SPOOFE
        • 12 years ago

        One has to love very-high-end video cards in order to Not Hate PC Gaming? This explains so much.

      • Jigar
      • 12 years ago

      I think you are still missing the point here.. It’s two GPUs on one card, that’s it. If tomorrow they put six slower chips on one card against one chip from Nvidia, I would be glad to get that six-chip card if it is faster.

        • flip-mode
        • 12 years ago

        The point I was making was similar to the one Scott made in his concluding remarks, which is that a multi-chip design has inherent performance compromises. Are you perhaps missing the point that the more chips you add the more compromises must be made?

      • shank15217
      • 12 years ago

      I would agree with you, flip-mode, under most circumstances; however, the X2 nearly doubled frame rates over the 3870 at GPU-limited resolutions. I think they did their homework on this card. Kudos to AMD.

        • flip-mode
        • 12 years ago

        Oh I completely agree. My post was wondering, in a sort of backward fashion, if there was some way that the two GPUs could be seen as one, such that there would be no need for “crossfire” or special drivers. That would be a heck of a neat trick. Is that even possible?

          • shank15217
          • 12 years ago

          The R700 architecture seems to be pointing toward a modular GPU; maybe that’s where it’s going to end up eventually.

    • elmopuddy
    • 12 years ago

    The big question is when we’ll be able to actually buy one..

    I am happy AMD is back in the game.. I will be needing a replacement for my 7900GTX soon..

      • fent
      • 12 years ago

      Looks to be available on newegg already.

        • alex666
        • 12 years ago

        Yep, they are, several of them, and at $449US.

    • mcdill
    • 12 years ago

    Just noticed that the review used Vista x86 with 4 gigs of RAM. How much usable RAM does Vista show after installing the 3870 X2?

      • Damage
      • 12 years ago

      3325MB.

      I’d prefer to use Vista x64, but I’ve learned not to trust AMD and Nvidia to deliver 64-bit versions of their drivers (including the pre-release ones we use) with a full set of performance tweaks, multi-GPU profiles, and feature support. They’re slowly improving, but for what we do, 64 bits is still a risk.

    • Fighterpilot
    • 12 years ago

    l[

    • Dposcorp
    • 12 years ago

    Excellent review; thanks Scott.
    I can already see the posts to this review.

    The lovers will say congrats and good job AMD for taking the crown back.
    The haters will say it doesn’t count ’cause it’s two GPUs. 🙂

    Personally, I say good job AMD, and here is why:

    1) I like the fact that this is the fastest single-slot, non-chipset-dependent video card around. Nice that I don’t have to limit myself to a particular chipset anymore to get dual GPUs running. I think the end result is all that matters.

    2) Not only is it fast, but it also supports DX10.1. We shall see down the road whether it will ever matter or not, but it’s still nice to have.

    3) I wonder if the drivers will someday allow one of the GPUs to handle physics while the other does the graphics, ’cause I can see that happening.

      • willyolio
      • 12 years ago

      Yeah, I don’t care if it’s two GPUs on a card. I can put this card into a mATX board. There’s no faster option right now.

        • Flying Fox
        • 12 years ago

        Until Nvidia’s GX2 is out…

        They are lucky that with Chinese New Year going on they have a week or two more on top. But a win is a win.

          • willyolio
          • 12 years ago

          true enough. i’m still waiting to see the performance (and price) of that card as well.

          still, ATI wins for now.

      • Flying Fox
      • 12 years ago

      The cooler still makes it take up 2 slots.

        • Pettytheft
        • 12 years ago

        True but it doesn’t cost an arm and a leg for a dual GPU motherboard.

        • pluscard
        • 12 years ago

        It takes two slots in the back panel, but it doesn’t use two slots on the motherboard.

        It’s no bigger than the first Sapphire 3870s that Newegg was selling.

        It’s a clear winner on everything except Crysis and BioShock, and it’s much cheaper than the Ultra.

        Next, AMD will put two GPUs on the same silicon, as they have done with CPUs.

        AMD is once again driving the market.

          • Dagwood
          • 12 years ago

          “Next, AMD will put two GPUs on the same silicon, as they have done with CPUs”

          Buzzzzz, wrong answer.

            • Anonymous Coward
            • 12 years ago

            The R&D for the dual-core approach is probably a whole lot lower than for a monolithic chip with equivalent performance (and smaller sales volume). Of course, the R&D cost is somewhat shifted from hardware to software, but if they do manage to get that ironed out pretty well and it becomes a standard, there should be no more incompatibility problems.

            • Lazier_Said
            • 12 years ago

            Keeping the GPUs physically separated is a significant aid to cooling.

            200 watts on one die (or two dies 1mm apart) is probably not feasible with air cooling that fits the size/noise envelope of a PC.

            • Anonymous Coward
            • 12 years ago

            True, but presumably they would want to do dual-core in a future product fabbed on a smaller process. For the high end it’s either going to be multi-core or monolithic; they’ll use those transistors somehow.

          • Flying Fox
          • 12 years ago

          Haven’t we beaten the topic to death that GPUs are already highly parallel and they just need to put more SPs not more cores?

            • flip-mode
            • 12 years ago

            The task will never be finished Fox. Drink some Red Bull.

            • Anonymous Coward
            • 12 years ago

            ATI launched this dual chip product because they have found it too difficult to make a comparable monolithic design. In the future, they will also find it easier to bind two cores together on one die than to make that entire transistor count play nicely in one core. Monolithic is probably faster, but if it risks being 6+ months late to market, it might be a disaster.

            • willyolio
            • 12 years ago

            No, ATI launched this product to test out their on-board CrossFire bridge design before the R700 launches. From the R700 onwards, all their designs will supposedly be multi-chip.

            It’s not that they had difficulty designing a monolithic chip; it’s that they don’t want to get any bigger because it’s not economical.

            • SPOOFE
            • 12 years ago

            “In the future, they will also find it easier to bind two cores together on one die”

            Which will turn it into that “monolithic” die that you say they can’t make. I don’t think you know what you’re talking about.

            Anyway, yay AMD. If nothing else, it looks like they’ve set a new high-water mark for the quality of multi-GPU drivers, at least in certain respects.

            • Anonymous Coward
            • 12 years ago

            Actually I was talking about monolithic GPUs not monolithic dies. If they eventually can get 80% to 90% scaling by doubling the number of GPUs then it could be worth their while to make something that is half the transistor count and put two on one die. (The same number of transistors, but lower R&D and lower risk of being late to market.)

            • DaveBaumann
            • 12 years ago

            Consider this: RV670 is nearly the same number of transistors as R600; RV620 is more complex than R420!

            Do you think that ~700M transistors is some sort of ceiling that is difficult to go past, and that all we’ll be able to do is shrink chips from here on in? Because, I think I’m fairly safe to say, that’s not a strategy to go forward with.

            With process technology being what it is, today’s “monolithic die” is tomorrow’s performance part, and next week’s entry level.

            • Anonymous Coward
            • 12 years ago

            I don’t think that they have made the largest chip they can ever make. However, if you were running ATI, wouldn’t you feel tempted to use a slightly modified R600/RV670 as the basis of a dual-core “one-chip CrossFire” midrange product at the next manufacturing node? The R&D costs are practically negligible and the thermals should work out. I can only speculate whether the performance would be competitive. I guess they just need to figure out something to do about sharing VRAM.

            I’m obviously not designing these things, but I would feel very tempted to spam the R600/RV670 core everywhere while working on some righteous R700.

            • TO11MTM
            • 12 years ago

            Since everyone is weighing in on what AMD “Should” do… I’ll throw in my 2 cents.

            Multiple levels of binning could make a lot of sense. Have one “dual-core” design… if one core fails, it’s an xxx100. If both are groovy, it’s an xxx500. If it’s got two chips where everything’s groovy, it’s an xxx900.

            I’m just saying, that’s how I’d do it.

            • willyolio
            • 12 years ago

            the problem with having a giant chip is that you have to sell giant chips.

            the overwhelming majority of chips are low-end or midrange. unless you’re actually planning on having 90% of your chips be defective (and binned into the low and midrange), you’re not making good use of your silicon. it’s a huge waste of money. and if you are, then that’s a stupid plan.

            the only other option is to disable and sell perfectly viable chips at a lower cost, because few people will be willing to pay the full price of the full, high-end chip. which is pretty much what they’ve been doing so far, which is also pretty stupid.

            so, with the R700 design, having multiple smaller chips and finding a way to join them together seamlessly (as far as the OS is concerned) is the obvious way to go. as long as there isn’t a significant performance hit from having them on separate dies, there’s really no point in keeping them together.

    • DrDillyBar
    • 12 years ago

    Love the callbacks to the MAXX. We must remember these things after all. 😉 (3dfx)
    The multi-monitor experience you had is fantastic!
    And YAY! for 1680×1050. 🙂

    • My Johnson
    • 12 years ago

    I forgot what cut-through switching is. It’s where, once the header of the frame is read, the frame is routed before the entire frame is received, no?

      • just brew it!
      • 12 years ago

      Yeah, that’s the basic idea. The switch starts forwarding the packet as soon as the destination can be determined, even if the entire packet has not been received yet.
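
      Just to illustrate the idea with made-up numbers (the link speed and sizes below are arbitrary, purely for the sake of example), here is a quick sketch of why that matters for latency:

      # Store-and-forward vs. cut-through forwarding latency (illustrative only).
      link_bytes_per_sec = 250e6   # pretend the link moves ~250 MB/s
      packet_bytes = 256           # whole packet
      header_bytes = 16            # enough to read the destination address

      store_and_forward = packet_bytes / link_bytes_per_sec   # buffer whole packet first
      cut_through = header_bytes / link_bytes_per_sec         # forward once header is in

      print("store-and-forward: %.0f ns per hop" % (store_and_forward * 1e9))
      print("cut-through:       %.0f ns per hop" % (cut_through * 1e9))

      # Cut-through only waits for the header before it starts sending the
      # packet out the destination port, so the added per-hop latency scales
      # with header size instead of packet size.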

    • SecretMaster
    • 12 years ago

    Aside from power consumption and noise levels, what I find bothersome is the monstrous size of this card. I miss the days when they weren’t massively long and didn’t weigh a ton with their cooling designs. But I suppose one day we’ll start seeing shorter PCB designs.

    Oh and thanks a bunch for posting the temps with this review. I’m just glad to see that the peak temp is relatively decent compared to previous generations with stock cooling.

    • sam0t
    • 12 years ago

    edited dual post.

    • danny e.
    • 12 years ago

    started reading.. and just got too tired, so skimmed the rest.

    goods:
    better consistency in performance.
    not a bad price.
    better than Nvidia’s experimental cards…

    bads:
    too big, too loud, too much power draw.
    still inconsistent performance.

    .. sadly, there is still no card that can handle Crysis.

      • indeego
      • 12 years ago

      There’s little difference display-wise between tweakguide’s high (tweaked) and very high. But the performance is much better.

    • just brew it!
    • 12 years ago

    I’ll bet the copper cooler on the second GPU is needed because the air blowing over it is already pretty hot from cooling the first GPU. The copper boosts the efficiency of the second cooler enough that it can still do its job…

      • vince
      • 12 years ago

      That’s exactly what I was thinking…

    • sam0t
    • 12 years ago

    To me this card is quite impressive. Not only is the performance there, the single-card dual-GPU solution seems much more elegant than Nvidia’s Frankenstein with two cards strapped together with MacGyver tape.

    If there is something not to like about this card, it’s the power consumption and noise; the latter can be fixed with a 3rd-party cooler or water cooling, though.

    • marvelous
    • 12 years ago

    Damn, AMD did it. Took the crown from Nvidia. AMD’s X2 works better than any dual-card solution; it’s giving AMD 10-20% over CrossFire. The benchmarks speak for themselves, beating dual 3870s in most cases with slower clock speeds.

    • crazybus
    • 12 years ago

    I like the fact that ATI got multi-monitor working with multi-GPU, but it doesn’t change my misgivings concerning SLI/CrossFire/X2/MAXX/GX2, whatever they want to call it.

    • SNM
    • 12 years ago

    I’m waiting for the day when graphics cards consist of a front-end chip and varying numbers of shader bank chips that just get glued together on a board. I really don’t understand how this could be so hard unless latencies end up killing it. Certainly it would provide far, far better scalability and compatibility than current multi-gpu options in an ideal world.

      • just brew it!
      • 12 years ago

      I suspect that texture memory is the issue. You’d either have to give each shader chip its own bank of texture memory, or get killed on latency/contention going through a common memory controller.

    • TurtlePerson2
    • 12 years ago

    I don’t see this as a very attractive product to very many people. With the new GX2 cards from nVidia coming, this doesn’t look like it’ll have the performance crown very long. The price is so high that only early adopters will pick one up and even they will be wary since better cards are right around the corner. It’s nice to see AMD take the performance crown, but it won’t be there for long.

    • Krogoth
    • 12 years ago

    Yawn, hardly impressive.

    AMD’s attempt at its own 7950GX2 just falls short.

    It barely defeats the obsolete and aging 8800U (really a factory-overclocked 8800GTX), while the more affordable 8800GT is almost as fast.

    The upcoming 8800 GT X2 on a single card is coming to completely dethrone the 3870 X2 as the fastest single-PCB graphics solution.
