
Under the hood
As I said, the X2 becomes more interesting under the hood. Here's what you'll find there.


Dual GPUs flank the X2's PCI Express switch chip

The two chips on the left and right are Radeon HD 3870 GPUs, also known by the code-name RV670. The X2's two GPUs flip bits at a very healthy 825MHz, up 50MHz from the Radeon HD 3870. Each GPU has a 256-bit interface to its own 512MB bank of memory. That gives the X2 a total of 1GB of RAM on the card, but since the memory is split between the two GPUs, the card's effective memory size is still 512MB. The X2's GDDR3 memory runs at 900MHz, somewhat slower than the 1125MHz GDDR4 memory on the single 3870. All in all, this arrangement should endow the X2 with more GPU power than a pair of 3870 cards in CrossFire but less total memory bandwidth. AMD says its partners are free to use GDDR4 memory on an X2 board if they so choose, but we're not aware of any who plan to do so.
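
To put rough numbers on that bandwidth comparison, here's a quick back-of-the-envelope sketch. It assumes the usual double-data-rate factor of two on the quoted memory clocks, so treat the output as peak figures and nothing more.

```python
# Rough peak-bandwidth math for the memory configurations quoted above.
# Assumes the usual double-data-rate factor of two on the listed memory clocks.

def peak_bandwidth_gbs(mem_clock_mhz, bus_width_bits, data_rate_factor=2):
    """Peak memory bandwidth in GB/s for one GPU's memory interface."""
    bytes_per_second = mem_clock_mhz * 1e6 * data_rate_factor * (bus_width_bits / 8)
    return bytes_per_second / 1e9

x2_per_gpu = peak_bandwidth_gbs(900, 256)    # X2: 900MHz GDDR3, 256 bits per GPU
hd_3870    = peak_bandwidth_gbs(1125, 256)   # Radeon HD 3870: 1125MHz GDDR4

print(f"3870 X2, per GPU: {x2_per_gpu:.1f} GB/s")   # ~57.6 GB/s
print(f"Radeon HD 3870:   {hd_3870:.1f} GB/s")      # ~72.0 GB/s
```

Double the per-GPU figure for the X2 as a whole and you can see why a pair of 3870s still has the edge in total memory bandwidth, even with the X2's faster cores.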

By the way, in a confusing bit of roadmappery, the X2's code-name is R680, although it's really just a couple of RV670s glued together.


The PLX PEX 8547 PCIe switch

The "glue," in this case, is a PCI Express switch chip. That mysterious chip is a PLX 8547 PCIe switch. Here's how PLX's website describes the chip:

The ExpressLane™ PEX 8547 device offers 48 PCI Express lanes, capable of configuring up to 3 flexible ports. The switch conforms to the PCI Express Base Specification, rev 1.1. This device enables users to add scalable high bandwidth, non-blocking interconnects to high-end graphics applications. The PEX 8547 is designed to support graphics or data aggregation while supporting peer-to-peer traffic for high-resolution graphics applications. The architecture supports packet cut-thru with the industry's lowest latency of 110ns (x16 to x16). This, combined with large packet memory (1024 byte maximum payload size) and non-blocking internal switch architecture, provide full line-rate on all ports for performance-hungry graphics applications. The PEX 8547 is offered in a 37.5 x 37.5mm² 736-ball PBGA package. This device is available in lead-free packaging.

Order yours today!

This switch chip sounds like it's pretty much tailor-made for the X2, where it sits between the two GPUs and the PCIe x16 graphics slot. Two of its ports attach to the X2's GPUs, with 16 lanes of connectivity each, and the remaining 16 lanes connect to the rest of the system via the PCIe x16 slot. Since PCI Express employs a packet-based data transmission scheme, both GPUs ought to have reasonably good access to the rest of the system when connecting through a switch like this one, even though neither has a dedicated 16-lane link of its own. That said, the X2's two GPUs are really no more closely integrated than a pair of Radeon HD 3870 cards in CrossFire.
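
If it helps to picture the plumbing, here's a minimal sketch of how the switch's 48 lanes appear to be divvied up and which traffic has to cross the shared uplink. The port names are mine, not PLX's, and the routing rule is just the obvious inference from the peer-to-peer support PLX touts above.

```python
# A sketch of how the PEX 8547's 48 lanes appear to be carved up on the X2.
# Port names are made up for illustration.

ports = {
    "RV670 #0":    16,   # x16 link down to the first GPU
    "RV670 #1":    16,   # x16 link down to the second GPU
    "host uplink": 16,   # x16 link up to the motherboard's PCIe slot
}
assert sum(ports.values()) == 48   # every lane on the switch is spoken for

def uses_shared_uplink(src, dst):
    """Traffic to or from the host crosses the single shared x16 uplink;
    GPU-to-GPU (peer-to-peer) packets can be switched locally instead."""
    return "host uplink" in (src, dst)

print(uses_shared_uplink("RV670 #0", "host uplink"))   # True: both GPUs share this path
print(uses_shared_uplink("RV670 #0", "RV670 #1"))      # False: stays inside the switch
```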

Also, as you may know, the Radeon HD 3870 GPU itself is capable of supporting PCIe version 2.0, the revved-up version of the standard that offers roughly twice the bandwidth of PCIe 1.1. The PLX switch chip, however, doesn't support PCIe 2.0. AMD says it chose to go with PCIe 1.1 in order to get the X2 to market quickly. I don't imagine 48-lane PCIe 2.0 switch chips are simply falling from the sky quite yet, and the PCIe 1.1 arrangement was already familiar technology, since AMD used it in the Radeon HD 2600 X2.
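
For a rough sense of what PCIe 2.0 would have bought here: both generations use 8b/10b encoding, so the arithmetic below just knocks 20% off the signaling rate and scales by lane count.

```python
# Per-lane PCIe throughput in each direction, after 8b/10b encoding overhead.
def lane_mbs(signaling_gt_per_s):
    return signaling_gt_per_s * 1e9 * (8 / 10) / 8 / 1e6   # MB/s per lane

gen1 = lane_mbs(2.5)   # PCIe 1.1: 250 MB/s per lane
gen2 = lane_mbs(5.0)   # PCIe 2.0: 500 MB/s per lane

print(f"x16 uplink at PCIe 1.1: {gen1 * 16 / 1000:.0f} GB/s per direction")  # 4 GB/s
print(f"x16 uplink at PCIe 2.0: {gen2 * 16 / 1000:.0f} GB/s per direction")  # 8 GB/s
```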

Incidentally, the PLX bridge chip adds about 10-12W of power draw to the X2.

AMD plans to make Radeon HD 3870 X2 cards available via online resellers starting today, with an initial price tag of $449. At that rate, the X2 should cost slightly less than two Radeon HD 3870s at current street prices.

The dual GPU issue
Multi-GPU video cards aren't exactly a new thing, of course. They've been around almost since the beginning of consumer 3D graphics cards. But multi-GPU cards and schemes have a storied history of driver issues and support problems that we can't ignore when assessing the X2. Take Nvidia's quad SLI, for instance: it's still not supported in Windows Vista. Lest you think we're counting Nvidia's problems against AMD, though, consider the original dual-GPU orphan: the Rage Fury MAXX, a pre-Radeon dual-GPU card from ATI that never did work right in Windows 2000. The prophets were right: All of this has happened before, and will again.

The truth is that multi-GPU video cards are weird, and that creates problems. At best, they suffer from all of the same problems as any multi-card SLI or CrossFire configuration (with the notable exception that they don't require a specific core-logic chipset on the motherboard in order to work). Those problems are pretty well documented at this point, starting with performance scaling issues.

Dual-GPU schemes work best when they can distribute the load by assigning one GPU to handle odd frames and the other to handle the even ones, a technique known as alternate-frame rendering, or AFR. AFR provides the best performance scaling of any load-balancing technique, up to nearly twice the performance of a single GPU when all goes well, but it doesn't always work correctly. Some games simply break when AFR is enabled, so the graphics subsystem may have to fall back to split-frame rendering (which is just what it sounds like: one GPU takes the top half of the screen and the other takes the bottom half) or some other method in order to maintain compatibility. These other load-balancing methods sometimes deliver underwhelming results.
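
For the curious, here's a toy sketch of how those two schemes split the work across two GPUs. Real drivers do this far below the API, and the function names are mine, but the division of labor boils down to something like this.

```python
# Toy illustration of the two load-balancing schemes described above,
# for a hypothetical two-GPU card like the X2.

NUM_GPUS = 2

def afr_gpu_for_frame(frame_number):
    """Alternate-frame rendering: whole frames alternate between the GPUs."""
    return frame_number % NUM_GPUS            # GPU 0 gets even frames, GPU 1 odd

def sfr_gpu_for_scanline(scanline, screen_height):
    """Split-frame rendering: each GPU renders one horizontal slice of every frame."""
    return 0 if scanline < screen_height // 2 else 1

# AFR keeps both GPUs busy on independent frames...
print([afr_gpu_for_frame(f) for f in range(6)])        # [0, 1, 0, 1, 0, 1]
# ...which is exactly what breaks when one frame depends on data from the last.

# SFR on a 1200-line frame: top half to GPU 0, bottom half to GPU 1.
print(sfr_gpu_for_scanline(300, 1200), sfr_gpu_for_scanline(900, 1200))   # 0 1
```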

Things grow even more complicated from there, but the basic reality is this: GPU makers have to keep a database of games in their drivers and provide hints about what load-balancing method—and perhaps what workarounds for scaling or compatibility problems—should be used for each application. In other words, before the X2's two GPUs can activate their Wonder Twin powers, AMD's drivers may have to be updated with a profile for the game you're playing. The same is true for a CrossFire rig. The game may have to be patched to support multi-GPU rendering, as well, as Crysis was recently. (And heck, Vista needs several patches in order to support multi-GPU rigs properly.) Getting those updates tends to take time—weeks, if not months.
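
To make the profile idea concrete, here's a purely hypothetical sketch of that kind of lookup. Every game name, mode, and workaround flag below is invented for illustration; the real profiles live inside the driver packages and are far messier.

```python
# A hypothetical sketch of the per-game profile lookup described above. All
# names and flags here are invented for illustration only.

GAME_PROFILES = {
    # executable        -> (load-balancing mode, compatibility workarounds)
    "crysis.exe":          ("AFR",         ["sync_render_targets"]),
    "oldershooter.exe":    ("split_frame", []),
    "brokengame.exe":      ("single_gpu",  []),   # scaling disabled on purpose
}

def pick_multi_gpu_mode(executable):
    """With no profile, fall back to a conservative default -- which is why a
    brand-new title often doesn't scale until a driver update ships."""
    return GAME_PROFILES.get(executable, ("single_gpu", []))

print(pick_multi_gpu_mode("crysis.exe"))    # ('AFR', ['sync_render_targets'])
print(pick_multi_gpu_mode("newgame.exe"))   # ('single_gpu', []) until a profile arrives
```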

One would think that game developers and GPU makers would work together more diligently to ensure a good out-of-the-box experience for gamers with SLI or CrossFire, but the reality is that multi-GPU support is almost inescapably a lower priority for both parties than just getting the single-GPU functionality right—and getting the game shipped. The upshot of this situation is that avid gamers may find they've already finished playing through the hottest new games before they're able to bring a second GPU to bear on the situation. Such was my experience with Crysis.

Generally speaking, Nvidia has done a better job on this front lately than AMD. You'll see the Nvidia logo and freaky theme-jingle-thingy during the startup phase of many of the latest games, and usually, that means the game developer has worked with Nvidia to optimize for GeForce GPUs and hopefully SLI, as well. That said, AMD has been better about supporting its multi-GPU scheme during OS transitions, such as the move to Vista or to 64-bit versions of Windows.

This one-upsmanship reached new heights of unintentional comedy this past week when Nvidia sent us a note talking down the Radeon HD 3870 X2 on the grounds that multi-GPU cards often have compatibility problems. It ended with this sentiment: besides, wait until you see our upcoming GX2! Comedy gold.

The problem is that neither company has made multi-GPU support as seamless as it should be, albeit for different reasons. Both continue to pursue this technology direction, but I remain skeptical about the value of a card like the X2 given the track record here. Going with a multi-GPU card to serve the high end of the market is something of a risk for AMD, but one that probably makes sense given the relative size of that market. Whether it makes sense for you to buy one, however, is another question.