Diamond’s Radeon HD 2900 XT 1GB graphics card

IF YOU’VE BEEN PAYING attention, you probably already know that AMD elected not to reach for the overall graphics performance crown when it introduced its new generation of Radeons, including the flagship Radeon HD 2900 XT. Instead, the company chose to target the $399 mark, where the 2900 XT would compete against the GeForce 8800 GTS. This was a surprising move, indeed—practically without precedent in the modern GPU race.

AMD did it for solid reasons, of course, including the fact that its new graphics processor wasn’t quite up to the task of taking on the GeForce 8800 GTX head to head. That makes the product we’re reviewing today all the more interesting, because it’s a souped-up version of the Radeon HD 2900 XT with a full gig of screaming fast GDDR4 memory. AMD has practically kept mum on this product, choosing not to point out its existence or highlight it in any way. But Diamond and a handful of other AMD partners have been shipping the cards to PC makers for a few weeks now, and contrary to the initial plan, cards are quietly beginning to show up at online retailers, as well.

This card, with a staggering 128 GB/s of memory bandwidth, raises a number of intriguing questions: about the role of memory bandwidth, about graphics memory size (how much is enough?), and, most of all, about the potential of AMD’s R600 GPU. Some time has passed since the 2900 XT’s debut, drivers have had time to mature, and here we have a faster version of the card. Can this new 2900 XT take on Nvidia’s best? Are the matchups altered in DirectX 10 games? And how does the UVD-less 2900 XT really perform in HD video playback? Read on for all of these answers and more.

The card
You wouldn’t know this Radeon HD 2900 XT was anything special by looking at it. It looks the same as the original 512MB version, right down to the cheesy metallic flame job on its cooler. Underneath that cooler, though, lies an array of GDDR4 memory chips with double the density of the GDDR3 chips on the original 2900 XT. GDDR4 debuted on the Radeon X1950 XTX, and at that time, ATI claimed this new RAM type could achieve higher clock speeds while drawing less power per clock cycle. We were somewhat surprised, then, to see Nvidia introduce a whole new lineup of world-beating graphics cards without using GDDR4, and we were even more startled when AMD didn’t lead off with a GDDR4 version of the 2900 XT.

That card is here at last, though, and Diamond is planning two versions: one with a stock 743MHz core clock and 2GHz memory, and another, “overclocked” variant with an 825MHz core and 2.1GHz memory. Both offer quite a bit more memory throughput than the original 2900 XT, whose GDDR3 memory runs at 825MHz; over its 512-bit memory interface, the GDDR3 2900 XT has a theoretical peak of 106 GB/s. Our 2900 XT 1GB GDDR4 card is the slower of the two versions, but it still has 128 GB/s of memory bandwidth, well above the stock 2900 XT and, astoundingly, roughly half again as much as the 86 GB/s of the GeForce 8800 GTX.
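
If you want to check the math, peak memory bandwidth is just the bus width in bytes multiplied by the effective transfer rate. Here’s a minimal sketch in Python; note that the 8800 GTX’s 384-bit bus and 900MHz memory clock are the card’s published specs rather than figures from this article:

    # Peak memory bandwidth = (bus width in bytes) x (effective transfer rate).
    # GDDR3 and GDDR4 transfer data twice per clock, so "2GHz memory" here means a 1GHz base clock.

    def peak_bandwidth_gb_s(bus_width_bits, memory_clock_mhz, transfers_per_clock=2):
        """Theoretical peak memory bandwidth in GB/s."""
        bytes_per_transfer = bus_width_bits / 8
        effective_rate_mt_s = memory_clock_mhz * transfers_per_clock
        return bytes_per_transfer * effective_rate_mt_s / 1000

    print(peak_bandwidth_gb_s(512, 825))    # 2900 XT 512MB, 825MHz GDDR3: ~105.6 GB/s (the ~106 quoted above)
    print(peak_bandwidth_gb_s(512, 1000))   # 2900 XT 1GB, "2GHz" effective GDDR4: 128 GB/s
    print(peak_bandwidth_gb_s(384, 900))    # GeForce 8800 GTX (384-bit, 900MHz GDDR3 -- published specs): ~86.4 GB/s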

As I’ve mentioned, AMD initially planned to sell these cards exclusively through system builders, who would presumably stuff them into $5000 PCs painted metallic green. Fortunately, though, individual cards are already starting to show up at online vendors, and Diamond expects its branded cards to be in stock at Newegg soon. Street prices look to be between $489 and $549, which makes this card a fairly direct competitor to the GeForce 8800 GTX.

Oh, yeah. I took some pictures.

Here’s a look at the funky eight-pin power plug on the 2900 XT. This plug will accept a six-pin connector, but AMD’s Overdrive overclocking utility isn’t available unless an eight-pin power plug is connected. I’ve never seen an adapter to convert another connector type to an eight-pin one, either, so you’re probably looking at a power supply upgrade if you want to overclock this card.

These are the “native” CrossFire connectors on the 2900 XT. In case you didn’t see the memo, dongles aren’t needed anymore.

Fresh competition
Nvidia hasn’t been sitting still as AMD has released its new products. Fortunately, this review gives us a chance to catch up on some new developments from the GeForce folks, as well. Among them is this EVGA GeForce 8800 GTS “Superclocked” card:

We took a brief look at this card in our initial Radeon HD 2900 XT review, and we said we’d follow up later. Now is the time. This card has a 575MHz core clock and 1.7GHz memory, a nice boost from “stock” GTS speeds. Of course, the card is fully covered by its warranty at these clock speeds. At $394 at Newegg, this is tough new competition for the GDDR3 version of the 2900 XT.

These monsters are GeForce 8800 Ultra cards from XFX. We were underwhelmed by the Ultra when it was introduced because it didn’t offer much of a speed boost over the 8800 GTX, yet it cost quite a bit more. We said at the time that higher-clocked versions of the card might bring some redemption for the Ultra and pledged to keep an eye on it. Well, here you have the GeForce 8800 Ultra “XXX” from XFX, clocked at a stratospheric 675MHz, with 2.3GHz memory and a 1667MHz shader core. Now that’s a little more like it. Furthermore, the price of even this stupid fast Ultra is down to $699.99 at ZipZoomFly. It ain’t cheap, but it’s certainly not the heart-stopping $830 price tag attached to Ultras at their launch, either. Perhaps a measure of redemption for the Ultra is, ahem, in the cards?

The matchup
Before we dive into our test results, let’s pause to have a look at the theoretical matchups between these cards.

                                     Peak pixel    Peak texel      Peak memory   Peak shader
                                     fill rate     filtering rate  bandwidth     throughput
                                     (Gpixels/s)   (Gtexels/s)     (GB/s)        (GFLOPS)
GeForce 8800 GTS                     10.0          12.0            64.0          346
EVGA GeForce 8800 GTS Superclocked   11.5          13.8            68.0          389
GeForce 8800 GTX                     13.8          18.4            86.5          518
GeForce 8800 Ultra                   14.7          19.6            103.7         576
XFX GeForce 8800 Ultra XXX           16.2          21.6            110.4         640
Radeon HD 2900 XT                    11.9          11.9            105.6         475
Radeon HD 2900 XT 1GB GDDR4          11.9          11.9            128.0         475
Radeon HD 2900 XT 1GB GDDR4 OC       13.2          13.2            134.4         528

I don’t want to dwell too long on these numbers, because they’re just theoretical peaks, but such things do tend to matter in graphics. The big things to take away from this table are pretty obvious. The GeForce 8800 cards tend to beat out the Radeon HD 2900 XT cards in texture filtering rate—in some cases, like the 2900 XT 1GB GDDR4 vs. the 8800 GTX, by quite a bit. The GeForce 8800 is simply loaded on this front. Conversely, the various flavors of the Radeon HD 2900 XT tend to have vastly more memory bandwidth than the competition, as we’ve already noted.

Peak shader throughput is a tricky thing, but I’ll give you my take. The numbers above give Nvidia’s G80 GPU credit for a MUL instruction that it can co-issue in certain situations. If you take that away, which might be the more realistic thing to do, the G80’s numbers above would be reduced by a third. I think the peak numbers we have listed for the R600 are a little more solid, but this GPU’s real-world performance may be impaired by the superscalar nature of its execution units. Scheduling instructions optimally on such a machine can be challenging. The G80’s scalar architecture is more naturally efficient. My sense is that the R600 has more peak raw shader power, but that may not matter much in the end.
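
For the curious, here’s roughly how peak figures like those in the table are derived. A minimal sketch in Python; the unit counts and clock speeds below are the cards’ published specifications rather than numbers from this article, so treat them as assumptions:

    # Rough reconstruction of the table's peak figures. Unit counts and clocks are the cards'
    # published specifications (assumptions on our part, not values stated in this article).

    def gflops(units, clock_ghz, flops_per_unit):
        """Peak shader throughput = processing units x clock x flops per unit per clock."""
        return units * clock_ghz * flops_per_unit

    # GeForce 8800 GTX: 128 scalar SPs at 1.35GHz, credited with a MADD (2 flops) plus the co-issued MUL (1 flop)
    print(gflops(128, 1.35, 3))    # ~518 GFLOPS, as listed above
    print(gflops(128, 1.35, 2))    # ~346 GFLOPS with the MUL discounted -- the "reduced by a third" case

    # Radeon HD 2900 XT: 320 stream processors at 743MHz, one MADD (2 flops) each
    print(gflops(320, 0.743, 2))   # ~475 GFLOPS

    # Fill and filtering rates are per-clock unit counts times the core clock
    print(16 * 0.743)              # 2900 XT: 16 ROPs and 16 texture filtering units -> ~11.9 Gpixels/s and Gtexels/s
    print(24 * 0.575, 32 * 0.575)  # 8800 GTX: 24 ROPs, 32 bilinear texels filtered per clock at 575MHz -> 13.8 / 18.4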

Now, let’s have a look at performance.

Test notes
Both AMD and Nvidia released new drivers during our testing period for this article, so we dealt with the changes as best we could on a rolling basis.

For the Radeon cards, we began using the Catalyst 7.6 drivers, and then Cat 7.7 was released. The new drivers’ release notes didn’t promise much in the way of performance enhancements, but they did offer fixes for Lost Planet: Extreme Condition. We found that Lost Planet ran much faster with Cat 7.7, so we’ve included results using those drivers. Of our other DX10 apps, Company of Heroes was no faster with Cat 7.7 and Call of Juarez was slightly slower, so we stuck with our Cat 7.6 results. All video, power, and noise testing was conducted with Cat 7.7.

We started with ForceWare 158.45 on the GeForce cards, but employed the 163.11 beta release for testing Company of Heroes and Lost Planet, since the new drivers have specific fixes for those games. Call of Juarez was no faster with the 163.11 drivers, so we retained our 158.45 results. All video, power, and noise testing was conducted with 163.11.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

(Values are given as nForce 680i SLI system / Asus P5W DH Deluxe system where the two differ.)

Processor                Core 2 Extreme X6800 2.93GHz (both systems)
System bus               1066MHz (266MHz quad-pumped) (both systems)
Motherboard              XFX nForce 680i SLI / Asus P5W DH Deluxe
BIOS revision            P26 / 1901
North bridge             nForce 680i SLI SPP / 975X MCH
South bridge             nForce 680i SLI MCP / ICH7R
Chipset drivers          ForceWare 15.00 / INF update 8.1.1.1010, Matrix Storage Manager 6.21
Memory size              4GB (4 DIMMs) (both systems)
Memory type              2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz (both systems)
CAS latency (CL)         4
RAS to CAS delay (tRCD)  4
RAS precharge (tRP)      4
Cycle time (tRAS)        18
Command rate             2T
Hard drive               Maxtor DiamondMax 10 250GB SATA 150 (both systems)
Audio                    Integrated nForce 680i SLI/ALC850 with Microsoft drivers /
                         Integrated ICH7R/ALC882M with Microsoft drivers
Graphics (nForce 680i SLI system):
    EVGA GeForce 8800 GTS OC 640MB PCIe with ForceWare 158.45 and 163.11 drivers
    Dual EVGA GeForce 8800 GTS OC 640MB PCIe with ForceWare 158.45 and 163.11 drivers
    MSI GeForce 8800 GTX 768MB PCIe with ForceWare 158.45 and 163.11 drivers
    Dual GeForce 8800 GTX 768MB PCIe with ForceWare 158.45 and 163.11 drivers
    XFX GeForce 8800 Ultra XXX 768MB PCIe with ForceWare 158.45 and 163.11 drivers
    Dual XFX GeForce 8800 Ultra XXX 768MB PCIe with ForceWare 158.45 and 163.11 drivers
    Radeon HD 2900 XT 512MB PCIe with Catalyst 7.6 and 7.7 drivers
    Radeon HD 2900 XT 1GB GDDR4 PCIe with Catalyst 7.6 and 7.7 drivers
Graphics (Asus P5W DH Deluxe system, CrossFire):
    Dual Radeon HD 2900 XT 512MB PCIe with Catalyst 7.6 and 7.7 drivers
    Dual Radeon HD 2900 XT 1GB GDDR4 PCIe with Catalyst 7.6 and 7.7 drivers
OS                       Windows Vista Ultimate x86 Edition (both systems)

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to what you’d get with no-name DIMMs.

Our test systems were powered by PC Power & Cooling Turbo-Cool 1KW-SR power supply units. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Lost Planet: Extreme Condition
Lost Planet is one of the first games to use DirectX 10, and it has a DirectX 9 mode, as well, so we can do some direct comparisons. DX9 and DX10 are more or less indistinguishable from one another visually in this game, as far as I can tell. They both look gorgeous, though. This is one very good looking game, with subtle HDR lighting and a nifty motion-blur effect.

We tested by manually playing through a specific portion of the game and recording frame rates with FRAPS. Each gameplay sequence lasted 90 seconds, and we recorded five separate sequences per graphics card. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
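
For anyone scripting a similar FRAPS workflow, the averaging and median-of-lows step looks something like this; the frame-rate values below are made up purely for illustration:

    import statistics

    # Hypothetical results from five 90-second FRAPS sessions (illustrative numbers, not data from this review)
    runs = [
        {"avg": 41.2, "low": 28.0},
        {"avg": 39.8, "low": 31.5},
        {"avg": 40.5, "low": 12.0},   # one outlier low, perhaps from a background hiccup
        {"avg": 42.0, "low": 30.0},
        {"avg": 40.9, "low": 29.5},
    ]

    average_fps = statistics.mean(r["avg"] for r in runs)
    median_low = statistics.median(r["low"] for r in runs)   # the outlier doesn't drag this figure down

    print(f"Average FPS: {average_fps:.1f}, median low FPS: {median_low:.1f}")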

We tested with fairly high quality settings. The game’s various quality options were set to “high” with the exception of shadow quality and shadow resolution, which were set to “medium.” 4X antialiasing and 16X anisotropic filtering were enabled.

We did hit one snag in the DX10 version of Lost Planet. The Radeon HD cards didn’t appear to be applying anisotropic filtering to most surfaces, regardless of whether aniso was enabled in the game settings or forced on via Catalyst Control Center. Be aware of that as you’re looking at the results.

The move to more and faster memory doesn’t seem to help the Radeon HD 2900 XT much at all. The 1GB GDDR4 card is only slightly faster than the 512MB one, if at all. That leaves the 1GB just ahead of the GeForce 8800 GTS and trailing the GeForce 8800 GTX in the DX9 version of Lost Planet. The situation grows more acute in DX10, where the 1GB GDDR4 card falls behind the 8800 GTX. Every config we tested suffered a performance drop with DX10, though. CrossFire isn’t effective with either version of DirectX. I tried setting Catalyst A.I. to “Advanced” in order to force on alternate frame rendering, but it didn’t help. The game still ran fine, but it wasn’t any faster. I also ran into an odd problem, with or without CrossFire, where Lost Planet’s main menu was exceptionally slow, to the point of being almost unusable, on the Radeons. This problem only seemed to affect the DX9 version of the game, for whatever reason.

Before we move on, I’d like to take a quick look at performance without antialiasing. I explained in my Radeon HD 2400 and 2600 series review that AMD’s new GPUs use their shader cores to handle the resolve stage of multisampled antialiasing, something we didn’t know when we first reviewed the 2900 XT. The question is: how does this limitation impact performance? I figured Lost Planet’s DX9 version would be an interesting place to have a look. Here’s what I found.

                             Lost Planet DX9      Lost Planet DX9      Performance
                             average FPS, no AA   average FPS, 4X AA   penalty
Radeon HD 2900 XT 1GB        52.0                 39.5                 24%
GeForce 8800 GTX             50.1                 47.1                 6%

Without antialiasing, the Radeon HD 2900 XT 1GB GDDR4 is outright faster than the GeForce 8800 GTX, its direct competition. When we enable 4X AA, though, the Radeon’s performance dips precipitously, by 24%. By comparison, the 8800 GTX only suffers a 6% drop. Of course, we’re not likely to want to run without at least 4X AA when using a $400-500 graphics card, which is the whole problem. Store that away somewhere as you think about why the Radeon HD series hasn’t entirely lived up to its potential.
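
To make the shader-resolve point concrete: the resolve stage of multisampled AA boils down to averaging each pixel’s stored subsamples into one final color. Here’s a conceptual sketch in Python with NumPy, purely as an illustration of the work involved, not AMD’s actual implementation:

    import numpy as np

    # Conceptual 4X multisample resolve: average each pixel's subsamples into one final color.
    # Most GPUs do this in fixed-function ROP hardware; per the explanation above, R600 runs the
    # equivalent math on its shader core, which is where the extra cost of enabling AA comes from.

    height, width, samples, channels = 1200, 1920, 4, 3      # 4X MSAA at 1920x1200, for illustration
    msaa_buffer = np.random.rand(height, width, samples, channels).astype(np.float32)

    resolved = msaa_buffer.mean(axis=2)    # box-filter resolve: one color per pixel
    print(resolved.shape)                  # (1200, 1920, 3)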

Company of Heroes
Company of Heroes can also use DirectX 10, and its developers have included a simple performance test in the game. This test plays through an introductory cinematic scene and reports the average and low frame rates. We tested with all of the game’s visual quality options maxed, along with 4X antialiasing. Physics detail was set to “Medium.”

The 1GB GDDR4 card isn’t much faster than its 512MB sibling here, either. If there’s a bright spot, it’s the median low frame rate for the 1GB GDDR4 card, which matches that of the GeForce 8800 Ultra. CrossFire is a bust in this test, as well.

Call of Juarez
Next up is the Call of Juarez DX10 demo and benchmark, created by the game’s developers, Techland. This test employs a number of the game engine’s DirectX 10 features, including waterfall particles that use geometry shaders, soft-edged vegetation, and advanced material shaders. Looks purty, too. We tested at 1920×1200 resolution with 4X antialiasing, “normal” shadow quality, and 2048×2048 shadow map resolution.

The Radeon HD 2900 XT cards look much better in this test, beating out the 8800 GTX. One reason for this result may be Techland’s controversial decision to bypass hardware-based MSAA resolve and force all DX10 cards to use their shaders. Nvidia has complained about this decision, though it does seem to have some technical warrant. Then again, the two previous DX10 games we tested were part of Nvidia’s “The Way It’s Meant to Be Played” program, so who knows? We’ve looked at three DX10 titles, and they’re all still disputed ground. Crysis can’t come soon enough. Let’s head back to DX9 and some well-established games.

The Elder Scrolls IV: Oblivion
For this test, we went with Oblivion’s default “ultra high quality” settings augmented with 4X antialiasing and 16X anisotropic filtering, both forced on via the cards’ driver control panels. HDR lighting was enabled. We strolled around the outside of the Leyawiin city wall, as shown in the picture below, and recorded frame rates with FRAPS. This area has loads of vegetation, some reflective water, and some long view distances.

Even at 2560×1600 with 4X AA and 16X aniso, the 1GB GDDR4 card has only the slightest lead on the 2900 XT 512MB.

Supreme Commander
Like many RTS and isometric-view RPGs, Supreme Commander isn’t exactly easy to test well, especially with a utility like FRAPS that logs frame rates as you play. Frame rates in this game seem to hit steady plateaus at different zoom levels, complicating the task of getting meaningful, repeatable, and comparable results. For this reason, we used the game’s built-in “/map perftest” option to test performance, which plays back a pre-recorded game.

I’ve omitted CrossFire results due to some compatibility problems with this game.

Once more, the 1GB GDDR4 card’s monster memory bandwidth isn’t much help.

HD video playback – H.264
Next up, we have some high-definition video playback tests. We’ve measured both CPU utilization and system-wide power consumption during playback using a couple of HD DVD movies with different encoding types. The first of those is Babel, a title encoded at a relatively high ~25 Mbps with H.264/AVC. We tested playback during a 100-second portion of Chapter 3 of this disc and captured CPU utilization with Windows’ perfmon tool. System power consumption was logged using an Extech 380803 power meter.

We conducted these tests at 1920×1080 resolution on the Radeon HD cards and at 1280×800 on the GeForce 8800 cards, since they don’t support HDCP with dual-link DVI and thus can’t play HD DVD movies at higher resolutions on our Dell 3007WFP display. (We should note that this limitation shouldn’t apply to single-link DVI connections, even at 1920×1200 resolution.) To confirm that scaling the movie down to lower resolutions wouldn’t put the GeForce cards at a disadvantage, I also did some testing with the Radeon HD 2900 XT at 1280×800 resolution and found that playing movies at the lower resolution didn’t have a significant impact on power use or CPU utilization.
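
We captured CPU utilization with Windows’ perfmon tool. If you’d rather script the same kind of measurement, here’s a rough equivalent using Python’s psutil package; this is just a sketch of the idea, not the tool we actually used:

    import psutil

    # Sample system-wide CPU utilization once per second for 100 seconds,
    # roughly mirroring the 100-second playback window used above.
    samples = [psutil.cpu_percent(interval=1) for _ in range(100)]

    print(f"Average CPU utilization: {sum(samples) / len(samples):.1f}%")
    print(f"Peak CPU utilization: {max(samples):.1f}%")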

Also, I disabled noise reduction on the GeForce 8800 cards for our HD DVD playback tests. Nvidia’s post-processing results in a net loss of image quality, so leaving it on didn’t make any sense. ATI’s noise reduction looks better, which is good, because it can’t be disabled via Catalyst Control Center. I did try the GeForce 8800 Ultra with noise reduction enabled, and the feature had only a minimal impact on power consumption and CPU use.

CPU utilization is comparable among all of these high-end cards because none of them provide robust acceleration for HD video formats. We’ve seen CPU utilization numbers between 10 and 15% on the same test system with low-end cards that provide such acceleration capabilities. Among the high-end cards, the GeForce 8800s use slightly less CPU time to play the movie. The 2900 XT cards draw less power during playback, though, and the 1GB GDDR4 version averages about 6 watts less than its GDDR3 counterpart.

HD video playback – VC-1
Unlike Babel, Peter Jackson’s version of King Kong is encoded in the VC-1 format that’s more prevalent among HD DVD movies right now. This disc is encoded at a more leisurely ~17 Mbps.

This one is a clear win for the GeForce 8800 cards on CPU utilization. The 2900 XT 1GB GDDR4 again draws the least power of the bunch during playback, though.

HD HQV video image quality
We’ve seen how these cards compare in terms of CPU utilization and power consumption during HD video playback, but what about image quality? That’s where the HD HQV test comes in. This HD DVD disc presents a series of test scenes and asks the observer to score the device’s performance in dealing with specific types of potential artifacts or image quality degradation. The scoring system is somewhat subjective, but generally, the differences are fairly easy to spot. If a device fails a test, it usually does so in obvious fashion. I conducted these tests at 1920×1080 resolution on all cards. Here’s how they scored.

                                 GeForce    GeForce    GeForce      Radeon HD   Radeon HD 2900 XT
                                 8800 GTS   8800 GTX   8800 Ultra   2900 XT     1GB GDDR4
HD noise reduction               20         20         20           20          20
Video resolution loss            20         20         20           20          20
Jaggies                          10         10         10           20          20
Film resolution loss             0          0          0            25          25
Film resolution loss – Stadium   0          0          0            10          10
Total score                      50         50         50           95          95

I don’t think the video post-processing features in Nvidia’s 163.11 drivers are 100% cooked for the GeForce 8800 lineup. We’ve seen them achieve nearly a perfect score in HQV on a GeForce 8600, but no such result here. I deducted points from the GeForce cards in the jaggies test due to some odd, intermittent corruption artifacts that, when present, were worse than jaggies themselves. I suppose a score of zero might be appropriate here, if you’re not feeling lenient. Also, the film resolution loss tests are firm fails.

All cards lost points in the HD noise reduction test for artifacts. The Nvidia cards had some funky temporal artifacts, and the Radeons had some obvious de-interlacing problems. Ultimately, the Radeon HD cards come out looking much better in the final score.

That’s not the whole story, though. I’ve found that Nvidia’s noise reduction algorithm can be effective in HQV, but it’s unfortunately a net negative when used with actual HD movies. The algorithm introduces color banding and other artifacts that really annoy me. Nvidia has a long way to go before its noise reduction both produces a good HQV score and is something you’d want to use every day.

That said, in my experience, noise reduction and other post-processing techniques aren’t really necessary for most high-quality HD content like HD DVD movies. Those discs were mastered with a particular look to them, and the video looks great without any extra filtering; taking out film grain may even sully the director’s intent. HQV tests tough cases where the video source has problems. If a card can overcome those, it’s done something special and worthwhile.

For me, the biggest issue of all is the GeForce 8800 cards’ inability to support HDCP over dual-link DVI, which effectively kills their ability to play back HD movies on our gorgeous digital HD display. I’d worry a lot more about that than I would about an HD HQV score.

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Oblivion at 2560×1600 resolution with the game’s “ultra high quality” settings, 4X AA, and 16X anisotropic filtering. We loaded up the game and ran it in the same area where we did our performance testing.

Remember, we were forced to use a different motherboard for CrossFire testing, and that will affect system power consumption and noise levels.

The 1GB GDDR4 card again draws less power than its GDDR3 compatriot, and the GDDR4 card pulls fewer watts at idle than anything else in the field. The picture changes a little when running a game. The 2900 XT 1GB GDDR4 consumes just as much power as a GeForce 8800 GTX, but it generally offers lower performance than the GTX. Incidentally, XFX’s high-clock version of the GeForce 8800 Ultra seems to be paying for its high clock speeds and killer performance at the wall socket. The SLI rig with Ultras pulls 586 watts under load. Yow.

Noise levels and cooling
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 14″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the Zalman CNPS9500 LED we used to cool the CPU. The CPU cooler was set to run at a steady speed, 60% of its peak. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

For these tests, we had to swap in an OCZ GameXStream 700W power supply. The 1kW PC Power and Cooling unit offers astounding power, but it’s a little too loud to use during sound level testing.

I’ve said it before, and I’ll say it again. I’m pleased to see that my subjective impressions are confirmed by the decibel meter. The cooler on the 1GB GDDR4 card is noticeably louder than the one on our Radeon HD 2900 XT 512MB review unit. The 512MB card was an early sample, and I assume the 1GB GDDR4 card is more representative of production cards. This thing is noisy, folks. There’s a nearly 10 decibel gap between the 8800 GTX and the 2900 XT 1GB GDDR4 when running a game. That may not look like much in the graph above, but believe me, it’s an awful lot when you’re in the same room with it.
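
For a sense of scale, decibels are logarithmic: a 10 dB gap works out to ten times the sound power, which listeners typically perceive as roughly twice as loud. That rule of thumb, in a quick sketch:

    # Convert a decibel difference into a sound-power ratio and an approximate perceived-loudness ratio.
    # The "twice as loud per 10 dB" figure is the usual psychoacoustic rule of thumb, not an exact law.

    def power_ratio(delta_db):
        return 10 ** (delta_db / 10)

    def perceived_loudness_ratio(delta_db):
        return 2 ** (delta_db / 10)

    delta = 10    # the roughly 10 dB gap between the 8800 GTX and the 2900 XT 1GB under load
    print(f"{power_ratio(delta):.0f}x the sound power")            # 10x
    print(f"~{perceived_loudness_ratio(delta):.0f}x as loud")      # ~2x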

Interestingly enough, all of the 8800 cards are sufficiently quiet that, even under load, you won’t hear much out of them. I’m pretty certain that what’s registering on the dB meter for those cards is the additional noise coming out of our power supply fan, especially when it’s straining under the load of an SLI system. Not so with the 2900 XT cards, which overwhelm the PSU fan with their own coolers’ noise.

Overclocking
I didn’t spend a lot of time overclocking this card, but I did want to see how it would perform at the 825MHz core and 1050MHz (or 2100MHz DDR) speeds of the “OC” version Diamond is planning to offer.

Not bad, but not enough to catch the GeForce 8800 GTX. The card did seem to handle these clock speeds just fine, though. I didn’t see any hint of instability or image artifacts when it was running at these speeds.

Conclusions
At the end of the day, the Radeon HD 2900 XT 1GB GDDR4 remains a really intriguing product. With a full gig of memory, a 512-bit memory interface, screaming fast GDDR4 chips, and more bandwidth than Fruit of the Loom, this puppy is halfway to being the fastest—and most expensive—video card on earth. Yet the GPU onboard can’t seem to capitalize on the opportunity. The 1GB GDDR4 version of the Radeon HD 2900 XT wasn’t significantly faster than the original in any of the games we tested, including brand-new DX10 titles and new-ish DX9 titles running at very high resolutions and quality levels. That’s bad news for this video card, because it costs as much as a GeForce 8800 GTX and performs more like a GeForce 8800 GTS—not the best value proposition. This weakness is underscored by the fact that the 2900 XT doesn’t deliver lower CPU utilization than the GeForce 8800 series when playing back HD movies. AMD made it sound like the 2900 XT would benefit from UVD acceleration, but the GPU simply lacks that capability. In fact, the 2900 XT uses more CPU time during HD video playback than a competing GeForce 8800. AMD’s noise reduction and post-processing routines for HD video are superior, however.

Too bad that noise reduction doesn’t extend to the card’s cooler, which is just plain loud.

The 2900 XT does have some things to recommend it, including AMD’s nifty new tent filters for antialiasing. I remain convinced that the Radeon HD 2900 XT produces the highest quality images on the PC because of this feature. AMD just recently incorporated a new custom filter into its Catalyst 7.7 drivers that does an edge-detect pass and then selectively applies up to 24X antialiasing where needed. I played with that feature briefly while putting together this review, and I wasn’t impressed. The image quality is superb, but the performance hit is devastating. I expect this is only the sort of feature one would use in really old games, where performance is never an issue. I’ll have to play with the edge-detect filter more, but I think the tent filters are a better option overall.
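
For the curious, here’s the gist of how an edge-detect filter like this works: find the high-contrast pixels and spend the expensive, wide resolve only there. The sketch below is a conceptual illustration in Python, not AMD’s actual CFAA code:

    import numpy as np

    # Conceptual edge-detect AA: flag high-contrast pixels, then apply the expensive many-sample
    # resolve only to those. Purely an illustration of the idea, not AMD's CFAA implementation.

    def edge_mask(luma, threshold=0.1):
        """Mark pixels whose local luminance gradient exceeds a threshold."""
        gx = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
        gy = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
        return (gx + gy) > threshold

    luma = np.random.rand(1200, 1920).astype(np.float32)    # placeholder luminance image
    mask = edge_mask(luma)
    print(f"{mask.mean() * 100:.1f}% of pixels would get the wide, up-to-24X resolve")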

Beyond that, most of what’s left for the Radeon HD 2900 XT 1GB GDDR4 is hope for a better future. Perhaps AMD will someday deliver new video drivers that produce dramatically higher performance. Perhaps the first wave of true DirectX 10-only titles will alter the landscape radically. Perhaps the graphics usage model will change over time to where the 2900 XT’s relatively weak edge AA and texture filtering capacity doesn’t matter so much. Any or all of these things could happen. Or perhaps not.

Comments closed
    • VILLAIN_xx
    • 12 years ago

    I wouldnt buy any DX10 cards at the moment lol. Price is too high and Frame are still too unacceptable even at 1280 x 1024 on the current availble Dx 10 games.

    To get a decent frame rate of 99FPS on lost planet (100fps is how much the naked eye can view for any ones curiousty) you would need two 8800 ultras SLI… last time i checked those cards were around the 600 mark.. and times 2 would be 1200 bucks… um wow! Id rather buy ps3, Xbox 360 and Wii all the same time and have room left for games.

    But in case theres animators out there like me. You can mod the HD2900 series to a FireGL and still get frame rates in games. Thats the only reason why I dont lean towards Nvidia anymore. They made it impossible after the 6000 series to mod.

    thats my 2 cents!

    • RambodasCordas
    • 12 years ago

    Some of the posted games only require a patch to run on SM2.0.
    Besides you can’t blame Ati for that, blame the lame developer(s).
    Almost everything can be done with SM2.0. The SM3.0 was created “just” to increase rendering speed (according to Microsoft web site).
    There is more than enough raw power in Ati 9xxx and x8xx generation to run any of those games.
    X800 guys are getting very good FPS in BioShock with their cards.

    • marvelous
    • 12 years ago

    what kind of review only benchmark 2560×1600 and 1920×1200 with 4x AA to show us non playable settings?

    • Damage
    • 12 years ago

    I’ve made some changes in the HD video portion of the review to clarify that the GeForce 8800 can indeed play HDCP-encrusted movies at HD resolutions over single-link DVI connections, though not dual-link ones.

    • Anemone
    • 12 years ago

    On OC comparisons, should OC all the cards. Everyone does this (compares one overclocked to the rest @ stock) and it’s just makes things horrid to compare for those that push the limit.

    • l33t-g4m3r
    • 12 years ago

    I don’t really like some of the games chosen for benchmarking this card.

    Some scores looked like the result of either drivers, or the game.
    eg: how crossfire performed, and the lost planet game wasn’t working correctly with AF.

    Because of that I have a hard time taking these scores seriously.

      • Shodan
      • 12 years ago

      Oh boo-hoo it’s not like you have a lot of DX10 games out ATM. And if it is a driver problem then it shows the current state that the card is in i.e. when you buy it now you are going to get crappy Crossfire.

        • l33t-g4m3r
        • 12 years ago

        you don’t think nvidia’s “The Way it was Meant to be Played” had something to do with it?

    • herothezero
    • 12 years ago

    Another kick in the crotch for people hoping for real competition at the high-end of the market. Why is it always about poor execution at DAAMIT?

    • Shinare
    • 12 years ago

    YIKES! 1GB? Now we’re really going to start hearing about the 2GB barrier being broken in Vista causing computer meltdowns.

    • Bensam123
    • 12 years ago

    “Perhaps AMD will someday deliver new video drivers that produce dramatically higher performance.”

    With so much theoretical performance there you really have to wonder where it’s all going.

      • morphine
      • 12 years ago

      No, that statement is very correct. They will one day have new drivers that give you higher performance… when they launch a new card 😛

    • Chrispy_
    • 12 years ago

    Nice article. I always go back to GPU reviews that were done “early” in their life, and then get massive improvements through drivers.

    With regards to AA filtering, I’m really starting to wonder if anything above 4xAA is worth the performance hit – I think you’re right about really, really, old games seeing some benefit, but for something like the original half-life, I would have thought that jaggies are the least of your concerns when you’re staring at the ugly old graphics.

    On my 8800 with the majority of games I run, I stick to 4xAA/8xAniso just because you have to inspect things with a magnifying glass to see the improvements brought on from further filtering. UT 2004 runs without AA by default and whilst I could have gone back to change it in the drivers I discovered that within about 10 seconds of gameplay I just didn’t notice the lack of AA.

    I’d rather manufacturers concentrated on bringing up those minimum framerate values, rather than pumping out increasingly higher values of AA filtering.

      • Bensam123
      • 12 years ago

      anti-aliasing != anisotropic filtering

    • Kent_dieGo
    • 12 years ago

    No comparison of warrantees? The most important item in choosing a $400 video card was omitted. The XFX card has lifetime warrantee. If the Diamond card fails you throw $400 in the trash. I would value a lifetime warrantee at $100 when compairing cards.

      • mboza
      • 12 years ago

      $100 for a warranty on a $400 dollar product with a essentially limited lifespan? How often to do expect your card to fail?

      I just settled for a two year warranty on a 8800GTS, because by the time it expires, I expect to be able to upgrade to a 10800 with vastly improved performance, or replace it with a $50 entry level card. It would have been nice to have a 5 year warranty, but not worth the extra 10-15% premium imo.

      How much do you expect the diamond card to actually be worth when the warranty expires?

        • Kent_dieGo
        • 12 years ago

        I have had to RMA more video cards than anything else. Even if it is worth $200 when fails in two years, I want it fixed! Radeon 9800 still sell close to $100. In 3-4 years the 2900 may only be worth $100. I still would want replacment as I have several computers, not all cutting edge. I guess an extended warrantee would be close to $100 so that should be taken in account when pricing cards. A $300 8800GTS = $200 2900 XT in fair pricing

    • RambodasCordas
    • 12 years ago

    Very good article that bring some light of a few things.

    -Power consuming was lower than the GDDR3 version despite the fact that it carried 1024MB instead of 512MB. Which means that AMD could release for sure one much lower power consuming GDDR4 512MB card, why it doesn’t and why nobody does it is rather strange.

    -The GDDR4 didn’t slow down the card, like many rumors in the past that the GDDR4 version was slower than the GDDR3 version.

    -Ati card may not be the fastest of all today but I’m pretty sure it will be in the future. Imagine that games will “all” start using AA methods like the Call of Juarez? It will mean that Nvidia cards will all sunk sometime in the future.

    -Ati “new” cards are “never” much faster than nvidia new cards in today games but tend to be faster than NVIDIA cards in to be released games. Just do a bunch of retro reviews with some old 9800 VS FX5900 or X800 VS 6800 and you will see what I’m talking about. The Retro review could use some of today games mixed with some of the games of time when was the card released using up to date drivers of course.

      • Forge
      • 12 years ago

      It’s nice that you cling to that life raft so tightly, but arguing about what hardware will do in the future is the sign of an indefensible position, and Call of Warez has been thoroughly discredited as a performance benchmark. The developers are very buddy-buddy with the formerly red team, and Nvidia very publically whined about how they’d detuned their benchmark particularly to make NV look bad.

        • RambodasCordas
        • 12 years ago

        OK.
        Then 3Dmark is what?
        Just a benchmark, or a benchmark that will show how “future” games will run with current cards.

          • Peldor
          • 12 years ago

          Just a benchmark.

          • morphine
          • 12 years ago

          Sorry, but 3DMark is just a benchmark, and a very bad one at that TBH.

          I personally feel that people shouldn’t use it anymore given that for some time it doesn’t reflect real-world performance in any meaningful way. Add to that the fact that you can get super-higher scores if you have a good CPU and a lesser graphics card.

      • SPOOFE
      • 12 years ago

      There weren’t any substantial rumors that the GDDR4 version was slower than GDDR3. The contention was that the XTX (the GDDR4 version with 5 mhz higher clock speed) wasn’t really any faster.

      • odizzido
      • 12 years ago

      I’ve noticed that ATI cards tend to age better too.

        • swaaye
        • 12 years ago

        The R4x0 cards can’t even run some newer games because they don’t support SM3. R5x0 is certainly superior to G7x tho, IMO, if only from an image quality standpoint. Less filtering cheats and better AA.

        Heck I think R5x0 may be a better choice than R6x0 at this point. Cooler and a heck of a lot cheaper. Better drivers. Heh.

          • SPOOFE
          • 12 years ago

          An x1950XT goes for ~$150 right about now. I agree with you about the last-gen cards.

          • Forge
          • 12 years ago

          Show me any game that *requires* Shader Model 3.0 and I will stop laughing at you.

          I also believe that the filtering cheats and questionable optimizations for single apps were primarily a problem with the GeForce FX, and have been steadily disappearing since. I’m not aware of any filtering cheats on G70 by default.

            • Chryx
            • 12 years ago

            bioshock
            lost planet
            Rainbow Six Vegas
            splinter cell double agent

            There are a couple of others that slip my mind right now, all need SM3.0

    • slaimus
    • 12 years ago

    What they need to release is a 512MB GDDR4 version. It would keep the power consumption even lower, keep the increased bandwith, and keep the price about the same,

    • PerfectCr
    • 12 years ago

    I am still not used to referring to ATI cards as “AMD”. It just doesn’t feel right. Why did they drop the ATI name? I don’t associate AMD with graphics and I doubt others do as well.

      • nookie
      • 12 years ago

      Exactly. And as #11 said, this isn’t really even AMD’s work.

        • BoBzeBuilder
        • 12 years ago

        Who gives a damn what its called? Its not called ATI because ATI doesn’t exist anymore.

          • Bensam123
          • 12 years ago

          A universal understanding rather then spending 10mins to enlighten people with “whys?”.

          • Peldor
          • 12 years ago

          Actually it is still called ATI.

          ati.amd.com

          ATI FireGL, ATI Radeon, ATI Catalyst just to cite the front page.

          About the only thing that got the AMD badge are the chipsets.

      • ssidbroadcast
      • 12 years ago

      Right. A Mazda Protoge isn’t called a Ford Protoge. It’s still a Mazda. Just because AMD acquired ATi, they shouldn’t impose their brand over ATi’s work.

      • Forge
      • 12 years ago

      There we go!

      So who let the air out of the tires? Good Spock is pretty boring when not reviewed by a *PAID MARKETER*….

      Oops!

    • Krogoth
    • 12 years ago

    Thanks for the DX10 benches. They prove to everybody with a sensible mind that none of the current generation of DX10 can handle DX10 at a playable framerate without doing a major cutback on IQ and resolution.

    I am not even taking whatever the current crop of DX10 games will be worth it gameplay-wise into consideration.

    I have a feeling that overhyped Crysis will end-up being “Far Cry II”. Crytek doesn’t know how to make a fun game, but sure do make a pretty game though.

    I have a feeling that DX10 benches between both cards are *[

      • flip-mode
      • 12 years ago

      y[

        • kilkennycat
        • 12 years ago

        y[

      • SPOOFE
      • 12 years ago

      “Crytek doesn’t know how to make a fun game”

      Such an incorrect statement has never before been uttered by man.

        • Shinare
        • 12 years ago

        I was just ignoring that remark as probable flamebait. He cant surely be serious. Unless of course he forgot to specify online multi-player when making that statement.

        • Krogoth
        • 12 years ago

        It is called an opinion.

        Opinions do not equal facts.

        I didn’t find any value or fun in the Far Cry, but it doesn’t mean somebody else does.

        You are either dense or playing some ridiculous mind game.

          • SPOOFE
          • 12 years ago

          You’re allowed to state your opinion, but I’m not allowed to state mine? I think you’re completely daft in your opinion, and I’m fully excused in feeling that. If you want to spout your opinions like they were sacrosanct, I recommend you start your own religion, Chuckles.

        • morphine
        • 12 years ago

        Well I couldn’t believe how bad the actual Far Cry “game” was, for a number of reasons. I thought it was so incredibly bad that I couldn’t stand more than an hour of it. Also a lot of my friends tried it and they basically all said the same.

        I don’t know… seems it’s a love it/hate it thing. Quite a few people seemed to like it and it sold well so…

      • l33t-g4m3r
      • 12 years ago

      crysis is not farcry2.
      farcry2 is being worked on though. AFAIK.

      • albundy
      • 12 years ago

      good point, but arn’t we past the first gen hardware? I can still taste the milk from both sides.

        • SPOOFE
        • 12 years ago

        Another thing to bear in mind is resolution; the DX10 tests were at 16×12 and 19×14… how much better would the framerate have been at least gargantuan resolutions?

        I know, I know, “who buys high-end hardware if not to game at high resolutions…” Well, take the GTS, for instance: Its frames were ugly at those resolutions, but are probably pretty good a few steps down.

        I dismiss the whole argument about the first-gen DX10 cards not running DX10 games well. I think it’s irrelevant and disingenious.

      • toxent
      • 12 years ago

      I don’t see the point in making that post at all, when the article your commenting in says the exact same thing.

        • Shobai
        • 12 years ago

        jokes: maybe he mispelt his user name “maroon” instead of “moron”?

        /jokes

    • Peldor
    • 12 years ago

    Sometimes I really envy you hardware reviewers, playing with all the new toys, knowing before the NDA expires what’s up, etc.

    Other times I pity you when you have a ‘new’ piece come in that is obviously going to be a dud yet you dutifully churn through the tests and build the graphs and scrawl out the pablum to make it sound interesting.

    This is the latter.

    • Fearless Leader
    • 12 years ago

    Two 2900 XT’s in Crossfire currently hold the world record in 3dMark06, and the cards have a respectable price in comparison to the competition.

    How is this bad?

      • thecoldanddarkone
      • 12 years ago

      Umm, this is really simple, it’s a benchmark it isn’t a real game. Most people care about games more than they do about video card benchmarks.

      • Kaleid
      • 12 years ago

      Who cares about benchmarks?

      IMO R600 is/was extremely disappointing, requires too much juice and is way too hot to cool down quietly. I’ll hold on to my x1900xt since I don’t want a Nvidia either since all my Nvidia cards have had problems with buzzing coil sound.

        • Nvidia Sucks
        • 12 years ago

        The 2900xt IS NOT LOUD!!!!!!!! I own one.. trust me. My hdd’s make more noise than it does when it’s stressed.

        It is disappointing the 2900xt couldn’t be another Radeon 9700pro. But regardless it still holds it own.

        The problem is … how do you know reviews don’t lie? Or have unfair test configurations. I’ve seen reviews that give the Nvidia card the Nforce chipset and the ATI card some other unknown chipset.. Who’s to say its not contributing to lower fps?

        I picked up my Diamond 2900xt 1gb a week ago and found out it was only running 507mhz GPU clocks and 514mhz Memory. (Compared to the 800mhz and 2.1ghz it’s supposed to get) Get this. It still averages 75fps in Bioshock at 1680×1050 completely maxxed with the exception of no AA. My 1950pro only got 25fps in the same conf. Obviously once I get this problem corrected it’s gonna kick some butt…

      • morphine
      • 12 years ago

      Um… it was a disappointing card since day 1. It was supposed to match/outmatch the 8800GTX, came over 6 months late with lots of stupid excuses, and ended up being basically a very loud, power-sucking 8800GTS. How is that *not* disappointing?

      And who cares that it gets higher performance in some benchmarks? Do you play benchmarks? I usually play games, but maybe I’m just dumb.

        • Nvidia Sucks
        • 12 years ago

        How do you know its loud? I happen to own one and for the record it has no more than a dull hum when stressed. It’s not loud at all.

        The Radeon 2900xt is highly underrated. I like to look at it in steps (and these are just price situations.. not actual prices).

        Step 1 8800GTS 320mb – $300
        Step 2 8800GTS 640mb – $400
        Step 3 Radeon 2900xt 512mb – $430

        and the 2900xt 1gb and the 8800gtx/8800ultra are a toss up. The Nvidia’s have higher top fps and the Radeons have higher lowest fps. Crossfire on the 2900 and vista completely embarrases Sli.

      • snowdog
      • 12 years ago

      Synthetic Benchmark, run on Liquid Nitrogen Cooled card. This related to reality how?

      • flip-mode
      • 12 years ago

      It’s bad because I can’t play 3DMark. 3DMark is both useless and meaningful at the same time: useless as in it’s not a game, meaningful in that it does indicate “something” with respect to how hardware *can* perform.

      • Nvidia Sucks
      • 12 years ago

      How do you know its loud? I happen to own one and for the record it has no more than a dull hum when stressed. It’s not loud at all.

      The Radeon 2900xt is highly underrated. I like to look at it in steps (and these are just price situations.. not actual prices).

      8800GTS 320mb – $300
      8800GTS 640mb – $400
      Radeon 2900xt 512mb – $430

      and the 2900xt 1gb and the 8800gtx/8800ultra are a toss up. The Nvidia’s have higher top fps and the Radeons have higher lowest fps. Crossfire on the 2900 and vista completely embarrases Sli.

    • sigher
    • 12 years ago

    AMD should fire some people over that AA/AF performance loss issue. some idiots made a disastrous decision during the planning stage and they should be told.
    Harsh I know, but really, they so sunk the whole r600.

    • Stijn
    • 12 years ago

    Although the 1GB card is disappointing, it’s nice to see a diamond-branded card after a shortbread reported on the revival of the company.

      • LoneWolf15
      • 12 years ago

      When Diamond died, their customer support sucked.

      Diamond started as an excellent company, and when I bought my first card made them (a used VLB SpeedStar Pro), they actually shipped me driver diskettes and an updated BIOS chip –for free. I reached tech support via a toll free number.

      Fast forward to the PCI video cards. Driver development worsened, and support moved to a toll call, where customers were put on hold, sometimes for hours. I was on hold for 90-120minutes for several cross-country calls to California because I was getting corrupted buttons in Windows, and got no response. Left voicemail, no response. Sent e-mails, no response. Finally got through, was promised updated drivers disk and BIOS chip from a tech, and never received them. Finally sent back my top-end Stealth 64 Video VRAM back to vendor. E-mailed and faxed the president of the company, explaining what I went through during my time with the card. Never received a reply. I bought an ATI Graphics Pro Turbo; the problems went away.

      Things didn’t improve during latter generations of of Diamond peripherals and the fact is, Diamond was using others’ GPUs and making cards you could buy from other vendors with better support, just like nowadays where you can choose from EVGA/BFG/XFX and others. By the time Diamond Multimedia was gone, it was a good thing.

      I hope whomever picked the company back up off of the scrap heap learned a lesson from Diamond’s failures. If not, they’ll disappear again.

    • Voldenuit
    • 12 years ago

    Ok, a quibble I have to mention from the Lost Planet benches:
    Lost Planet DX9
    No AA 4X AA Performance penalty
    Radeon HD 2900 XT 1GB 52.0 39.5 24%
    GeForce 8800 GTX 50.1 47.1 6%

    y[

    • evermore
    • 12 years ago

    Can you really say they “elected not to reach for the overall graphics performance crown” when the decision was made AFTER trying to create a next-generation chip, which they undoubtedly were trying to make as high-performance as possible, and then failing to make one that was better than the 8800?

    The decision probably sounded more like this: “Well crap. This thing is a dog! Guess we’ll have to keep the price a little lower so somebody will buy the damn things. But we’ll have to come up with some excuses for not having a competitor at the bleeding-edge. I know, we’ll tell them we planned it this way.”

    I don’t think you can really “target” a price point like 400 dollars when talking about designing a graphics core. They design the product with an eye towards certain performance and features, then price it based on the performance they can get out of it and what the market will accept. Otherwise they’d be saying “hey, you designers, we’re thinking we don’t want to pay too much so don’t put TOO much effort into making it fast”. It’s not like it’s a materials cost issue. When materials are involved, then you can design to a price point because you can figure in how much the parts will cost.

    Also, I hate trying to think of it as “AMD did this” and “AMD’s this design”. The merger is still too new, AMD hasn’t really done much, it was all done by ATI beforehand. When really new stuff starts coming out, things that began life after the merger, then it’ll be AMD’s products.

      • gratuitous
      • 12 years ago
      • snowdog
      • 12 years ago

      +1 Digg. Yeah, what he said. It is clear with being 6month later than NV and a monstrous amount of memory bandwidth they were aiming to take the performance crown, unfortunately their aim was not so good.

      I keep wondering how they could have miscalculated so much, dropping the ball on Texture performance and AA performance.

      I think they went to theoretical, someone did some calculations showing they could do everything (in theory) with shaders. After it was built Theory met Reality and only Reality survived.

      I wonder what is next. It doesn’t seem sufficient to merely tweak this architecture. They need some serious rework.

    • brites
    • 12 years ago

    Well… I believe the main problem of this cards is the “Unified Superscalar Shader Architecture”… this is the only major difference between ATI and NVidia… and these new DX10 cards from ATI are really poor performers in DX10…

      • marvelous
      • 12 years ago

      that and nvidia superior fillrate and texturing abilities

        • brites
        • 12 years ago

        yes… the lower fillrate and texturing abilities of ATI are manly due to the new arch…oh well I prefer a good TOP DX9 card than a 2900XT… if I had the bucks I would chose the Ultra SLI…

    • Ryu Connor
    • 12 years ago

    These cards clearly call for four way crossfire on a 32bit OS.

    Lawl.

      • morphine
      • 12 years ago

      Heh… Wouldn’t it be ironic if you only ended up with 640KB left after all the mapping was done? 🙂

    • morphine
    • 12 years ago

    *Yawn*. Same crap as the 2900XT, different smell. Loud crap, too.

      • shalmon
      • 12 years ago

      hahaha….nice

      • SPOOFE
      • 12 years ago

      EDIT: Wrong reply.

    • marvelous
    • 12 years ago

    1gig is a waste. doesn’t do anything special except cost more. boo hoo~

      • kvndoom
      • 12 years ago

      Yesh, but you just wait for the 2 gig cards! They’re gonna be twice as… -[

    • toxent
    • 12 years ago

    ATI has really disappointed me this generation. I generally bought and recommended ATI, but now with the their dismal 2xxx series and Nvidia’s superior 8xxx series. I have stopped doing that.

    Lets hope that R650 has something more to offer.

    Nice review btw, it was something i’d been waiting for.

    • Forge
    • 12 years ago

    I seem to recall some guy on the forums talking a bunch about how the 1GB DDR4 2900XT was going to kick some butt.

    The cards themselves seem to have missed that memo.

    A minor quibble, not worth a PM: You refer to 128GB/s being nearly double the 86GB/s of the 8800 GTX. It’s a lot closer to 1.5X than 2X. 2X would be 172GB/s, 1.5X is 129GB/s. Still a mighty huge plate piled high with juicy tender bandwidth, though.

    A good read, I hope AMD brings the competition back with R650/R700/R-Next.

      • robg1701
      • 12 years ago

      They were probably referring to the increased performance of the other version of the1GB card, since its got a ~10% increase in core speed. Given how much memory bandwidth the card already had, the extra these cards have didnt really seem like itd make that much difference to me, the issue is the core.

      • Mithent
      • 12 years ago

      Not terribly surprising it didn’t, after the 8800 GTS 320MB was seen to be very similar to the 640MB version in most cases. Going from 512MB to 1GB (albeit on a different architecture, but if anything one less limited by bandwidth) would be likely to give even smaller returns, and so it does.
