The Radeon HD 2900 XT is an impressive, full-featured DirectX 10-ready graphics processor. Its unified shader architecture is a clear advance over the previous generation of Radeons and puts it in the same class of product as Nvidia's GeForce 8800 series in terms of basic capabilities. The GPU even has some cool distinctive features, like its tessellator, that the GeForce 8800 can't match. As we've discussed, the scheduling required to achieve efficient utilization of this GPU's VLIW superscalar stream processing engines could prove to be tricky, putting it at a disadvantage compared to its competition. Some of the synthetic shader benchmarks we ran illustrated that possibility. However, this GPU design has a bias toward massive amounts of parallel shader processing power, and I'm largely persuaded that shader power won't be a weakness for it. I'm more concerned about its texture filtering capacity. Our tests showed its texturing throughput to be substantially lower than the GeForce 8800 GTS's with 16X anisotropic filtering. One can't help but wonder whether the 2900 XT's performance in today's DX9 games wouldn't be higher if it had more filtering throughput.
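To see why VLIW scheduling can be tricky, consider a toy model: each of the R600's stream processing units can issue up to five scalar ops per cycle, but only if those ops are mutually independent. The greedy packer below is purely illustrative (it is not AMD's actual compiler algorithm), but it shows how a shader dominated by a long dependency chain leaves most issue slots empty, while independent ops fill them.

```python
# Toy illustration of the 5-wide VLIW packing problem (not AMD's real
# scheduler): pack scalar ops into bundles of up to `width` ops, where
# an op may issue only after all of its inputs have been computed in a
# *previous* bundle.

def pack_vliw(ops, deps, width=5):
    """ops: list of op names; deps: dict mapping op -> set of prerequisite ops."""
    done, bundles = set(), []
    remaining = list(ops)
    while remaining:
        bundle = []
        for op in list(remaining):
            if len(bundle) == width:
                break
            if deps.get(op, set()) <= done:  # all inputs ready before this cycle
                bundle.append(op)
        for op in bundle:
            remaining.remove(op)
        done.update(bundle)     # results visible starting next bundle
        bundles.append(bundle)
    return bundles

# A serial dependency chain packs one op per cycle...
chain = pack_vliw(list("abcd"), {"b": {"a"}, "c": {"b"}, "d": {"c"}})
# ...while four independent ops fit in a single cycle.
wide = pack_vliw(list("abcd"), {})
print(len(chain), len(wide))  # 4 cycles vs. 1 cycle for the same op count
```

Whether real shaders look more like `chain` or `wide` is exactly the question the synthetic benchmarks probe.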
The 2900 XT does match the GeForce 8800 series on image quality generally, which was by no means a foregone conclusion. Kudos to AMD for jettisoning the Radeon X1000 series' lousy angle-dependent aniso in favor of a higher-quality default algorithm. I also happen to like the 2900 XT's custom tent filters for antialiasing an awful lot, an outcome I didn't expect until I saw them in action for myself. Now I'm hooked, and as a result I consider the Radeon HD's image quality to be second to none on the PC. Nvidia may yet even the score with its own custom AA filters, though.
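For readers wondering what the "tent" in those filter names refers to: it's a weighting function that falls off linearly with distance from the pixel center, so samples borrowed from neighboring pixels count for less than the pixel's own. Here's a minimal 1-D sketch of that idea; the sample positions and the 1.5-pixel radius are illustrative assumptions on my part, not a description of AMD's actual hardware resolve.

```python
# Generic 1-D tent-filter resolve: weight = 1 at the pixel center,
# falling linearly to 0 at `radius`. Illustrative only.

def tent_weight(distance, radius):
    """Linear falloff: 1 at the center, 0 at `radius` and beyond."""
    return max(0.0, 1.0 - abs(distance) / radius)

def resolve(samples, radius=1.5):
    """samples: list of (position_offset, color) pairs for one pixel."""
    weights = [tent_weight(d, radius) for d, _ in samples]
    return sum(w * c for w, (_, c) in zip(weights, samples)) / sum(weights)

# Own-pixel samples (white, at +/-0.25) dominate; samples borrowed from
# black neighbors (at +/-1.0) pull the result down only modestly.
print(resolve([(-0.25, 1.0), (0.25, 1.0), (-1.0, 0.0), (1.0, 0.0)]))  # ~0.714
```

The appeal in practice is that the wider footprint smooths edges beyond what the raw sample count alone would allow, at some cost in sharpness.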
The HDCP support over dual-link DVI ports and HDMI audio support are both welcome additions, too. We haven't yet had time to test CPU utilization during HD-DVD or Blu-ray playback, but we've got that on the list for a follow-up article (along with GPU overclocking, edge-detect AA filters, dual-link DVI with HDCP on the Dell 3007WFP, AMD's Stream computing plans, and a whole host of other items).
Ultimately, though, we can't overlook the fact that AMD built a GPU with 700M transistors that has 320 stream processor ALUs and a 512-bit memory interface, yet it just matches or slightly exceeds the real-world performance of the GeForce 8800 GTS. The GTS is an Nvidia G80 with 25% of its shader core disabled and only 60% of the memory bandwidth of the Radeon HD 2900 XT. That's gotta be a little embarrassing. At the same time, the Radeon HD 2900 XT draws quite a bit more power under load than the full-on GeForce 8800 GTX, and it needs a relatively noisy cooler to keep it in check. If you ask folks at AMD why they didn't aim for the performance crown with a faster version of the R600, they won't say it outright, but they will hint that leakage with this GPU on TSMC's 80HS fab process was a problem. All of the telltale signs are certainly there.
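The shader and bandwidth percentages above are easy to sanity-check with a little arithmetic. The bus widths and memory clocks below are the commonly published board specs for these cards (stated here as assumptions, not fresh measurements of our own):

```python
# Back-of-the-envelope check of the figures quoted above, using the
# widely published board specs (assumed, not independently measured).

def bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_clock_mhz / 1000

hd2900xt = bandwidth_gbps(512, 1656)  # 512-bit bus, GDDR3 at 828 MHz (1656 MT/s)
gts      = bandwidth_gbps(320, 1600)  # 320-bit bus, GDDR3 at 800 MHz (1600 MT/s)

print(f"HD 2900 XT: {hd2900xt:.0f} GB/s")          # ~106 GB/s
print(f"8800 GTS:   {gts:.0f} GB/s")               # ~64 GB/s
print(f"GTS as share of 2900 XT: {gts / hd2900xt:.0%}")  # ~60%

# Shader core: the GTS ships with 96 of the G80's 128 stream processors enabled.
print(f"G80 SPs disabled on the GTS: {1 - 96 / 128:.0%}")  # 25%
```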
There are many things we don't yet know about the GeForce 8800 and Radeon HD 2900 GPUs, not least of which is how they will perform in DirectX 10 games. I don't think our single DX10 benchmark with a pre-release game tells us much, so we'll probably just have to wait and see. Things could look very different six months from now, even if the chips themselves haven't changed.