AMD’s Radeon HD 2900 XT graphics processor

THOSE OF US WHO ARE into graphics processors sure have it easy these days. We’re addicted to advances in technology, and we seem to get a hit every couple of months or so in the form of a new spin of a GPU or graphics card. What’s more, graphics chip companies AMD and Nvidia cook up an entirely new form of drug with startling frequency. Less than two years since the debut of the Radeon X1000 series, the former ATI is now ready to release its new generation of technology, the Radeon HD family. In fact, we’ve been complaining because the thing is actually late. Nvidia’s competing GeForce 8-series technology hit the market last November, and GPU-watchers have been waiting impatiently for the battle to be joined.

Who could blame us, really, for being impatient? The GeForce 8800 is a stunning achievement, and we’re eager to see whether AMD can match it. You’ll have to forgive the most eager among us, the hollow-eyed Radeon fanboys inhabiting the depths of our forums, wandering aimlessly while carrying their near-empty bottles of X1000-series eye candy and stopping periodically to endure an episode of the shakes. We’ve all heard the stories about AMD’s new GPU, code-named R600, and wondered what manner of chip it might be. We’ve heard whispers of jaw-dropping potential, but—especially as the delays piled up—doubts crept in, as well.

Happily, R600 is at last ready to roll. The Radeon HD 2900 XT graphics card should hit the shelves of online stores today, and we have spent the past couple of weeks dissecting it. Has AMD managed to deliver the goods? Keep reading for our in-depth review of the Radeon HD 2900 XT.

Into the R600
The R600 is easily the biggest technology leap from the Radeon folks since the release of the seminal R300 GPU in the Radeon 9700, and it’s also the first Radeon since then that represents a true break from R300 technology. That’s due in part to the fact that R600 is designed to work in concert with Microsoft’s DirectX 10 graphics programming interface, which modifies the traditional graphics pipeline to unlock more programmability and flexibility. As a state-of-the-art GPU, the R600 is also a tremendously powerful parallel computing engine. We’re going to look at some aspects of R600 in some detail, but let’s start with an overview of the entire chip, so we have a basis for the rest of our discussion.


A logical block diagram of the R600 GPU. Source: AMD.

The R600’s most fundamental innovation is the introduction of a unified shader architecture that can process the three types of graphics programs—pixel shaders, vertex shaders, and geometry shaders—established by DX10’s Shader Model 4.0 using a single type of processing unit. This arrangement allows for dynamic load balancing between these three thread types, making it possible for R600 to bring the majority of its processing power to bear on the most urgent computational need at hand during the rendering of a frame. In theory, a unified shader architecture can be vastly more efficient and effective than a GPU with fixed shader types, as all DX9-class (and prior) desktop GPUs were.

A high-level diagram of the R600 architecture like the one above will no doubt evoke memories of ATI’s first unified shader architecture, the Xenos GPU inside the Xbox 360. The basic arrangement of functional units looks very similar, but R600 is in fact a new and different design in key respects like shader architecture and thread dispatch. One might also wish to draw parallels to the unified shader architecture of Nvidia’s G80 GPU, but the R600 arranges its execution resources quite differently from G80, as well. In its GeForce 8800 GTX incarnation, the G80 has 128 scalar stream processors running at 1.35GHz. The R600 is more parallel and runs at lower frequencies; AMD counts 320 stream processors running at 742MHz on the Radeon HD 2900 XT. That’s not an inaccurate portrayal of the GPU’s structure, but there’s much more to it than that, as we’ll discuss shortly.

First, though, let’s have a look at the R600 chip itself, because, well, see for yourself.

Like the G80, it’s frickin’ huge. With the cooler removed, you can see it from space. AMD estimates the chip at 700 million transistors, and TSMC packs those transistors onto a die using an 80nm fab process. I measured the R600 at roughly 21 mm by 20 mm, which works out to 420 mm².

I’d like to give you a side-by-side comparison with the G80, but that chip is typically covered by a metal cap, making pictures and measurements difficult. (Yes, I probably should sacrifice a card for science, but I haven’t done it yet.) Nvidia says the G80 has 680 million transistors, and it’s produced on a larger 90nm fab process at TSMC. I’ve seen die size estimates for G80 that range from roughly 420 to 490 mm², although Nvidia won’t confirm exact numbers. Unlike the G80, though, the R600 doesn’t have to rely on a separate companion chip to provide display logic, so the complete R600 solution is almost certainly smaller overall.

 

Command processing, setup, and dispatch
I continue to be amazed by the growing amount of disclosure we get from AMD and Nvidia as they introduce ever more complex GPUs, and R600 is no exception on that front. At its R600 media event, AMD chip architect Eric Demers gave the assembled press a whirlwind tour of the GPU, most of which whizzed wholly unimpeded over our heads. I’ll try to distill down the bits I caught with as much accuracy as I can.

Our tour of the R600 began, appropriately, with the GPU’s command processor. Demers said previous Radeons have also had logic to process the command stream from the graphics driver, but on the R600, this is actually a processor; it has memory, can handle math, and downloads microcode every time it boots up. The reason this command processor is so robust is that it can offload work from the graphics driver. In keeping with a DirectX 10 theme, it’s intended to reduce state management overhead. DirectX 9 tends to group work in lots of small batches, creating substantial overhead just to manage all of the objects in a scene. That work typically falls to the graphics driver, burdening the CPU. Demers described the R600 command processor as “somewhat self-aware,” snooping to determine and manage state itself. The result? A claimed reduction in CPU overhead of up to 30% in DirectX 9 applications, with even less overhead in DX10.

Next in line beyond the command processor is the setup engine, which prepares data for processing. It has three functions for DX10’s three shader program types: vertex assembly (for vertex shaders), geometry assembly (for geometry shaders), and scan conversion and interpolation (for pixel shaders). Each function can submit threads to the dispatch processor.

One item of note near the vertex assembler is a dedicated hardware engine for tessellation. This unit is a bit of secret sauce for AMD, since the G80 doesn’t have anything quite like it. The tessellator allows for the use of very high polygon surfaces with a minimal memory footprint by using a form of compression. This hardware takes two inputs—a low-poly model and a mathematical description of a curved surface—and outputs a very detailed, high-poly model. AMD’s Natalya Tatarchuk showed a jaw-dropping demo of the tessellator in action, during which I kept thinking to myself, “Man, I wish she’d switch to wireframe mode so I could see what’s going on.” Until I realized the thing was in wireframe mode, and the almost-solid object I was seeing was comprised of millions of polygons nearly the size of a pixel.

This tessellator may live in a bit of an odd place for this generation of hardware. It’s not a part of the DirectX 10 spec, but AMD will expose it via vertex shader calls for developers who wish to use it. We’ve seen such features go largely unused in the past, but AMD thinks we might see games ported from the Xbox 360 using this hardware since the Xbox 360 GPU has a similar tessellator unit. Also, tessellation capabilities are a part of Microsoft’s direction for future incarnations of DirectX, and AMD says it’s committed to this feature for the long term (unlike the ill-fated Truform feature that it built into the Radeon 8500, only to abandon it in the subsequent generation). We’ll have to see whether game developers use it.


R600’s threaded dispatch processor. Source: AMD.

The setup engine passes data to the R600’s threaded dispatch processor. This part of the GPU, as Demers put it, “is where the magic is.” Its job is to keep all of the shader cores occupied, which it does by managing a large number of threads of three different types (vertex, geometry, and pixel shaders) and switching between them. The R600’s dispatch processor keeps track of “hundreds of threads” in flight at any given time, dynamically deciding which ones should execute and which ones should go to sleep depending on the work being queued, the availability of data requested from memory, and the like. By keeping a large number of threads in waiting, it can switch from one to another as needed in order to keep the shader processors busy.

The thread dispatch process involves multiple levels of arbitration between the three thread types in waiting and the work already being done. Each of the R600’s four SIMD arrays of shader processors has two arbiter units associated with it, as the diagram shows, and each one of those has a sequencer attached. The arbiter decides which thread to process next based on a range of variables, and the sequencer then determines the best ordering of instructions for execution of that thread. The SIMD arrays are pipelined, and the two arbiters per SIMD allow for execution of two different threads in interleaved fashion. Notice, also, that vertex and texture fetches have their own arbiters, so they can run independently of shader ops.

As you may be gathering, this dispatch processor involves lots of complexity and a good deal of mystery about its exact operation, as well. Robust thread handling is what makes GPUs such effective parallel computing devices: they can keep themselves very well occupied. If a thread has to stop and wait for the retrieval of data from memory, which can take hundreds of GPU cycles, other threads are ready and waiting to execute in the interim. This logic almost has to occupy a substantial amount of chip area, since the dispatch processor must keep track of all of the threads in flight and make “smart” decisions about what to do next.
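If you want a feel for why all of that thread juggling matters, here's a rough back-of-the-envelope sketch in Python. The latency and workload numbers are my own illustrative assumptions rather than anything AMD has published; the point is simply that a dispatcher juggling many threads can keep the shader core busy even though each individual thread spends most of its time waiting on memory.

    # Rough illustration of latency hiding via multithreading (illustrative numbers only).
    def shader_utilization(threads_in_flight, alu_cycles_per_batch=8, memory_latency=200):
        """Fraction of time the shader core has runnable work, assuming each thread
        alternates between a short burst of ALU work and a long memory stall."""
        # Each thread is runnable for alu_cycles_per_batch out of every
        # (alu_cycles_per_batch + memory_latency) cycles.
        runnable_fraction = alu_cycles_per_batch / (alu_cycles_per_batch + memory_latency)
        # With N threads in flight, expected runnable work per cycle is N times that,
        # capped at 1.0 (the core can only issue so much per cycle).
        return min(1.0, threads_in_flight * runnable_fraction)

    for n in (1, 8, 26, 100):
        print(n, "threads ->", round(shader_utilization(n) * 100), "% busy")
    # One thread keeps the core only about 4% busy; 26 threads reach 100% with these assumed numbers.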

Shader processing


A single stream processor block.
Source: AMD.

In its shader core, the R600’s most basic unit is a stream processing block like the one depicted in the diagram on the right. This unit has five arithmetic logic units (ALUs), arranged together in superscalar fashion—that is, each of the ALUs can execute a different instruction, but the instructions must all be issued together at once. You’ll notice that one of the five ALUs is “fat.” That’s because this ALU’s capabilities are a superset of the others’; it can be called on to handle transcendental instructions (like sine and cosine), as well. All four of the others have the same capabilities. Optimally, each of the five ALUs can execute a single multiply-add (MAD) instruction per clock on 32-bit floating-point data. (Like G80, the R600 essentially meets IEEE 754 standards for precision.) The stream processor block also includes a dedicated unit for branch execution, so the stream processors themselves don’t have to worry about flow control.

These stream processor blocks are arranged in arrays of 16 on the chip, for a SIMD (single instruction multiple data) arrangement, and are controlled via VLIW (very long instruction word) commands. At a basic level, that means as many as six instructions, five math and one for the branch unit, are grouped into a single instruction word. This one instruction word then controls all 16 execution blocks, which operate in parallel on similar data, be it pixels, vertices, or what have you.

The four SIMD arrays on the chip operate independently, so branch granularity is determined by the width of the SIMD and the depth of the pipeline. For pixel shaders, the effective “width” of the SIMD should typically be 16 pixels, since each stream processor block can process a single four-component pixel (with a fifth slot available for special functions or other tasks). The stream processor units are pipelined with eight cycles of latency, but as we’ve noted, they always execute two threads at once. That makes the effective instruction latency per thread four cycles, which brings us to 64 pixels of branch granularity for R600. Some other members of the R600 family have smaller SIMD arrays and thus finer branch granularity.

Phew.

Let’s stop and run some numbers so we can address the stream processor count claimed by AMD. Each SIMD on the R600 has 16 of these five-ALU-wide superscalar execution blocks. That’s a total of 80 ALUs per SIMD, and the R600 has four of those. Four times 80 is 320, and that’s where you get the “320 stream processors” number. Only it’s not quite that simple.
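For the record, here's that arithmetic spelled out, along with the branch granularity figure from a couple of paragraphs back. It's nothing more than the numbers AMD quotes, multiplied out as a sanity check.

    # Stream processor count and branch granularity, per AMD's description of R600.
    alus_per_block = 5        # each superscalar block: four "thin" ALUs plus one "fat" one
    blocks_per_simd = 16      # each SIMD array is 16 blocks wide
    simd_arrays = 4

    stream_processors = alus_per_block * blocks_per_simd * simd_arrays
    print(stream_processors)            # 320 -- AMD's "320 stream processors"

    pipeline_latency = 8                # cycles
    threads_interleaved = 2             # two threads execute in interleaved fashion
    effective_latency = pipeline_latency // threads_interleaved   # 4 cycles per thread
    branch_granularity = blocks_per_simd * effective_latency      # 16 pixels x 4 cycles
    print(branch_granularity)           # 64 pixels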

The superscalar VLIW design of the R600’s stream processor units presents some classic challenges. AMD’s compiler—a real-time compiler built into its graphics drivers—will have to work overtime to keep all five of those ALUs busy with work every cycle, if at all possible. That will be a challenge, especially because the chip cannot co-issue instructions when one is dependent on the results of the other. When executing shaders with few components and lots of dependencies, the R600 may operate at much less than its peak capacity. (Cue sounds of crashing metal and human screams alongside images of other VLIW designs like GeForce FX, Itanium, and Crusoe.)
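As a toy illustration of the packing problem (my own sketch, nothing like AMD's actual compiler), consider a greedy scheduler that can bundle up to five independent operations into one instruction word but can't co-issue an operation with anything it depends on:

    # Toy VLIW packing: bundle up to five independent ops per instruction word.
    # Each op is (name, set_of_names_it_depends_on). Purely illustrative.
    def pack_vliw(ops, width=5):
        words, done, remaining = [], set(), list(ops)
        while remaining:
            word = []
            for op, deps in remaining:
                # An op joins this word only if all of its inputs come from an
                # earlier word (no same-word dependencies) and a slot is free.
                if len(word) < width and deps <= done:
                    word.append(op)
            remaining = [(o, d) for o, d in remaining if o not in word]
            done.update(word)
            words.append(word)
        return words

    # A chain of dependent scalar MADs fills one slot per word (worst case)...
    serial = [("mad0", set()), ("mad1", {"mad0"}), ("mad2", {"mad1"}), ("mad3", {"mad2"})]
    # ...while four independent MADs pack into a single word.
    parallel = [("mad0", set()), ("mad1", set()), ("mad2", set()), ("mad3", set())]
    print(len(pack_vliw(serial)), len(pack_vliw(parallel)))  # 4 words vs. 1 word

AMD's own "float MAD serial" and "float4 MAD parallel" tests, quoted a little later in this review, exercise exactly these two extremes at full scale.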

The R600 has many things going for it, however, not least of which is the fact that the machine maps pretty well to graphics workloads, as one might expect. Vertex shader data often has five components and pixel shader data four, although graphics usage models are becoming more diverse as programmable shading takes off. The fact that the shader ALUs all have the same basic set of capabilities should help reduce scheduling complexity, as well.

Still, Nvidia has already begun crowing about how much more efficient and easier to utilize its scalar stream processors in the G80 are. For its part, AMD is talking about potential for big performance gains as its compiler matures. I expect this to be an ongoing rhetorical battle in this generation of GPU technology.

So how does R600’s shader power compare to G80? Both AMD and Nvidia like to throw around peak FLOPS numbers when talking about their chips. Mercifully, they both seem to have agreed to count programmable operations from the shader core, bracketing out fixed-function units for graphics-only operations. Nvidia has cited a peak FLOPS capacity for the GeForce 8800 GTX of 518.4 GFLOPS. The G80 can co-issue one MAD and one MUL instruction per clock to each of its 128 scalar SPs. That’s three operations (multiply-add and multiply) per cycle at 1.35GHz, or 518.4 GFLOPS. However, the guys at B3D have shown that that extra MUL is not always available, which makes counting it questionable. If you simply count the MAD, you get a peak of 345.6 GFLOPS for G80.

By comparison, the R600’s 320 stream processors running at 742MHz give it a peak capacity of 475 GFLOPS. Mike Houston, the GPGPU guru from Stanford, told us he had achieved an observed compute throughput of 470 GFLOPS on R600 with “just a giant MAD kernel.” So R600 seems capable of hitting something very near its peak throughput in the right situation. What happens in graphics and games, of course, may vary quite a bit from that.
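For reference, here's how those peak figures fall out of the published clock speeds and unit counts. Nothing below is measured; it's just the spec-sheet math, counting a MAD as two floating-point operations.

    # Peak programmable shader throughput from published specs.
    g80_sps, g80_clock = 128, 1.35e9          # GeForce 8800 GTX shader clock
    r600_sps, r600_clock = 320, 742e6         # Radeon HD 2900 XT

    g80_mad_only = g80_sps * 2 * g80_clock          # MAD alone: 345.6 GFLOPS
    g80_mad_plus_mul = g80_sps * 3 * g80_clock      # 518.4 GFLOPS, if the extra MUL counts
    r600_peak = r600_sps * 2 * r600_clock           # just under 475 GFLOPS

    for label, flops in (("G80, MAD only", g80_mad_only),
                         ("G80, MAD+MUL", g80_mad_plus_mul),
                         ("R600, MAD", r600_peak)):
        print(label, round(flops / 1e9, 1), "GFLOPS")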

 

Shader performance
The best way to solve these shader performance disputes, of course, is to test the chips. We have a few tests that may give us some insight into these matters.

The Radeon 2900 XT comes out looking good in 3DMark’s vertex shader tests, solidly ahead of the GeForce 8800 GTX. Oddly, though, we’ve seen similar or better performance in this test out of a mid-range GeForce 8600 GTS than we see here from the 8800 GTX. The GTX may be limited by other factors here or simply not allocating all of its shader power to vertex processing.

The tables turn in 3DMark’s pixel shader test, and the R600 ends up in a virtual dead heat with the GeForce 8800 GTS, a cut-down version of the G80 with only 96 stream processors.

This particle test runs a physics simulation in a shader, using vertex texture fetch to store and access the results. Here, the Radeon 2900 XT is slower than the 8800 GTX, but well ahead of the GTS. The Radeon X1950 XTX can’t participate since it lacks vertex texture fetch.

Futuremark says the Perlin noise test “computes six octaves of 3-dimensional Perlin simplex noise using a combination of arithmetic instructions and texture lookups.” They expect such things to become popular in future games for use in procedural modeling and texturing, although procedural texturing has always been right around the corner and never seems to make its way here. If and when it does, the R600 should be well prepared, because it runs this shader quite well.

Next up is a series of shaders in ye old ShaderMark, a test that’s been around forever but may yet offer some insights.

The Radeon HD 2900 XT lands somewhere north of the GeForce 8800 GTS, but it can’t match the full-fledged G80 in ShaderMark generally.

ShaderMark also gives us an intriguing look at image quality by quantifying how closely each graphics card’s output matches that of Microsoft’s reference rasterizer for DirectX 9. Matching the reference isn’t a complete measure of image quality, but it does tell us something about the computational precision and adherence to Microsoft’s standards in these GPUs.

DirectX 10 has much tighter standards for image quality, and these DX10-class GPUs are remarkably close together, both overall and in individual shaders.

Finally, here’s a last-minute addition to our shader tests courtesy of AMD. Apparently already aware of the trash talk going on about the potential scheduling pitfalls of its superscalar shading core, AMD sent out a simple set of DirectX 10 shader tests in order to prove a point. I decided to go ahead and run these tests and present you with the results, although the source of the benchmarks is not exactly an uninterested third party, to say the least. The results are informative, though, because they present some difficult scheduling cases for the R600 shader core. You can make of them what you will. First, the results, and then the test explanations:

The first thing to be said is that G80 again appears to be limited somehow in its vertex shader performance, as we saw with 3DMark’s vertex tests. That hasn’t yet been an issue for the G80 in real-world games, so I’d say the pixel shader results are the more interesting ones. Here are AMD’s explanations of the tests, edited and reformatted for brevity’s sake:

1) “float MAD serial” – Dependant Scalar Instructions — Basically this test issues a bunch of scalar MAD instructions that are sequentially executed. This way only one out of 5 slot of the super-scalar instruction could be utilized. This is absolutely the worst case that would rarely be seen in the real-world shaders.

2) “float4 MAD parallel” – Vector Instructions — This test issues 2 sequences of MAD instructions operating on float4 vectors. The smart compiler in the driver is able to split 4D vectors among multiple instructions to fill all 5 slots. This case represents one of the best utilization cases and is quite representative of instruction chains that would be seen in many shaders. This also demontrates [sic] the flexibility of the architecture where not only trivial case like 3+2 or 4+1 can be handled.

3) “float SQRT serial” – Special Function — This is a test that utilizes the 5th “supped up” [sic] scalar instruction slot that can execute regular (ADD, MUL, and etc.) instructions along with transcendental instructions.

4) “float 5-instruction issue” – Non Dependant Scalar Instructions — This test has 5 different types of scalar instructions (MUL, MAD, MIN, MAX, SQRT), each with it’s own operand data, that are co-issued into one super-scalar instruction. This represents a typical case where in-driver shader compiler is able to co-issue instructions for maximal efficiency. This again shows how efficiently instructions can be combined by the shader compiler.

5) “int MAD serial” – Dependant DX10 Integer Instructions — This test shows the worst case scalar instruction issue with sequential execution. This is similar to test 1, but uses integer instructions instead of floating point ones.

6) “int4 MAD parallel” – DX10 Integer Vector Instructions — Similar to test 2, however integer instructions are used instead of floating point ones.

The GeForce 8800 GTX is just under three times the speed of the Radeon HD 2900 XT in AMD’s own worst-case scenario, the float MAD serial with dependencies preventing superscalar parallelism. From there, the R600 begins to look better. The example of the float4 MAD parallel is impressive, since AMD’s compiler does appear to be making good use of the R600’s potential when compared to G80. The next two floating-point tests make use of the “fat” ALU in the R600, and so the R600 looks quite good.

We get the point, I think. Computationally, the R600 can be formidable. One worry is that these shaders look to be executing pure math, with no texture lookups. We should probably talk about texturing rather than dwell on these results.

 

Texturing and memory bandwidth
AMD has endowed the R600 with four texture units that operate independently of the chip’s shader core. The R600’s texture units and total texture addressing and filtering capacity look similar to the Radeon X1950 XTX’s, but with some notable improvements. Those improvements include the ability to filter FP16-format textures—popular for high dynamic range lighting—at full speed (16 pixels per clock) and FP32 textures at half speed. The R600 can do trilinear and anisotropic filtering for all formats. The Radeon X1950 XTX couldn’t handle these texture formats in its filtering hardware and had to resort to its pixel shaders instead, so AMD estimates R600 is roughly seven times the speed of its predecessor in this respect.


Logical diagram of the R600’s texture units. Source: AMD.

Like the Radeon X1950 XTX, each R600 texture unit can grab an additional four unfiltered textures per clock from memory using its fetch4 ability, which is the reason you see the four teensy additional texture address processors and texture samplers in the diagram above. This additional capacity to grab data from memory can be useful for certain tasks like shadowing or stream computing applications.

The texture units can access several of the GPU’s many caches, as appropriate, including the L1 texture cache, the vertex cache (32KB), and the L2 texture cache (256KB).


The R600’s memory controller layout. Source: AMD.

The memory controller in the R600 is evolved from the one in the R580. Demers said this one is a fully distributed ring bus, not a hybrid like the R580’s. He asserts the ring bus is simpler to design and easier to adapt to new products than the more commonly used crossbar arrangement. The R600’s ring consists of four sets of wires running around the chip in read/write pairs, for a total of about 2000 wires and 1024 bits of communication capacity. The ring bus has about 84 read clients and 70 write clients inside the chip, and PCI Express is just one of the many ring stops, as are the eight 64-bit channels to local memory.

In case I caught you snoozing, I said eight 64-bit channels—that works out to a 512-bit-wide path to memory, well above the 384 bits of the G80. What does all of this mean to the Radeon HD 2900 XT?

                     Core clock  Pixels/  Peak pixel fill    Bilinear filtered  Peak texel fill    Bilinear filtered    Peak FP16 filtering  Effective memory  Memory bus    Peak memory
                     (MHz)       clock    rate (Gpixels/s)   textures/clock     rate (Gtexels/s)   FP16 textures/clock  rate (Gtexels/s)     clock (MHz)       width (bits)  bandwidth (GB/s)
Radeon X1950 XTX     650         16       10.4               16                 10.4               --                   --                   2000              256           64.0
GeForce 8800 GTS     500         20       10.0               24                 12.0               24                   12.0                 1600              320           64.0
GeForce 8800 GTX     575         24       13.8               32                 18.4               32                   18.4                 1800              384           86.4
GeForce 8800 Ultra   612         24       14.7               32                 19.6               32                   19.6                 2160              384           103.7
Radeon HD 2900 XT    742         16       11.9               16                 11.9               16                   11.9                 1650              512           105.6

With 512MB of GDDR3 memory running at 1.65GHz, the Radeon HD 2900 XT has a torrential 105.6 GB/s of peak memory bandwidth, higher than that of the GeForce 8800 Ultra and well above the GTX or GTS. Yet its peak multitextured fill rate is only about 12 Gtexels/s, close to that of the GeForce 8800 GTS and well behind the GTX. AMD seems to have been pleased with the basic fill rate and filtering capabilities of the Radeon X1950 XTX and chose only to extend them in R600 to include HDR texture formats. Texturing is indeed becoming less important as programmable shaders gain traction, but many of those shaders store or access data in textures, which is a concern. The Radeon HD 2900 XT trails Nvidia’s fastest graphics cards by miles here, despite having a wider path to memory.
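The bandwidth and fill rate figures in the table come straight from the clocks and unit counts. Here's the arithmetic for the 2900 XT, in case you want to check my work:

    # Radeon HD 2900 XT: peak memory bandwidth and texel fill rate from its specs.
    effective_mem_clock = 1.65e9                 # GDDR3 at 825MHz, double data rate
    bus_width_bits = 512
    bandwidth = effective_mem_clock * bus_width_bits / 8
    print(round(bandwidth / 1e9, 1), "GB/s")     # 105.6 GB/s

    core_clock = 742e6
    bilinear_texels_per_clock = 16
    fill_rate = core_clock * bilinear_texels_per_clock
    print(round(fill_rate / 1e9, 1), "Gtexels/s")  # ~11.9 Gtexels/s, versus 18.4 for the 8800 GTX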

Here are a few quick texture fill rate and filtering tests to see how these theoretical peak numbers play out.

The Radeon HD 2900 XT gets closer than any of the other cards to its theoretical maximum pixel fill rate, probably because it has sufficient memory bandwidth to make that happen. When we switch to multitexturing, the chips reach very near their theoretical limits, which puts the Radeon HD 2900 XT just behind the GeForce 8800 GTS. These are not FP16 textures, so the Radeon X1950 XTX performs reasonably well, too.

 

Texture filtering quality and performance
It’s time to enter the psychedelic tunnel once again and see how these GPUs handle anisotropic filtering. The images below are output from the D3D AF tester, and what you’re basically doing is looking down a 3D rendered tube with a checkerboard pattern applied. The colored bands indicate different mip-map levels, and you can see that the GPUs vary the level of detail they’re using depending on the angle of the surface.

Default quality
Radeon X1950 XTX Radeon HD 2900 XT GeForce 8800 GTX

The Radeon X1950 XTX, err, cheats quite a bit at certain angles of inclination. Flat floors and walls get good treatment, but other surfaces do not. Nvidia did the same thing with the GeForce 7 series, but they banished this trick in G80, which produces a nice, nearly round pattern. To match it, AMD has instituted a tighter pattern in the Radeon HD 2900 XT, which is the same as the “high quality” option on the X1000 series. In fact, AMD has simply removed this lower quality choice from the R600’s repertoire.

If none of this makes any sense to you, perhaps an illustration will help. Here’s a screenshot from Half-Life 2 that shows what happens when the angle of a surface goes the wrong way on each of these GPUs.

Default quality
Radeon X1950 XTX

Radeon HD 2900 XT

GeForce 8800 GTX

The flat surface looks great on all three, but things turn to mush at a different angle on the Radeon X1950 XTX. Fortunately, the newer GPUs avoid this nastiness.

High quality
Radeon X1950 XTX Radeon HD 2900 XT (default) GeForce 8800 GTX

Here’s a look at the high-quality settings for the X1950 XTX and 8800 GTX alongside the HD 2900 XT’s one-and-only option. As you can tell, AMD just uses the high-quality algorithm from the R580 at all times on the R600. This algorithm produces good results, but it’s not quite as close to perfect as the G80. Look at the practical impact in our example, though.

High quality
Radeon X1950 XTX

Radeon HD 2900 XT (default)

GeForce 8800 GTX

All three GPUs produce very similar results. The colored test patterns do suggest the R600 is a little weak at 45° angles of inclination. I tried to capture an example of this weakness in our Half-Life 2 sample scene by changing the angle a bit, but honestly, I couldn’t see it. I later tried in other games, again to no avail.

So in my book, the off-angle aniso optimization is effectively dead, and thank goodness. That doesn’t mean I’m entirely pleased with the state of texture filtering. It looks to me like AMD has retained the same adaptive trilinear filtering algorithm in R600 as in its previous GPUs, with no substantial changes. That means the same quirks are carried over. The G80’s texture filtering may be a little better, but I’m not entirely decided on that issue. Maybe its particular quirks are just newer. Many of the remaining problems with both algorithms are motion-based and difficult to capture with a screenshot, so I’m going to have to invent a new way to complain.

How does all of this high quality texture filtering impact performance? Here’s a look. It’s not FP16 filtering, unfortunately, but it’s still useful info.

Uh oh. D3D RightMark shows us how the GPUs scale by filtering type, and the story is a rough one for AMD. The Radeon HD 2900 XT starts out more or less as expected but falls increasingly behind the GeForce 8800 GTS as the filtering complexity increases.

 

Render back-ends and antialiasing


R600 render back-end logical diagram
Source: AMD.

To the right is a logical diagram of one of the R600’s render back-ends. (Nvidia calls these ROPs, if you’re wondering.) The R600 packs four of these units, and they work pretty much as you’d expect from the diagram. They can output four pixels per clock to the frame buffer and can process depth and stencil tests at twice that rate. Among the improvements from R580 are higher peak rates of Z and stencil compression, some improvements to common Z-buffer optimizations, and the ability to use FP32-format Z buffers for higher depth precision.
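Here's some quick spec-sheet math on what four of these back-ends add up to at the 2900 XT's clock speed. The doubled Z/stencil rate comes from AMD's description above; the rest is just multiplication.

    # R600 render back-end peak rates at the Radeon HD 2900 XT's core clock.
    render_back_ends = 4
    pixels_per_clock_each = 4
    core_clock = 742e6

    pixel_rate = render_back_ends * pixels_per_clock_each * core_clock
    z_stencil_rate = pixel_rate * 2     # depth/stencil-only work runs at twice the color rate
    print(round(pixel_rate / 1e9, 1), "Gpixels/s")       # ~11.9, matching the table earlier
    print(round(z_stencil_rate / 1e9, 1), "Gsamples/s")  # ~23.7 for Z/stencil-only passes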

The render back-ends are also traditionally the place where the resolve process for multisampled antialiasing happens. AMD has carried over all of the previous antialiasing goodness of its prior chips in R600, including gamma-correct blends, programmable sample patterns, temporal AA, and Super AA modes for CrossFire load balancing. The R600 trades the older GPU’s 6X multisampling mode for a new 8X mode that, duh, offers higher quality by virtue of more samples. I’ve added the R600’s default sample patterns to my Giant Chart of AA Sample Patterns, producing the following glorious cornucopia of colored dots. As always, the green dots represent texture/shader samples, pink dots represent color/Z and coverage samples, and teensy red dots represent coverage samples alone (for Nvidia’s CSAA modes).

I’ve included the Radeon HD 2900 XT’s CrossFire SuperAA mode in a separate column, although SuperAA is presently limited to a single mode on the 2900 XT. I’ve also included composite sample patterns for the 2900 XT’s temporal AA modes. These sample patterns actually occur in two halves over the course of two frames whenever frame rates go above 60 FPS. My current assessment of temporal AA: meh. It sounded like a good idea at the time, but AMD could spike it and I’d be happy.

AA sample patterns for the GeForce 7900 GTX, 7900 GTX SLI, 7950 GX2 SLI, and 8800 GTX alongside the Radeon X1950 XTX, X1950 XTX CrossFire, HD 2900 XT, HD 2900 XT with temporal AA, and HD 2900 XT CrossFire, at modes ranging from 2X up to 32X.

And so the grand table adds the R600’s distinctiveness to its own. As ever, AMD has used a nice quasi-random pattern in the R600’s new 8X multisampled mode.

So that’s part of the story. After seeing Nvidia’s very smart coverage sample antialiasing technique in the G80, I had doubts about whether AMD could answer with something as good and innovative itself. To recap in a nutshell, coverage sampled AA does what it appears to do in the table above: stores more samples to determine polygon coverage while discarding color/Z samples it doesn’t necessarily need. That keeps its memory footprint and performance overhead low, yet it generally produces good results, as you’ll see in the examples on the following pages.

AMD’s answer to coverage sampled AA is made possible by the fact that the render back-ends in the R600 can now quickly pass data back to the shaders, and that leads to AMD’s latest innovation: custom filter antialiasing. The essence of CFAA is that R600 can run a multitude of antialiasing filters, with a programmable resolve stage, allowing for all kinds of new and different AA voodoo. That voodoo starts with a couple of new filters AMD has included with the first round of Radeon HD drivers: a pair of tent filters. Unlike the traditional box filter, these tent filters reach outside of the bounds of the pixel to grab extra samples. Here are a couple of examples, with narrow and wide tent filters using the Radeon HD’s 8X sample pattern, from AMD.

The narrow tent grabs a single sample from each neighboring pixel, while the wide tent grabs two. That leads to an effective sample size of 12X for the narrow tent and 16X for the wide tent. The HD 2900 XT can also combine narrow and wide tent filters with its 2X and 4X AA modes for effective sample sizes of 4X, 6X, 6X again, and 8X.

Those of you who are old-school PC graphics guys like me may be having some serious, gut-wrenching flashbacks right now to Nvidia’s screen-blurring Quincunx mode from GeForces of old. These tent filters are fairly smart about how they go about their business, though; they compute a weighted average of the samples based on a linear function that decreases the weight of samples further from the pixel center. Tent filters do introduce a measure of blurring across the whole screen, but the effect is very subtle, as you can see in the example below. The base AA mode is 8X multisampled.

Box – 8X MSAA Narrow tent – 12X Wide tent – 16X

The blurring is most obvious in the text, but it is in fact a full-scene affair. Look at the leaves on the sidewalk below the park bench, the bricks and windowpanes of the building behind, or the cobblestone texture on the street. The tent filters blur all of these things subtly, which leads to a tradeoff: images aren’t as sharp, but high-frequency “pixelation” is reduced throughout the scene.
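If you're curious what a tent resolve actually does with those samples, here's a small Python sketch of the general idea. The sample positions and falloff radius below are made up for illustration, and AMD hasn't published its exact kernels, so don't read this as the 2900 XT's actual filter.

    # Sketch of a tent-filter AA resolve: samples are weighted by a linear falloff
    # with distance from the pixel center. Positions and radius are invented here;
    # AMD's real filters differ in the details.
    def tent_resolve(samples, radius=1.5):
        """samples: list of ((dx, dy), (r, g, b)), offsets measured from the pixel center."""
        total_w, acc = 0.0, [0.0, 0.0, 0.0]
        for (dx, dy), color in samples:
            dist = (dx * dx + dy * dy) ** 0.5
            w = max(0.0, 1.0 - dist / radius)   # weight shrinks as the sample gets farther away
            total_w += w
            for i in range(3):
                acc[i] += w * color[i]
        return tuple(c / total_w for c in acc)

    # Eight in-pixel samples plus four borrowed from neighbors: a "12X" narrow-tent resolve.
    own = [((x, y), (1.0, 1.0, 1.0)) for x, y in [(-0.3, -0.1), (0.2, -0.4), (-0.1, 0.3),
                                                  (0.4, 0.2), (-0.4, 0.4), (0.1, -0.2),
                                                  (0.3, -0.3), (-0.2, 0.1)]]
    borrowed = [((x, y), (0.0, 0.0, 0.0)) for x, y in [(-1.0, 0.0), (1.0, 0.0),
                                                       (0.0, -1.0), (0.0, 1.0)]]
    print(tent_resolve(own + borrowed))  # neighbor samples count, but with reduced weight

The linear weighting is what separates this from a straight blur: samples near the pixel center still dominate the result, which is why the softening in the shots above stays as subtle as it does.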

Frankly, I was all set not to like CFAA’s tent filters when I first heard about them. They make things blurry, don’t involve clever tricks like Nvidia’s coverage sampling, and hey, Quincunx sucked. But here’s the thing: I really like them. It’s hard to argue with results, and CFAA’s tent filters do some important things well. Have a look at this example shot from Oblivion.

GeForce 8800 GTS
CSAA 8X
GeForce 8800 GTS
CSAA 16X
Radeon HD 2900 XT
CFAA 8X – 4X MSAA + Wide tent

This CFAA mode with 8 samples produces extremely clean edges and does an excellent job of resolving very fine geometry, like the tips of the spires on the cathedral. Even 16X CSAA can’t match it. Also, have a look at the tree leaves in these shots. They use alpha transparency, and I don’t have transparency AA enabled, so you see some jagged edges on the GeForce 8800. The wide tent filter’s subtle blending takes care of these edges, even without transparency AA.

You may not be convinced yet, and I don’t blame you. CFAA’s tent filters may not be for everyone. I would encourage you to try them, though, before writing them off. There is ample theoretical backing for the effectiveness of tent filters, and as with any AA method, much of their effectiveness must be seen in full motion in order to be properly appreciated. I prefer the 4X MSAA + wide tent filter to anything Nvidia offers, in spite of myself. I’ve found that it looks great on the 30″ wide-screen LCD attached to my GPU test rig. The reduction in high-frequency pixel noise is a good thing on a sharp LCD display; it adds a certain solidity to objects that just... works. Oblivion has never looked better on the PC than it does on the Radeon HD 2900 XT.

How does this AA voodoo perform, you ask? Here’s a test using one of 3DMark’s HDR tests, which uses FP16 texture formats.

Another feature of CFAA tent filters is that they have no additional memory footprint or sampling requirements, and in this case, that translates to almost no performance overhead. Ok, my graph here is hard to read, but if you look closely, you’ll see that CFAA’s narrow and wide tent filters don’t slow down the 2X and 4X MSAA modes on which they’re based. There is a performance penalty involved when they’re combined with 8X MSAA, but it’s not too punishing.

In its current state, then, the R600’s CFAA is an impressive answer to Nvidia’s CSAA, the, er, Quincunx smear aside. The thing about custom filters is that they can do many things, and AMD has big plans for them. They’re talking about a custom filter that runs an edge-detect pass on the entire image and then goes back and applies AA selectively. In fact, they even delivered a driver to us late in our testing along with a separate executable to enable this filter. Unfortunately, I wasn’t able to get it working in time to try it out. We’ll have to look at it later.

Oh, and it is possible that Nvidia could counter CFAA with some shader-based custom AA filters of its own, completely stealing AMD’s thunder. For the record, I’d wholeheartedly endorse that move.

 

Antialiasing image quality – GPUs side by side
Here’s a look at the Radeon HD 2900 XT’s edge AA image quality. These images come from Half-Life 2, and they’re blown up to 4X their normal size so you can see the pixel colors along the three angled edges shown prominently in this example.

Antialiasing quality compared: Radeon X1950 XTX CrossFire, GeForce 8800 GTX, Radeon HD 2900 XT, Radeon HD 2900 XT with the narrow tent filter, and Radeon HD 2900 XT with the wide tent filter, at settings from no AA through 2X, 4X, 6X (and 6X CFAA), 8X (SuperAA 8X, 8X CSAA, 8X CFAA), 10X/8xQ, 12X (SuperAA 12X, 12X CFAA), 14X/16X (SuperAA 14X, 16X CSAA, 16X CFAA), and 16X (16xQ, SuperAA 16X).

CFAA’s tent filters look reasonably good in these classic edge cases, as do AMD’s 8X multisampled and SuperAA 16X modes. I think our example from Oblivion on the previous page does a better job of showcasing the tent filters’ strengths, though.

 

Antialiasing image quality – Alpha transparency
Here’s one final AA image quality example, focused on the methods that AMD and Nvidia have devised to handle the tough case of textures with alpha transparency cutouts in them. Nvidia calls its method transparency AA and AMD calls its adaptive AA, but they are fundamentally similar. The scene below, again from Half-Life 2, has two examples of alpha-transparent textures: the leaves on the tree and the letters in the sign. 4X multisampling is enabled in all cases. The top row shows images without transparency AA enabled. The second row shows the lower-quality variants of transparency/adaptive AA from Nvidia and AMD, and the bottom row shows the highest quality option from each.

For what it’s worth, I took these screenshots on the Radeon HD 2900 XT with an updated driver that AMD says provides performance improvements in adaptive AA, so I believe this is a new algorithm.

Alpha transparency antialiasing quality w/4X AA
Radeon X1950 XTX | GeForce 8800 GTX | Radeon HD 2900 XT (box filter) | Radeon HD 2900 XT (narrow tent filter) | Radeon HD 2900 XT (wide tent filter)

Looks like you can get away with the lower quality adaptive AA method on the Radeon HD 2900 XT. If you combine it with a tent filter, the results are pretty good.

 

Avivo HD video processing, display, and audio support
In addition to all of the new graphics goodness, the R600 brings with it some important new capabilities for high-definition displays and video playback. The most prominent among them is a brand-new video processor AMD has dubbed the UVD, for universal video decoder. The UVD is a dedicated processor with its own instruction and data caches, and it can accelerate key stages of the decompression and playback of HD video formats for both HD-DVD and Blu-ray, including H.264 and VC-1. Nvidia just recently introduced a pair of lower-end G80 derivatives with a new H.264 decode acceleration unit, but the AMD folks like to point out that it can’t do bitstream processing or entropy decode for videos in the VC-1 format. More importantly, perhaps, the G80 lacks this unit and cannot provide more-or-less “full” acceleration of even H.264 video playback. The R600’s UVD should allow it to play HD movies with much lower CPU utilization and power consumption than the G80, as a result. Update: Turns out the R600 lacks UVD acceleration, which is confined to lower-end Radeon HD GPUs. See our explanation here.

The display portion of the R600 can drive a pair of dual-link DVI ports for some insane maximum resolutions, and it can support HDCP over those DL-DVI connections, allowing monitors like my Dell 3007WFP to play back DRM-encrusted movies at the display’s full resolution. That’s the theory, at least; I have yet to try it. AMD has also embedded the HDCP crypto keys into the GPU, eliminating the need for an external crypto ROM chip.

Finally, as long rumored, the R600 includes a six-channel audio controller, but only for a single purpose: support of 5.1 audio over HDMI. Radeon HD cards won’t have any other form of analog or digital audio output.

The cards, specs, and prices
We’ve talked quite a bit about the R600 GPU without saying much about the card on which it’s based. As you may have gathered, the Radeon HD 2900 XT has a 742MHz R600 GPU onboard and 512MB of GDDR3 memory clocked at 825MHz (or 1650MHz effective). The card has a dual-slot cooler and is 9.5″ long, or just a little shorter than a GeForce 8800 GTX but longer than a GTS.

The 2900 XT comes with two PCIe auxiliary power plugs on board, and one of the two is of the brand-new eight-pin variety. We were able to use our review unit with an older power supply by attaching two six-pin aux power connectors, but AMD has limited GPU overclocking to boards with an eight-pin connector attached. Also, we found that our 700W power supply wasn’t up to the task of powering a Radeon HD 2900 XT CrossFire rig. In order to achieve stability, we had to switch to a new Thermaltake 1kW PSU with a pair of eight-pin connectors that AMD supplied.

Like the Radeon X1950 Pro, the HD 2900 XT comes with a pair of internal CrossFire connectors onboard. The day of dongles is behind us, and the dual connectors may someday allow more than two cards to be teamed up in the systems of the filthy rich or criminally insane.

The board itself has two DVI connectors, but it can also support HDMI—with audio—via a plug adapter from AMD.

The Radeon HD 2900 XT is slated to become available today at online vendors for $399, and it will be bundled with a coupon for getting a trio of games from Valve via the Steam distribution service when they’re released: Half-Life 2: Episode Two, Portal, and Team Fortress 2.

And, well, that’s the whole plan. AMD has no higher-end products to announce; it’s just positioning the Radeon HD 2900 XT against the GeForce 8800 GTS at $399 and calling it good. Now that’s a remarkable change of strategy from the past, oh, five years of intense one-upsmanship. It seems AMD wasn’t quite able to extract sufficient performance from the R600 in its current state to challenge the GTX for the outright performance crown, so they decided to go for a price-performance win instead. CrossFire, they say, will serve the high end of the market.

That leaves the HD 2900 XT to contend with products like this “superclocked” EVGA version of the GeForce 8800 GTS that Nvidia sent us when it caught wind of AMD’s plans. (They can be aggressive that way, in case you hadn’t noticed.)

This card has a 575MHz core clock and 1.7GHz memory—a formidable boost from the stock 8800 GTS—and as I write it can be had for under $400 at online vendors. Nvidia got us this card (actually a pair of them) and some fancy new drivers for it after we were deep into our testing last week. As a result, we were only able to include it in a subset of our tests, and only in single-card mode. We expect to follow up with it later, since it does represent real products available now competing with the Radeon HD 2900 XT. We do have an “overclocked in the box” version of the GeForce 8800 GTS 320MB throughout our results, and it often outperforms the stock-clocked 640MB GTS at lower resolutions.

The rest of the family
Joining the Radeon HD 2900 XT shortly will be a family of products based on two new lower end GPUs. Both of these chips will be DX10-compliant and derived from R600 technology, but both will be manufactured on TSMC’s 65nm fab process. Like the R600, they will have Avivo HD decode and playback acceleration and HDMI support with audio.

The mid-range variant is the GPU code-named RV630, and cards based on it will be in the Radeon HD 2600 lineup. The RV630 has an estimated 390M transistors, and this scaled-down R600 derivative has three SIMD arrays, each of which has eight stream processor units (or “120 stream processors” in AMD-speak). This GPU has two texture units, a 128KB L2 texture cache, one render back-end, and a 128-bit external memory interface. AMD plans Radeon HD 2600 Pro and XT cards ranging in price from $99 to $199. The most intriguing of those from an enthusiast standpoint will no doubt be the $199 2600 XT, pitted directly against the GeForce 8600 GTS. The XT will come with native CrossFire connectors, a single-slot cooler, and no PCIe auxiliary power connector.


The Radeon HD 2600 XT comes with a single-slot cooler. Source: AMD.

The Radeon HD 2400 series will occupy the low end of the market, powered by the RV610 GPU. This 180 million transistor chip packs two SIMD arrays with four stream processing units each, for a total of 40 stream processors, as AMD likes to count ’em. It has a single texture unit and render back end, uses a shared texture and vertex cache, and has a 64-bit memory interface. Befitting its station in life, Radeon HD 2400 XT and Pro cards will sell for $99 and less. Some versions will ship with only a passive cooler like the one below.


Radeon HD 2400 series card with passive cooler and HDMI adapter. Source: AMD.

You can imagine that puppy driving a giant television via an HDMI link in a silent HTPC box, no?

The rest of the Radeon HD family is slated to join the 2900 XT on store shelves on July 1. AMD also has plans for a full mobility lineup based on Radeon HD tech, and those parts are due in July, as well. You may see the Mobility Radeon HD 2300 kicking around before then, but it’s not a DX10-capable part. Kind of like the Radeon 9000 back in the day, it’s an older 3D core being pulled into a new naming scheme.

And now, on to the benchmarks…

 

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor                 Core 2 Extreme X6800 2.93GHz                Core 2 Extreme X6800 2.93GHz
System bus                1066MHz (266MHz quad-pumped)                1066MHz (266MHz quad-pumped)
Motherboard               XFX nForce 680i SLI                         Asus P5W DH Deluxe
BIOS revision             P26                                         1901
North bridge              nForce 680i SLI SPP                         975X MCH
South bridge              nForce 680i SLI MCP                         ICH7R
Chipset drivers           ForceWare 15.00                             INF update 8.1.1.1010,
                                                                      Matrix Storage Manager 6.21
Memory size               4GB (4 DIMMs)                               4GB (4 DIMMs)
Memory type               2 x Corsair TWIN2X20488500C5D               2 x Corsair TWIN2X20488500C5D
                          DDR2 SDRAM at 800MHz                        DDR2 SDRAM at 800MHz
CAS latency (CL)          4                                           4
RAS to CAS delay (tRCD)   4                                           4
RAS precharge (tRP)       4                                           4
Cycle time (tRAS)         18                                          18
Command rate              2T                                          2T
Hard drive                Maxtor DiamondMax 10 250GB SATA 150         Maxtor DiamondMax 10 250GB SATA 150
Audio                     Integrated nForce 680i SLI/ALC850           Integrated ICH7R/ALC882M
                          with Microsoft drivers                      with Microsoft drivers
Graphics                  GeForce 8800 Ultra 768MB PCIe               Radeon X1950 XTX 512MB PCIe
                          with ForceWare 158.18 drivers               + Radeon X1950 CrossFire
                                                                      with Catalyst 7.4 drivers
                          GeForce 8800 GTX 768MB PCIe                 Dual Radeon HD 2900 XT 512MB PCIe
                          with ForceWare 158.18 drivers               with 8.37.4.070419a-046506E drivers
                          Dual GeForce 8800 GTX 768MB PCIe
                          with ForceWare 158.18 drivers
                          BFG GeForce 8800 GTS 640MB PCIe
                          with ForceWare 158.18 drivers
                          Dual BFG GeForce 8800 GTS SLI 640MB PCIe
                          with ForceWare 158.18 drivers
                          XFX GeForce 8800 GTS 320MB PCIe
                          with ForceWare 158.18 drivers
                          EVGA GeForce 8800 GTS 640MB PCIe
                          with ForceWare 158.42 drivers
                          Dual EVGA GeForce 8800 GTS 640MB PCIe
                          with ForceWare 158.42 drivers
                          Radeon X1950 XTX 512MB PCIe
                          with Catalyst 7.4 drivers
                          Radeon HD 2900 XT 512MB PCIe
                          with 8.37.4.070419a-046506E drivers
OS                        Windows Vista Ultimate x86 Edition          Windows Vista Ultimate x86 Edition
OS updates

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults.

The test systems’ Windows desktops were set at 1600×1200 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

S.T.A.L.K.E.R.: Shadow of Chernobyl
We tested S.T.A.L.K.E.R. by manually playing through a specific point in the game five times while recording frame rates using the FRAPS utility. Each gameplay sequence lasted 60 seconds. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.

For this test, we set the game to its “maximum” quality settings at 2560×1600 resolution. Unfortunately, the game crashed on both GeForce and Radeon cards when we set it to use dynamic lighting, so we had to stick with its static lighting option. Nevertheless, this is a good-looking game with some nice shader effects and lots of vegetation everywhere.

The Radeon HD 2900 XT kicks off our game benchmarks with a mixed result. It’s slower than the competing GeForce 8800 GTS 640MB in this game, but it’s quicker in CrossFire mode than its competitor is in SLI. We found throughout our benchmarks that Nvidia’s SLI support in Windows Vista doesn’t tend to scale as well as it long has in Windows XP. AMD’s CrossFire support is generally superior in Vista.

Supreme Commander
Here’s another new game, and a very popular request for us to try. Like many RTS and isometric-view RPGs, though, Supreme Commander isn’t exactly easy to test well, especially with a utility like FRAPS that logs frame rates as you play. Frame rates in this game seem to hit steady plateaus at different zoom levels, complicating the task of getting meaningful, repeatable, and comparable results. For this reason, we used the game’s built-in “/map perftest” option to test performance, which plays back a pre-recorded game.

Another note: the frame rates you see below look pretty low, but for this type of game, they’re really not bad. We’ve observed frame rates in the game similar to the numbers from the performance test, but they’re still largely acceptable, even at higher resolutions. This is simply different from an action game, where always-fluid motion is required for smooth gameplay.

And a final note: you’ll see that SLI performance doesn’t scale in this game, but we’ve included those scores simply because it worked. We weren’t able to get either of our CrossFire systems, Radeon X1950 XTX or HD 2900 XT, working with Supreme Commander, which is why those scores were omitted.

The 2900 XT runs neck-and-neck with the 8800 GTS in average frame rates, but check out its low frame rate numbers. They’re consistently higher. This may well be the R600’s reduced state management overhead in action.

 

Battlefield 2142
We tested this one with FRAPS, much like we did S.T.A.L.K.E.R. In order to get this game to present any kind of challenge to these cards, we had to turn up 16X anisotropic filtering, 4X antialiasing, and transparency supersampling (or the equivalent on the Radeons, “quality” adaptive AA). I’d have run the game at 2560×1600 resolution if it supported that display mode.

We’ve tested BF2 with a pair of drivers on the Radeon HD 2900 XT. The normal set of results comes from the driver we used throughout most of this review, and the other one comes from an early alpha driver that improves adaptive AA performance.

With the new alpha driver, the 2900 XT still can’t quite match the “overclocked in the box” version of the GeForce 8800 GTS 640MB, and that card’s fancy driver doesn’t look to be doing it any favors; the stock-clocked GTS is faster yet here.

Half-Life 2: Episode One
This one combines high dynamic range lighting with 4X antialiasing and still has fluid frame rates at very high resolutions. Unfortunately, though, we encountered some fog rendering problems on the Radeon HD 2900 XT in this game. AMD says it’s working with Valve on a fix, but doesn’t have one ready just yet. We’ve gone ahead and included the results here with the hope that performance won’t change with the fix.

This one is a clear win for AMD. The HD 2900 XT outperforms the 8800 GTS 640MB, and the HD 2900 XT CrossFire rig proves fastest overall.

 
The Elder Scrolls IV: Oblivion
We turned up all of Oblivion’s graphical settings to their highest quality levels for this test. The screen resolution was set to 1920×1200 resolution, with HDR lighting enabled. 16X anisotropic filtering was forced on via the cards’ driver control panels. We strolled around the outside of the Leyawiin city wall, as shown in the picture below, and recorded frame rates with FRAPS. This area has loads of vegetation, some reflective water, and some long view distances.

We tested this one with and without antialiasing. Without AA, performance was like so:

The HD 2900 XT looks pretty good. We then worked around some AA issues in Nvidia’s drivers and were able to test with AA enabled. We also added a couple of new configs: the Radeon HD 2900 XT with new alpha drivers to improve performance with AA in Oblivion and that GeForce 8800 GTS 640MB OC with its updated drivers.

Ah, the drama! The Radeon HD’s new alpha driver allows it to just barely edge past the GeForce 8800 GTS 640MB OC.

Rainbow Six: Vegas
This game is notable because it’s the first game we’ve tested based on Unreal Engine 3. As with Oblivion, we tested with FRAPS. This time, I played through a 90-second portion of the “Dante’s” map in the game’s Terrorist Hunt mode, with all of the game’s quality options cranked. The game engine doesn’t seem to work well with multisampled antialiasing, so we didn’t enable AA.

AMD’s new baby nearly matches the GeForce 8800 GTX here, and the 8800 GTS 640MB trails by over 10 frames per second.

 

3DMark06

The HD 2900 XT bests all three of the 8800 GTS incarnations we tested in 3DMark, and thanks to superior CrossFire scaling, it’s the fastest multi-GPU solution overall in 3DMark. For the record, we’ve seen much better SLI performance out of the 8800 in Windows XP.

Call of Juarez
For our final benchmark, we have an early copy of a DirectX 10 game provided to us by AMD. This is our first chance to look at a DirectX 10 game, even if it is an unfinished one. Nvidia let us know in no uncertain terms that this build of Call of Juarez should not be used for benchmarking, so of course, we had to give it a spin.

….aaaand, it’s a dead freaking heat between the two $399 graphics cards. How’s that for reading the tea leaves for DX10 games? I have no idea what this means exactly.

 

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement.

The idle measurements were taken at the Windows desktop. The cards were tested under load running Oblivion at 1920×1200 resolution with 16X anisotropic filtering. We loaded up the game and ran it in the same area where we did our performance testing.

The cards were measured on the same motherboard when possible, but we had to use a different board in order to run the Radeons in CrossFire, so keep that in mind. We even had to use a larger 1kW PSU for the HD 2900 XT CrossFire system, which will no doubt change overall system power consumption.

Idle power consumption on the Radeon HD 2900 XT looks very much in line with the GeForce 8800 cards from Nvidia. The larger PSU and different motherboard raise the stakes somewhat for the 2900 XT CrossFire system.

When running a game, though, the R600 does pull quite a bit of juice. The system with a single Radeon HD 2900 XT draws 48W more than the system with the souped-up GeForce 8800 GTS 320MB, and the 2900 XT CrossFire rig with its massive PSU sets what I believe is a new single-system power draw record for Damage Labs at 490W. That’s a crown AMD’s graphics division has stolen from its CPU guys, whose Quad FX platform reached over 460W. Think what would happen if the two could combine their powers.

Seriously, though, the 2900 XT’s power draw is a strong clue as to why AMD elected not to pursue the overall performance crown.

Noise levels and cooling
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 14″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the Zalman CNPS9500 LED we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Here’s one place where those power draw numbers have an impact. AMD has equipped the Radeon HD 2900 XT with a blower that can move an awful lot of hot air, and that inevitably translates into noise. This isn’t anything close to GeForce FX 5800 Ultra Dustbuster levels—that card hit 58.8 dB on the same meter—but the 2900 XT is in a class by itself among high-end graphics cards. I think I probably could live with these noise levels, since the thing is only likely to crank up during games, but it definitely makes its presence known. The GeForce 8800’s hiss is whisper-quiet by comparison.
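To put the decibel figures in this section in perspective, remember that the scale is logarithmic: a difference of 10 dB corresponds to a tenfold difference in acoustic intensity. A quick sketch of that conversion, using made-up example readings rather than our measured ones:

    def intensity_ratio(db_a: float, db_b: float) -> float:
        """How many times more acoustic intensity reading A implies than reading B."""
        return 10 ** ((db_a - db_b) / 10)

    # Hypothetical readings for illustration only.
    print(round(intensity_ratio(52.0, 46.0), 1))  # 6 dB apart -> roughly 4x the intensity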

 
Conclusions
The Radeon HD 2900 XT is an impressive, full-featured, DirectX 10-ready graphics processor. Its unified shader architecture is a clear advance over the previous generation of Radeons and puts it in the same class as Nvidia’s GeForce 8800 series in terms of basic capabilities. The GPU even has some cool distinctive features, like its tessellator, that the GeForce 8800 can’t match. As we’ve discussed, the scheduling required to achieve efficient utilization of this GPU’s VLIW superscalar stream processing engines could prove tricky, putting it at a disadvantage compared to its competition. Some of the synthetic shader benchmarks we ran illustrated that possibility. However, this GPU design has a bias toward massive amounts of parallel shader processing power, and I’m largely persuaded that shader power won’t be a weakness for it. I’m more concerned about its texture filtering capacity. Our tests showed its texturing throughput to be substantially lower than that of the GeForce 8800 GTS with 16X anisotropic filtering. One can’t help but wonder whether the 2900 XT’s performance in today’s DX9 games would be higher if it had more filtering throughput.
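To make that scheduling concern a bit more concrete, here is a toy sketch of the problem AMD’s real-time shader compiler faces: grouping independent operations into five-wide bundles so that none of the ALUs in a stream processing block sits idle. This is a simplified illustration of the general idea, not AMD’s actual instruction format or compiler; the point is just that a chain of dependent operations leaves slots empty no matter how clever the packing.

    # Toy packer: greedily fills 5-wide bundles with ops whose inputs are ready.
    # Not AMD's real ISA or compiler -- just an illustration of why dependent
    # code wastes VLIW slots.
    ops = [
        ("a", set()),      # (name, names of ops it depends on)
        ("b", set()),
        ("c", {"a"}),      # needs a's result
        ("d", {"c"}),      # needs c's result
        ("e", {"d"}),      # a long dependency chain
    ]

    bundles, done, remaining = [], set(), list(ops)
    while remaining:
        bundle = [name for name, deps in remaining if deps <= done][:5]
        remaining = [(n, d) for n, d in remaining if n not in bundle]
        done.update(bundle)
        bundles.append(bundle + ["--"] * (5 - len(bundle)))   # "--" marks an idle ALU slot

    for i, b in enumerate(bundles):
        print(f"bundle {i}: {b}")
    # Five ops end up spread across four bundles, leaving 15 of 20 issue slots idle.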

The 2900 XT does match the GeForce 8800 series on image quality generally, which was by no means a foregone conclusion. Kudos to AMD for jettisoning the Radeon X1000 series’ lousy angle-dependent aniso for a higher quality default algorithm. I also happen to like the 2900 XT’s custom tent filters for antialiasing an awful lot—an outcome I didn’t expect, until I saw it in action for myself. Now I’m hooked, and I consider the Radeon HD’s image quality to be second to none on the PC as a result. Nvidia may yet even the score with its own custom AA filters, though.

The HDCP support over dual-link DVI ports and HDMI audio support are both welcome additions, too. We haven’t yet had time to test CPU utilization during HD-DVD or Blu-ray playback, but we’ve got that on the list for a follow-up article (along with GPU overclocking, edge-detect AA filters, dual-link DVI with HDCP on the Dell 3007WFP, AMD’s Stream computing plans, and a whole host of other items).

Ultimately, though, we can’t overlook the fact that AMD built a GPU with 700M transistors that has 320 stream processor ALUs and a 512-bit memory interface, yet it just matches or slightly exceeds the real-world performance of the GeForce 8800 GTS. The GTS is an Nvidia G80 with 25% of its shader core disabled and only 60% of the memory bandwidth of the Radeon HD 2900 XT. That’s gotta be a little embarrassing. At the same time, the Radeon HD 2900 XT draws quite a bit more power under load than the full-on GeForce 8800 GTX, and it needs a relatively noisy cooler to keep it in check. If you ask folks at AMD why they didn’t aim for the performance crown with a faster version of the R600, they won’t say it outright, but they will hint that leakage with this GPU on TSMC’s 80HS fab process was a problem. All of the telltale signs are certainly there.
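For what it’s worth, the bandwidth figure above is straightforward to check: peak memory bandwidth is just bus width times effective memory data rate. A quick sketch using the cards’ published memory specs (512 bits at 1650MT/s for the 2900 XT, 320 bits at 1600MT/s for the 8800 GTS 640MB); treat those clocks as the usual spec-sheet numbers rather than anything we measured.

    def peak_bandwidth_gbs(bus_width_bits: int, effective_rate_mts: float) -> float:
        """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
        return (bus_width_bits / 8) * effective_rate_mts / 1000

    hd_2900_xt = peak_bandwidth_gbs(512, 1650)   # ~105.6 GB/s
    gts_640    = peak_bandwidth_gbs(320, 1600)   # ~64.0 GB/s
    # Prints ~61%, i.e. the "only 60%" figure above.
    print(f"8800 GTS 640MB has {gts_640 / hd_2900_xt:.0%} of the 2900 XT's peak bandwidth")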

Given those apparent leakage problems, AMD was probably smart not to go after the performance crown with a two-foot-long graphics card attached to a micro-A/C unit and sporting a thousand-dollar price tag. Instead, they’ve delivered a pretty good value in a $399 graphics card, so long as you’re willing to overlook its higher power draw and noise output while you’re gaming.

There are many things we don’t yet know about the GeForce 8800 and Radeon HD 2900 GPUs, not least of which is how they will perform in DirectX 10 games. I don’t think our single DX10 benchmark with a pre-release game tells us much, so we’ll probably just have to wait and see. Things could look very different six months from now, even if the chips themselves haven’t changed.

Comments closed
    • Rakhmaninov3
    • 13 years ago

    Considering how little people care about the quarter-chip comparison, I do see a lot of chit-chat about it. lol! I don’t care about it really, and I definitely didn’t care enough to go the trouble of typing about it. Until now.

    Looks like we’re back to the old days, where ATI offered better image quality and nVidia offered better perf. BUT here it looks like ATI’s got the upper hand with drivers, from the little I’ve read, at least.

    I still have to fix my good ol’ Athlon 2100+/9700 Pro/512MB/160GB system. FIX that one. Not spend $400 on a graphix card for a new box.

    I’m a poor med student. But I do love my old computer. It trucked on, day and night, for 4 years. Not gonna give up on her yet.

    • speedslyde
    • 13 years ago

    to the person who said 640MB Ultra… it’s 768MB. the geforce 6800gt had
    all 16 pipes but ran at a lower speed than ultra and had a slightly less
    robust cooler than that card. you can find referenced clocked 8800gts 640MB for around 350 usd and right now it seems a certain major e-tailer
    is selling HD2900XT for a little more than MSRP of 400usd but you get the
    coupon for the valve bundle of hl2 ep2 , Portal and Team Fortress 2 as well
    as a logitech G5 laser mouse thrown in free in most cases and with reason-
    able shipping. they have similar performance and some might argue that
    HD2900XT has better image quality overall, but the 8800GTS 640MB can
    generally be overclocked quite well. were coming down to familiar choices
    at this price point.

    • Wintermane
    • 13 years ago

    The problem is r600 will need the same driver tweaks the nv 5000 series needed and that cant be good for amd and it sucks for you when the next gen comes out and they stop tweaking.

    Amd it cant be seen as ok that this thing requires 1.65 ghz ram 512 bit at that and so much power just to be nearly competative…

    Amd what the heck does amd do next? They cant dieshrink after this cycle as they will be at 65 already so.. whats the refresh? Nvidia will have monster 65 nm chips.. and amd?

      • totoro
      • 13 years ago

      I forgot about that (and I don’t think it’s been mentioned in ANY article).
      Nvidia claimed that their ‘real-time compiler’ would eventually drastically improve performance on the FX series.
      Hmmph.

    • Ricardo Dawkins
    • 13 years ago

    I dont know what happened to ATI after the R300 days but this is inexcusable. Come on, Red Team

    • Wintermane
    • 13 years ago

    Heh anyone else notice how they missed thier 750 mhz min clockthey were shooing for for months now?

      • Lord.Blue
      • 13 years ago

      nm….bad post

    • SPOOFE
    • 13 years ago

    Ah, I see, you were referring to the underlying technology and not the card itself. A die shrink is usually beneficial; it won’t, however, do anything to help cards that have already been created.

    Anyway, the point is that better drivers may help, but to say “definitely” – especially in the context of the rest of your previous post – would be hyperbolic.

    • Wintermane
    • 13 years ago

    As I said way back ati simply didnt expect anything remoyely like what the 8800 is. Even after all this time they cant make thier design do enough even using much better 80 nm lith and much highr ram and heat budgets.

    They just didnt plan for it to have to go this fast.

    • mushroom
    • 13 years ago

    I dont know how to describe this card other than it being a BIG disappointment…
    you have the paper specification of a flagship card equalling or beating 8800 gtx…
    but it turns out that the performance is of the 8800gts level… but with the much bigger power consumption and heat.. and loud and distracting fan noise.. that you *might* consider tolerating if it has the performance of GTX or better… but it hasnt

    i have heard quite a few who says it is good for its price… i gotta say that it is priced this way because of its subpar performance.. and AMD is not being generous… AMD has no choice .. if AMD has a choice, it would have preferred to price it like gtx and thats what its original performance target and price range it aim for… AMD must suffer horribly on the profit margin on this card

      • Jigar
      • 13 years ago

      please read the article and post your comments.. 2900XT is aimed at 8800GTS… not the GTX. We are yet to see 2900XTX which is aimed at 8800ULTRA and GTX.

        • mushroom
        • 13 years ago

        Jigar2speed5095,

        i did read the article.. and i know it is currently being aimed at 8800gts, no matter how costly it might be for AMD … what i meant in my prior post #184 is that i just dont buy the argument that this card of such an specification is *originally* aiming at 8800gts level of performance…

        do you know it has 700M transister, 512bit interface.. consuming even more power and giving more heat than 8800gtx ultra… *if* what you said is true, then you must question ATI’s engineering talent and design to have such an inefficient GPU… and you also need to question its management and business sense too…this is one hell of an expensive chip/design be to be aiming at 8800gts level… no wonder ati is not earning money, if not losing…

        like i said before, i will say it again for you…. the fact that this radeon HD 2900 XT is targetting at gts level is not by design, it is because of disappointing performance (probably to do design imbalance, hard to optimise compiler etc)… and therefore amd has no choice but to price it according to performance level it can achieve, which is more or less a 8800gts at the moment…

        so far as i know, the xtx as originally conceived is canned because of disappointing performance improvement with gddr4… and expensive…
        maybe we need to wait till the 65nm r600 as xtx…

        sometimes, you need not just read, you need to think…

          • Jigar
          • 13 years ago

          Oh yes sir i have my brain in my head and yes i think while i read.

          But I think you still haven’t read the technical specs of the 2900XT or else you wouldn’t have counter back on me. Anyways i know many of you guys are confused about the technical specs here. But to let you know 2900XT is really handicap here and the only thing that is pushing the card at 8800GTS’s level is 512 bit….

          http://theinquirer.net/images/articles/nV_FUD_07.jpg

          I hope now you will understand why this card is not a big performer. But i see 2900XTX to perform better then 8800ULTRA. :)

            • mushroom
            • 13 years ago

            ok using the slide you quote… that kinda prove my point that it is the result of r600’s less than optimal design.. r600 has 700M transistor, comparable to 8800GTX!! yet you get a performing part like a 8800gts …
            You need to know where the hell the transistor budget has used? used in some space filler? isnt 700M transistor something to go for it apart from the 512bit interface? why does it use so much power?

            The dismal figures you quote for r600 could very well be the *result* of design decisions that went into r600 gone bad… it is the *result* of a less than optimal design given the transistor *budget* of a flagship card like 8800GTX (~700M -ish)… thats why this card is not a big performer but for different reasons than you think

            • Jigar
            • 13 years ago

            I agree with you but you have to understand 2900XT is not the only product that will be sold in the market. 2900XTX and XTS will also follow within some months. Engineers cannot make different chips for different models. Example 6800 ULTRA, 6800 Vanila and 6800GT used the same chip but the only difference was that some pipelines were disabled in 6800 Vanila and i guess in 6800GT too. (not sure don’t remember)
            To sum up my comment if 6800 Vanila was launch before 6800ULTRA then you would have said the same thing about Nvidia’s 6 Series architecture, but that would have been a wrong comment isn’t it?

            • mushroom
            • 13 years ago

            i would not have commented the same under that situation..
            there is a difference.. and this is a signficant one…
            the situation with the 2800 xt is that, i *assume*, the r600 chip that get picked in 2800xt is the everything-fully-working shaders version of chip, nothing castrated…

            i dont know about the history of 6800s launches and its spec… and assume what you said:
            first, 6800 vanilla didnt launch first,
            second, even if the vanilla launch first, so long as the ultra part based on the *same chip* launches, this represent the maximum potential under that architecture without any castration… and its maximum performance should be judged based on the ultra… .versus. r600, which i assume its current incarnation in hd2800 xt is the full version without any castration

            • Jigar
            • 13 years ago

            But it is not .. as you can see 2900XT is not a full throttle product and was never meant to hurt 8800GTX nor 8800Ultra.

            • mushroom
            • 13 years ago

            well i think we have to agree to disagree…

            even if i were to go along with your line that the the r600 has always been aiming for 8800gts level.. i would have to say that the r600 is less than stellar
            Because:
            1. it has quite a bigger transister budget compared to 8800gts
            2. it consume much more power than 8800gts
            3. the fan need to spin so loud to the extend of being annoying, because of the need to cool down the hot chip
            4. the production cost of 8800gts should be quite a bit cheaper than 2800-xt
            of cos there are plus points for 2800xt

            and there is another problem.. it seems xtx (or full throttle product in your word) is nowhere to be seen in very near future…
            so does that mean ATI probably need to be 9 months late or more to have a comparable/better counterpart against gtx and ultra? and all this appears to be ATI’s original plan if what you said is to be believed… if 65nm r600 is the xtx.. do you really believe this is really what ATI had originally planned to be used to compete against 8800 gtx?

            or is it that something didnt go as well as originally planned?

            • Jigar
            • 13 years ago

            This article gives you the hint that 80nm has gone bad for AMD, leakage problem is the biggest issue. Leakage problem always leads to heat and performance issues so you are right 2900XT has problems.

            Now as far as AMD’s midrange product goes (remember this where the big earning comes) AMD might has seen the leakage problem in 2900 chip hence they went for 65nm tech. So that they have a cheaper and cold chip in there bag. Offcourse issues were addressed very late and hence you see the delay in the product line.

            Rumors are 2900XTX will be the R650 chip not the R600 as it has so many issues.

            • Mithent
            • 13 years ago

            These companies would never willingly launch their range with a card that wasn’t faster than what their competitors have anyway. It’s not that most of the profit comes from the flagship card, it’s rather somewhat of an advertisement for what the range can do, and it gets attention. Launching a range with a card that’s only an acceptable competitor for a card a fair bit slower than nVidia’s best doesn’t exactly cause confidence in R600.

            • Lord.Blue
            • 13 years ago

            The XTX and XTS models will most likely be on the 65nm die shrink, they just had to get SOMETHING out the door for the early summer. The 2900XT happened to be that card.

            • Jigar
            • 13 years ago

            That’s exactly what i was trying to explain him .. Thanks

            BUT 80nm XTX model exist, it was just not a killer card so AMD dropped it.

            • Lord.Blue
            • 13 years ago

            Yeah, the XTX 80nm was having trouble, the heat was just too much.

            • crichards
            • 13 years ago

            I have to agree with Mushroom here. ATI’s intent from the start was to go up against the GTX. They failed.

            80nm->65nm I think will do little except reduce heat and increase profit. I’ve never seen a die shrink in itself yield significant performance gains.

            • Jigar
            • 13 years ago

            Indirectly it does.. 65nm chips will produce less heat then 80nm chips, this gives ATI an opportunity to increase the Core clock of the GPU, thus increasing the overall performance of the card. 😉

            • crichards
            • 13 years ago

            I did say ‘significant’. That’s just the time the Extreme/Ultra/Uber/ToTheMax version comes out.

            (wink duly noted by the way 🙂

            • swaaye
            • 13 years ago

            And what do you think a G80 on 65 nm will do? They get to enjoy an even greater drop in die size.

      • wierdo
      • 13 years ago

      I’m thinking this is a software issue and not necessarily a hardware one. Seems the way this card is designed makes it more dependent on drivers than previous generations.

      My main beef with this product is not the numbers – I actually thought they were decent, and the AA options were attractive – but rather their high power consumption. They need to work on their process technology and get this leakage problem under control.

      • snowdog
      • 13 years ago

      Totally agree. Designed to go against GTX, it falls a notch to compete with GTS.

      Not only that but the GTS is much cheaper. In Canada the 2900xt is going for $489 and on the same page at NCIX they had a 640MB GTS for $369 and a 320MB GTS for $329.

      I really think the 2900 is only for diehard ATI fans. BTW I have never owned Nvidia, but several ATI, currently using an 9700pro. But unless ATI fixes things, my next will be Nvidia.

      • Shobai
      • 13 years ago

      i think the other thing to remember is that the 2900xt seems to overclock well, if you can get rid of the heat.

      OCW has a page showing some cascade cooling on the xt, and it shows promise. if changing to the 65nm process will reduce leakage losses, etc, and make the card run cooler/more efficiently, then i can see that the xtx [on 65nm] will perform well.

      irrespective of the uselessness of drivers from both teams, and how long they’ve had to get them working well, the fact remains that both are showing improvements as the revisions roll on. mature drivers or no, the xt card does compete fairly well with the gts – obviously because it’s a different design, they don’t perform exactly the same.

      so, as far as i can see, if AMD can get 65nm parts out the door, with reduced heat/leakage, they should be well equipped to compete with the gtx/ultra.

      • Austin
      • 13 years ago

      I agree with Mush too. The R600 XT was designed to compete with (or beat) the 8800GTX while the XTX (1ghz GDDR4) was a touch faster. When AmdTI got the early results they had to postpone the launch, they were surely no where near 8800GTS at that point. Even launching as late as they did left them unable to compete with the 8800GTX and with the minor improvement GDDR4 made they dropped their original plans for R600 XTX, they wanted to keep the name in reserve for when they could answer the high-end nVidia cards. Why they went with HD2900 instead of HD2800 I'll never understand. Of course the real crunch comes when DX10 games are out.

      As for R600's switch to 65nm I also think it can potentially yield more for AmdTI than nVidia, although it should benefit both (these cards really need 65nm). If nothing else it will give AmdTI some chance to tweak the drivers and possibly the hardware too. No question though, this round to nVidia.

    • Disco
    • 13 years ago

    I’m a bit disappointed by this card, but I’m very curious about DX10 performance. With all the forward looking design decisions, it could jump ahead with more efficient processing of those titles. I keep my cards around for a long time (currently using a 9800pro which replaced a still active 8500) and I expect my next one to play DX10 games for the next 3 years at least.

    I’m most disappointed in the power useage and noise. I think that future drivers will get the kinks out of the performance (see improved BF2142 and Oblivion results with alpha driver), and the image quality seems really good. But can the noise/power be fixed by drivers – I doubt it. I wonder if this card will run OK with the built-in 450W power supply that comes with the Antec Sonata II case.

    But, I don’t understand all the hate in these comments. The performance is not too bad considering that no one has been optimizing their games for it yet. In most of the games, it BEATS the GTS 320/640 (Stalker, Supreme Commander, HalfLife2, Oblivion, Vegas) even the OC’d versions and sometimes approaches the GTX. The card also does really well on F.E.A.R., company of heros, and Prey (OpenGL!! able to equal nVidia 8800GTX in Doom3 engine!! that’s a real first for ATI) always beating the 8800GTS’s in the ExtremeTech review. And you should be able to get another 10-15% OC with it.

    And the CrossFire configuration has definitely come a long way since it was first introduced. In most situations it shows better efficiencies than the 8800GTS SLI. And don’t forget the coupon for HL2 episode 2 which is worth $40 (since I would be buying that anyway).

    It’s too bad about the power and noise. I’m in the mood to replace my 5-year-old system (Athlon 1900+) within the next month or so and am very interested in these new cards. I’m going to give it a bit more time and see how the prices settle out. I like the idea of getting better performance than a 8800GTS (can’t afford a GTX), but the price will have to be right to make up for the heat/noise/power issues.

    EDiT: i missed mentioning that it beats the GTX in Call of Duty 2 at 1920×1200, with and without AA/AF high quality – ExtremeTech review.

      • Voldenuit
      • 13 years ago

      I imagine using Vista is skewing Scott’s benchmarks, since it is a well-known fact that nvidia’s Vista drivers are still below par. And until DX10 games arive, Vista offers me nothing except eye-candy and DRM. Yum. Notice also that TR couldn’t get dynamic lighting working in STALKER, so the results are not…indicative of a high end gaming engine.

      When tested on XP, [H] showed that the 2900XT was behind the 640MB 8800GTS (stock clocks) in every gaming benchmark, yet it consumed up to 100W more power!

      The performance of the 2900XT is about where I expected, but the power draw is just ridiculous.

    • deathBOB
    • 13 years ago

    Scott, Can we see an update or addition with a Sacagawea dollar?

    • shank15217
    • 13 years ago

    Driver revision is upto 8.37.4.3 and there are significant performance gains with this revision. Maybe a second look next month with catalyst 7.5 is warranted.

    • flip-mode
    • 13 years ago

    Heroes delivers more than the R600.

    • PetMiceRnice
    • 13 years ago

    The quality of drivers is more important to me than raw performance when it comes to video cards. As long as the card is reasonably fast and can run everything I want as well as I want, that’s really all I care about.

    • Forge
    • 13 years ago

    Underwhelming. I expected more from AMDTI.

    At least them thar 8000GTS are still lookin sexy, and that 8800GTX ought to be getting a minor price drop soonish…. I hope.

    • flip-mode
    • 13 years ago

    Good grief the guy that criticizes the use of the quarter gets all the replies… we’re to easy to bait around here.

      • UberGerbil
      • 13 years ago

      No klidding. My thought: the quarter’s for the phone, so you can call someone who might care.

      • provoko
      • 13 years ago

      Haha. Strange that a video card review didn’t start off a flame war between nvida and ati fanboys

      Maybe the quarter guy is an nvidia fanboy and knew this would happen. Haha.

        • imtheunknown176
        • 13 years ago

        It makes more sense for him to be an amd fanboy. Keeps people distracted that way.

          • sigher
          • 13 years ago

          I’m not a fanboi of either, and I own both an ATI as well as a nvidia card, imagine that, twice the driver pain 😮

          That made me laugh though provoko 🙂

          Do they still have phones that work with quarters btw? well not with US quarters here at least I fear, DAMN YOU US, ruined UberGErbil’s plan 🙂
          Of course I could take the quarter and travel to the US, least I can do heh.

            • Bensam123
            • 13 years ago

            I personally find it offensive that the cooler is taken off the GPU without a censoring. It’s against my religion… or something.

            • Jigar
            • 13 years ago

            I personally find it offensive when reviews show ATI losing .. BAN THIS SITE 😉

      • crichards
      • 13 years ago

      Yes, that thread has become a bit out of hand hasn’t it.

      The original point was valid though (if not well put) – putting something alien to the reader next to something alien to the reader as a point of reference is useless to that reader.

      I’m from England, saying to me that something’s about as big as a quarter doesn’t help me much.

      There must be something more universal?

    • sigher
    • 13 years ago

    I wonder if they now drop driver updates for anything before the r600, like nvidia did after releasing their dx10 card.

      • Krogoth
      • 13 years ago

      Not likely not all. It was very recent that DAAMIT sunset support on their DX7/DX8 era hardware. I expect support on their DX9-era hardware for quite some time yet. By the time it sunsets. DX10 would have hit mainstream for both hardware and software useage.

        • derFunkenstein
        • 13 years ago

        DAAMIT *sigh*

    • TheTechReporter
    • 13 years ago

    Well, I’m glad that ATi finally has _something_ to answer nVidia’s 8×00 series.
    On the other hand, compare the HD 2900 XT to the 8800 GTS 640MB (non-OC’ed) and the ATi card starts to look pretty bad — higher power consumption (and greater heat, no doubt), louder fan, and equal or worse performance for the same amount of money, and only 6 or 12 months late to boot! Wow! 😛
    I guess it’s a good choice if you’re _really_ into HD.

    Also, I noticed a lot of posts about using a quarter for size comparison. People actually care about that? Really? Wow, I have really overestimated people as a whole.
    It does seem rather obvious to just use a ruler instead of a quarter, though. Then again, that’d probably start an argument about British vs. Metric units…

    • albundy
    • 13 years ago

    well, at least the card is not extremely expensive like NV’s Ultra cards, but it is very very inefficient. With the amount of resources this company has, they should have made an effort to reduce power and noise (like the Powercolor X1950 Pro style passive cooling). Very nice and complete review, though. Performance wise, it is running neck and neck with the current GF, so its really a step back, considering that you can get a 8800GTS for about $260 on newegg. Lastly, I’d swear that I can smell the stench of shader model 5.0 around the corner, so I’m gonna hold off on this one.

    • shalmon
    • 13 years ago

    pardon me if i missed it…i wasn’t intent on reading ALL 145 posts, or every single word of that article. I have no direct experience with vista and amd or nvidia’s vista drivers, but i have read an awful lot of negative issues with nvidia’s vista offerings. Anyways, an intangible i think might not be reflected in the interview is amd’s vista drivers. To me, the performance parameters are close enough…give or take here or there. If the drivers are that much better, then hey, that tips the scales for me.

    And although i do agree with another poster’s comment that they’ve had 6 months to mature their drivers, i think it’s reasonable to assume that they’re still relatively young and have some hidden potential to tap into yet.

    • Rectal Prolapse
    • 13 years ago

    #123 – Ah you’re probably right. I can’t wait for the followup.

    • donkeycrock
    • 13 years ago

    I think that AMD is in trouble, they delayed the R600 because they new it sucked, i bet the same is true for the benchmarks of their quad cores. once nvidia releases their 65 nm chips and intel releases bearlake and new procesesors, bye bye amd.. for a bit.. hopefully they can recover

      • Stijn
      • 13 years ago

      Maybe you’re right, but I’ve never seen them make statements that the R600 would be faster then the G80.

    • Skyline57GTR
    • 13 years ago

    Then I’ll wait for they would fix their ATI driver alot of issues to improve.

    • wierdo
    • 13 years ago

    The performance variation based on driver versions is quite fascinating it seems:

    http://www.computerbase.de/news/treiber/grafikkarten/ati/2007/mai/ati_radeon_hd_2900_xt_treibervergleich/

    Some of that was mentioned in this analysis, explaining how this architecture is more dependent on drivers than usual:

    http://arstechnica.com/news.ars/post/20070514-amd-launches-the-hd-2000-series.html

      • sigher
      • 13 years ago

      WOW, weird that they didn’t do an effort to make sure that driver was included originally, worse PR department ever?

        • wierdo
        • 13 years ago

        Yeah really… I mean we’re talking going from 15 to 40fps in one of the cases, that’s ridiculous.

          • format_C
          • 13 years ago

          8.37.4.2 reduced image quality in AA. That explains the higher fps.

            • shank15217
            • 13 years ago

            lower quality AA? or more efficient AA?

            • wierdo
            • 13 years ago

            link?

            • format_C
            • 13 years ago

            It’s explained inside the review you linked to:
            “Allem Anschein nach hat ATi aber Bildqualität zu Gunsten von Leistung geopfert, indem man ein wenig am AAA gewerkelt hat.”
            Translated:
            “Apparently ATI sacrificed image quality for performance by making some changes to AA.”

            • wierdo
            • 13 years ago

            ahhh, nice catch. But can that possibly account for that huge of a difference if true? 15->40fps is a big jump.

            • format_C
            • 13 years ago

            According to 3DCenter, the 4.0 drivers suffered from a bug in the adaptive AA mode that cost a lot of performance. In the 4.2 drivers they replaced the AAA algorithm with another one called “EATM” (multisampling) which has compatibility problems with games.
            Maybe they went back to the old AAA method in the 4.3 drivers. That would explain why this driver performs almost identical like the 4.0 driver.

            Source (last 2 paragraphs):
            http://www.3dcenter.de/artikel/radeon_hd_2900_xt/index2.php#2

      • crichards
      • 13 years ago

      Good lord! Could it be true?

      If it’s really the case (as some have supposed) that it’s poor performance is merely down to driver issues. Then…then….<head explodes>

        • provoko
        • 13 years ago

        slow down, read the article, it’s reduced image quality that gave the boost. they’re unstable and broken drivers. the card will never be that fast. maybe in directx10

          • crichards
          • 13 years ago

          I know, I was being over-gleeful for effect.

          However it is the biggest difference I’ve seen a driver version make to performance, like ever.

    • Rectal Prolapse
    • 13 years ago

    No review of the onboard audio? How come? Doesn’t this card have HDMI audio?

      • Sikthskies
      • 13 years ago

      Good point. Maybe they’re saving it for a 2900xtx review or one of the lower end versions which would be more likely to be in a htpc

        • sigher
        • 13 years ago

        The audio was mentioned, it’s for HDMI HDCP compatibility only and has no other connections.
        I don’t think there is that much more to be said about it really, I’m sure the sound is good enough for HD-DVD/BluRay movies and that’s what it is for, hooray for ATI’s support of DRM I guess.

    • willyolio
    • 13 years ago

    i’ll wait for reviews on the midrange line. at least they’re single-slotted and reasonably priced.

    high-end video cards are the embodiment of waste: wasted power, heat, money, space/expansion slots. i will never buy a dual-slot video card on that principle.

    • marvelous
    • 13 years ago

    It’s quite simple

    ATI designed their high end card with 16 rops and 16 textures with high clock speed. While Nvidia’s flagship card has 24 rops and 32 textures with 20 % lower clocks.

    What good is a 512 bit memory controller when you don’t have enough fillrate to feed it? Nvidia’s FX line was very similar to this situation and so was Matrox parhelia.

    Fx5800 4Rops vs Radeon 9700pro’s 8 Rops.

      • Krogoth
      • 13 years ago

      You got it mixed-up. It is the G80 that has more and higher-clocked steam processors, while R600 has less steam processors but they can do more per pass.

      NV3x and R3xx fill-rate trade-off analogy still holds.

      BTW, TMU, ROPs and shading units are no longer accurate ways to measure the internal design of DX10-class GPUs.

        • marvelous
        • 13 years ago

        ROPS and TMU units are no longer accurate but it still determines how much fill rate the video card has.

        Just look at multi-textured fillrate test of the 3dmark. 2900xt provides barely match 8800gts in this department. As a matter of fact I still think ROPS, clock speed and TMU matters.

        https://techreport.com/reviews/2007q2/radeon-hd-2900xt/index.x?pg=4

    • FubbHead
    • 13 years ago

    I’m getting fed up with the PC graphics “culture”.. The manufacturers rely too much on specific optimizations in drivers, which always seem to break something else (which game developers most of the time need to counter), and their drivers just keep growing. And the power consumption just keeps growing.. Bleh…

    • Chrispy_
    • 13 years ago

    Sad news. All indicators say that this was supposed to be an 8800U killer but failed. Massive price reductions to cut their losses and recuperate some R&D effort is what this looks like.

    Sure, there are some good points, and some bad points and at the $400 price point the card is average value.

    However, playing catchup, being on the back foot, being the underdog, having just been swallowed by the underdog of the CPU world is depressingly bad for competition, which means you and I, the end-product consumer will see some stagnation in the market for another generation, simply because this will be another R300 vs NV3x boatrace but with the clock wound forwards another 4 years.

      • Inkedsphynx
      • 13 years ago

      Because neither team has ever shown in their history that they’re capable of taking an underperforming card, refreshing it, and coming out with a great product.

      *cough*1800xt-1900xt*cough*

      I swear. Reading this site sometimes makes me think of a boulevard lined with doomsday-sign holding bums.

    • sigher
    • 13 years ago

    What’s the deal with putting a US quarter next to the die? you do realize that lots of foreigners read you site (see your own poll and server stats) and we don’t get in contact with US quarters much do we..
    Why don’t you find something else that we all know, like a USB connector for instance, all computer users know the size of that, something that’s a bit more universal might be more appropriate.
    Ironically you do list the die size in millimeter, is that the idea? quarter for the “dumb americans” and actual metric size for the rest? 😛
    Just wondering.

      • Shark
      • 13 years ago

      First, I’m sure the quarter was not done based on demographics.

      Secondly, die sizes are always done in metrics.

      • astrotech66
      • 13 years ago

      Geez, settle down … don’t make a big issue out of nothing. Putting the quarter there is just a quick way to show the size of the die. I don’t think it’s cause to create an international incident. And someone who can’t even capitalize words properly shouldn’t be making references to “dumb americans [sic].”

        • sigher
        • 13 years ago

        Of course the quarter was not ‘an insult’ or a ‘patriotic’ act, but it is a bit thoughtless on an international site to pick a US (or whatever area) quarter for that is just not a universally recognized object.
        Same applies if say an international oriented european site would use a 20ct euro coin, how the hell would the average american take any indication of size from that?
        As for not capitalizing country names, I do that on purpose, thanks for noticing, obviously my firefox alerts me every time and I don’t need you to do that.
        Oh and the use of “” and the 😛 should make it clear to even the dumbest of americans that I was joking (I’m not this time however because you clearly made a case that there’s no need to classify such things as joke)

          • astrotech66
          • 13 years ago

          I have a good sense of humor and am pretty good at detecting sarcasm. I read your post several times and it didn’t seem like it was made in a spirit of fun. But if you say otherwise, okay. I didn’t realize that 😛 (sticking your tongue out) was the universal symbol for joking.

          And as far as your capitalization goes, it wasn’t just “americans” that I was referring to. It was words at the beginning of a sentence, or in your case, sentence fragments.

            • sigher
            • 13 years ago

            Every sentence started with a capital, seems you need glasses.

            • astrotech66
            • 13 years ago

            y[

            • sigher
            • 13 years ago

            I guess you are uptight and new to the internet, go major in English and shoot up a school or something, have fun.

            • astrotech66
            • 13 years ago

            y[

            • sigher
            • 13 years ago

            Typical to assume I’m european and typical to think everybody on the world is infected with the typical american insanity of their ‘patriotism’.
            Europe can go to hell as far as I’m concerned, as can my country, what about that eh.

            • sigher
            • 13 years ago

            Oh and someone who says about himself he has a good sense of humour is most often wrong about that.

          • Sargent Duck
          • 13 years ago

          Seeing as how Scott is American, with a handful of American quarters readily available to him, perhaps you should send him some Euro’s. I’m sure he’d be more than happy to throw an extra picture up if you paid him 🙂

            • sigher
            • 13 years ago

            Seems these comments are full of 100% mentally challenged people, I’m surprised you managed to learn to read at all.
            ” I didn’t realize that 😛 (sticking your tongue out) was the universal symbol for joking.”
            What did you think it meant? that I want fuck you?

            • Jigar
            • 13 years ago

            Mind your language moron. This is a family site alright.

            • sigher
            • 13 years ago

            OMFG you used the m word, now my kids are ruined for life!
            I knew it was a mistake when I sat down with the family to read techreport comments :/

      • Sargent Duck
      • 13 years ago

      It doesn’t really matter, as long as the quarter used in each GPU picture is the same size. You can say “hey, this die is as big as a quarter, but the x1900 die is only 3/4 big”. It’s merely a point of refrence. Just like we measure the Earth’s rotation around the sun, I’m sure there are some aliens reading our history books that are complaining we measure time in hours, when we should be measuring our time in something more universal, like the decay rate of Uranium. Don’t get your knickers in a knot. And just for the reference, I’m not American.

      • Dposcorp
      • 13 years ago

      sigher, i am with you brotha.

      This site is just a bunch of fan bois.

      ATI Fan Boys.

      Nvidia fan Boys.

      And now, Scott has showed his true FAN BOY soul, buy using a American Coin.

      I always knew he was partial to US Currency, and I am sure he gets kick backs from the Dept of the Treasury / US Mint for doing that.

      whisper<don’t worry though, sigher, because as soon as the NDA lifts on the new euro coins, and how well the over–[clock]- spend, the Euros will be back on top……….trust me, i know, I saw the prelim benches on QuickenMark2007.

      o[

        • sigher
        • 13 years ago

        I got the sarcasm, no need for hidden messages for me, although I know it’s sometimes damn hard for people to get sarcasm obviously.

          • wierdo
          • 13 years ago

          sarcasm is not easy to detect without the non-verbal component of communication that comes with it. A smiley doesn’t always clarify that, for it could be misinterpreted as other things, like ridicule or grinning etc… can’t blame others for not being inside your head…

          oh and 😛

            • derFunkenstein
            • 13 years ago

            thing is, with sarcasm detection/non-detection you have people INSIDE YOUR HEAD. Psychoanalysis. I got in trouble for that once.

        • provoko
        • 13 years ago

        Dposcorp is on the money, this was TR’s original photo for the review:

        http://i1.tinypic.com/52dzslv.jpg

        Notice the over use of american flags and the words made in usa on the gpu it self. Hahaha.

          • Stijn
          • 13 years ago

          LOL whahaahaha =D

          • Krogoth
          • 13 years ago

          ROFL, that couldn’t be further from the truth.

          There should be Chinese flags on those chips!

        • donkeycrock
        • 13 years ago

        i think he has a good point…. i think maybe a r[

          • sigher
          • 13 years ago

          This sub-thread is no longer about the point but purely about messing with me.

          • DASQ
          • 13 years ago

          That would be FAR worse.

          There are HUNDREDS of different sizes of paper clips. I don’t think there is a ‘standard international’ size of paperclip.

          “Official Paperclip Size of Earth”

            • sigher
            • 13 years ago

            That’s why I suggested a USB connector, every computer user knows them and they have a globally standardized size or else they would not fit of course.
            I bet this thread would have been more fun if people tried to think up objects to use though, rather than take a run at me as it is now.

            • DASQ
            • 13 years ago

            I honestly don’t care.

            If the Scott the Reviewer lists the physical dimensions (21mm x 20mm if I recall correctly) and I’m too goddamn lazy to find a friggin’ ruler, that’s entirely my fault.

            • sigher
            • 13 years ago

            Well because as you say he DID put the dimensions with it and they were metric so not too known to an american audience I added the what I meant as a joke about the proverbial “dumb” (notice the quotation marks) americans, because of the typical american quarter and the typical non-american metric precise size.
            But my quotation marks and the added tongue smiley didn’t get noticed I fear, or some decided to pretend to not notice might be more correct, and it ended up a bit ugly just while I was already in a bit of a mood and with a rather pronounced headache.
            I’m sorry about the mess.

            Factoid: a long time ago I already pointed out in relation to another article that the quarter wasn’t a very universal object to relate size.

            • DASQ
            • 13 years ago

            I didn’t even read your original posting.

            He should have put an Oreo next to it. It transcends language, and differs in physical appearance to it’s miniature and snacking breatherin.

            Long live the Oreo!

      • provoko
      • 13 years ago

      dumb european gets upset over a quarter

        • sigher
        • 13 years ago

        Why don’t you elect someone from the Bush family for president or governor, just to prove you are so smart and of high intellect, if non is available elect some dimwitted actor that can’t even speak your own language, I’m sure that will show the world america is full of extremely intelligent people, can’t fail can it.

          • sigher
          • 13 years ago

          Incidentally, I know you are trying to troll me, but I bet you guys are easier to troll back.

      • sigher
      • 13 years ago

      It’s a nice looking quarter I have to say, I like the classical look and the silver color.

      • DrDillyBar
      • 13 years ago

      The quarter was there for sex appeal. duhhh

      • sroylance
      • 13 years ago

      You’re right, I think he should use a 4×2 lego plank instead… Is there anything more universal among geeks than legos?

    • rythex
    • 13 years ago

    Wow, ATI fanboys (and Nvidia fanboys included way back in the 58xx days) sure don’t know when to accept reality..

    Drivers will probably not help this card.. I’m pretty sure ATI had enough time to tweak their drivers from their original release date till now

    Also, I’m pretty sure ATI isn’t ahead of their time either with their GPU design since it’s probably overengineered to the point where it’s only good for theoretical performance or in certain situations which don’t apply to 98.99999941223% of most games / applications.

    I smell a big fat flop.. it almost looks as bad as the Pentiumn4 release

      • Shark
      • 13 years ago

      I’m sure there driver team is quite split right now with playing catch up with Vista as well as trying to roll out new drivers for this set of hardware.

      I would wager there is still a fair amount of room for improvements on both ends.

    • BoBzeBuilder
    • 13 years ago

    VIA has fanboys?!

    • Illissius
    • 13 years ago

    AMD is ahead of their time. This is probably their biggest problem. Remember the X1600XT? (And perhaps dubiously, the 9600XT vs 5900XT)? For the past few years, they’ve consistently sacrificed performance in current applications for (presumably) future ones. The trouble is that people want to play their games /[

      • swaaye
      • 13 years ago

      I don’t think X1600XT’s dreamily super shader-oriented loads have even appeared yet. 7600GT is certainly still the better card.

      And I wouldn’t exactly say that R4x0 were future proofed compared to NV4x. NV was more forward looking there. But roles switched with R5x0 and G7x.

      R300 was less impressive (feature-wise) than NV3x, believe it or not.

      Feature-wise:
      R100 > NV1x (well, this is sorta arguable)
      R200 > NV2x
      R300 < NV3x
      R4x0 < NV4x
      R5x0 > G7x
      And who knows with these newest cards, honestly. They are so programmable….

      • rythex
      • 13 years ago

      yes, they’re really ahead of their time that they’re releasing their product half a year later and still don’t beat the competitions upper mid range card..

      • Deli
      • 13 years ago

      The 9600XT was supposed to be placed against the 5600/5700. 9800pro was the competitor to 5900XT

      but fastforwarding to now, this launch is quite disappointing. Though the crossfire #s look good, but i shudder at the thought of the heat/power requirements.

        • Illissius
        • 13 years ago

        Not quite. It went something like this, iirc:

        9600 Pro vs 5600 Ultra
        9600 XT vs 5700 Ultra
        9600 XT vs 5900 XT

        In other words, Nvidia introduced the 5900 XT in the same price category as — and hence, as a direct competitor to — the 9600 XT, once it became clear that the 5700 Ultra wasn’t going to be enough. The 9800 Pro was positioned against the 5900 Ultra.

        Anyways, the X1600XT is a much better example, but this one also came to mind. The situation was reversed in the 6800 versus X800 era, except those two were also very comparable in terms of performance.

          • DASQ
          • 13 years ago

          Yeah I remember when nVidia released their “XT” cards that were of distinctly lower performance, a surprisingly underhanded move by nVidia.

          I suppose they had to make up for the suck of 5xxx somehow. Awful, awful series. I got a FX5600SE for free and I went back to my 9000Pro. Sure the 9000Pro could only do DX8.1, but what’s the point of being able to render DX9.0 when it incurs a painful performance hit?

    • matnath1
    • 13 years ago

    As usual, Scott’s articles are much more enjoyable to read than anyone elses’

    Didn’t I read somewhere that this board was supposed to include Physics Processing as well? What’s up with that???

      • SPOOFE
      • 13 years ago

      ATI cards were supposed to include Physics Processing for two generations now.

    • provoko
    • 13 years ago

    Wow, ATI fanboys to the resuce? Sorry but $399 for sh!t performance is a ripoff no matter how you look at it. It can’t beat the 8800 GTS 640 which costs $329 now. It’s new AA modes are blurtastic so they’re useless. And the card makes Al Gore cry at night; for all that wattage you would think it would perform better, but it doesn’t.

    As for hoping the 2900 drivers will justify an almost $100 ripoff over an 8 series card, just keep in mind that Nvidia’s drivers are new too and are being optimized often. So the 2900 and 8xxx with fully optimized drivers will keep the same ratio of performance.

    The fanboy’s only hope is the mainstream $150-$300 HD 2400 and HD 2600 cards. They might be better, but I doubt it, looking at AMD’s numbers for those cards, they’re horribly stripped down versions of the HD 2900.

    Forgot to mention, Awesome TR review. =)

      • Krogoth
      • 13 years ago

      They are both terrible choices for sensible buyers. 8xxx and HD 2xxxx aren’t that impressive over the previous generation without spend a ton of $$$$.

      I suspect that the law of diminishing returns is becoming to creep up more and more.

        • NeXus 6
        • 13 years ago

        $329 ain’t a bad price for card that blows away the 7 series cards in IQ and performance. Being able to max out the eye candy in most games is definitely a plus and was worth the cost to me.

    • rythex
    • 13 years ago

    I think some ATI fanboys just curled up and died today..

    (I’m not a nvidia fanboy.. I just laugh at brand fanboys in general)

      • eitje
      • 13 years ago

      i’m a VIA fanboy. 🙁

        • d0g_p00p
        • 13 years ago

        I am a Trident fanboy, TGUI USERS UNITE!!!

          • wesley96
          • 13 years ago

          TGUI FTW. All Hail Trident.

            • verpissdich
            • 13 years ago

            Meh… you know Cirrus Logic pwns Trident. You poor pitchfork guys always living in denial, well at least until the mid 90s! 😉

            • TO11MTM
            • 13 years ago

            Cirrus logic was indeed Awesome back in the day…

            • swaaye
            • 13 years ago

            FYI, GD5426 obliterated 8900C. Good luck winning an argument there, punks!

      • DukenukemX
      • 13 years ago

      We still have hardware fanboys? I thought they all died in a horrible fire after the Geforce FX and Pentium 4 fiasco?

        • Jigar
        • 13 years ago

        So have you seen this movie.. The return of the black troll fanboys ?? 😉

    • maroon1
    • 13 years ago

    #78, 2900XT has 320 stream processors and it operates at 742MHz. 8800GTX has 128 stream processors but it operates at 1.35GHz !!!!

    2900XT has only 16 Render Output Pipelines and only 16 Texture mapping units

    8800GTX has 24 Render Output Pipelines and 32 Texture mapping units

    So, no R600 is not ahead of its time, not at all !!!

      • Chryx
      • 13 years ago

      Methinks you completely misunderstood what was meant.

        • SPOOFE
        • 13 years ago

        If so, and if you didn’t, you should probably explain.

          • Chryx
          • 13 years ago

          the 2900XT has been designed for shader centric workloads, and actual things we can do with the cards aren’t that shader centric, leaving it floundering around spinning its wheels..

          Same thing with the X19x0’s, only moreso

            • SPOOFE
            • 13 years ago

            I thought the whole point of a unified architecture was so that “shader-centric” would become an obsolete term.

      • Nullvoid
      • 13 years ago

      No doubt completely false thinking but:

      1350/742 = 1.8

      320/128 = 2.5

      So yes, the ati stream processors run a lot slower, but in theory shouldn’t the sheer number of them make up for it?

        • Lord.Blue
        • 13 years ago

        If they were being utilized intelligently, yes. This I think will improve drastically with drivers. Remember we are still on beta software for both the 8800 (come on nVidia, it’s been how long?!) and the 2900. And that’s in XP, not to mention Vista.

      • sigher
      • 13 years ago

      Oh you read that powerpoint nvidia released to snow on ATI eh, might have mentioned the source though, but then again nobody does.

    • JoshMST
    • 13 years ago

    I can say I am not disappointed in the HD 2900, I think it delivers a lot of performance and features for $399. Yeah, its late, and the drivers aren’t mature, but it’s still a lot of card for the price. Considering in a couple of weeks it will be in the $370 to $380 range, its a pretty decent value.

    Heat and power are definitely concerns, but I like the new AA options. It also seems like they worked a lot on CrossFire, it is good to see it beat up on SLI for a change. It would be a big tossup for me though between the HD 2900 XT and an overclocked 8800 GTS 640 if I were in the market for a video card at that price point. But at least we have competition in this space for a change!

    • totoro
    • 13 years ago

    Fantastic article with top notch information as usual from TR.
    Good job!

    • sigher
    • 13 years ago

    Maturing drivers? BS, how long would they need for that? They’ve been delaying the R600 for ages and had a year to ‘mature’ drivers.
    Besides, it seems their driver-writing talent is moving backwards if I observe their drivers for the X— line; I’m still on 7.1 because 7.2, 7.3, and 7.4 lack things that were present in 7.1.
    They lost the game. The R600 drops horribly in performance when you enable any AA or AF (and the AF isn’t much better than the R580’s (HQ), but with a larger loss of performance), the new AA modes are a blurry horror to many, the thing needs more power than two Nvidia cards and a CPU combined (I overstate it, yeah), and it needs yet another kind of connector. I’d say this is a huge disappointment; they are keeping in the race with this, but limping behind.
    I just hope the new AMD CPU won’t repeat this performance; that would be really, really sad.

    http://www.computerbase.de/news/treiber/grafikkarten/ati/2007/mai/ati_radeon_hd_2900_xt_treibervergleich/

      • sigher
      • 13 years ago

      After this negativity I’d like to add that it does seem to have better vertex power than the 8800 GTX from what I’ve seen. But then again, if you read about the microcode speedup: “The result? A claimed reduction in CPU overhead of up to 30% in DirectX 9 applications, with even less overhead in DX10.” and then see that it’s only 20% faster at best (with no AA/AF) than an 8800 GTS, you start to scratch your head even more.
      But then there’s this statement: “AMD’s compiler—a real-time compiler built into its graphics drivers—will have to work overtime to keep all five of those ALUs busy with work every cycle, if at all possible.”
      So one line says a reduction of CPU cycles, another an increase; no wonder it doesn’t help.
      It mentions that the tessellation stuff is something the coders have to implement. Well, nice, but ATI put some good stuff in their chips before that coders had to code for… and none did, AFAIK :/
      For some reason nobody wants to use proprietary ATI code, not even as an optional path. Perhaps they should provide sample code? But things like their RenderMonkey etc. don’t get updated much, so that part of their support seems a bit dull, to say the least.
      Then there’s the…

    • PRIME1
    • 13 years ago

    They need to drop the price of this card by $100. It’s not quite as fast as a GTS 640, and those have been selling for as low as $350. I couldn’t see recommending that anyone pay more for less.

    This card fits in between the 320 and the 640 and should be priced accordingly. Although, looking at the benchmarks, the 320 did pretty well, and that has been on sale for $260 in recent weeks.

      • shank15217
      • 13 years ago

      It certainly matches the 640 in performance; I’m not sure which benchmarks you were looking at.

    • Nelliesboo
    • 13 years ago

    So it is what I thought then… the $230-260 8800 GTS is king. (Price / Performance)

      • BlockheadBrown
      • 13 years ago

      If I could find an 8800 for $230, I’d buy one today.

    • leor
    • 13 years ago

    It’s the curse of the Xbox! Nvidia’s crap FX 5800 GPU was hot, loud, very late, and a crap performer, and it came directly after they made the Xbox GPU!

    FX 5800, HD 2900… they even kinda sound alike!

    • derFunkenstein
    • 13 years ago

    *yawn* wake me if the mid-range parts ship in July. I’ll probably be in the video card market around then.

    • Peldor
    • 13 years ago

    All I’ve got to say is…

    • flip-mode
    • 13 years ago

    I’m gonna have to recant a remark I made below that this is an acceptable offering at $399. Performance-wise it might be acceptable at $325-$350, but then you take into account the power consumption, and you have to knock off another $50 to make up for your higher power bill. So this card needs to be priced at least $100 lower than it is now. What happened to all that noise AMD was making about redefining the price/performance landscape?

    Boo.

    The morale of an AMD fan is in the crapper right now. <– At least it’s been confirmed that Stars family chips will plug into AM2. Yippee: the mobo I have is an overclocking monster (except for the crappy limited RAM voltage). That, and the fact that I’ll never spend north of $200 on a graphics card anyway, so while this card is a disappointment, its impact is academic rather than practical.

    • VirulentShadow
    • 13 years ago

    Way too small performance gains for how late it is. Interesting feature set, and some good performance in some games and 3dmark… But it looks like AMD/ATI was focusing more on Crossfire power (ignoring power consumption) and their own strong points (Half Life), which isn’t necessarily a bad thing, just way too late…

      • poulpy
      • 13 years ago

      I don’t see the point of saying “for a six-months-late product it doesn’t perform that much better”; it’s a product on the market now, period. Yes, the R600 is late, but apart from ATi/AMD, who could have used its revenue earlier, who cares? They put a hold on it for various reasons, but not to make it an R600+, IMO.

      As a consumer, this product (2900XT) fits nicely alongside NVIDIA’s lineup and will put some pressure/competition on them, which is good for us. We just have to hope that the rest of the lineup will match/outperform NVIDIA’s offerings too.
      NVIDIA will then counter with a refresh, and then ATi will have to get things together for its own refresh and deliver in due time in order to make money.

      NVIDIA got into a situation like this before, and now the tables have turned; they could/will turn a few more times.

        • VirulentShadow
        • 13 years ago

        Just saying, considering how quickly technology evolves and surpasses the previous generation, releasing a card with virtually the same or less performance than others, this much later, is a bad move on AMD’s part.

          • poulpy
          • 13 years ago

          – releasing nothing is a bad move
          – releasing clearly under-performing parts is a bad move
          – releasing non price competitive parts is a bad move

          This is not a bad move, as the card is price- and performance-wise on par with the other manufacturer’s.

          The next round of graphics cards isn’t going to be a new architecture but a refresh, so ATi will need to catch up on the refresh like they did with the X1900s, which were really good products. And then not mess up the next generation, to avoid losing too many $ again.

          But anyway, even though NVIDIA enjoyed a six-month head start, the 8800 and HD 2900 are both on the same market now, were conceived roughly at the same time, and should be compared as such, IMO.

    • lethal
    • 13 years ago

    “For its part, AMD is talking about potential for big performance gains as its compiler matures”. I’m almost 100% sure that Nvidia said those exact words in the FX era 0.o . And their optimizations weren’t pretty. I’m sure both AMD and Nvidia learned from that “misstep”.

    • maroon1
    • 13 years ago

    #49, According to these benchmarks, the 8800 GTS beats the HD 2900 XT in most cases:
    http://enthusiast.hardocp.com/article.html?art=MTM0MSwxMiwsaGVudGh1c2lhc3Q=
    http://www.vr-zone.com/?i=4946&s=12

    And no, the 8800 GTS is cheaper than the HD 2900 XT; look here:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814130071
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814102095

    The HD 2900 XT also produces more heat and consumes more power than the 8800 GTS.

      • totoro
      • 13 years ago

      He said the GT…

    • Krogoth
    • 13 years ago

    R600 is another R520.

    It seems like it was built with DX10 in mind, not DX9. The bias toward shading operations and general stream computing makes it painfully apparent.

    G80 took the safer approach of maintaining strong texture processing performance for older titles, and that’s no thanks to its much higher-clocked stream processors. Nvidia is waiting until DX10 shows up before the next refresh, which will make G80 and R600 look pathetic in DX10.

    Neither the 8xxx nor the HD 2xxx series are compelling options for users on last generation’s architectures, unless they’ve got huge monitors and want bleeding-edge performance no matter the cost.

    • eitje
    • 13 years ago

    no folding client yet?! 🙁

    • Fighterpilot
    • 13 years ago

    With the GTS at basically the same price, the HD 2900 XT outperforms it, has way more features and better image quality, and is likely to get faster with better drivers.
    A GTX costs about $150 more.
    That’s a lot of money for some extra FPS.

    • seeker010
    • 13 years ago

    “AMD’s own worst-case scenario, the float4 MAD serial with dependencies”

    shouldn’t that be float MAD serial with dependencies? float4 should be a barn burner on R600.
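
    To illustrate the distinction (a hypothetical sketch of slot occupancy on a 5-wide unit, not AMD’s actual scheduler): a chain of dependent scalar MADs can only occupy one of the five ALU slots per instruction word, while a float4 MAD occupies four, which is why the serial-scalar case reads as the worst case.

    ```python
    # Hypothetical illustration of ALU-slot occupancy on one of R600's
    # 5-wide units. Dependent scalar MADs can't be packed into the same
    # instruction word, so only one slot does useful work per clock;
    # a float4 MAD fills four of the five slots.
    SLOTS_PER_UNIT = 5

    def slot_utilization(independent_ops_per_word):
        return independent_ops_per_word / SLOTS_PER_UNIT

    print("serial scalar MADs:", slot_utilization(1))  # 0.2 -> 20% busy
    print("float4 MAD:        ", slot_utilization(4))  # 0.8 -> 80% busy
    ```

    A scalar pipeline like G80’s stays busy in either case, which is presumably why dependent scalars, not float4, show up as R600’s worst-case scenario.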

    • lyc
    • 13 years ago

    excellent review, the writing style and attention to detail is always a pleasure 🙂

    i have an 8800 gts, but if i were buying a gaming gpu today it’d be the 2900 xt, entirely on the basis of its awesome antialiasing capabilities. however, i don’t really play games, and CUDA/linux support sealed that deal for me…

    • wierdo
    • 13 years ago

    Omg drugs are bad, I’m getting some patches.

    That was an interesting read… the general impression I got was that the nVidia solution gets higher avg frames but lower minimum frames, so the ATI solution is more resistant to frame drops in some games relative to nVidia’s solutions.

    The new AA options are also lookin’ good indeed, and they’re a major selling point for me personally, since they come at a reasonable performance cost for that much improvement.

    The power consumption is pretty high though, ouch… they also need better cooling, because the fan noise needs to go down some more. I’m a fan of quiet PCs, so this kills the deal for me, I’m afraid… hope they fix that weakness.

    Well, actually… I’ll probably be looking at their midrange stuff, and I’m hoping someone will manage to come up with a passive cooling solution there. So I guess the noise in the performance segment is a problem in a different market segment than the one I buy from anyway; I’m sure many performance fanatics can tolerate these flaws for the sake of framerates at all costs 😀

    AMD needs to hurry up and move this core to a new process; it seems they’ll need to get the power/leakage thing reduced ASAP. nVidia’s no doubt gonna react to this competition rather swiftly.

    For a high-end-ish product, the price tag is also relatively reasonable, so that’s something going for it.

    • Jigar
    • 13 years ago

    Sorry double post …..

    • danazar
    • 13 years ago

    I’m getting the impression that the key to performance with this card will be refining the drivers, which as of yet are holding it back vs. the more mature 8800 series. Looking at the results between the first release and new alpha drivers where they were tested in the review, there’s obviously more power in this card left to tap.

      • shank15217
      • 13 years ago

      No one is doubting the theoretical power of this card; I bet this is a stream processing monster. The R600 is ahead of its time. It’s a very biased architecture, but when it completes its evolution it will definitely be a faster overall architecture.

        • SPOOFE
        • 13 years ago

        “Definitely” is one of those funny words that sometimes shouldn’t be used so liberally.

          • shank15217
          • 13 years ago

          Performance in graphics applications can be predicted a lot more easily than for general-purpose code. A die shrink with better drivers will improve the performance of this architecture.

        • rythex
        • 13 years ago

        Yes, faster overall in theoretical situations… which apparently does nothing for practical games.. <rolls eyes>

    • Bensam123
    • 13 years ago

    It’s pretty cool how the benchmarks were split up while explaining parts of the processor, kudos.

    I don’t like the whole tent filter thing. This is just like the blurring that games like Vegas overuse. Instead of properly detailing things, they just blur over the top of it so you can’t tell what looks good and what doesn’t.

    I guess if you sit further back you can’t tell either way, but when you’re trying to pay attention to details and it looks like you need glasses, it’s a bit too much.

    IMO, and according to just about everyone I’ve talked to IRL, blurring is the abomination of the gaming industry. It should never have come around, it should never have been around, and it should just die out.

    Not to be mean, but maybe in 20 years I won’t care if I’m staring at an oil painting as long as the jaggies are gone either. Right now, though, I’d rather have jaggies than a blurry screen.

    BLURRING MAKES ME ANGRY!

    These results do look dismal, but I too think that in six months things will change a lot. Hopefully it won’t be optimizations that make me ANGRY.

    If you’re ever looking for a good scene to test the texture filtering, start up a game of CS:S. I noticed something odd the other day when I was playing. Certain CS:S servers have death beams; when you die, they point at the person that killed you. 99.9% of the time, when you look at one that isn’t running parallel to your view, the texture blurring becomes VERY apparent, as in one end looks like a line and the other looks like a straw that was flattened out.

    Company of Heroes also showed signs of this. If you look at a big change in the terrain, i.e. the side of a mountain that isn’t uniform, you should be able to see a distinct change in filtering between something that looks farther away than it really is and something that looks closer. The algorithm messes up, and part of the mountain is blurry while the other part is very clean and precise.

      • Anomymous Gerbil
      • 13 years ago

      Soooo….. you don’t like blurring?

        • thecoldanddarkone
        • 13 years ago

        He’s definitely not the only one; I don’t like blurring either.

      • Sanctusx2
      • 13 years ago

      +++

      I love Vegas, I love the engine, but dammit, all the blurring drives me absolutely insane. There are one or two levels in particular early on that had me constantly squinting, unsure if that was an enemy in the distance or a piece of debris. I didn’t learn about the hacks to disable it till about halfway through. It was a godsend; everything became crystal clear.

      But if Damage is vouching for it, then hopefully it’s not as bad as Vegas. The effect on the leaves and the sign was quite surprising, but as he mentioned, I think I’d have to see it in person to make a final judgement call.

        • Bensam123
        • 13 years ago

        I don’t know, man… when you look at the picture comparing leaves between the tent filters, they look pretty blurred out, whereas the one without the filter is pretty crisp and clean.

        We can only hope though.

      • Krogoth
      • 13 years ago

      Blurring = realism, if done correctly.

      I am sorry, but your eyes will blur objects when they try to resolve something at a considerable distance, for example a man-sized object that is 800m away or farther.

        • lyc
        • 13 years ago

        there is a distinction to be made between blurring and doing a true integration over some area. for example, a lot of ppl (thinking demosceners here) back in the day used to just blur their image to get rid of the jaggies, but then it just looks blurred… the same goes for blurred shadowmap sampling versus proper sampling of area lights, and for depth of field (to which you were referring) and simply blurring the image based on distance… neither of those operations respect occluding geometry.

        in any case, i think we’re still a way off properly computing those kinds of effects in games (many of which simply can’t be done by rasterisation and need ray tracing), i mean look how long decent (filtered) antialiasing took…

        • Bensam123
        • 13 years ago

        Other than distance, can you name one other example of blurring IRL? Focusing on a single object doesn’t count either (a new “feature” of the new Crytek engine coming out). You automatically blur out objects around you when you concentrate on something. It doesn’t need to be replicated.

        Similarly, blurring takes place in 3D regardless of intentional blurring. Things get so far away they just mush together. It doesn’t need a special effect; it’s something that already happens.

        The same thing could be said about HDR. HDR is supposed to simulate changes from light to dark, as you would experience in the real world. Alas, you still experience them sitting in front of the monitor as it goes from light to dark. It doesn’t need to be replicated.

        It’s like they ran out of ideas and said “Hey, we should make things blurry!” or something. It’s the bane of all gamers and should be destroyed. They invented glasses for a reason.

          • Krogoth
          • 13 years ago

          Ummmm… I have tons of examples, and you probably have them too, but you selectively don’t consider them to be a problem.

          Blurring is just physics at work. The human eye only has so much resolution that the finer details of man-sized objects (only square millimeters in size) blur after about 100-200m. The man-sized object itself starts to blur at around 800m-2km, when it begins to form a faint point in your FOV. That point will eventually disappear at distances greater than 5-8km.

            • Bensam123
            • 13 years ago

            “The human eyes only have so much resolution that first finer details of man-size objects (only in square mms in size) blur after about 100-200m.”

            Exactly, so why does it need to be imitated if we already do it?

            • Usacomp2k3
            • 13 years ago

            Because your monitor is only a fixed distance away from your eye, and the objects on it are always 3-6 feet from your face, regardless of how far they are in the virtual world.

            • murfn
            • 13 years ago

            The tent anti-aliasing is not blurring for the sake of blurring. Anti-aliasing is a legitimate technique to remove the jaggies, which is necessary because real vision does not have jaggies. Anti-aliasing is better the more samples you have within a particular pixel, but more samples are computationally expensive. The tent method uses samples from adjacent pixels to increase the number of samples. Since a particular sample can affect the color of more than one pixel, it causes blurring.
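
            A minimal sketch of that resolve step (illustrative only, not AMD’s actual CFAA code; the sample layout is whatever you feed it): each sample is weighted by a tent function of its distance from the pixel center, so samples sitting in neighboring pixels still contribute, and that cross-pixel contribution is exactly where the softening comes from.

            ```python
            # Illustrative tent-filter resolve. 'samples' is a list of
            # (x, y, (r, g, b)) tuples in pixel units, including samples
            # that belong to neighboring pixels. Not AMD's implementation.
            def tent_resolve(samples, center_x, center_y, radius):
                total = [0.0, 0.0, 0.0]
                weight_sum = 0.0
                for x, y, color in samples:
                    dist = max(abs(x - center_x), abs(y - center_y))
                    weight = max(0.0, 1.0 - dist / radius)  # tent: linear falloff
                    weight_sum += weight
                    total = [t + weight * c for t, c in zip(total, color)]
                return [t / weight_sum for t in total] if weight_sum else total
            ```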

            • Bensam123
            • 13 years ago

            But they still have perspective in the virtual world. Meaning the farther away they get, the closer together they get, and the less they can be differentiated from other pixels, to the point where you can’t make objects out because they aren’t really objects anymore.

            It doesn’t matter if you’re 3 feet from the monitor; the depth is still there.

            The only scenario where the blurring would represent depth perception is in a 2D environment that’s trying to look 3D. Alas, that’s probably where they got the idea: a screen is flat, so it obviously needs some sort of depth perception, and what happens when objects get farther away? They become blurred.

            They didn’t take into account that it already happens in a 3D environment.

            • murfn
            • 13 years ago

            If things become blurred when they are far away then you might be suffering from short sightedness. Otherwise, the lens in your eye will adjust its focal length to ensure that whatever object you are looking at is not blurred, i.e. in focus. When a movie is shot or a photo taken, a view of the world is captured where objects within a specific range of distances are in focus and the rest is blurred. Some game developers have been looking into making their games, and specifically their cut-scenes, more like movies. They employ, or plan to employ, a technique called depth of field, which is a deliberate blurring of objects outside the imaginary camera’s plane of focus. This is not a good technique for in-game scenes, because in real life you are able to focus on whatever object you wish to.
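
            For reference, the blur such a depth-of-field effect approximates is the thin-lens circle of confusion; here is a sketch with illustrative parameter names (not any particular engine’s code):

            ```python
            # Thin-lens circle of confusion: diameter of the blur disc produced
            # by a point at distance 'd' when the lens is focused at 'focus_dist'.
            # 'aperture' and 'focal_length' are in the same units as the distances.
            def circle_of_confusion(d, focus_dist, focal_length, aperture):
                return (aperture
                        * abs(d - focus_dist) / d
                        * focal_length / (focus_dist - focal_length))

            # Zero blur at the focal plane, growing as objects move off it.
            print(circle_of_confusion(d=10.0, focus_dist=10.0, focal_length=0.05, aperture=0.025))
            print(circle_of_confusion(d=2.0,  focus_dist=10.0, focal_length=0.05, aperture=0.025))
            ```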

            • Krogoth
            • 13 years ago

            Short-sightedness is when blurring happens at fairly close distances, depending on severity. The human eye has physical limits such that even people with the sharpest vision will experience blurring when trying to resolve fairly small objects at very far distances.

            I believe the OP’s complaint is that at very short distances some games had texture blurring going on. It usually results from either LOD optimizations, performance-for-image-quality trade-offs, or just lazy digital artists.

            • murfn
            • 13 years ago

            Short sightedness is when close objects are clear and far objects are blurred. If your eyesight is as it should be or you are wearing corrective lenses then distant objects are not blurred. If looking at the moon on a clear night you should see a sharp focused image.

            The OP was referring to the blurring of the tent filter method. As I have explained, the blurring results from the fact that the color of a particular pixel is resolved after taking into account a number of samples, and in particular because some of the samples lie outside the pixel area.

            The other blurring he was referring to is texture blurring when doing anisotropic filtering at an angle. This issue is not deliberate and has been explained at length in TR articles. It is the result of bad LOD choice, as you say. However, this problem seems to have been fully resolved in the G80 and almost fully resolved with the R600. See https://techreport.com/reviews/2007q2/radeon-hd-2900xt/index.x?pg=5

            • Bensam123
            • 13 years ago

            It was pretty much a rant on how current game developers just like to put a giant blurring filter over everything so they won’t need to detail everything, how companies like ATI are trying to cut corners by messing with LOD instead of doing true anti-aliasing, and how blurring doesn’t actually do anything useful, since what does it in real life (the human eye) still does it when you’re sitting in front of the computer.

            I think blurring and zoom are getting confused here. When I’m talking about things blurring at long range, I’m not talking about things just magically getting blurry; I mean they’re too far away to actually see. It works out that way in game, as things farther away get so small that they aren’t recognizable. That is no different than the way things work out in life, so blurring isn’t needed.

            The other thing I was talking about was the blurring effect they’re adding for depth perception (the new Crysis engine). When you focus on something, everything around you becomes naturally blurred. If you focus on something in the game, everything on the rest of the screen becomes blurred naturally. Once again, blurring isn’t needed. All it would do is get you killed when you un-focus from whatever you’re focusing on to look at a different part of the screen, because you failed to change the “game’s” focus.

            They’re trying to take something that you do naturally and force it down your throat at their discretion.

            About the only useful thing I could think blurring could be used for is heat distortion and I’m pretty sure that’s actually different from blurring.

            • murfn
            • 13 years ago

            You said it was a rant, and I can respect that. Having said that, you are touching on a lot of issues which are not really connected and your criticism is mostly unjustified IMO. If you are noticing blurring in a particular game, please point us to a screenshot so I can at least understand what we are talking about.

            The new Crysis game is not out yet, AFAIK. Whatever video you have seen may have been produced with some cinematic flair in mind, and the depth-of-field effect was added for that reason. I doubt the game would include it; it is just too annoying to play that way. And you say it can be removed as an option, so that would solve that problem. The option might only be there to show off the features of the new engine.

            Another type of blurring is motion blurring. This is an effect added to make fast-moving objects look like they are moving smoothly rather than teleporting from one frame position to the next. Sometimes camera haze is added to soften a scene, but again it is more of a cinematic effect.

            Other than that, blurring is mostly unintentional. Blurring may save a lot of computation, as in the AA and AF methods of the past. You can always choose a superior AA method. And as you can see, the AF was fixed in the new cards.

            • Bensam123
            • 13 years ago

            Yes, I loathe motion blur as well. Don’t things already get blurry when they’re moving too fast in real life?

            It’s like game developers think you don’t have eyes when you play games and the data is fed directly into your brain. If things move too fast in game, they will be blurred because your eyes can’t process the data fast enough. You don’t need a fancy feature for that.

            http://www.fileshack.com/file.x/10398/Crysis+GDC+2007+Trailer

            That’s a link to a tech demo direct from the people making the engine. No editing done or fancy movie features.

            Yeah, the AF does seem to be fixed in the newer cards, and I especially applaud the 8800 for an almost perfect image. I originally bought a Radeon card as I heard it produced better video quality than the GeForce; alas, the times have changed. I’ve largely learned to ignore the image quality differences in game (if you play a game long enough you can clearly see almost a line where it changes quality), but getting a card that no longer has that makes me want to jump for joy.

            I still remember when they first started doing that crap. It was back when I had a Radeon 8500 and was playing Tribes 2. After a driver release, all of a sudden stuff farther away started getting blurry or “fuzzed”. It did of course improve my framerate, but it made me angry. That isn’t to say they won’t add some special “filtering” later on, when their cards start to sux.

            I don’t know how new the drivers are that Scott pulled those images from, but when I went from 6.9 to 7.2 it got a lot worse (I have an X1800 XT). I will pull a pic from CS later tonight if I remember and give you guys a link. I don’t play CoH unless I’m at my friend’s, so if I remember I’ll pull a pic from there as well.

            There are also shadow issues I have with my X1800 XT and X800 XL in certain games, especially in Dawn of War. Sometimes shadows just disappear from buildings when it can’t determine the depth, so they flicker if they’re near the edge of the screen and you also have a depth change (a ramp going down an incline or up) on screen.

            • green
            • 13 years ago

            if i put my hand about 4 inches from my face and focus on it, everything past my hand is out of focus which appears to be blurred

            leave my hand where it is and focusing on what’s past my hand i see double images of my hand because it’s out of focus
            i don’t see as much detail in the hand but I see some of the general patterns (ie. any lines, wrinkles, scars, etc)

            unfortunately we don’t have technology that figures out what i’m focusing on in the game
            if i want to focus on something close (like a gun i’m holding in an fps) it should be sharp
            on the other end, if i’m shooting a guy thats far away it better be as clear as it can get as that’s where i’m focusing
            but how is the game gonna know that i want the guy or background blurred?
            i could be shooting at something far randomly but focusing on the gun
            ati’s answer seems to be: blur everything

            murfn is right about the short-sightedness thing
            if you’re looking far off in the distance it shouldn’t be a blur
            for example, being able to see distinct lines in clouds
            what’s happening is we’re dropping ‘pixels’ (for lack of a better word)

            • Bensam123
            • 13 years ago

            The answer is you don’t need to blur anything. You do it already as you explained.

            Just because you’re looking at a virtual world, doesn’t mean your eyes act differently. You’re still sitting in real life.

            The blurring effect would only need to be added IF the eyes were no longer used for visual input. IE, you’re connected to the computer through a wire to your brain, which I believe isn’t the case yet.

        • snowdog
        • 13 years ago

        Except this is NOT done correctly. They blur everything, even the 2d interface text is blurred. This is just a lame blur filter.

        From HardOCP:

        Notice that the heads up display showing game information is more blurry with Wide Tent filtering, even the FRAPS framerate counter has been blurred. This was noticeable in every game we tested, text and textures became blurry with Narrow and Wide Tent filtering. Even bringing up the console in Half Life 2 had blurry text.

          • sigher
          • 13 years ago

          Horrible and shocking.

            • Bensam123
            • 13 years ago

            Shocking and horrible.

            So would you rather see all of life, including its ugly side, or just… smooth it all over?

            Philosophical and metaphorical.

    • danny e.
    • 13 years ago

    well, looks like ATI just converted me to Nvidia for this generation.
    sad. i’m not going to buy something for $400 when i can get the GTX for $540 that uses less power and performs significantly better in the majority of tests.

    even the GTS 640MB can be had for around $350 now.
    ATI should be releasing this card right around $299.. and then they’d have a winner.

    • FireGryphon
    • 13 years ago

    Numbering schemes are so ridiculous these days. When I woke up this morning and went to Tech Report, I saw the article title and thought, “Oh, a new low-end card.” Then I slowly realized the 2900 was actually the long-awaited R600. Granted, I had just woken up, but I was actually confused by the horrible naming and numbering schemes that companies use these days. Feh.

    The article itself is terrific. Another monster review, another gem from TR. This is the meat and potatoes of the hardware review circuit, folks.

      • timbits
      • 13 years ago

      meat and potatoes indeed. I saw the flurry of Me First! reviews yesterday and waited for the TR review.

    • Fighterpilot
    • 13 years ago

    That was a nice review, Scott.
    For $399 it’s a high-tech, latest-features graphics card with good speed and class-leading image quality.
    They apparently overclock like crazy, and I’d expect Catalyst 7.5 or 7.6 to give some pretty reasonable performance tweaks.
    I’d give it 8.5 outta 10, which is good enough for any ATI fan.
    It is $429.00 cheaper than an 8800 Ultra.

    • SGT Lindy
    • 13 years ago

    Damn, $399… I am so glad I don’t game on a PC anymore. Good article, though.

      • Nullvoid
      • 13 years ago

      Contrary to what the graphics card manufacturers would have us believe, you don’t need anything like this card to play games 🙂

      My lowly old radeon 9600le manages to play oblivion/hl2/supreme commander/c&c3 well enough, and it copes splendidly with my current game of choice: lord of the rings online. Sure I might be missing out on some spangly bells and whistles, but I’m getting sufficient performance to enjoy each of the mentioned games.

        • crichards
        • 13 years ago

        Agreed. I only recently upgraded my 9800pro.

          • Vaughn
          • 13 years ago

            And #29, while you may enjoy playing your games in low detail at 800×600, some of us want more!

            • Nullvoid
            • 13 years ago

            hey, take that back! I usually get away with medium at 1024×768! 🙂

            • Mithent
            • 13 years ago

            You can have more, of course – but then you have to pay for it. But I agree that it’s rather fallacious for people to claim that you have to spend $400 on a new card every 6 months, when what they really mean is that you have to do that if you want all the bells and whistles on a high-res screen, not to actually play. Sure, you don’t have to pay to upgrade a console, but that’s almost like choosing not to upgrade your PC graphics; they don’t get much better (although they do usually put effort into eking out more performance from the consoles, true).

            • sigher
            • 13 years ago

            I hope the buyers already bought a beefy PSU earlier, and an 8-to-6-pin connector, or else they will be looking at $499.

        • seeker010
        • 13 years ago

        Well, no, you don’t. I play some games fine on integrated graphics. But if you want to turn up the technobabble settings, you might.

    • Prospero424
    • 13 years ago

    Great article. It’s exactly what I’ve been waiting for; very informative.

    I really, really wanted to see a decisive win for AMD here, at least at the debut price point. But the fact is that with current software, Nvidia’s part (8800 GTS) available on the big hardware sales sites for roughly $300-350 basically matches the 2900XT’s performance and image quality. That’s tough.

    I know that performance will undoubtedly improve as the drivers mature and software transitions to DX10, but by the time that happens months down the road, I’m willing to bet that Nvidia will have a refresh waiting.

    Honestly, I own three Radeons and only one Geforce. I was really pulling for AMD, but the sad fact seems to be that they were just too late to market with this.

    Hopefully, performance increases due to DX10 adoption as well as driver and software optimization will prove me wrong.

    • toxent
    • 13 years ago

    About damned time!

    Nice review, I found the section about the new AA modes very informative.

    The card looks good, but like so many others, I’ll probably wait until the refresh comes along in a few months before I start getting really interested.

    Dugg…

    • flip-mode
    • 13 years ago

    Wasson, that was a whopping monster of an article. Thanks for letting your passion for GPUs take control.

    As for the card, it’s an acceptable offering at the $399 price.

    As for the 6-month lateness, disgustingly unacceptable.

    As for the lack of a competitor in the $500+ range – I couldn’t care less if you paid me.

    As for the failure to deliver a top-to-bottom DX10 family of cards as promised – that’s irritating. The real card of interest is the HD2600XT.

    As for the future: it appears to be all up to AMD’s drivers. Interesting.

      • poulpy
      • 13 years ago

      I’m sure you could care less if you were paid -say- $500 🙂

        • toxent
        • 13 years ago

        lol, I know i would…

        • flip-mode
        • 13 years ago

        You can’t care less than not at all 😉

      • toxent
      • 13 years ago

      I agree with you on (almost) all points!

      • Dposcorp
      • 13 years ago

      You said it.

      Scott, that was quite an awesome review; one of your best ever.
      I like playing with new hardware as much as the next guy, but damn, all that benching and swapping and typing must take its toll. I got tired just reading. Job well done.

      With that being said, I think AMD is doing OK, to be honest.
      I think that AMD/ATI is in a “Radeon 8500 / Athlon XP (K7)” phase right now, where they have OK products at an OK price, but not the best, and not the performance crown in either a CPU or a GPU at this time.

      I expect the prices to go low and stay low on these items.

      However, after the 8500 came the 9700 Pro, which was great, and I can see the next revised GPU from AMD being a monster. Also, when the A64 came out, it cleaned the P4’s clock, and X2 A64s cleaned the clocks of the first dual-core chips.

      Right now, I still have faith in AMD, as we know that ATI has in the past been able to keep up with Nvidia, and surpass them at times, with their GPUs. In the same way, AMD CPUs have been able to keep up with and surpass Intel CPUs at times as well.

    • Stijn
    • 13 years ago

    I expected that the 2900XT would match the performance of the 8800GTX, but the price tag of $399 makes it sound less bad.

    However, after reading the article, I’m left with the impression they focused too much on stream processing instead of ‘graphic’ processing…

    • Stijn
    • 13 years ago

    update: double post, check my other message

    • green
    • 13 years ago

    i was fairly disappointed
    after 6 months of knowing nvidia performance numbers all they do is match or slightly beat the 8800gts
    i assume more mature drivers will yield better performance
    also good to see crossfire doing quite well
    but it seems to be the x1800 all over again

    i was left with the impression that i should either get a 8800gts or wait for the 65nm chips

      • Jigar
      • 13 years ago

      It is an XT version, not an XTX version… so it was not aimed at the performance crown.

      Also, did I just read that the XT has DDR2??

        • Mithent
        • 13 years ago

        It’s a little worrying that they didn’t launch with one that could, though – I’m sure they’d have wanted to if the architecture let them.

          • wierdo
          • 13 years ago

          I got the impression it wasn’t the architecture per se that’s holding them back, but rather the process they’re using is leaking too much right now.

      • wulfher
      • 13 years ago

      My thoughts were the same, but I’m just happy in a way that they didn’t beat my 8800GTX, or I would have been a bit pissed because I just bought one :)

      But it’s nice to see that ATI did a fine job and started to give Nvidia a bit of a challenge.

      • rythex
      • 13 years ago

      Match or slightly beat? From what I saw in the benches the 8800GTS still beat the 2900XT in a majority of them.

    • flip-mode
    • 13 years ago

    Page 2, paragraph 3, line 2: DX10 should be possessive.

    Page 2, paragraph 8, line 3 should say: other threads are waiting [to] execute

    Page 2, paragraph 9, line 5 should say: each of [the] five of these ALUs

    • maroon1
    • 13 years ago

    http://www.vr-zone.com/?i=4946&s=12
    http://enthusiast.hardocp.com/article.html?art=MTM0MSwxMiwsaGVudGh1c2lhc3Q=

    2900XT is 6 months late and G80 still beats it!!!! Nvidia > ATI

    • moose17145
    • 13 years ago

    Well, I will be honest… performance-wise I was hoping for a lot more, especially given the amount of power it draws. Interesting to see that ATI/AMD is no longer in the lead for single video card performance (which they held for oh so long), and is now instead the leader in multi-card configs (for the most part).

    Guess we’ll all have to wait and see what the 2950 does; hopefully it will perform better than this and eat less electricity in the process. Can’t wait to see what the 2600s do though! Should be a fun review when it comes out!

    Edit: Was also hoping for a lot more given that this is technically ATI’s 2nd generation of a unified shader design. Kudos to Nvidia though for finally having a definite winning product! Hopefully their drivers eventually catch up with their hardware! The whole beta thing is getting old.

      • poulpy
      • 13 years ago

      I’m not really disappointed, but that’s surely because I wasn’t expecting a trashing. The 8800 is a very good product (with dubious drivers, though), and the 2900XT is more than a decent challenger in DX9 titles, with cool features, and rightly priced.
      I’d wait for more mature drivers and proper DX10 software to bench against before making up my mind.
      Then again, I’d also wait for the review of the 2600 series (65nm, rated at 45W), which is clearly more in my price bracket 🙂
