ATI’s Radeon X800 graphics processors

ATI’s Ruby

BACK IN THE DAY, graphics geeks would demonstrate a new effect using a model of a teapot. Nowadays, the object of our graphics demos is the sexpot. And if the sexpot demo is a measure of a graphics card, the new Radeon X800 series wins, hands down. The X800 is ATI’s second generation of DirectX 9-class GPUs with the floating-point datatypes that have taken real-time graphics to a new plane, and the Ruby demo ATI has created to show off its new GPU is simply stunning.

But I’m getting ahead of myself. First, let’s set the stage a bit. You’re probably already aware that ATI has had a very good past couple of years, since the introduction of the Radeon 9700 GPU (code-named R300). ATI’s arch-rival, NVIDIA, struggled mightily with its competing NV30-series of graphics chips, but just recently recovered quite nicely with the introduction of its NV40 chip, now better known as the GeForce 6800 series of GPUs.

You might do well to go read our GeForce 6800 Ultra review if you haven’t already, but for the lazy among us, I’ll sum things up. The GeForce 6800 Ultra is a massive chip with sweet sixteen pixel pipelines and out-of-this-world performance several steps beyond ATI’s previous top-end chip, the Radeon 9800 XT. I concluded my review of the 6800 Ultra by saying that NVIDIA had left ATI with no margin for error, and I meant it. Fortunately, it looks like ATI hasn’t stumbled in the least, and the choice between graphics chips will come down to better versus best.

The Radeon X800 family
ATI’s new top-end graphics card is the $499 Radeon X800 XT Platinum Edition. With sixteen pixel pipes, a 520MHz core clock, and 256MB of GDDR3 memory running at 1.12GHz, the X800 XT will be making a strong claim on the title of “fastest graphics card on the planet.” Why the name Platinum Edition? I think in part because ATI didn’t want to have only one letter’s worth of difference between the name of its old $499 card and its new $499 card. Also, the company wanted to let folks know that this puppy is something out of the ordinary, and apparently the platinum name was available. That doesn’t mean, ATI assures us, that only 42 of these cards will ever ship to consumers; the X800 XT Platinum Edition will be available in quantities similar to the Radeon 9800 XT.


The X800 XT Platinum Edition


…and the same from the back

Further down the product line, ATI has the Radeon X800 Pro for the low, low price of $399. This card has twelve pixel pipes, a 475MHz GPU, and 256MB of GDDR3 memory at 900MHz. Although the X800 XT Platinum Edition is pictured above, the X800 Pro card looks identical. The Pro is probably the more exciting card for most folks, since its price is a little less stratospheric. Amazingly, this card will be shipping today, May 4, to retailers, so you should be able to buy one very soon. The X800 XT Platinum Edition will be following along a little later, on May 21.

Note some details in the pictures above. The cooler is a single-slot design for both cards. Both the X800 Pro and XT Platinum Edition require only one aux power connector, and they should work with any decent 300W power supply unit. In fact, one of the ATI engineers brought an X800-equipped Shuttle XPC to the X800 launch event, just to show it was possible to use the card in it.

Oh, and in case you’re wondering, that yellow, four-pin port next to the aux power connector is a composite video-in connector. ATI says this connector will likely be omitted from North American versions of the card, but the Euros will get them.

NVIDIA’s new numbers
ATI waited for NVIDIA to release its GeForce 6 series before setting the final specifications for its Radeon X800 line. In doing so, ATI could be reasonably certain that its new cards would outperform NVIDIA’s GeForce 6800 and GeForce 6800 Ultra. However, NVIDIA didn’t play all of its, erm, cards when it introduced the GeForce 6800 Ultra, it seems. To counter ATI, NVIDIA delivered to us late last week a pair of new cards representing two new GeForce 6800 models, plus a new driver for these cards.

To face off against the X800 Pro, NVIDIA will be releasing its own $399 card, the GeForce 6800 GT. Like the X800 Pro, the GT has a single-slot cooling solution and requires only one auxiliary power connector. NVIDIA says the GeForce 6800 GT’s power supply requirements will be similar to those of the Radeon 9800 XT and GeForce FX 5900 Ultra, so a fairly “normal” PSU ought to suffice for it.


The GeForce 6800 GT reference card

The GT will feature a full sixteen pixel pipes running at 350MHz, but its real ace in the hole will be its 256MB of GDDR3 memory running at 1GHz, or 100MHz more than the X800 Pro. The GT should be a tough competitor for the Radeon X800 Pro in the $399 price range, once it arrives. Expect to see 6800 GT cards on store shelves in “mid June,” according to NVIDIA.

Then we have the big momma, the GeForce 6800 Ultra “Extreme” card. Apparently aware that the 16-pipe, 400MHz GeForce 6800 Ultra might have its hands full with the 16-pipe, 520MHz Radeon X800 XT Platinum Edition, NVIDIA has decided to release a limited run of GeForce 6800 Ultra cards clocked at 450MHz. These beasts have dual-slot coolers, dual auxiliary power connectors, and require a 480W power supply, just like the regular GeForce 6800 Ultra.


The GeForce 6800 Ultra “Extreme” runs at 450MHz

The “Extreme” cards will be sold in systems made by select high-end PC builders like VoodooPC and Falcon Northwest, and through card makers like Gainward and XFX. I don’t have an official list of those partners just yet, but NVIDIA says to expect announcements at E3. Perhaps then we’ll learn more about what these puppies will cost, as well. These cards should be available in June in whatever quantities NVIDIA and its partners can muster.

I must note, by the way, that NVIDIA’s GeForce 6800 Ultra reference designs have dual DVI ports, while ATI’s Radeon X800 XT Platinum Edition has only one DVI port plus a VGA connector. Personally, I think any card that costs over 300 bucks or so should come with a pair of DVI ports, so that LCD owners can double up on digital goodness. Let’s hope NVIDIA’s board partners follow NVIDIA’s lead and include both DVI ports. Heck, let’s hope ATI’s board partners follow NVIDIA’s lead, as well.

What’s new, what’s not
For graphics freaks like us, one of the most exciting developments of the past few weeks has been NVIDIA’s new willingness to divulge information about the internals of its graphics processors. The NVIDIA media briefings on GeForce 6800 were chock full of block diagrams of the NV40 chip, its internal units, and all kinds of hairy detail about how things worked. This was a big change from the NV30 days, to say the least. We now have a pretty good idea how the internals of the NV40 look. Of course, ATI has long been fairly open about its R300 architecture, but NVIDIA’s newfound openness has forced ATI’s hand a bit. As a result, we now have a clearer understanding about the amount of internal parallelism and the richness of features in ATI’s graphics processors—both the new X800 (code-named R420) and the R300 series from which it’s derived.

In the R420, the ATI R3xx core has been reworked to allow for higher performance and better scalability. Let’s have a look at some of the key differences between the R420 and its predecessors.

  • More parallelism — The X800’s pixel pipelines have been organized into sets of four, much like those in NVIDIA’s NV40 chip. These pixel pipe quads can be enabled or disabled as needed, so the X800 chip can scale from four pipes to eight, twelve, or sixteen, depending on chip yields and market demands.


    An overview of the Radeon X800 architecture. Source: ATI.


    The X800’s four pixel pipe quads. Source: ATI.

    In addition to more pixel pipes, the Radeon X800 has two more vertex shader units than the Radeon 9800 XT, for a total of six. Combined with higher clock speeds, ATI is claiming the X800 has double the vertex shader power of the Radeon 9800 XT. Although the X800 Pro will have one of its pixel pipeline quads disabled, it will retain all six vertex shader units intact.


    The Radeon X800’s vertex shader units. Source: ATI.

    One more thing. You may recall that NVIDIA’s recent GPUs, including the NV3x and NV4x chips, can render additional pixels per clock when doing certain types of rendering, like Z or stencil operations. This has led to the NV40 being called a “16×1 or 32×0” design. Turns out ATI’s chips can render two Z/stencil ops per pipeline per clock, as well, so long as antialiasing is enabled.

  • Better performance at higher resolutions — Each pixel quad in the R420 has its own Z compression and hierarchical Z capability, including its own local cache. ATI has sized these caches to allow its Z-handling enhancements to operate at resolutions up to 1920×1080. (The R300’s global Z cache was sized for resolutions up to 1600×1200, and the RV3xx-series chips’ caches for less.) Also, on the R420, if the screen resolution is too high for the Z cache to accommodate everything, the chip will use its available Z cache to render a portion of the screen, rather than simply turning off its Z enhancements.


    Detail of a single four-pipeline quad. Source: ATI.

    ATI’s engineers have also boosted the X800’s peak Z compression ratio from 4:1 to 8:1, making the maximum possible peak compression (with 6X antialiasing and color compression) 48:1, but that’s just a fancy number they like to throw around to impress the ladies.

  • A new memory interface — One of the Radeon 9800 XT’s few weaknesses was its relatively slow memory speeds. The GeForce FX 5950 Ultra had 950MHz memory, while the Radeon 9800 XT’s RAM topped out at 730MHz. Part of the reason for this disparity, it turns out, was the chip’s memory interface, which didn’t like high clock speeds. ATI has addressed this problem by giving the X800 GPU a new memory interface capable of clock speeds up to 50% higher than the 9800 XT.


    The Radeon X800’s crossbar memory controller. Source: ATI.

    From 10,000 feet up, this new setup doesn’t look dramatically different from the 9800 XT’s; it’s a crossbar type design with four 64-bit data channels talking over a switch to four independent memory controllers. There are some important differences, though, beyond higher clock speeds. First and foremost, this memory interface can make use of the swanky new GDDR3 memory type that ATI helped create. Also, ATI says this new memory interface is more efficient, and it offers extensive tuning options. If a given application (or application type) typically accesses memory according to certain patterns, ATI’s driver team may be able to reconfigure the memory controllers to perform better in that type of app.

  • Longer shader programs — The X800’s pixel shaders still have 24 bits of floating-point precision for each color channel (red, green, blue, and alpha), and they do not have the branching and looping capabilities of the pixel shaders in the NV40. They can, however, execute much longer shader programs, with the instruction limit raised from 160 in the 9800 XT to 1,536 in the X800. The X800’s revised pixel shaders also have some register enhancements, including more temporary registers (up from 12 to 32). We’ll look at the question of pixel shaders and shader models in more detail below.

  • 3Dc normal map compression — Normal maps are simply grown-up bump maps. Like bump maps, they contain information about the elevation of a surface, but unlike bump maps, they use a three-component coordinate system to describe a surface, with X, Y, and Z coordinates. Game developers are now commonly taking high-polygon models and generating from them two things: a low-poly mesh and a normal map. When mated back together inside a graphics card, these elements combine to look like a high-poly model, but they’re much easier to handle and render. Trouble is, like all textures, normal maps tend to chew up video memory, but normal maps don’t tolerate well the artifacts caused by compression algorithms like DirectX Texture Compression (DXTC). If a normal map becomes blocky, the perceived elevation of a surface will become blocky and uneven, ruining the effect. ATI has tackled this problem by adapting the DXTC alpha channel compression scheme to work on normal maps. Specifically, the DXT5 alpha compression algorithm is used on the red and green channels, which store X and Y coordinate info, respectively. (Z values are discarded and computed later in the pixel shader.) This format is reasonably well suited to normal maps, and it offers 4:1 compression ratios. Like any texture compression method, it should allow the use of higher resolution textures in a given amount of texture memory.


    3Dc versus DXTC compression. Source: ATI.

    The X800 GPU supports this method of normal map compression, dubbed 3Dc, in hardware. Both DirectX and OpenGL support 3Dc via extensions, and game developers should be able to take advantage of it with minimal effort. (A quick sketch of the Z reconstruction step appears at the end of this list.)

  • Temporal antialiasing — This new antialiasing feature is actually a driver trick that exploits the programmability of ATI’s antialiasing hardware. We’ll discuss it in more detail in the antialiasing section of the review.
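
To make the 3Dc item above a bit more concrete, here’s a minimal sketch of the Z reconstruction a pixel shader performs after sampling a 3Dc-compressed normal map. This is only an illustration of the math described above, not ATI’s actual shader code; the function name and the 8-bit unpacking step are my own assumptions.

```python
import math

def reconstruct_normal(x_byte, y_byte):
    """Rebuild a unit normal from the two channels stored in a 3Dc block.

    x_byte, y_byte: decompressed 8-bit values (0-255) from the red and
    green channels, which hold the X and Y components of the normal.
    """
    # Expand from the [0, 255] storage range to the [-1, 1] vector range.
    x = (x_byte / 255.0) * 2.0 - 1.0
    y = (y_byte / 255.0) * 2.0 - 1.0

    # Z was discarded at compression time; recover it from the fact that
    # a normal has unit length: x^2 + y^2 + z^2 = 1.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# Example: a normal pointing almost straight out of the surface.
print(reconstruct_normal(128, 128))   # roughly (0, 0, 1)
```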

Those are some of the more notable changes and tweaks ATI has made to the X800. We’ll discuss some of them in more detail below, and we’ll see the impact of the performance tweaks, in particular, in our benchmark results.

Die size comparison

Both of these chips are large. NVIDIA says the NV40 GPU is 222 million transistors, while ATI says the X800 is roughly 160 million transistors. However, the two companies don’t seem to be counting transistors the same way. NVIDIA likes to count up all possible transistors on a chip, while ATI’s estimates are more conservative.

Regardless, transistor counts are less important, in reality, than die size, and we can measure that. ATI’s chips are manufactured by TSMC on a 0.13-micron, low-k “Black Diamond” process. The use of a low-capacitance dielectric can reduce crosstalk and allow a chip to run at higher speeds with less power consumption. NVIDIA’s NV40, meanwhile, is manufactured by IBM on its 0.13-micron fab process, though without the benefit of a low-k dielectric.

The picture on the right should give you some idea of the relative sizes of these two chips. Pictured between them is an Athlon XP “Thoroughbred” processor, also manufactured on a 0.13-micron process. As you can tell, these GPUs are much larger than your run-of-the-mill desktop CPU, and they are more similar in size to one another than the transistor counts would seem to indicate. By my measurements with my highly accurate plastic Garfield ruler (complete with Odie), the ATI R420 chip is 16.25mm by 16mm, or 260 mm². The NV40 is slightly larger at 18mm by 16mm, or 288 mm².

What Garfield is trying to tell us here is that these are not small chips, Jon. The NV40 isn’t massively larger than the R420, either, despite the different numbers coming out of the two companies.

The Ruby demo
In order to show off the power of the X800 chip, ATI set out to make a graphics demo very different from the usual single-object, single-effect demos we’re accustomed to seeing. They also went outside for creative help, tapping graphics studio RhinoFX as a partner. RhinoFX works with high-end, CPU-based rendering tools to produce most of its work, and the company’s hallmark has traditionally been realism. In this case, ATI wanted to create something unique in real time, so they gave RhinoFX some polygon, lighting, and animation budgets in line with what they expected the Radeon X800 to be able to handle. RhinoFX used its usual tools, like Maya and RenderMan, to create an animation that worked within those budgets, and ATI’s demo team took on the task of converting this sequence into a real-time graphics demo.

The results are astounding. The demo is a full-on action short in a kind of realistic, comic-book-esque style. The sequence makes ample use of cinematic techniques like depth of field and motion-captured character animation to tell a story. Nearly every surface in the thing is covered with one or more shaders, from real-looking leather to convincing skin and just-waxed floors.

Oh yeah, and Ruby’s pretty hot, too. But then what did you expect?

ATI has produced a video on the making of Ruby, and they’ve made quite a bit of information available on how they did it. From what I gather, the ATI demo team spent the lion’s share of its time on shader research, working to create realistic shaders using ATI tools like RenderMonkey and ASHLI. I don’t have time to cover all of this work in great detail today, but I’m sure ATI will make a video of the Ruby demo and more information available to the public. For now, some screenshots from the demo. The screenshots below are unretouched, with the exception of the close-up of Ruby’s eyes. I had to resize that one.


Ruby makes her entrance—in red leather


Perhaps the most impressive shot of the demo is this close-up of Ruby’s face


Better eyes than Half-Life 2?


Depth of field concentrates the viewer’s eye


Optico eyes the gem, which has front- and back-face refractions plus specular and environmental lighting


Ruby takes out some NVninjas as a throwing star hurtles her way

Test notes
I want to address one issue before we get to the benchmark results, and that’s a problem we found with the NVIDIA 61.11 beta drivers. On this page of our GeForce 6800 Ultra review, you’ll see that we disabled trilinear optimizations on the GeForce 6800 Ultra card in order to get a direct comparison to the Radeon. NVIDIA has adopted a shortcut method of handling blending between mip map levels, and we turned it off because ATI doesn’t use this method. NVIDIA’s trilinear optimizations produce slightly lower quality images than the commonly accepted method of trilinear filtering, although it is admittedly difficult to see with the naked eye. The performance impact of this optimization isn’t huge, but it can be significant in fill-rate-limited situations, as we demonstrated here.

Unfortunately, the 61.11 beta drivers we received this past Friday night didn’t behave as expected. We ticked the checkbox to disable trilinear optimizations, but our image quality tests showed that the driver didn’t disable all trilinear optimizations in DirectX games. I did have time to check at least one OpenGL app, and trilinear optimizations were definitely disabled there. We will show you the image quality impact of the 61.11 drivers’ odd behavior in the IQ section of the review.

NVIDIA’s Tony Tamasi confirmed for us that this behavior is a bug in the 61.11 drivers’ control panel, and says it will be fixed. Keep in mind that the scores you see for the GeForce cards in the following pages would, all other things being equal, be slightly lower if the driver behaved as expected. Also, remember that no recent NVIDIA driver we’ve encountered has allowed us to disable trilinear optimizations on a GeForce FX GPU, including the 5950 Ultra model tested here.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least twice, and the results were averaged.

Our test system was configured like so:

System: MSI K8T Neo
Processor: AMD Athlon 64 3400+ 2.2GHz
North bridge: K8T800
South bridge: VT8237
Chipset drivers: 4-in-1 v.4.51
ATA driver: 5.1.2600.220
Memory size: 1GB (2 DIMMs)
Memory type: Corsair TwinX XMS3200LL DDR SDRAM at 400MHz
Hard drive: Seagate Barracuda V 120GB SATA 150
Audio: Creative SoundBlaster Live!
OS: Microsoft Windows XP Professional
OS updates: Service Pack 1, DirectX 9.0b

We used ATI’s CATALYST 4.4 drivers on the Radeon 9800 XT card and CATALYST beta version 8.01.3-040420a2-015068E on the X800 cards. For the GeForce 6800 series cards, we used Forceware 61.11 beta drivers, and for the GeForce FX 5950, we used Forceware 60.72 beta 2. One exception: at the request of FutureMark, we used NVIDIA’s 52.16 drivers for all 3DMark benchmarking and image quality tests on the GeForce FX 5950 Ultra.

The test systems’ Windows desktops were set at 1152×864 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Pixel filling power
We’ll start off with our traditional look at fill rate, one of the more important factors in determining overall performance of a graphics card. Of course, raw theoretical fill rate can’t be divorced from its most likely real-world constraint, memory bandwidth, so we have numbers on that, too. The table below shows specs for some top-end graphics cards in the past two years or so, sorted in order of memory bandwidth.

Card | Core clock (MHz) | Pixel pipelines | Peak fill rate (Mpixels/s) | Texture units per pixel pipeline | Peak fill rate (Mtexels/s) | Memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s)
GeForce FX 5800 Ultra | 500 | 4 | 2000 | 2 | 4000 | 1000 | 128 | 16.0
Parhelia-512 | 220 | 4 | 880 | 4 | 3520 | 550 | 256 | 17.6
Radeon 9700 Pro | 325 | 8 | 2600 | 1 | 2600 | 620 | 256 | 19.8
Radeon 9800 Pro | 380 | 8 | 3040 | 1 | 3040 | 680 | 256 | 21.8
Radeon 9800 Pro 256MB | 380 | 8 | 3040 | 1 | 3040 | 700 | 256 | 22.4
GeForce 6800 | 325 | 12 | 3900 | 1 | 3900 | 700 | 256 | 22.4
Radeon 9800 XT | 412 | 8 | 3296 | 1 | 3296 | 730 | 256 | 23.4
GeForce FX 5900 Ultra | 450 | 4 | 1800 | 2 | 3600 | 850 | 256 | 27.2
Radeon X800 Pro | 475 | 12 | 5700 | 1 | 5700 | 900 | 256 | 28.8
GeForce FX 5950 Ultra | 475 | 4 | 1900 | 2 | 3800 | 950 | 256 | 30.4
GeForce 6800 GT | 350 | 16 | 5600 | 1 | 5600 | 1000 | 256 | 32.0
GeForce 6800 Ultra | 400 | 16 | 6400 | 1 | 6400 | 1100 | 256 | 35.2
GeForce 6800 Ultra Extreme | 450 | 16 | 7200 | 1 | 7200 | 1100 | 256 | 35.2
Radeon X800 XT Platinum Edition | 520 | 16 | 8320 | 1 | 8320 | 1120 | 256 | 35.8
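
If you’re wondering where these peak numbers come from, they’re just products of the clock speeds and unit counts in the table; the memory clocks listed are effective (double data rate) speeds. Here’s a quick sketch of the arithmetic using the X800 XT Platinum Edition’s figures. The function is my own, not anything from ATI or NVIDIA.

```python
def peak_specs(core_mhz, pipes, tex_units_per_pipe, mem_mhz_effective, bus_bits):
    """Compute theoretical peaks the same way the table above does."""
    mpixels = core_mhz * pipes                    # peak fill rate, Mpixels/s
    mtexels = mpixels * tex_units_per_pipe        # peak fill rate, Mtexels/s
    # Bandwidth: effective transfers/s times bus width in bytes, in GB/s.
    bandwidth = mem_mhz_effective * 1e6 * (bus_bits / 8) / 1e9
    return mpixels, mtexels, bandwidth

# Radeon X800 XT Platinum Edition: 520MHz core, 16 pipes, 1 texture unit
# per pipe, 1.12GHz effective memory clock on a 256-bit bus.
print(peak_specs(520, 16, 1, 1120, 256))   # -> (8320, 8320, 35.84)
```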

If fill rate were money, the Radeon X800 XT Platinum Edition would be Bill Gates. If it were water, the X800 XT would be Niagara Falls. If it were transistors, the X800 XT would be a GeForce 6800 Ultra.

Err, strike that last one.

However you put it, the Radeon X800 XT Platinum Edition has a helluva lotta grunt. The GeForce 6800 Ultra Extreme isn’t far behind it, though, assuming they’ll both be largely limited by available memory bandwidth.

In the $399 category, the 12-pipe Radeon X800 Pro just leads the 16-pipe GeForce 6800 GT thanks to the Radeon’s much higher clock speeds. The memory bandwidth advantage, however, is clearly with the GeForce.

How do these numbers play out in synthetic fill rate tests?

In 3DMark03’s single-textured fill rate test, the GeForce 6800 Ultra cards actually manage to outdo the Platinum Edition, probably because of more efficient management of memory bandwidth during the execution of this test. When it comes time to apply more than one texture per pixel, the X800 XT PE edges ahead slightly. Let’s stop once more to remind ourselves what a huge performance leap we’re seeing from one generation to the next. The Radeon X800 XT PE delivers over twice the fill rate of the Radeon 9800 XT. Yikes.

Rightmark allows us to scale up the number of textures, unleashing the X800 XT PE’s inner demons. When applying one texture, the NVIDIA cards have a big relative advantage. Moving two textures, the picture changes, and the ATI cards look relatively stronger. Once we get into three or more textures, though, the Radeons dominate. So it’s a bit of a wash, depending on the scenario. Keep these differences in texturing and fill rate performance in mind as you read the rest of the results, because they will be enlightening.

Unreal Tournament 2004

We’re using scaling graphs to show just how fill rate and pixel shader performance differences separate the men from the boys, but keep your eyes on the rightmost scores on each graph—and especially on the graphs with 4X antialiasing and 8X anisotropic filtering—if you want to see the most relevant numbers. All of the next-gen cards perform well in UT2004, but the Radeon X800s do best. The X800 XT PE beats the GeForce 6800 Ultra by just over 10 frames per second at 1600×1200 with 4X AA and 8X AF, though the Extreme card narrows that gap a little. And the X800 Pro also tops the GeForce 6800 Ultra, surprisingly enough.

Quake III Arena

In ye olde Quake III Arena, the situation is reversed, with the GeForce 6800 GT outperforming the Radeon X800 XT PE. NVIDIA has long done well in OpenGL games, and the results here suggest that trend will continue.

Far Cry
This game is the big one, Doom 3 and Half-Life 2 all rolled into one and released in early 2004, ahead of either of those big-name titles. Far Cry uses DirectX 9 shaders, per-pixel lighting and shadowing, and a host of related effects to make one of the best looking games we’ve ever seen.

I tested Far Cry using its built-in demo record and playback function, but I used FRAPS to get frame rates out of it. I don’t know why. Far Cry‘s demos don’t record character interactions, but I did get a good cross section of the game in, starting inside and moving through some tunnels, then going outside through thick vegetation for a stroll on the beach. All of the game’s quality options were set to “Very High.” Anisotropic filtering was set to the highest level the game would allow, and antialiasing quality was set to medium.

The ATI cards take this one, with the X800 XT PE well ahead of the GeForce 6800 Ultra Extreme. The X800 Pro darn near ties with the GeForce 6800 Ultra. You may have noticed, though, that our scores are way up on the GeForce 6800 Ultra from our initial review. NVIDIA fixed a Z culling bug in its drivers, and performance has leaped as a result. Unfortunately, the GeForce 6800 drivers still need some work to fix a number of image quality issues, including intermittent problems with shadowing on weapons. NVIDIA is also working with CryTek to alleviate some lighting artifacts that folks have noted on the GeForce 6800 Ultra. Apparently, some of the problems are driver based, while others will require a patch from CryTek. NVIDIA’s driver team is making its piece of the problem a top priority.

For now, the Radeon X800 cards run this premier game title faster and with better image quality than NVIDIA’s GeForce 6800 series. That has to be a little embarrassing given that this is an NVIDIA “The Way it’s Meant to be Played” game, but the reality is that the GeForce 6800 series is still a very new product, not yet shipping, with beta drivers, while ATI’s Radeon X800s are second-generation silicon shipping today.

Painkiller
Painkiller is another brand-new game with gorgeous graphics, but this one has a decidedly old-school shooter feel. Heaven’s got a hitman… and a crate opener, it would seem.

Since the Painkiller demo doesn’t have any sort of demo recording, I played through a game sequence in the same way three times for each card and averaged the results recorded by FRAPS. I’ve gained more confidence in this “manual” method of benchmarking during this process, because results were remarkably consistent between runs.

Chalk up another win in a new game for the Radeon cards. The X800 XT PE leads the pack, with the GeForce 6800 Ultra Extreme nearly 20 frames per second behind it. Interestingly enough, though, the Radeon X800 Pro manages a higher minimum frame rate, suggesting better playability on the cheaper Radeon X800 card than on the Extreme.

Prince of Persia: The Sands of Time
In Prince of Persia, I played through the game’s opening sequence, which is a mixture of canned animations and live gameplay, all rendered in the game engine. As with Painkiller, results were remarkably consistent from one run to the next on the same card.

This one is a mirror image of the Painkiller results, with the GeForce 6800 Ultra Extreme pulling far ahead of the Radeons, and the GeForce 6800 GT hitting a higher minimum framerate than the Radeon X800 XT PE.

AquaMark

AquaMark is an interesting case, because the NVIDIA cards have a clear advantage at lower resolutions, but as the fill rate demands increase, the Radeon X800s pull into the lead. Ultimately, at 1600×1200 with antialiasing and aniso, the Radeon X800 Pro surpasses the GeForce 6800 Ultra.

Wolfenstein: Enemy Territory

Here’s another OpenGL game, and another big win for the NVIDIA cards. The GeForce 6800 GT outruns the high-end Radeon X800 XT PE by over 20 FPS.

Splinter Cell

Splinter Cell doesn’t work right with antialiasing, so we’ve limited our tests to non-AA modes. You can see the frame-by-frame performance of each card at 1600×1200 here, as well. However, with all the new GPUs bunched up at over 70 FPS, I’m thinking we’d better update our test suite.

Serious Sam SE
Serious Sam can run in either Direct3D or OpenGL. Since the game’s default mode is OpenGL, we ran our tests with that API. To keep things on an even playing field, we used the “Default settings” add-on to defeat Serious Sam SE’s graphics auto-tuning features.

The GeForce 6800 cards perform very well in our third and final OpenGL game test, making it a sweep.

3DMark03
At FutureMark’s request, we are using NVIDIA’s 52.16 drivers for the GeForce FX 5950 in this test.

Whoa, that’s a squeaker! The GeForce 6800 Ultra juuuuuust manages to eke out the win over the Radeon X800 XT PE at 1600×1200. The higher-clocked Ultra Extreme puts an exclamation point on the win. However, at 1024×768 where most 3DMark03 scores are recorded and compared, the X800 XT PE beats the GeForce 6800 Ultra, losing only to the Extreme edition. Things aren’t so close in the $399 card race, where the 6800 GT opens up a comfortable lead over the X800 Pro.

The GeForce 6800 cards take 3DMark03’s overall score by being faster in three of the four game tests. However, in the pixel-shader-laden Mother Nature scene, the Radeon cards lead the way.

3DMark’s pixel and vertex shader tests produce opposing results, with the GeForce 6800s running the pixel shader test faster, and the Radeon X800s leading in the vertex shader test.

3DMark image quality
The Mother Nature scene from 3DMark has been the source of some controversy over time, so I wanted to include some screenshots to show how the three cards compare. On this page and in all the following pages with screenshots, you’re looking at low-compression JPEG images. You can click on the image to open a new window with a lossless PNG version of the image.


Radeon 9800 XT


Radeon X800 XT


GeForce 6800 Ultra


GeForce FX 5950 Ultra

The results look very similar between all the cards, at least to my eye.

ShaderMark 2.0
ShaderMark is intended to test pixel shader performance with DirectX 9-class pixel shaders. Specifically, ShaderMark 2.0 is geared toward pixel shader revisions 2.0 and 2.0a. (Version 2.0a or “2.0+” uses longer programs.) ShaderMark also has the ability to use a “partial precision” hint on NVIDIA hardware to request 16-bit floating point mode. Otherwise, the test uses 32 bits of precision on NVIDIA cards and, no matter what, 24 bits per color channel on the Radeon chips due to their pixel shader precision limits. To keep the graph a little less busy, I’ve only included partial precision and 2.0a results for the GeForce 6800 Ultra.
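
For a sense of what those precision figures mean, the spacing between adjacent representable values near 1.0 is set by each format’s mantissa width: 10 bits for FP16, 16 bits for ATI’s FP24 (an s16e7 format), and 23 bits for FP32. The little sketch below just evaluates those spacings; it isn’t part of ShaderMark, and the labels are mine.

```python
# Mantissa widths (in bits) for the shader precision formats discussed above.
FORMATS = {
    "FP16 (NVIDIA partial precision)": 10,
    "FP24 (ATI pixel shaders)": 16,
    "FP32 (NVIDIA full precision)": 23,
}

for name, mantissa_bits in FORMATS.items():
    # Gap between adjacent representable values just above 1.0.
    ulp = 2.0 ** -mantissa_bits
    print(f"{name}: steps of about {ulp:.2e} near 1.0")
```

FP16’s steps work out to be roughly 64 times coarser than FP24’s, which is why the partial-precision hint is usually fine for simple color math but can begin to matter in longer, accumulating calculations.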

The GeForce cards can’t run some of the tests because ShaderMark is requesting a surface format they don’t support. I’m hopeful ShaderMark will be revised to use a more widely supported format.

The results here are too varied to declare any one winner. You can see, though, that the GeForce 6800 Ultra Extreme and Radeon X800 XT PE are vying for the top spot in most tests. Where they are not, the GeForce 6800 Ultra with partial precision enabled leads the pack. This is quite a good showing for the GeForce 6800 chips given their clock speed handicaps. NVIDIA seems to lead ATI in clock-for-clock performance here.

ShaderMark image quality
One of the quirks of running ShaderMark on the GeForce 6800 Ultra with 32-bit precision was a texture placement problem with the background texture (not the pixel shaders or the orb thingy, just the background.) The problem didn’t show up in “partial precision” mode, but it did in FP32 mode. Since the changing background texture is distracting, and since 16 bits per color channel is more than adequate for these pixel shaders, I’ve chosen to use the “partial precision” images from the GeForce 6800 Ultra.

The images shown are the GeForce 6800 Ultra screenshots, until you run your mouse over them, at which point the Radeon-generated images will appear. Be warned, though, the differences are very subtle.


Per pixel diffuse lighting (move mouse over the image to see the Radeon X800 XT’s output)


Car surface (move mouse over the image to see the Radeon X800 XT’s output)


Environment bump mapping (move mouse over the image to see the Radeon X800 XT’s output)


Self shadowing bump mapping with phong lighting (move mouse over the image to see the Radeon X800 XT’s output)


Procedural stone shader (move mouse over the image to see the Radeon X800 XT’s output)


Procedural wood shader (move mouse over the image to see the Radeon X800 XT’s output)

All in all, not much to talk about. That’s good, I suppose.

Texture filtering
Generally, I’ll show all the cards on a test like this, but this time around, I decided to include results from the GeForce 6800 Ultra using the 60.72 drivers. Since the 61.11 drivers don’t disable trilinear optimizations properly, the results wouldn’t show what I’d like. Instead, you can see results from the 60.72 drivers for the GeForce 6800 Ultra using three modes: the regular results without trilinear optimizations, with NVIDIA’s “brilinear” optimizations enabled, and with the “high quality” anisotropic filtering option checked. Note, also, that I’m testing in Direct3D mode here, not OpenGL, since Direct3D is more commonly used for games nowadays.

The top-end cards are nearly impervious to texture filtering loads in Serious Sam, especially the Radeon X800 XT PE. Overall, though, most of the cards scale fairly similarly. You can see how the “brilinear” mode helps the GeForce 6800 Ultra’s performance when anisotropic filtering is in use.

Texture filtering quality
Here’s a sample scene from Serious Sam, grabbed in Direct3D mode, that shows texture filtering in action along the wall, the floor, and on the inclined surface between the two.


Trilinear filtering – Radeon 9800 XT


Trilinear filtering – Radeon X800 XT


Trilinear filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations disabled


Trilinear filtering – GeForce 6800 Ultra – 61.11 drivers – Optimizations disabled


Trilinear filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations enabled


Trilinear filtering – GeForce FX 5950 Ultra

The difference between “brilinear” filtering and true trilinear is difficult to detect with a static screenshot and the naked eye—at least, it is for me. Remember that the GeForce FX 5950 is doing “brilinear” filtering in all cases.

Texture filtering quality
Once we dye the different mip maps colors using Serious Sam’s built-in developer tools, we can see the difference between the filtering methods more clearly.


Trilinear filtering – Radeon 9800 XT


Trilinear filtering – Radeon X800 XT


Trilinear filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations disabled


Trilinear filtering – GeForce 6800 Ultra – 61.11 drivers – Optimizations disabled


Trilinear filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations enabled


Trilinear filtering – GeForce FX 5950 Ultra

Here are NVIDIA’s trilinear optimizations at work. Mip-map boundary transitions aren’t as smooth as they are on the Radeon X800 XT and on the GeForce 6800 Ultra with “brilinear” disabled. Notice the odd mixture of filtering going on in the 61.11 drivers with optimizations disabled. The floor and angled surface look like they should, but the wall’s mip map boundaries show the dark banding indicative of the “brilinear” method.

Anisotropic texture filtering quality


16X anisotropic filtering – Radeon 9800 XT


16X anisotropic filtering – Radeon X800 XT


16X anisotropic filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations disabled


16X anisotropic filtering – GeForce 6800 Ultra – 61.11 drivers – Optimizations disabled


16X anisotropic + trilinear filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations enabled


8X anisotropic filtering – GeForce FX 5950 Ultra

Once again here, all the cards look pretty good, and very similar, to the naked eye.

Anisotropic texture filtering quality


16X anisotropic filtering – Radeon 9800 XT


16X anisotropic filtering – Radeon X800 XT


16X anisotropic filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations disabled


16X anisotropic filtering – GeForce 6800 Ultra – 61.11 drivers – Optimizations disabled


16X anisotropic + “brilinear” filtering – GeForce 6800 Ultra – 60.72 drivers – Optimizations enabled


8X anisotropic filtering – GeForce FX 5950 Ultra

With colored mip maps and aniso filtering, the differences become very clear. With optimizations disabled, NVIDIA’s 60.72 drivers did filtering almost exactly equivalent to ATI’s, but the 61.11 drivers do something less than that, though the degree varies depending on the angle of the surface.

Antialiasing
The Radeon X800’s antialiasing hardware hasn’t changed significantly from the Radeon 9800 XT, but ATI does have a few tricks up its sleeve. Let’s start with a look at traditional multisampling edge AA, which ATI and NVIDIA do fairly similarly.

The X800 cards’ AA performance is about what one might expect. The big news here, believe it or not, is the 8X AA mode from NVIDIA. The 6800 Ultra is performing much better than it did in 8X mode with older drivers. Let’s have a look at the AA sample patterns to see if we can discern why. Remember, green represents texture sample points, and red represents geometry sample points.

2X 4X 6X/8X
GeForce FX 5950 Ultra

GeForce 6800 Ultra

Radeon 9800 XT

Radeon X800 XT

The X800 doesn’t vary from the path taken by its predecessor, with the same stock sample patterns for 2X, 4X, and 6X modes. The GeForce 6800, though, has a new sample pattern for both 4X and 8X AA. In fact, the 8X mode is now identified in NVIDIA drivers as “8xS” mode, and it uses a different pattern from the one we saw in our initial review of the GeForce 6800 Ultra. The fact that there are only two geometry sample points suggests that NVIDIA’s new driver has enabled a 2X supersampling/4X multisampling mode, replacing the 4X supersampling/2X multisampling method we saw earlier. That explains the performance jump in 8X mode for the GeForce 6800 cards, because multisampling is more efficient.
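
To see why swapping the supersampling and multisampling factors helps, remember that multisampling shades each pixel only once and merely replicates coverage/Z samples, while supersampling shades every sample. Here’s a back-of-the-envelope cost model of my own making—the weights are arbitrary placeholders, not NVIDIA’s numbers—just to compare the two 8X configurations.

```python
def relative_cost(ss_factor, ms_factor, shade_weight=1.0, sample_weight=0.25):
    """Very rough cost model for a combined supersampling/multisampling mode.

    Pixel shading work scales with the supersampling factor only, while
    coverage/Z work scales with the total sample count.
    """
    total_samples = ss_factor * ms_factor
    return ss_factor * shade_weight + total_samples * sample_weight

old_8x  = relative_cost(ss_factor=4, ms_factor=2)   # 4X SS + 2X MS
new_8xs = relative_cost(ss_factor=2, ms_factor=4)   # 2X SS + 4X MS
print(old_8x, new_8xs)   # 6.0 vs. 4.0 -- much less shading work in the new mode
```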

That’s progress for NVIDIA, but ATI’s sparse sampling pattern for 6X AA is not aligned to any kind of grid, making it very effective in practice. Also, ATI’s cards use gamma-correct blending to improve results. When we interviewed Tony Tamasi of NVIDIA recently, he said the GeForce 6800 is capable of gamma-correct blends, as well, but we’ve since confirmed that the feature won’t be enabled until a future driver release.

Temporal antialiasing
Temporal AA varies the AA sample pattern from one frame to the next, taking advantage of display persistence to create the effect of a higher degree of antialiasing. In theory, at least, this trick should allow the performance of 2X AA while giving the perceived image quality of 4X AA. To give you some idea what’s going on here, the table of sample patterns below shows the two temporal patterns used in each mode, then superimposes them on top of one another.

2X 4X 6X
Temporal pattern 1

Temporal pattern 2

Effective pattern

That’s the basic layout of temporal AA. The trick is achieving that effective pattern, and that brings with it some limitations. For one, temporal AA has the potential to introduce noise into a scene at polygon edges by varying what’s being rendered from frame to frame. That’s not good. To mitigate this effect, ATI has instituted a frame rate threshold for temporal AA. If frame rates drop below 60 frames per second, the X800 will revert to a single, fixed sample pattern. Below 60 frames per second or so, depending on the persistence of the display, temporal AA is likely to show up as flickering edges. Above that threshold, though, it can be rather effective.
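
Here’s a minimal sketch of how that fallback behaves as described above. The 60 fps threshold and the frame-to-frame alternation come from ATI’s description; the function, structure, and the sample offsets themselves are my own illustration, not ATI’s driver code.

```python
# Two alternating sample patterns for a 2X temporal AA mode.
# The offsets here are purely illustrative.
PATTERNS_2X = [
    [(0.25, 0.75), (0.75, 0.25)],   # frame-A sample offsets
    [(0.75, 0.75), (0.25, 0.25)],   # frame-B sample offsets
]

def pick_pattern(frame_index, current_fps, patterns, fps_threshold=60.0):
    """Alternate sample patterns from frame to frame, but fall back to a
    single fixed pattern when the frame rate drops below the threshold,
    where the alternation would read as edge flicker rather than extra
    smoothing."""
    if current_fps < fps_threshold:
        return patterns[0]
    return patterns[frame_index % len(patterns)]

# Fast enough: the patterns alternate. Too slow: always pattern 0.
print(pick_pattern(7, 85.0, PATTERNS_2X))
print(pick_pattern(7, 45.0, PATTERNS_2X))
```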

Temporal AA’s other limitations aren’t necessarily everyday drawbacks, but they present problems for the reviewer. For one, there should be little overhead for temporal AA, so an effective 8X temporal AA mode ought to perform about like a normal 4X AA mode. However, in order to work properly, temporal AA requires that vertical refresh sync be enabled. With vsync enabled, testing performance becomes difficult. Basically, performance testing is limited to frame rates above 60 FPS and below a reasonable monitor refresh rate. I haven’t yet devised a proper temporal AA performance test.

Also, capturing screenshots of temporal AA is hard. I think I can illustrate the effect, though. Have a look at the images below. The first two are bits of screenshots from UT2004, magnified to 4X their original size. The “averaged patterns” and “diff” shots were created in an image processing program.

Pattern 1   Pattern 2   Averaged patterns   Diff between patterns 1 and 2

The pattern 1 and 2 images are from ATI’s 6X temporal AA mode, and you can see how the two sample patterns produce different results. When combined, even at 6X AA, the final result looks much smoother, as the averaged patterns image illustrates. To highlight the variance between the two patterns, I’ve also run a “diff” operation between the two sample images, and you can see in the result how the edge pixels are covered differently by the two patterns.
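
The averaged and diff images above came out of an ordinary image editor, but the same comparison is easy to reproduce. Here’s a small sketch using the Python Imaging Library on two hypothetical screenshot files; the filenames are placeholders, not the actual captures used here.

```python
from PIL import Image, ImageChops

# Two screenshots of the same frame captured with different AA sample
# patterns (placeholder filenames).
a = Image.open("temporal_pattern1.png").convert("RGB")
b = Image.open("temporal_pattern2.png").convert("RGB")

# Averaging the two approximates what the eye sees when the patterns
# alternate quickly; the diff highlights which edge pixels change.
averaged = Image.blend(a, b, alpha=0.5)
difference = ImageChops.difference(a, b)

averaged.save("averaged_patterns.png")
difference.save("patterns_diff.png")
```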

Here’s a larger picture showing the difference between two edge patterns. Unfortunately, the sky texture moves around in UT2004, so it shows up here, as well.

So there is good reason to vary the patterns, even with 6X AA.

In practice, temporal AA is nifty, but not perfect. With the right game and the right frame rates, 2X temporal AA looks for all the world like 4X AA. The difference is more subtle with higher AA modes, as one might expect, but temporal AA can be very effective. However, the 60 FPS cutoff seemed a little low on the Trinitron CRT I used for testing. I could see discernible flicker on high-contrast edges from time to time, and I didn’t like it. Also, the 60 fps cutoff causes temporal AA to switch on and off periodically during gameplay. The difference between 2X “regular” and 2X temporal (or 4X effective) AA is large enough that I was distracted by the transition.

That said, if I were using a Radeon X800 in my main gaming rig, I would probably enable temporal AA and use it—especially if I already were using 4X AA or the like. The Radeon X800 cards tend to spend enough time above 60 fps to make it very useful.

Antialiasing quality
We’ll start off with non-AA images, just to establish a baseline.


No AA – Radeon 9800 XT


No AA – Radeon X800 XT


No AA – GeForce 6800 Ultra


No AA – GeForce FX 5950 Ultra

Antialiasing quality


2X AA – Radeon 9800 XT


2X AA – Radeon X800 XT


2X AA – GeForce 6800 Ultra


2X AA – GeForce FX 5950 Ultra

2X AA shows little difference between the cards.

Antialiasing quality


4X AA – Radeon 9800 XT


4X AA – Radeon X800 XT


4X AA – GeForce 6800 Ultra


4X AA – GeForce FX 5950 Ultra

As we’ve noted before, ATI’s gamma-correct blending seems to do a slightly better job eliminating stairstep jaggies on near-horizontal and near-vertical lines.

Antialiasing quality


6X AA – Radeon 9800 XT


6X AA – Radeon X800 XT


8X AA – GeForce 6800 Ultra


6xS AA – GeForce FX 5950 Ultra


8X AA – GeForce FX 5950 Ultra

Here the difference becomes more apparent. Although NVIDIA’s new 8xS mode looks very good, ATI’s 6X mode is more effective than NVIDIA’s 8X mode, thanks to gamma-correct blends. Since ATI’s 6X mode also performs better, that’s quite the advantage. Add in temporal AA, and ATI has a pronounced lead in antialiasing overall right now.

However, I should mention that NVIDIA’s 8xS mode does do more than just edge AA, and there are image quality benefits to supersampling that likely won’t show up in a still screenshot.

Power consumption
With each of the graphics cards installed and running, I used a watt meter to measure how much power our test system, configured as shown in our Testing Methods section, pulled from the wall. The monitor was plugged into a separate power source. I took readings with the system idling at the Windows desktop and with it running a real-time HDR lighting demo at 1024×768 full-screen.

The Radeon X800 Pro actually pulls less juice than the Radeon 9800 XT, believe it or not, and the Radeon X800 XT PE only requires a little more power than the 9800 XT. At idle, the new ATI GPUs consume much less power than the 9800 XT, as well. The GeForce 6800 GT looks reasonably good on the power consumption front, as well, but the GeForce 6800 Ultra Extreme is another story. This card runs hot and loud, its cooler blowing a notch faster than the other cards when it’s just sitting idle.

High dynamic range rendering
Finally, I’ve got to include one shot from the “dribble” demo, just so you can see what the frame rate counter’s showing on the Radeon X800 XT Platinum Edition. You can compare frame rate counters from other cards here.


Radeon X800 XT

ATI has really raised the performance bar dramatically with this chip. Getting three times the performance of a Radeon 9800 XT in this shader-intensive demo is no small thing.

Conclusions
The Radeon X800 series cards perform best in some of our most intensive benchmarks based on newer games or requiring lots of pixel shading power, including Far Cry, Painkiller, UT2004, and 3DMark03’s Mother Nature scene—especially at high resolutions with edge and texture antialiasing enabled. The X800s also have superior edge antialiasing. Their 6X multisampling mode reduces edge jaggies better than NVIDIA’s 8xS mode, and the presence of temporal antialiasing only underscores ATI’s leadership here.

With a single-slot cooler, one power connector, and fairly reasonable power requirements, the Radeon X800 XT Platinum Edition offers all its capability with less inconvenience than NVIDIA’s GeForce 6800 Ultra. What’s more, ATI’s X800 series will be in stores first, with a more mature driver than NVIDIA currently has for the GeForce 6800 line.

The folks at ATI have improved mightily on the R300 design with the R420, successfully delivering the massive performance leap necessary to keep pace with NVIDIA’s new GPUs. The achievement of ATI’s demo team with the Ruby demo is a heckuva reminder that ATI knows what it’s doing with DirectX 9-class graphics, and a very strong argument that the X800’s new, longer shader instruction limits leave room for much higher quality real-time graphics than anything we’ve seen from game developers yet.

However, NVIDIA’s GeForce 6800 cards are no pushovers this time around. The GeForce 6 cards are faster in OpenGL, in many older games, and in Prince of Persia: The Sands of Time. ShaderMark 2.0 is very close, too, proving that NVIDIA’s new pixel shaders are very capable, even with a distinct clock speed deficit. The GeForce 6800 GPUs have some natural advantages, including support for Shader Model 3.0 with longer shader programs, dynamic flow control, and FP16 framebuffer blending and texture filtering. Down the road, these capabilities could prove useful for creating advanced visual effects with the highest possible fidelity.

Right now, though, NVIDIA needs to concentrate on getting some basics right. The NV40 is a novel chip architecture, and its drivers are very much in the beta stages. We’d like to see better results in newer titles like Far Cry, antialiasing blends that account for display gamma, and a consistent means of banishing “brilinear” filtering optimizations. Ideally, NVIDIA would make “brilinear” an option but not the default; the GeForce 6800 series is too good and too fast to need this crutch. It’s possible NVIDIA will have worked out all of these problems by the time GeForce 6800 cards arrive in stores.

At present, ATI appears to be slightly ahead of NVIDIA, but its superiority isn’t etched indelibly in silicon the way it was in the last generation of GPUs. The GeForce 6800 is an extremely capable graphics chip, and we don’t know yet how good it may become. Whatever happens, you can see why I said this generation of GPUs presents us with a choice between better and best. These cards are all killer performers, and having seen Far Cry running on them fluidly, I can actually see the logic in parting with four or five hundred bucks in order to own one.

Comments closed
    • Ryan87
    • 15 years ago

    Darn [pr’s here hate ppl l;ike you ‘n me so im not telling, stuff it] does not have their driver review up yet! oh well, wen it comes up i hear its gonna be freakin awesme! i cant wait i cant wait, waaaaah!

      • Convert
      • 15 years ago

      Dude do you work for them or something? I think you forgot to paste your website url a few more times.

        • Ryan87
        • 15 years ago

        no i doont work for them, i just fell in love with their site, if it had a vagina id fuck it.

          • indeego
          • 15 years ago

          I think id gets plenty of poon on their own <.<

    • Ryan87
    • 15 years ago

    ATI seemed like they were on top until i saw some benchmarks, which nvidia then came on top, its so confusing to chose! but luckily i found this website that has an awesome ATI review and are soon to have an even better ATI Driver review. its at [im not telling you! because the PR’s of this forum hate you and dont want anyone else but them to help you!], it actually helped me a whole lot!

      • Convert
      • 15 years ago

      Uhh… I don’t see a hardware review anywhere on that site for a x800. The only place they talk about the x800 involves a lot of rainbows being shot your ass.

        • Ryan87
        • 15 years ago
          • blitzy
          • 15 years ago

          that website is crap, and that’s not a review it is a cut & paste of a press release

            • Ryan87
            • 15 years ago

            Hey, that site is coming out to be real big and its still real young, its going to become one of those big sites you go to for reviews when cards like these come out someday, you watch!

            • Convert
            • 15 years ago

            Aahahahahaha

          • Convert
          • 15 years ago

          Show me everything? I have never even heard of you before.

          Obviously YOU are the one that needs to be shown what is going on. That isnt a review!

          Rainbows up the ass!

            • Ryan87
            • 15 years ago

            Review, Overview, whats the freakin difference, get a nice idea outa it and read another! you obviously havent read enough if your still posting here wondering which card wins! Sheesh, i was just trying to help those in need. Oh, and by the way ATI wins from the hundreds of pages of reviews i read, want more proof ill send you the 12 page article i wrote myself to show how they win in almost every aspect.

            • Convert
            • 15 years ago

            No thanks I don’t want to waste my time reading a review from someone like you. That is why I come here. Overview and a review are completely different especially in this instance. Help those in need? PR’s don’t help anyone they just show how well a PR team represents their own company.

            There isn’t a clear cut winner in this particular review. IQ is pretty much identical on both cards and both the ultra and ultra extreme have a higher overall frame rates. I need to read more? I have read quite a few other reviews from other sites, problem is TR is the only one I trust. I will wait for them to do another when retail cards hit shelves before I make any final decisions.

            PR’s are a waste of time, maybe some day you will understand that. This also has NOTHING to do with which cards win so DON’T change the subject. Nvidia’s press releases are roses as well.

            You are not representing your own site too well.

            Oh and Rainbows up the ass!

            • Ryan87
            • 15 years ago

            Ugh dude, i dont have my oown site, nor would i ever help one, and i have to admit, techreview’s review was definitely the best i have read so far on the net. Yes, the cards are pretty much equal, but ATI wins with AA and Anisotropic filtering on, meaning if you want your games to look pretty, go ATI. Their compression method is also a lot better, hmmmm, looks like they win. the Geforce is a beautiful card, but its too extravagant, K.I.S.S.

            • Convert
            • 15 years ago

            Haha notice how even now you try to turn it around into a “which card is better” debate.

            No worries, I am not the making an ass out of myself.

            Keep up the work though, you are quite amusing.

            • Ryan87
            • 15 years ago

            Ugh dude, what a chode you are, you dont even counter any of my points at all so i threw that in there to see if you weren’t just an insult bot or something, because your shit was getting real lame, and as for the drivers review its out, so read up, buy your ATI and stick it up your ass so you can get a nice clear picture of who the real shit is. http://www.absoluteinsight.net/index.php?page=6&id=16 Yup same site again, dont know why i should share it with you, but i will anyway since i like to help anyone who wants it, maybe not you, but others may benefit.

            • Convert
            • 15 years ago

            There isn’t any points to counter, thats the point. You lost, move along.

            Nice to see that you have repeatedly proven you are affiliated with that site. Acting like a child and plugging your own site at the same time wont help you get hits.

            It is also Techreport, not techreview you dumb ass.

            • Ryan87
            • 15 years ago

            whats the matter? couldnt come up with anything for my post so you deleted it? Nice use of power i must say, and a nice way to get out of things. Dont think life will be that easy, or you will burn.

            • Convert
            • 15 years ago

            What the heck are you talking about? All of your posts are still there. Use of power? I am just a regular user.

            Like I said there’s nothing to come up with. You don’t have any points.

            Which card is better debate doesn’t have anything to do with this particular conversation. It started over the site, notice how I never said anything about performance until you brought it up.

            The topic was that the site is just a copy/paste of a ATI press release, and that press releases are nothing more than rainbows up the ass, look at Nvidias press release if you are that inept. Not which card performs better you dumb ass.

    • Ryan87
    • 15 years ago

    “www.absoluteinsight.net” has a huge ATI driver article coming tomorrow so you can see what the best is to use if your going with the X800, i found it really helpful! I have it all planned out now, hehehe

    • PerfectCr
    • 15 years ago

    Wow 200 Comments. Have not seen that since we dropped AG’s. BTW, I’m post #200 🙂

    • Kontrol
    • 15 years ago

    So, I have this problem. I have an old video card, Geforce3 TI 500. I was wondering what you folks thing I should get next; once i saw the Geforce 6800 benchmarks, I was in awe. Now that ATi has shadered those dreams, I will probably go for the ATi800pro and OC it.

    System is P4 2.26 512 RAM.

    Thanks
    -K

      • mirkin
      • 15 years ago

      i have the same card in an amd @ 2ghz x 333. i think my rig would leave one of these puppies bored, so ill be upgrading whole rig for this cycle. i almost cant believe i never upgraded my grfx card, 5900XTs had me mighty tempted – but alas, it underwhlemed me. im not playing any new games yet so there was no point.

        • Kontrol
        • 15 years ago

        ahh yeah, i’ve just been looking to the future. my future can wait until doom3/hl2 come out.. or until i need more power (and have the money to upgrade).

        -K

      • indeego
      • 15 years ago

      Your system has, what, a 400 mhz bus? Get a new ‘Thlon 64 or something before getting a new card. Your CPU is a bottleneck <.<

    • Ryan87
    • 15 years ago

    Hey, I just have to know: after learning all you have about these cards, do you believe that the 6800 Ultra will eventually win this battle because of its more advanced technology, or do you believe ATI has its priorities in line and that the speeds of these cards will be obsolete by the time we need this technology? Also, the 6x may look better than the 6800's 8xS, but the 6800 can go up to 16x… is that just overkill? Is the 6x enough?
    Thank you for your time, and I would really appreciate it if you'd write me back.

    • Lordneo
    • 15 years ago

    I use my server (dual XPs, Iwill) mostly for storage, using cheap SATA drives and Promise controllers running on 66MHz PCI slots, which are the third and, ouch, first PCI slots… so that's the only reason I wouldn't want to block a PCI slot.

    But a question, if anyone's got ideas?

    My duals are at 2.4GHz on an Iwill MPX2. Would an X800 Pro be any good, or should I plan to build a new Athlon 64 system?
    Currently running a 9700 Pro on her.

      • Wintermane
      • 15 years ago

      The only time you should get a new card is when your old card can't play the games you love.

      That way you will know what you really need, and it is more likely to be out and working by then AND cheaper to boot.

        • Lordneo
        • 15 years ago

        I do play Far Cry, and it's very nice. Highest settings on most options, runs great, but yeah, not maxed out, and little or no AA and aniso, like 1x or 2x perhaps.

        I think perhaps I've found a game that could use the power of the X800.

        The first time I saw the reflection of the mountains in the water it was sweet,
        just before the "borrow a vehicle" mission.

    • SXO
    • 15 years ago

    I'd say it's a draw. nVidia and Ati have both shown their own innovations: nVidia with support for new technology, an architecture designed to deliver FP32 for shaders, etc., while Ati has shown tremendous ability to improve an existing chip with highly innovative architecture that allows for boosted performance even with advanced textures and things like AA.
    I am going to sit out this round though, because I, as well as many others, will not make the plunge until both Doom III and HL2 are on shelves.
    Ati has the slight edge when it comes to current high-end games, but this lead can be reduced by improved drivers on nVidia's part (hopefully without IQ-reducing optimizations), so the case isn't closed quite yet. nVidia's support of SM 3.0 may seem like an unnecessary feature for quite some time, but as I recall the updated Crytek engine that will be featured in an upcoming revamp of FarCry uses true displacement mapping. Yes, I do realize most of the new features are SM 2.0, but displacement mapping is specifically a 3.0 feature, which is why he (the Crytek dude at the NV40 launch event) clearly stated that what was being shown was 2.0 & 3.0 effects. The R420 uses its shaders for virtual displacement mapping, and it just doesn't support all the new effects we'll be seeing. At the same time, HL2 will be using image enhancing features supported only by Ati.
    So the decision is a tough one, although if I had to make the decision right now, I'd go with Ati simply because of their lower power requirements and their support for HD resolutions. Although I hope an nVidia card-maker decides to go with an external power solution rather than eating up two molex connectors.
    Furthermore, I hope the R500 will be out around the time HL2 ships. With its support for SM 3.0, it might have everything I need (performance, IQ, & feature support) to truly be happy with choosing them.
    “I think the main feature that people are looking at is the 3.0 shader model and I think that

    • Convert
    • 15 years ago

    Well, I was bored, so I decided to add up all the scores. I excluded Aquamark, though (I probably should have kept it, since it really isn't that synthetic, right?) and the synthetic benchmarks. I also added the min and average frames together on Prince of Persia and Painkiller (comments?).

    6800 Ultra Extreme = 6492.31 total FPS
    6800 Ultra = 6378.74 total FPS
    X800 XT Platinum = 5957.3 total FPS

    Other things go into a decision as well; it would probably be best to add up the performance at the resolution you use (for me, my monitor won't let me do anything past 1280×1024 at a decent refresh rate). I dunno, I just thought it was interesting is all. No need to flame, but please correct whatever you see is wrong.

    I am pretty sure I added everything correctly….
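
    (For anyone who wants to redo that tally with a different set of games, a minimal sketch of the arithmetic is below; the per-test numbers in it are placeholders, not figures from the review.)

```python
# Minimal sketch of totalling per-test frame rates per card, as described above.
# The FPS lists are invented placeholders -- substitute the figures from the
# graphs (and benchmarks) you actually care about.
results = {
    "6800 Ultra Extreme": [212.4, 98.7, 61.2],
    "6800 Ultra":         [208.1, 95.3, 59.8],
    "X800 XT Platinum":   [198.6, 101.2, 55.4],
}

totals = {card: sum(fps) for card, fps in results.items()}

# Rank the cards from highest to lowest aggregate FPS.
for card, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{card}: {total:.2f} total FPS")
```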

    • BooTs
    • 15 years ago

    WOW! New Graphs! This is awesome! 😀

    • Crackhead Johny
    • 15 years ago

    I’m only going to worry about the price when Doom3/HL2 are gold and I’m building a system.
    If things go poorly there will be another generation of cards out before this happens.

    • indeego
    • 15 years ago

    Are there any new 2D features or Windows GUI features on these cards/drivers?

      • HiggsBoson
      • 15 years ago

      ?

      Not sure what features you're talking about there, buddy… you mean like more multi-monitor stuff? 2D and GUI stuff is pretty much a solved problem, random jabs at nVidia's 2D output quality aside.

      The only big upcoming change, that I’ve heard of, is Longhorn’s rumored 3D accelerated interface.

    • nogaard777
    • 15 years ago

    Am I the only one who is seriously looking at the 6800 GT as the optimum price/performance winner? I was thinking of an Ultra, but the power requirements were swaying me to the R420. But now that I've seen numbers for the single-molex GT at 4 bills, I'm sold.

      • FunkeeC
      • 15 years ago

      Unless I am remembering wrong, isn't the X800 Pro also at a $399 price point, and doesn't it seem to have similar performance to the 6800 Ultra in the DX9 games (Far Cry)? That one seems like the buy to me. Then again, I read the review much earlier today.

    • Kurlon
    • 15 years ago

    Question to all of you who b1tch and m0an about 2 slot video cards. Just how many PCI slots do you actually use? I’m looking at my nice big primary box, and I’m using two slots, one for a sound card, and my GF4 4200 sitting in the AGP slot. 9 times out of 10 the mobo is wired such that the PCI slot next to the AGP slot shares an IRQ, that’s reason enough for me to never use that slot anyways.

    So, you’ve got a vid card, sound, ethernet, raid of some form, and a tv tuner… what else are you tying your slots up with that you need every last one?

      • Convert
      • 15 years ago

      Well, 99% of people with a tower don't need all their PCI slots, so those people aren't complaining. Also, I believe that nowadays the IRQ issues are resolved with newer mobos.

      The people that are complaining are the ones with an SFF case. A lot of people on TR are SFF people, so that's why it seems like a big deal. At the end of the day, though, SFF people make up an incredibly small portion of the overall user base.

      • Krogoth
      • 15 years ago

      The SFF crowd doesn't like two-slot cooling solutions, as most SFF cases can't fit cards like the 6800U. It looks like the X800 PRO will be the high-end card of choice for this crowd. I actually don't mind two-slot cooling solutions, since right now PCI0 in my main rig is taken up by a slot exhaust fan to help cool my Radeon 9700 PRO.

      • Decelerate
      • 15 years ago

      OH YEAH?
      Well i have 6 molex connectors and 3 PCI slots available in my current rig, does it mean it’s *[

      • adisor19
      • 15 years ago

      OK, here are my PCI cards, so you don't tell me I bitch and moan for no reason!!

      1)SBlive
      2)Promise RAID
      3)Ethernet card (for router)
      4)Ethernet card no.2 (for Cable modem no.2)
      5)Firewire card.

      That makes 5 (count them), and I have 5 PCI slots total. So, YES, I do bitch and moan when a stupid video card takes an extra slot away.

      My mobo is an A7M266 Deluxe. Time to upgrade next year for some dually action..

      Adi

        • Krogoth
        • 15 years ago

        Your motherboard is way too old to really take advantage of the video cards that need two-slot cooling solutions. Nowadays, with everything being integrated into motherboards, there's little reason anymore for motherboard manufacturers to include 5 PCI slots. The only PCI cards I can think of that you might need are a FireWire card, some sound card, and a secondary disk controller. A secondary ethernet card is more of a luxury you only need for routing, firewalls, and gateways. Anyway, most people on these boards, like me, leave PCI0 empty to give some breathing space or place a slot fan for their top-of-the-line video cards. So the whole argument about the PCI0 slot being eaten up for extra cooling is a moot point for most desktop users. Get the X800 if you simply lack the necessary space for a slot cooler.

        • Kurlon
        • 15 years ago

        I can offer that payload in 4 slots…

        Dual NIC, or just hang a switch off it; I highly doubt either of your broadband connections comes close to walling 100baseT FDX. The majority of mobos have onboard ethernet with a better path than stock PCI anyway, plus a setup as old as yours can't use the performance these cards offer, making you an outlier, not the standard.

          • HiggsBoson
          • 15 years ago

          Define better path? If it's not CSA or the new VIA and nV variations on the theme, it's just a device integrated onto the motherboard that's attached to the PCI bus.

    • PLASTIC SURGEON
    • 15 years ago

    “Even if you’re a fan boy/girl of a particular brand, there’s no denying that, for the most part, ATI’s Radeon X800 XT Platinum Edition is the speed leader. While it’s very close throughout most of the race, the X800 XT Platinum Edition just comes out on top more often. It’s important to remember that NVIDIA’s cards offer full SM30 support and FP32. It’s just that games with these features won’t be out for quite some time.

    A more important consideration is one you may not expect. To run both the GeForce 6800 Ultra and Ultra Extreme, NVIDIA recommends a 480W power supply (PSU). While I’m lucky enough to have very high-end stuff and a power supply that powerful, these things are not common. I looked through the May issues of popular computer gaming magazines, and not a single “über extreme ultra high-end maximum” gaming machine came with anything over a 425W PSU. If you’ve got an OEM machine like a standard HP, Dell, Gateway, or other “mass-market” brand, chances are you have a 350W PSU at best, and it may be proprietary, meaning you can’t just replace it on your own. This means that you may be required to update your PSU along with your video card, a task that many haven’t the first clue how to do. In contrast, both the new Radeon X800 cards have a recommended PSU of 300W, as does the GeForce 6800 GT.

    Additionally, both the 6800 Ultra and Ultra Extreme are double-wide cards, and both require two molex connectors. (A molex connector is one of those white power cords like the one plugged into your hard drive and optical drive.) Even then, both cards require two separate feeds; a y-connector that splits one molex in to two will not suffice. Again, in contrast, both new Radeons and the GeForce 6800 GT have one molex connector and are single space cards.

    For many (especially those with small form factor cases that are all the rage), such power and space considerations make their next video card purchase a no-brainer.”
    -GameSpy-

    That huge link tells it all….

      • Convert
      • 15 years ago

      1. You don't need to run it on separate rails.
      2. You don't need a 480W PSU.
      3. Expect someone like Leadtek to make a single-slot cooler design.

        • derFunkenstein
        • 15 years ago

        I thought that nVidia was recommending separate rails

          • Convert
          • 15 years ago

          People recommend a lot of things.

            • derFunkenstein
            • 15 years ago

            Yeah, and when a manufacturer recommends things, you’d best do it or void your warranty.

          • Krogoth
          • 15 years ago

          They're playing it safe for some hardcore gamers who, unfortunately, have sub-standard PSUs. I remember that when the Radeon 9700 PRO was released, there were annoying AGP 8x problems with some motherboards and power issues for people with overloaded PSUs.

            • DaveJB
            • 15 years ago

            I thought the AGP 8x issue only affected SiS boards?

            • rika13
            • 15 years ago

            It affected VIA boards too, specifically on pre-A3 R300 silicon.

      • MorgZ
      • 15 years ago

      Wait for newer nVidia drivers fs!

      • pedro_roach
      • 15 years ago

      For all you Dell users wanting to get a new power supply, check out this article. It might save you a computer.

      http://63.240.93.134/articles/upgrade3_01_01.asp

    • Dposcorp
    • 15 years ago

    This is what I get for being sick today………late read, but excellent review.
    Now I am just waiting for the AIW, DUAL DVI, DUAL HDTV Tuner version.

    • daniel4
    • 15 years ago

    Let quake3 die in peace people. Let it die in peace :P.

      • Krogoth
      • 15 years ago

      Doom 3 is just around the corner. RIP Quake 3 once Doom 3 is released.

      • BabelHuber
      • 15 years ago

      Quake 3 still looks amazing. With high resolutions, 4xAA, and 8xAF it doesn't look like an almost five-year-old game.

      Besides, Q3A still delivers the best gaming experience if you like a straightforward approach.

        • Krogoth
        • 15 years ago

        The textures and lighting effects still look nice; the only part of the game that shows its age is the models themselves.

    • Sargent Duck
    • 15 years ago

    Of interesting note, Driverheaven overclocked their X800 XT in their review. They managed to get a core clock of 555MHz compared to the original core clock of 520MHz. The RAM they overclocked to 600/1200MHz from 560/1120MHz. Being a high-end chip, those seem to be somewhat decent overclocks. This could, however, filter down to the Pro version, which, even though it only has 12 pipes, could be a very nice overclocker.

      • Illissius
      • 15 years ago

      I specifically read the overclocking part in any review that had one :). X800XT PEs overclocked to 530-40 MHz like 3-4 times (and coming from 520, I wouldn’t exactly call that great, a step up from horrible at best), and to 560MHz once. I only remember seeing the X800 Pro overclocked once, where it went to 525 (from 475), which is pretty good.
      On the nVidia side of things, 6800 Ultras usually went to around 430, and once to 465 – this is better than what the X800 XT produced, but not by much. The 6800 GT I again only saw overclocked once, to 425MHz, from 350MHz, which is what I would call excellent. Keep in mind that the 6800 GT is 16 pipes to the X800 Pro’s 12, so its overclocks are worth 33% more per MHz.
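
      (One crude way to see why a 6800 GT overclock "counts for more" than an X800 Pro overclock is to fold the pipe count into the clock. The back-of-the-envelope sketch below ignores memory bandwidth, shader efficiency, and everything else, and just uses the clocks quoted above.)

```python
# Back-of-the-envelope "pipes x clock" comparison, ignoring memory bandwidth,
# shader efficiency, drivers, etc. Stock and overclocked core speeds are the
# ones quoted in the post above.
cards = {
    # name: (pixel pipes, stock MHz, overclocked MHz)
    "X800 Pro": (12, 475, 525),
    "6800 GT":  (16, 350, 425),
}

for name, (pipes, stock_mhz, oc_mhz) in cards.items():
    base = pipes * stock_mhz      # crude theoretical pixel throughput
    boosted = pipes * oc_mhz
    gain_pct = (boosted / base - 1) * 100
    print(f"{name}: {base} -> {boosted} pipe-MHz ({gain_pct:.1f}% from the overclock)")
```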

    • Ryu Connor
    • 15 years ago

    "I sometimes think we make sh*t up just to argue about stuff." Heh. You mean like some of the crap in this thread?

    • lethal
    • 15 years ago

    Who will pay $550+ for a video card? Only those freaks that will happily shell out $6000 for an Alienware PC. By the time DX 9.0c is out, these guys will probably have a GeForce 8800 or an XII800XT.

    On a side note, where's Overdrive? Will this card have the monitoring chip? That could prove quite useful for mid-range overclockers…

    • Nossirahdrof
    • 15 years ago

    Did anybody see the [H]ard|OCP review? Their method seems to conclude even more in favor of the X800s than the 6800s. I kinda like the way they go about the benchmarks too, since it more closely matches the way you would use your video card at home – the highest settings that are playable. It puts the old games that get 100+ FPS in a better perspective.

      • mirkin
      • 15 years ago

      For single-player games, yes. Multiplayer gamers tend to want minimum frame rates as high as possible.

        • Krogoth
        • 15 years ago

        FPS means very little in real-time multiplayer; your connection and latency matter much more.

    • Sargent Duck
    • 15 years ago

    They are both very impressive cards, but the real WOW factor for me was how well engineered the ATI card is. Yes, I know it's only two R360s put together, but the fact is that it only requires one power connector (with a 300W power supply, I believe?), and it only takes one slot. Very impressive from an engineer's point of view.

    Also, in my opinion (*READ* MY, I've been flamed for this before), anything over 100 frames is overkill. Fine, so the NV40 gets 200 frames in a Quake 3 game while the R420 only gets 150. Big deal. Both cards get over 100, and I'm happy. What I'm concerned about is new games. And the ATI card seems to excel in this area.

    Now if only I could scrounge up some $$$

    • mercid
    • 15 years ago

    As a 9800 Pro owner, these cards are giving me the upgrade itch, but I won't be fooled this go-around. When HL2 and Doom 3 FINALLY come out, I will buy an updated version of one of these cards…

      • Krogoth
      • 15 years ago

      Better get a faster system while you're at it, since if you have something from the Barton/Northwood generation, your system will bottleneck the potential of your new video card.

        • HiggsBoson
        • 15 years ago

        Heee…. agree to both 🙂

        • hmmm
        • 15 years ago

        Well, I disagree that a fast Northwood would be any more bottlenecked than a Prescott.

          • Krogoth
          • 15 years ago

          By the time Doom 3 and Half-Life 2 are released, I'm pretty sure Intel will have something compelling over the older Northwoods.

          • HiggsBoson
          • 15 years ago

          yeah… current Prescotts probably not 🙂

    • Doug
    • 15 years ago

    At the end of the day, when all that should matter is the actual gameplay (if that is why one plays a game), either card would do the job.

    nVidia has a more versatile card (OpenGL, DirectX 9.0c, Linux support) while ATI has made an excellent card that will run the apps that 99% of the user base runs.

    Which am I going to buy this year…? More than likely nVidia, as it is the devil I know. But then I never overclock or check FPS… I just have fun playing games… what would I know!

      • mirkin
      • 15 years ago

      I only get time to admire the textures when my gibbed body lands on one and I'm waiting to respawn :)

    • Convert
    • 15 years ago

    In the AA tests with the planes, you say the ATI card has the upper hand in 6x mode compared to nVidia's 8x mode. I don't see this. The main plane in the picture looks better on nVidia hardware, while the small plane in the left-hand corner looks better on ATI hardware. Take a look at the main plane's back tail (yeah, yeah, I don't know what the technical name is, lol). It clearly looks better. The edges of the main wings also look better. What am I missing?

    Also, the nVidia hardware wins in almost all of the game tests. The ATI card wins one, at 12x10 and 16x12 in UT2k4, cleans house with Far Cry, then takes a win in Painkiller. Did I miss something again? I don't mean to nitpick or anything; I am just wondering since I will be buying one of these cards. I guess it's pointless at this time anyway, since drivers will mature from both camps.

    All I have to say is WOW. The next-gen cards can freakin' triple performance! I need a towel. Now it is time to go back through the comparison to see which low-end card won.

    **Edit: OK don

    • DukenukemX
    • 15 years ago

    Personally it’s a hard choice for me between ATI X800 and Nvidia 6800.

    l[

      • Krogoth
      • 15 years ago

      Haven't you even read the 6800 article here at TR? The IQ of the 6800 is much better than the 5950U's, at par with the 9800XT and X800. The only real drawbacks to the 6800 are its higher power requirements, larger HSF, and somewhat inferior PS/VS performance compared to the X800. The so-called recent "cheats" that some ATI fanboys are accusing the 6800 of are just different methods by which the 9800XT and 6800U render a scene.

      • indeego
      • 15 years ago

      q[

        • Division 3
        • 15 years ago

        Have to agree with Indeego.

        Both cards have taken up the challenge. And well, it is a fight at high speeds.

        At this point, if I was to get either of these cards, the bottleneck would be my CPU.

      • HiggsBoson
      • 15 years ago

      r{

        • DukenukemX
        • 15 years ago

        Basically, what we have is another tie between ATI and Nvidia. Which is bad, because it doesn't help bring down GPU sales.

        I tried my best to show the difference between the two cards.

        Seems people who have mini PCs could benefit a lot from an X800.

        People who want hardcore speed would want a 6800.

        Of course, AA and aniso have a lot to do with this as well. But not everyone uses them.

          • HiggsBoson
          • 15 years ago

          Yeah it wasn’t a bad summary, I was just adding some comments/criticism to your observations. Sorry if it came off as an attack on you personally.

      • Ryan87
      • 15 years ago

      According to the pros and cons, I would go with ATI. I say they just win overall, but out of 100 points it's something like 6800: 48 points, X800: 52.

    • rika13
    • 15 years ago

    Both are good. ATI has less power suckage; nVidia has SM3.0 and FP32. Both have good drivers and interesting AA options (imagine nVidia playing with some variant of temporal AA that uses various supersampling/MSAA mixes). 2004 will be a great year for us users.

      • rxc6
      • 15 years ago

      q[

    • meanfriend
    • 15 years ago

    What’s the current state of ATI’s 3d acceleration support under Linux?

    I’m also nearing the time to upgrade my GF4 Ti4200. ATI’s latest offerings are looking more appealing than nvidia’s (we’ll see when the mid-range cards are released) but multiplatform support for me is a must.

    Last time I looked into it, 3D Linux support from ATI was still spotty. No Linux drivers = no sale. FWIW, the nvidia linux drivers have worked flawlessly with Enemy Territory, UT2003, and whatever else I’ve thrown at it.

      • derFunkenstein
      • 15 years ago

      Then don't buy ATi yet. They're still having some Linux issues, if my experience is anything to go by.

        • hmmm
        • 15 years ago

        Sad but true.

    • Fr0st
    • 15 years ago

    I don’t care which card is better. What I do know is that Ruby can kick Dusk’s ass, and probably her fairy sister’s too.

      • GodsMadClown
      • 15 years ago

      Let me know when a hack is available to make Ruby naked…

    • Decelerate
    • 15 years ago

    Yeah I’m definitely leaning towards an ATI solution for my next PCs

    Not only is it edging NVidia out, but with less power consumption nonetheless! It’s /[

      • HiggsBoson
      • 15 years ago

      Actually, no… clock for clock, the NV40 seems to lead the R420; at least that's the trend in Damage's opinion, and I'm inclined to agree. Remember, at 520MHz or whatever vs. 400/450MHz there should be a fairly significant performance delta: 30% and 21% respectively (in theory). It looks like the nVidia part is more parallelized and the ATI part is better balanced/tweaked (for higher clock rates). That doesn't seem inconsistent with a new part (NV40) and a tweaked/optimized older part (R420).

        • emkubed
        • 15 years ago

        Did I miss something? I thought the R420 WAS a new part, not a tweaked R3XX.

          • HiggsBoson
          • 15 years ago

          Actually, there's been a lot of speculation that the R420 is based on multiple RV360 (yes, 9600/Pro/XT) cores that have been tweaked and stitched together; based on the review here, that sounds even more plausible now, since they seem to have it arranged as four-pipeline bundles.

          Apparently that was also one of the reasons why they aren’t supporting SM3.0 in this generation. The overall design philosophy is that the market doesn’t really “need” it at the moment, and at this current level of hardware it doesn’t really work anyway (for ATI). I remember reading somewhere that R500 is ATI’s version of SM3.0 “done right.” However it also happens that it conveniently allows them to forgo all of the engineering/design work that would’ve gone into adding such features.

          Obviously these are all a kind of speculation, which is at best hazy, and all from memory, which is at best unreliable, so make of it what you will. But I do recall seeing a lot of this kind of guessing/analysis weeks/months back.

            • getbornagain
            • 15 years ago

            Well, I remember reading something to the effect that ATI had some info on the next-gen stuff from nvidia… or otherwise thought it was cheaper, good enough, etc., and decided to put off the R400… and turned it into the R500, so they basically decided to tweak the R3XX, aka the R420.

        • Decelerate
        • 15 years ago

        Hence the /[

          • HiggsBoson
          • 15 years ago

          Well… yeah, that is unusual… it appears that the ATI part is slower clock for clock than the nVidia part, but at the same time, looking at the power draw figures, it's clear that the ATI part draws less power. Which is consistent with a single molex connector.

          What is really interesting though is that the ATI part draws less power at higher clock rates… generally for MPUs power draw scales directly with clock rate so that’s pretty impressive /[
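
          (The "scales with clock rate" intuition is the usual dynamic-power relation, roughly P ≈ C·V²·f, so a chip that clocks higher yet draws less is presumably winning on switching capacitance -- transistor count/process -- and/or voltage. Here's a toy sketch with made-up numbers, not measured values for either GPU:)

```python
# Toy illustration of dynamic power scaling: P ~ C * V^2 * f.
# All values below are invented for illustration; they are NOT measurements
# or specs for the NV40 or R420.
def relative_dynamic_power(switch_cap, volts, freq_mhz):
    """Relative dynamic power; only the ratio between results is meaningful."""
    return switch_cap * volts ** 2 * freq_mhz

big_chip  = relative_dynamic_power(switch_cap=1.3, volts=1.4, freq_mhz=400)
lean_chip = relative_dynamic_power(switch_cap=1.0, volts=1.3, freq_mhz=520)

print(f"bigger chip @ 400MHz : {big_chip:.0f} (relative units)")
print(f"leaner chip @ 520MHz : {lean_chip:.0f} (relative units)")
# The leaner, lower-voltage chip can come out ahead despite the higher clock.
```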

            • droopy1592
            • 15 years ago

            More transistors will get ya higher power consumption…

            #99, whatever, man. Nvidia drivers have soooo many bugs. The original Far Cry without the patch was a slower performer, almost 9800XT frame rates; then they introduced the 1.1 patch and suddenly performance increased, but that's when all of the bugs and artifacts showed up. Nvidia is getting smarter, but it STILL has to use game-specific "optimizations", which should read "game-specific cheats", because the output is still different from the reference rasterizer, and ATi seems to be equal to the reference. Just because Microsoft says "it's a method of filtering bla bla bla" doesn't mean it's not a cheat. Funny how image quality is OK, and then when you install patches or force the games to think they're running on an ATi card, the Nvidia card goes slower but renders correctly, but when an ATi card is forced to run with an Nvidia device ID it renders faster with bugs and artifacts.

            Sounds like they are trying their damnedest to cheat.

            http://www.driverheaven.net/showthread.php?s=&threadid=44253

    • Ricardo Dawkins
    • 15 years ago

    OT… but does anyone know how much the Sasser worm is affecting the web?

    • Krogoth
    • 15 years ago

    Hate to burst your bubble, but Doom 3 and HL2 (if it ever comes out this year) will really push the envelope. I don't think even the 6800U can handle these games at 1600×1200 at max details with a playable framerate. 1280×1024 is a more realistic max resolution for this generation of cards and CPUs.

    Sorry, it’s meant to be a reply to #43 🙁

      • Decelerate
      • 15 years ago

      I somehow doubt that. All the “/[

        • Rousterfar
        • 15 years ago

        My GF4 Ti4200 still does a good job on most games.

        • derFunkenstein
        • 15 years ago

        I could make a case for the original Unreal, but even then, 40fps was pretty fast…so for the time, it still ran well on high-end Voodoo2’s…so I just talked myself in a circle…you win, can’t think of a well-written game that was too much for high-end hardware.

        • mirkin
        • 15 years ago

        Quake 3 –

        I think we will need to look more at a game engine's coding before we say one GPU architecture is superior to another when D3 and HL2 come out. There is an excellent chance that today's hardware will do better with D3 than it did with Q3 back in the day, but it may also come down to decisions the devs make about how to implement effects in their game. The result may be that it runs doggy on even the fastest rig. Has anyone noticed how many games get patched within a month of going retail?

          • Decelerate
          • 15 years ago

          No way, my roomies were all running Q3 on their high-powered machines like a hot knife through butter. At max settings.

          I know 'cause my poor old comp only ran it with big-ass polygons, and I was drooling at their graphics.

            • mirkin
            • 15 years ago

            I meant Q3 when it came out. Q3 went retail just before the first non-DDR GeForce came out. You couldn't even get one on pre-order, they were selling so fast. CPUs had yet to hit 1GHz, and the BX boards still ruled the roost.

    • ciparis
    • 15 years ago

    Loved the teapot / sexpot observation 🙂 great way to start.

    • muyuubyou
    • 15 years ago

    Quake3 at 1600x1200x32 200fps+

    sign me up!

    • David
    • 15 years ago

    Great review Damage.

    • PerfectCr
    • 15 years ago

    Where is the “Print Article” Button? 😛

    • Rousterfar
    • 15 years ago

    I do plan on upgrading my GF4 Ti4200 this generation. I'm just waiting for the mid-range cards to hit the street. Anyone have an ETA on when we may see them?

    • Randy2
    • 15 years ago

    Great review, but I'm confused by the conclusion. It looks like Nvidia outperforms ATI in about every real-world test, but the conclusion reads as if ATI still wears the crown?

    PS: I sure hope Nvidia gets their 6800EE out soon, so they can release Half-Life 2.

      • daniel4
      • 15 years ago

      Some tend to give more popular and recent games more weight. With that in mind, ATI's cards outperform nVidia's by a noticeable margin (with the exception of that Prince of Persia game). I'm guessing you care more about performance in older OGL games, and for that I guess the NV40 wipes the floor with the R420.

      • droopy1592
      • 15 years ago

      Most of the tests won by Nvidia are won by the Ultra Extreme card, which isn't the $499 Ultra that competes directly with the X800 XT PE card. It's an overclocked card.

      Just about every other review uses the Ultra card only, not the Ultra Extreme, because it's overclocked and priced higher.

        • thecoldanddarkone
        • 15 years ago

        You do realize that is the X800 Platinum Edition, right? And the regular edition is $499…

          • droopy1592
          • 15 years ago

          I guess you didn’t read the article:

          y[

      • HiggsBoson
      • 15 years ago

      Quick question: what does nV’s 6800 UE have to do with Valve’s HL2 release date?

    • RyanVM
    • 15 years ago

    I was also disappointed by the lack of Dual DVI connectors in the reference design, but I suppose AIB vendors can just put two on their boards themselves.

    • droopy1592
    • 15 years ago

    Most aren’t realizing that in price, the Nvidia 6800 Ultra competes with the X800XT Platinum. It’s not like we can buy the 6800 Ultra Extreme card any time soon.

    Damage said y[

      • Randy2
      • 15 years ago

      I agree. The products shouldn’t be compared if they don’t exist.

      • Disco
      • 15 years ago

      Over the past few years (even back in the R8500 days), it seems that ATI always manages to beat nVidia in the $$/performance ratio. I was constantly blown away by the prices of the FX cards, which were often the same as a Radeon All-in-Wonder with better performance. It made me wonder, "who the heck keeps buying these nVidia cards?"
      I'm thinking that I'll be first (maybe second) in line to buy one of the X800 Pro cards once Doom 3/Half-Life 2 are on the market. Thank God I didn't buy the 9600XT when they were advertising the free HL2 bonus. I'm sure that ATI is still pretty bitter about the farce that Valve pulled. It really made them look foolish.

    • AOEU
    • 15 years ago

    But does it support blending with floating point framebuffer formats?

      • Damage
      • 15 years ago

      Nope. Gotta use the pixel shaders to do the blend.
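
      (In practice that means ping-ponging between two FP render targets: bind the current contents as a texture, sample it in the pixel shader, apply the blend equation yourself, and write the result to the other target. Here's a rough sketch of the "over" math involved, in plain Python standing in for shader code, with made-up pixel values:)

```python
# Conceptual sketch of emulating fixed-function alpha blending in a pixel
# shader when the hardware can't blend into FP framebuffers. Plain Python
# stands in for shader code; the pixel values are made up.

def shader_over_blend(src_rgb, src_alpha, dst_rgb):
    """Classic 'over' blend, done manually: src*a + dst*(1-a), per channel."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

# A half-transparent red fragment over a grey value already in the FP target.
result = shader_over_blend(src_rgb=(1.0, 0.0, 0.0), src_alpha=0.5,
                           dst_rgb=(0.25, 0.25, 0.25))
print(result)  # (0.625, 0.125, 0.125), then written to the other FP target
```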

    • droopy1592
    • 15 years ago

    all the review sites are taking a beating today.

    • Xplosive
    • 15 years ago

    As Joe Pesci would say in the movie “Casino”, ATI got _[

      • droopy1592
      • 15 years ago

      How do you figure? If anything, it's neck and neck. Did you read the conclusion?

        • daniel4
        • 15 years ago

        The question should be: did he even read the review at all? :P

          • Randy2
          • 15 years ago

          I did, and it looks like ATI got runned over by Nvidia.

            • droopy1592
            • 15 years ago

            Just about everyone’s conclusion (including Tomshardware.com) labels the ATi card as the winner by a slight margin. How you came to your conclusion may require a very low level of thought.

            • ExpansionSSS
            • 15 years ago

            Yeah, "runned" over by getting whooped in Quake 3-based games.

            DAMN! I wish I could get over 275fps in Q3! 240 just isn't enough!

            • hmmm
            • 15 years ago

            Earth to randy, which review did you read? At worst for ATI, they tie NVIDIA, but virtually everyone (including Tom for goodness sake!) is concluding ATI narrowly wins. So, how did they get “runned” over? Really, we’d like to understand your unique thought process.

            • Randy2
            • 15 years ago

            THG is nothing but paid BS reviews. They used to be straight shooters, but that was a while ago.

            • David
            • 15 years ago

            That's the point. They're saying THG historically has an Nvidia bias, and ATi still wins in their review.

            Regardless, they’re both great cards.

            • hmmm
            • 15 years ago

            Yes, that’s my point exactly.

      • 5150
      • 15 years ago

      "ATI cracked a baseball bat over nVidia's skull," would be a more appropriate answer from Mr. Pesci.

      • Chrispy_
      • 15 years ago

      I believe the correct phrase that applies to this particular post would be "troll."

      If you want to play Quake 3 so badly, save yourself $489 and buy a GeForce2 off eBay for ten bucks. My flatmate's GTS gets 125fps at an ample 1024×768 on his TFT panel, which is what nVidia obviously want you to use with their dual DVI outputs. Let us forget for one moment that no TFT panel in existence can display more than 60fps (16ms panels – best-case scenario is 62.5fps).

      Quake 3 has so few features and such low-res textures that you'd be a frikkin' imbecile to base modern-day card performance on an irrelevant engine that is heavily optimised for nVidia and Intel hardware in the first place. Nvidia's dominance in the day meant that Carmack was forced to use nVidia's rules rather than the OpenGL spec. Those with ATI cards will know that there are compatibility options in the drivers that allow ATI cards to misalign textures and line up vertices differently such that the game looks like it was supposed to. Read into this what you will; ATI were accused of cheating for similar reasons with optimised 8500 drivers for Q3, so neither manufacturer has a clean record…

    • Division 3
    • 15 years ago

    I'm confused. I just came from THG… hell must have frozen over or something, because he actually said the X800 PE was the winner. o_0

    Other than that… good article. I think I will be picking up an X800 Pro; that seems to be the sweet spot for me.

      • droopy1592
      • 15 years ago

      That’s some odd shit, ain’t it? Toms claiming the ATi card the winner? Nvidia’s pockets must be hurting because of the last wave of payoffs with the FX series.

        • rxc6
        • 15 years ago

        Well… while odd, it is not the first time. I think he had to say that the 9700pro was the king back in the day 😆

          • ExpansionSSS
          • 15 years ago

          Well, comparing a 9700 Pro against NOTHING for 6… no, 8 months can do that ^_^

          My 9700 Pro is still rockin'.

    • Zenith
    • 15 years ago

    I refuse to read any more TR articles until you change the graph styles to the way they were before.

      • PerfectCr
      • 15 years ago

      Yeah I did not like the new styles. MUCH harder to read in this article. I liked the old style 100% better. I was about to post this.

      • Ricardo Dawkins
      • 15 years ago

      kiddie…..the graph rulz..if you don’t like it..go to another site !!!

        • PerfectCr
        • 15 years ago

        I guess I am not entitled to give my opinion? Go back and read other graphics card articles and look at the ones used here. Be honest, which one is easier to read?

        I am simply trying to help TR out. I am sure Damage appreciates the feedback and it is not meant as a personal attack on him.

          • droopy1592
          • 15 years ago

          Ricardo was joking, man. Chill out.

    • RyanVM
    • 15 years ago

    Nice article overall. You’ve gotta love the immediate ship dates! What a nice change from the *ahem* competition. I wonder, though, why the power consumption numbers given in the article are so much higher than the quoted numbers from ATI (in the [H] article). Any idea?

      • HiggsBoson
      • 15 years ago

      I'm afraid you're just misreading them. TR is giving actual power draw numbers for the entire test system, from a watt meter at the outlet. The numbers from [H] are figures given by ATI for the card alone, based on their internal testing.

      If TR wanted to get those kinds of numbers, they'd have to measure current draw across the molex connector on both the 12V and 5V rails, or something like that (not sure), and that still wouldn't account for draw through the AGP slot.
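
      (If someone did want card-only numbers, the arithmetic is just volts times amps summed over every rail feeding the card: the molex 12V and 5V lines plus whatever comes through the AGP slot. A hedged sketch with invented currents, not anything TR, [H], or ATI actually measured:)

```python
# Sketch of estimating card-only power draw from per-rail current readings.
# The amp figures are invented placeholders, not measurements from any review.
rails = {
    # rail: (volts, measured amps)
    "molex 12V": (12.0, 2.1),
    "molex 5V":  (5.0, 1.8),
    "AGP 3.3V":  (3.3, 4.0),   # draw through the slot itself
    "AGP 12V":   (12.0, 0.5),
}

card_watts = sum(volts * amps for volts, amps in rails.values())
print(f"Estimated card-only draw: {card_watts:.1f} W")
```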

        • RyanVM
        • 15 years ago

        You’re right, I misread it in that case. I thought the latter was what they were doing (measuring wattage over a molex connector). In that case, things look great! (Even though I couldn’t care less about an SFF system)

    • just brew it!
    • 15 years ago

    Nice to see that graphics cards will remain a 2-horse race for a while yet. Competition is good.

    • HiggsBoson
    • 15 years ago

    q[

    • PerfectCr
    • 15 years ago

    I do not know what to base my next purchasing decision on. I have two questions that will help determine it though.

    1) Will it fit in my Shuttle XPC SN85G4?
    2) How will it perform with Doom 3 and (possibly if it ever comes out) HL2?

    I have always liked both companies and have owned both a Radeon 9700 Pro and a FX 5900. Currently using a 5900XT in my Shuttle box.

    I just do not understand why Nvidia cannot recapture the performance lead. Perhaps the failure of NV30 was/is too much to overcome. ATI, it seems, is not going anywhere.

      • Division 3
      • 15 years ago

      I think Nvidia has always been the top performer; it's just that ATI kept tripping itself up. But now that they've stopped stumbling over their own mistakes… ATI is able to move at the same pace NV has.

      • HiggsBoson
      • 15 years ago

      Well, based on the review, I think you can say that the X800 XT looks like a good bet for the Shuttle treatment; if not, the X800 Pro and 6800 GT aren't too shabby either. The question mark will be the 6800 Ultra/Extreme, but they could be plausible too.

        • PerfectCr
        • 15 years ago

        Yeah, you may be right. Especially since ATI demo'd it in an XPC 🙂

      • droopy1592
      • 15 years ago

      Damage said :

      y[

        • rxc6
        • 15 years ago

        q[

          • hmmm
          • 15 years ago

          So we should cut them slack on buggy drivers because they previewed their card 2 months before you can buy one?

            • rxc6
            • 15 years ago

            Well, what we should do is wait until a REAL consumer product is out and see if they have improved or not.

            • hmmm
            • 15 years ago

            Of course we should test the shipping version when it comes out; of course we’ll look at whether it has been optimized since the preview. There’s no disagreement there.

            /[

            • Disco
            • 15 years ago

            I just find it very convenient for nVidia to provide "beta" drivers for the preview of the cards which happen to give a little boost to their performance. Then Damage has to make a comment that readers should remember that the 6800's performance is actually a little less than is shown in the graphs. At the end of the review, the reader is left with a first impression of performance that is false and artificially high. How often are reviews completely redone once the proper 'final' drivers are available? Not very often, and definitely not to the widespread extent that we are observing today…

            As I said… how convenient for nVidia.

      • Buub
      • 15 years ago

      y[

        • HiggsBoson
        • 15 years ago

        Agree. ATI's finally firing on all cylinders… they've always had pretty good hardware and nice multimedia features, but now they've got their whole act together.

      • indeego
      • 15 years ago

      You should not purchase a single part of that new system until the games are out and they are patched once or twice. Even then, I'd just wait another round. You've waited this long; what is another 3-6 months for cards that will obviously be optimized for the experience?

        • PerfectCr
        • 15 years ago

        Good point. Right now my 5900XT plays all my games well, so I’m ok for now. Far Cry obviously has to be played with lower settings, but not too bad.

          • indeego
          • 15 years ago

          Far Cry is the only game in a loooong time that has me actually wanting a newer system. It's so gorgeous now, I can only imagine it with a top dog at 1600x1200.

            • PerfectCr
            • 15 years ago

            I agree. I only play with 2X AA and no AF, and it still looks great at 1280x768 (widescreen monitor). First game I've seen where AA and AF are not essential to make it look great.

      • Ryan87
      • 15 years ago

      “www.absoluteinsight.net” has a huge ATI driver article coming tomorrow, so you can see what the best one is to use if you're going with the X800. I found it really helpful! I have it all planned out now, hehehe.

    • derFunkenstein
    • 15 years ago

    Oh boy. That X800Pro totally looks worth the cash now…:D

    • MorgZ
    • 15 years ago

    Top review, Damage.

    It will be interesting to see how the nVidia drivers improve performance, because at the moment the X800 is probably the better buy. Think I'm gonna wait at least a month before deciding and then will buy the best card for £200-300.

    The wait has been easily worth it. GG both companies, and GG Tech Report for the quality reviews.

    • 5150
    • 15 years ago

    If one of nVidia's card makers decides to make an external power supply, they might be able to get rid of a big problem the NV30 has.

    • Krogoth
    • 15 years ago

    Actually, the X800 PRO and 6800U are pretty much neck and neck right now. As I expected, the X800 is better at PS/VS, while the 6800U still pulls ahead in older and OpenGL apps. I'm more interested in the 6800GT; it looks to be a pretty solid card. Too bad I won't be purchasing anything from this generation; I'll wait till mid-2005 to build a new system. My Radeon 9700 PRO will still hold up until then.

    • ExpansionSSS
    • 15 years ago

    well well well, this review bothers me… because I STILL have to wait for Doom3 / Half Life2 to come out to make a decision….

    Far Cry is just not a reliable enough decision maker…

    but hell, I’ll prolly go for one of those shhhweeet X800 XT Platinums

    • SpotTheCat
    • 15 years ago

    This is interesting, as many buyers will base their next purchase not on DX9 but on OpenGL… so ATI may not have the better buy this time around…

    though keeping it to one molex is a big plus…

      • HiggsBoson
      • 15 years ago

      You mean because of Doom3?

        • thecoldanddarkone
        • 15 years ago

        I would guess yes, and nvidia seems to do a very good job in OGL. This time it seems to come down to what you want more, and which features/benefits you get from the different types of hardware (much better than the FX vs. 9xxx series).

          • HiggsBoson
          • 15 years ago

          Yep, nVidia has always dominated in OGL and Linux support. ATi has no credibility in those arenas.

            • RyanVM
            • 15 years ago

            Yes, I'm finding it scary how badly they're getting beaten in OGL performance. It doesn't bode too well for the Doom 3 engine-based games of the future (come on, D3 itself probably isn't going to be that great).

            • HiggsBoson
            • 15 years ago

            Well IIRC the original R300 (Radeon 9700 Pro) was no slouch with the game, so I highly doubt it’ll be unplayable.

            The problem, if you want to look at it that way, is that generally I think Quadro’s have been better regarded by the pro community than ATI’s Fire line, but I don’t know how that is lately.

            • RyanVM
            • 15 years ago

            I doubt that too, but I have a Dell 2001FP (native 16×12 LCD), so being able to push a high resolution is critical to me. And of course, I want eye candy too :-P.

            • Disco
            • 15 years ago

            At the time it was first officially promoted at E3 a couple of years ago, Doom 3 was demo'd on the 'secret' R300, and the performance was more than adequate. Carmack even stated a couple of times that it was easier to program for the ATI cards because they adhered better to the DX standards. I'm sure that both nVidia and ATI cards will run D3 more than adequately – I don't think that D3 is even a DX9 game, is it? It's a mix of DX7 and 8. Someone correct me if I'm wrong.

            • hmmm
            • 15 years ago

            ‘Tis DX8. I’m sure you’ll be able to run it just fine on either card.

            • PLASTIC SURGEON
            • 15 years ago

            The vast majority of games are not OGL. And it seems the differences in FPS between the old OGL games are not worth mentioning, as both cards are pushing those ancient games to 180+fps. Plain and simple: a 1-slot solution vs. 2. Extra power requirements vs. none. The R420 seems to be the better bet. And considering the Pro version is almost on par with the 6800 Ultra, at $100 less and available soon, it's a no-brainer which chip to buy.

            • Doug
            • 15 years ago

            now .. if you have to put it *[

    • mirkin
    • 15 years ago

    Yes,

    I mostly play OpenGL games and will stick with NV for that reason. I'd also like to take the opportunity to snicker at anyone who bought a Radeon thinking HL2 was imminent :) There's no such thing as future-proof; always wait till the game goes retail and you see benchies with retail-available hardware :)

      • Samlind
      • 15 years ago

      Of all the games coming, especially the major titles, only Doom 3 is OGL. Everything else is DX something. Given Nvidia's Quadro market, they have put in the time and effort to make their OGL drivers second to none, and they get the prize.

      However, OGL is becoming a tiny part of the gaming market, and ATI put their money where their market is: in DX games. 3DMark is a DX benchmark for the same reason.

        • mirkin
        • 15 years ago

        On top of the fact that FPS games are a small part of the retail game market, they're also the driving force in hardware sales. DX is definitely getting bigger and bigger as the API for PC games, while OEM boxen are what make a company like nvidia so much $$$. I'm nothing but amazed to this day how many PCs I've serviced with GF2 MXs in them. I suspect ATI had better aim some effort in that direction if they want to win the real game.

        • HiggsBoson
        • 15 years ago

        That's such an eerie thing to see… it also shows how young you are… ATI has been an absolutely dominant force in OEMs for a long time, a lot longer than nVidia. nVidia's recent design wins (from the past few years) were a dramatic reversal, not the status quo by any means. While I haven't seen any recent figures, I also suspect that ATI is still a force to be reckoned with in that department.

        • mirkin
        • 15 years ago
        • Krogoth
        • 15 years ago

        Probably from all the suckers who bought all the flavors of the overpriced FX 5200 and 5600s. Nvidia still has a bigger influence in the gamer and stand-alone market than ATi at this point.

        • PLASTIC SURGEON
        • 15 years ago

        you figure that’s going to change with this generation real quick.

        • HiggsBoson
        • 15 years ago

        Errm… you're taking a post about figures from October of 2003 as a rebuttal to my post. ATI has been a player in the graphics market since the days of Windows 3.0, back when 2D accelerators were a big deal. Even before 3dfx's Voodoo hit, ATI was already pretty dominant in the OEM space, IIRC because of their 2D products. Then after the Voodoo 1 and the other first-gen 3D cards, the entire world changed.

        • mirkin
        • 15 years ago

        Erm, I chose a date where ATI began to erode nvidia's market share. Just trying to give you a sense of how long nvidia has held a greater market share.

        • HiggsBoson
        • 15 years ago

        That’s just it… the big picture is more complex than that, in reality. Remember, /[

        • ionpro
        • 15 years ago

        You obviously didn’t read the article:

        > NVIDIA Corporation and ATI Technologies boast with nearly equal shares of the graphics market – 25% and 22% respectively.

        This is from October, and ATI's graphics sales have continued to be higher than nVidia's on a month-to-month basis. I'd say they're basically even now. Hopefully, once Grantsdale is released, Intel's Extreme(ly bad) graphics will die the death it deserved a year ago.

        • HiggsBoson
        • 15 years ago

        No it won’t, because I guarantee you that Intel’s going to have another -G variant on Grantsdale ala 865G/865GV.

      • highlandr
      • 15 years ago

      Do you snicker at me? I bought my 9600XT because I wanted HL2. I did expect it, yes, but at the same time I realized I was getting a $50 game for free (sometime in the future). I could have gotten CoD with a FX card, but I chose the ATI/HL2 combo instead, and I am not disappointed with the purchase, just with Valve.

        • mirkin
        • 15 years ago

        I snicker politely – I hope the game performs well on your rig without any other upgrades, and I hope they honor that coupon. The snicker is more towards the idea of future-proof PC hardware.

        • highlandr
        • 15 years ago

        Well, I didn't buy this card because I was looking for future-proofing (not exactly). I bought it because my old GF2 MX had finally bitten the dust, I needed something better than a Rage64, and it was at the right price/performance point for me.

        At the rate they're working on HL2, I'll probably upgrade the rest of my machine long before they release it anyway (2005, anyone?)

        • mirkin
        • 15 years ago

        Ya, I've been prayin' my GF3 Ti500 doesn't kick – because I'd rather wait for 939s and a shiny new 6800 or X800.

    • Thresher
    • 15 years ago

    Fantastic review, as always.

    Any word on when they will have a PCI-Express version? I’m switching to it as soon as it’s available.

      • LauFu
      • 15 years ago

      “Those who ignore history are doomed to repeat it, apparently.”

      Remember the early adopters of PCI, AGP (not to mention VESA Local Bus, etc.). Hell, Serial ATA for that matter.

      Unless you have cash to throw away, there is little point in being the first on the block to adopt any new technology. Especially when it’s a bus.

        • indeego
        • 15 years ago

        I would avoid any critical technology (like drives) upon its introduction. Not something you want to guinea pig around with while engineers slap their foreheads and say, "whoops, we forgot."

    • just brew it!
    • 15 years ago

    Heh… looks like ATI has taken down the “Ruby” demo. Maybe their servers got buried? LOL

    Edit: It’s back now, looks like they were still futzing around with the layout of the site as of this morning. I was actually kind of hoping that it would be runnable on 9xxx hardware (albeit at reduced frame rates)… download page says it requires an X800 though. Oh well.

    • Ricardo Dawkins
    • 15 years ago

    Nvidia 0wned…

    • Ricardo Dawkins
    • 15 years ago

    np …Damage…

    just wondering why u didn’t put “fp” on your “First Post”

    • Damage
    • 15 years ago

    Sorry about any typos. The first revision up is pretty rough, but I am working on it. 🙂

      • robg1701
      • 15 years ago

      Well, I've already looked at the leaked stuff, but I'll humour you and read your rough copy, Damage.

      😉

      • OsakaJ
      • 15 years ago

      Nice review, Damage. (How many hours did all that benchmarking take?)
      My Subaru needs repairs — about $500 worth of work — so I guess I’ll have to wait a while. ARGH!!!!!

      • dolemitecomputers
      • 15 years ago

      I like the new graphs. Good job.

        • primitive.notion
        • 15 years ago

        Great review Scott! Your command of the English language continues to impress. It’s not often that one sits through a 28 page review, but your interesting intro, puns, and good pacing/transitions kept me reading all the way through.

      • Convert
      • 15 years ago

      Typos shmypos
