The Carmack weighs in

The mastermind behind Quake and Doom took a break from coding away on his next game engine to give us his opinions on the current crop of 3D graphics chips in the form of a .plan file update (thanks billb). He recently added Radeon 8500 support to the new Doom, so he compares and contrasts the Radeon 8500 with the GeForce4 Ti. He finds the two chips' vertex shader implementations comparable: "The ATI hardware is a little bit more capable, but not in any way that I care about."

Pixel shaders are a different story:

The fragment level processing is clearly way better on the 8500 than on the Nvidia products, including the latest GF4. You have six individual textures, but you can access the textures twice, giving up to eleven possible texture accesses in a single pass, and the dependent texture operation is much more sensible. This wound up being a perfect fit for Doom, because the standard path could be implemented with six unique textures, but required one texture (a normalization cube map) to be accessed twice. The vast majority of Doom light / surface interaction rendering will be a single pass on the 8500, in contrast to two or three passes, depending on the number of color components in a light, for GF3/GF4 (*note GF4 bitching later on).
That's a significant advantage for the Radeon. Single-pass rendering ought to allow for better performance and superior image quality versus the NVIDIA chips. However, as is too often the case, the Radeon 8500's features don't necessarily translate into better performance:
A high polygon count scene that was more representative of real game graphics under heavy load gave a surprising result. I was expecting ATI to clobber Nvidia here due to the much lower triangle count and MUCH lower state change functional overhead from the single pass interaction rendering, but they came out slower. ATI has identified an issue that is likely causing the unexpected performance, but it may not be something that can be worked around on current hardware.
There is something to be said for getting things right the first time out, and Carmack notes that NVIDIA tends to do exactly that, especially when it comes to OpenGL drivers. That said, he also reports that ATI has been pretty responsive in delivering driver fixes lately.

The Carmack also had this to say about GeForce4 MX chips:

Do not buy a GeForce4-MX for Doom.

Nvidia has really made a mess of the naming conventions here. I always thought it was bad enough that GF2 was just a speed bumped GF1, while GF3 had significant architectural improvements over GF2. I expected GF4 to be the speed bumped GF3, but calling the NV17 GF4-MX really sucks.

GF4-MX will still run Doom properly, but it will be using the NV10 codepath with only two texture units and no vertex shaders. A GF3 or 8500 will be much better performers. The GF4-MX may still be the card of choice for many people depending on pricing, especially considering that many games won't use four textures and vertex programs, but damn, I wish they had named it something else.

There you have it from the man himself.