
Radeon 8500 vs. GeForce3 Ti 500


Radium? Titaneon? Aw, hell.
— 12:00 AM on December 6, 2001

This article is late because we've been occupied by trying to pin down ATI on Radeon 8500 clock speeds (which we eventually did) and delving into ATI's apparent use of cheats on Quake III Arena benchmarks. Then we visited with ATI (and with NVIDIA) at Comdex. While we were there, ATI released a new driver for the Radeon 8500, so we had to go back to the drawing board with our testing.

That's all probably just as well, however, because the new drivers make this comparison much more interesting. Before ATI released the latest drivers for the Radeon 8500, this GeForce3 Ti 500-versus-Radeon 8500 comparison would have read like this:

Don't buy a Radeon 8500. Buy a GeForce3.
End of story (only with more graphs). However, ATI's latest drivers take care of a great many problems—the Quake III "optimizations," Athlon XP incompatibilities, surprisingly low performance—that the Radeon 8500 brought with it when it first arrived on retail shelves. And once you get under that crusty old ATI veneer of lousy drivers and purposely vague public statements, the Radeon 8500 looks like a darned good graphics processor.

Good enough to take on NVIDIA's vaunted GeForce3 Titanium series? Just maybe. Keep reading to find out.

GeForce goes Titanium
The GeForce3 Titanium series cards are new, but not really novel. They're simply GeForce3 chips set to run at different core and memory clock speeds. Rather than use its traditional "Ultra" and "Pro" tags, NVIDIA chose to use the "Titanium" name for its fall product line this year. The new GeForce3 Ti 500 runs at a 240MHz core clock speed with 500MHz (DDR) memory—just a bit faster than the 200/460MHz of the original GeForce3. The Ti 200, meanwhile, runs at a 175MHz core speed with 400MHz memory—slower than the GeForce3, but then it's priced much lower, too. Beyond the clock speed changes, the Ti series of chips is essentially identical to the original GeForce3.
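
If you like to see where those clocks lead, here's a quick back-of-the-envelope sketch in C. The four pixel pipelines and 128-bit DDR memory bus are standard GeForce3 specifications; the clock speeds are the ones quoted above, and the arithmetic is theoretical-peak math, not a benchmark.

    /* Theoretical peak fill rate and memory bandwidth for the cards
       discussed above. Four pixel pipelines and a 128-bit DDR bus are
       standard GeForce3 specs; the clocks are the ones quoted in the
       text. */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *name; int core_mhz; int mem_mhz; } cards[] = {
            { "GeForce3",        200, 460 },  /* memory speeds are effective (DDR) rates */
            { "GeForce3 Ti 200", 175, 400 },
            { "GeForce3 Ti 500", 240, 500 },
        };
        int pipes = 4;            /* pixel pipelines */
        int bus_bytes = 128 / 8;  /* 128-bit bus, bytes per transfer */

        for (int i = 0; i < 3; i++) {
            double fill = (double)cards[i].core_mhz * pipes;             /* Mpixels/s */
            double bw   = (double)cards[i].mem_mhz * bus_bytes / 1000.0; /* GB/s */
            printf("%-16s %6.0f Mpixels/s %6.2f GB/s\n", cards[i].name, fill, bw);
        }
        return 0;
    }

By this math, the Ti 500 gets roughly 20% more fill rate and about 9% more memory bandwidth than the original GeForce3. Same chip, different clocks, in other words.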

Not that there's anything wrong with that. In fact, the GeForce3 is still one of the most amazing graphics chips ever. If it weren't for the Radeon 8500, the GeForce3 would have no real rivals. If you aren't familiar with what makes the GF3 so special, go check out my review of the GeForce3 right now. It will bring you up to speed on the new approach to real-time graphics that NVIDIA pioneered with the GeForce3, including fancy-pants things like vertex shaders and pixel shaders. When you're done reading that article, you'll be much better equipped to follow this one.

The Ti cards hit the scene at the same time NVIDIA released its Detonator XP video drivers. These new drivers brought with them substantial performance gains, and they "turned on" a few features already present in the GeForce3 hardware but not yet implemented in driver software. Among them:


    [Illustration: a 3D-textured object with a scoop taken out of the corner]

  • 3D textures — Also known as volumetric texturing. 3D textures are just what they sound like, but the concept can be tricky to picture at first. Think of it this way: traditional 2D textures cover the surface of an object, while 3D textures permeate the entire space an object occupies. Chop off a corner of the object, and what's inside is textured, as well. The illustration above helps make the idea concrete, and there's a brief code sketch after this list.

    Once you've grasped the basic idea, the mind-blowing stuff comes along. NVIDIA's implementation of 3D textures includes quad-linear filtering and up to 8:1 compression of 3D textures.

    Poof! Mind blown.

    NVIDIA has licensed its 3D texture compression scheme to Microsoft for use in DirectX, but this scheme remains unique to NVIDIA products in OpenGL.

  • Shadow buffers — Shadow buffers are one of the better ways to produce shadows in real-time 3D rendering. They work well with complex scenes, allow for shadows with soft edges, and even allow objects to be self-shadowing. Shadow buffers and shadow maps may not be the be-all, end-all of 3D shadowing; we'll have to wait and see how often and how thoroughly developers choose to implement them. But they're much better than nothing. (The second sketch after this list shows the depth comparison at the heart of the technique.)

  • Improved occlusion detection — This is where the big performance gain comes in. Occlusion detection helps the graphics chip avoid one of the biggest bandwidth hogs in graphics: overdraw. Overdraw happens whenever a graphics chip renders a pixel that will end up behind another pixel (and thus not visible) once an entire scene is rendered. A conventional graphics chip renders hundreds of thousands of pixels per frame that won't be visible. That's a huge waste of a graphics card's most precious resource: memory bandwidth.

    Although its approach isn't as radical as the Kyro II's tile-based deferred rendering, the GeForce3 has the ability to determine, at least some of the time, when a pixel will be occluded, so the chip can avoid drawing unneeded pixels. The Detonator XP drivers improve the GeForce3's occlusion detection, boosting the chip's effective pixel-pushing power, or fill rate, substantially. (The last sketch after this list puts some rough numbers on what overdraw costs.)
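
First, 3D textures. The sketch below is a minimal illustration of the idea, not NVIDIA-specific code: it fills an entire volume with texels and hands it to OpenGL via glTexImage3D, which is core in OpenGL 1.2 (on drivers of this era you'd typically fetch it through the extension mechanism). The 64-texel dimensions and checkerboard pattern are made up purely for illustration.

    /* A minimal sketch of a volumetric (3D) texture: texels fill the
       whole volume, so a sliced-open object shows texture inside, too.
       Assumes an OpenGL 1.2 header; size and pattern are illustrative. */
    #include <GL/gl.h>

    #define DIM 64

    void upload_volume_texture(void)
    {
        static GLubyte volume[DIM][DIM][DIM];

        /* Fill the entire volume, not just a surface: a 3D checkerboard. */
        for (int z = 0; z < DIM; z++)
            for (int y = 0; y < DIM; y++)
                for (int x = 0; x < DIM; x++)
                    volume[z][y][x] = ((x / 8 + y / 8 + z / 8) & 1) ? 255 : 64;

        glBindTexture(GL_TEXTURE_3D, 1);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE, DIM, DIM, DIM, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, volume);
    }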
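
Next, shadow buffers. The core of the technique is a per-pixel depth comparison: render the scene's depth from the light's point of view into a buffer, then, when shading each pixel, check whether something sits between it and the light. Here it is in plain C with a hypothetical buffer layout; real hardware does this per pixel, and soft edges come from filtering several such tests together.

    /* The heart of shadow buffering: is this pixel farther from the
       light than whatever the light "saw" first? A sketch, not any
       particular API; shadow_buf is a depth image rendered from the
       light's point of view. */
    float shadow_factor(float depth_from_light, const float *shadow_buf,
                        int x, int y, int width)
    {
        const float bias = 0.002f; /* small offset to avoid self-shadowing artifacts */
        float nearest = shadow_buf[y * width + x];
        return (depth_from_light - bias > nearest) ? 0.0f : 1.0f; /* 0 = shadowed */
    }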
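
Finally, a rough feel for what overdraw costs. The resolution, depth complexity, and per-pixel byte counts below are my assumptions, not measured figures, but they show why hidden pixels eat bandwidth that could have gone to visible ones.

    /* Back-of-the-envelope overdraw cost. All inputs are assumptions:
       1024x768, 32-bit color plus 32-bit Z per pixel drawn, 60 frames/s,
       and an average depth complexity of 3 (each screen pixel drawn
       three times, two of them wasted). */
    #include <stdio.h>

    int main(void)
    {
        double pixels   = 1024.0 * 768.0;
        double bytes_pp = 4.0 + 4.0;  /* color write + Z traffic, simplified */
        double fps      = 60.0;
        double depth_complexity = 3.0;

        double useful = pixels * bytes_pp * fps / 1e9;  /* GB/s */
        double total  = useful * depth_complexity;
        printf("useful: %.2f GB/s  drawn: %.2f GB/s  wasted: %.2f GB/s\n",
               useful, total, total - useful);
        return 0;
    }

And that ignores texture fetches for the hidden pixels, which only make things worse. Every byte a chip doesn't spend on occluded pixels is a byte available for the ones you can actually see.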

The combination of the Ti 500's higher clock speeds and Detonator XP's new features makes the GeForce3 a more formidable competitor than ever. The gap between these new chips and previous-generation parts like the GeForce2 and the original Radeon keeps growing, and ATI looks like the only graphics company with a prayer of keeping up.