NVIDIA's GeForce FX 5200 GPU

Between capability and competence

WHEN NVIDIA announced its NV31 and NV34 graphics chips, I have to admit I was a skeptic. The chips, which would go on to power NVIDIA's GeForce FX 5600 and 5200 lines, respectively, promised full DirectX 9 features and compatibility to the masses. Who could resist?

Me, at least initially. Perhaps I still had a bitter taste in my mouth after the recycled DirectX 7 debacle that was the GeForce4 MX, or maybe it was NVIDIA's unwillingness to discuss the internal structure of its graphics chips. Maybe it was merely the fact that I didn't believe NVIDIA could pull off a budget graphics chip with a full DirectX 9 feature set without cutting corners somewhere.

Or maybe I'm just turning into a grumpy old man.

Well, NVIDIA may have pulled it off. Now that I have Albatron's Gigi FX5200P graphics card in hand, it's time to take stock of what kind of sacrifices were made to squeeze the "cinematic computing" experience into just 45 million transistors. Have NVIDIA and Albatron really produced a sub-$100 graphics product capable of running the jaw-dropping Dawn demo and today's 3D applications with reasonably good frame rates? How does the card stack up against its budget competition? Let's find out.

The NV34 cheat sheet
NVIDIA's big push with its GeForce FX line is top-to-bottom support for DirectX 9 features, including pixel and vertex shaders 2.0, floating point data types, and gobs of internal precision. As the graphics chip behind NVIDIA's GeForce FX 5200 and 5200 Ultra, NV34 has full support for the same DirectX 9 features as even the high-end NV30. What's particularly impressive about NV34 is that NVIDIA has squeezed support for all those DirectX 9 features into a die containing only 45 million transistors—nearly one third as many as NV30.

Beyond its full DirectX 9 feature support, here's a quick rundown of NV34's key features and capabilities. A more detailed analysis of NV34's features can be found in my preview of NVIDIA's NV31 and NV34 graphics chips.

  • Clearly defined pipelines — NVIDIA has been very clear about the fact that NV34 has four pixel pipelines, each of which is capable of laying down a single texture per pass. Unlike NV30, whose texture units appear dependent on the kind of rendering being done, NV34 is limited to a single texture unit per pipeline for all rendering modes.

  • Arrays of functional units — NVIDIA has been coy about what's really going on under the hood of its GeForce FX graphics chips. Instead of telling us how many vertex or pixel shaders each chip has, NVIDIA expresses the relative power of each graphics chip in terms of the amount of "parallelism" within its programmable shader. NV30 has more parallelism than NV31, which in turn has more parallelism than NV34. How much more? Well, NVIDIA isn't being too specific about that, either.

  • Lossless compression lost — Unlike NV30 and NV31, the NV34 graphics chip doesn't support lossless color and Z compression, which could hamper the chip's antialiasing performance. The absence of lossless Z compression will also limit the chip's pixel-pushing capacity.

  • 0.15-micron core — NVIDIA's mid-range NV31 and high-end NV30 graphics chips are built on a 0.13-micron process, and both feature 400MHz RAMDACs. Since NV34 is targeted at low-end graphics cards, it's built on a cheaper, more mature 0.15-micron process, which limits its RAMDAC speed to 350MHz. Only those running extremely high resolutions at high refresh rates should be limited by a 350MHz RAMDAC, however.
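To put the RAMDAC ceiling in perspective, here's a rough sketch of the pixel clock an analog display mode demands. The ~1.35 blanking overhead factor is an assumption based on typical CRT timings (actual overhead depends on the monitor's timing standard), and the helper function name is mine, not anything from NVIDIA's drivers:

```python
def required_ramdac_mhz(width, height, refresh_hz, blanking=1.35):
    """Approximate pixel clock (MHz) a display mode needs.

    Visible pixels per second times an assumed ~35% overhead
    for horizontal and vertical blanking intervals.
    """
    return width * height * refresh_hz * blanking / 1e6

# 1600x1200 at 85Hz needs roughly 220MHz, well under 350MHz...
mode_a = required_ramdac_mhz(1600, 1200, 85)
# ...while 2048x1536 at 85Hz needs roughly 361MHz, just over the limit.
mode_b = required_ramdac_mhz(2048, 1536, 85)
```

In other words, the 350MHz RAMDAC only starts to pinch at resolutions and refresh rates beyond what most budget buyers' monitors can display anyway.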

NVIDIA's NV34 graphics chip: DirectX 9 on a budget

With chip specifics out of the way, let's take a peek at Albatron's Gigi FX5200P.