
Differentiating features, or the lack thereof
If NV30, NV31, and NV34 share so many key features, how does NVIDIA differentiate between them? First, let's deal with the easy stuff:

          Lossless color &    Memory           Transistors   Manufacturing   RAMDACs
          Z compression       interface        (millions)    process
NV30      Yes                 128-bit DDR-II   125           0.13-micron     400MHz
NV31      Yes                 128-bit DDR-I    80            0.13-micron     400MHz
NV34      No                  128-bit DDR-I    —             0.15-micron     350MHz

Both NV30 and NV31 use lossless color and Z-compression to improve antialiasing performance, but those features have been left off NV34 (likely to reduce the NV34's transistor count). The lack of color compression will hinder NV34's antialiasing performance, and the chip won't support NVIDIA's new Intellisample antialiasing technology. Losing Z-compression won't help performance, either, with AA or in general use.

All of NVIDIA's NV3x chips will have a 128-bit memory interface, but NV31 and NV34 will use DDR-I memory chips. NVIDIA wouldn't reveal how fast the memory on its various NV31 and NV34 flavors will run, but at the very least we know those cards will have less memory bandwidth than the vanilla GeForce FX 5800. Currently, the fastest DDR-I-equipped consumer graphics cards use DDR-I memory at 650MHz, which offers just over 10GB/s of memory bandwidth on a 128-bit bus; to equal the GeForce FX 5800's 12.8GB/s of memory bandwidth, NVIDIA would have to use DDR-I memory chips clocked at 800MHz, which is very unlikely.
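Those bandwidth figures follow from simple arithmetic. Here's a quick sketch; the function name is mine, and it uses the same convention as the numbers above (1GB = 10^9 bytes, with "650MHz" meaning the effective DDR transfer rate):

```python
def bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth in GB/s: bytes moved per transfer times
    the effective (DDR) transfer rate, with 1GB = 10**9 bytes."""
    bytes_per_transfer = bus_width_bits // 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

# 650MHz effective DDR-I on a 128-bit bus -- "just over 10GB/s"
print(bandwidth_gbps(128, 650))  # 10.4
# Matching the GeForce FX 5800's 12.8GB/s would take 800MHz DDR-I
print(bandwidth_gbps(128, 800))  # 12.8
```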

All of NVIDIA's NV3x chips will be manufactured by TSMC. Although NV31 will use the same 0.13-micron manufacturing process as NV30, NV34 will use the older, more established 0.15-micron manufacturing process. NVIDIA wouldn't reveal NV31 or NV34's final clock speeds. Those speeds have been decided, but they won't be released until actual reviews hit the web. It doesn't take much faith to believe that the 0.13-micron NV31 will run at higher clock speeds than the 0.15-micron NV34. Because NVIDIA is guarding the clock speeds of its new chips so closely, it's almost impossible to speculate on each chip's performance potential. One wonders why NVIDIA is being so secretive.

There are, however, no secrets when it comes to NV31 and NV34's integrated RAMDACs. NV34 integrates two 350MHz RAMDACs, while NV31 uses 400MHz RAMDACs. Honestly, NV34's 350MHz RAMDACs shouldn't hold many users back. The GeForce4 MX's 350MHz RAMDACs support 32-bit color in resolutions of 2048x1536 at 60Hz, 1920x1440 at 75Hz, and 1920x1200 at 85Hz. I can think of precious few instances where a relatively low-end NV34-based graphics card would be paired with an ultra high-end monitor capable of resolutions and refresh rates higher than that.
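As a sanity check on those display modes, the pixel clock a RAMDAC must sustain can be ballparked from resolution and refresh rate. The 25% blanking overhead below is my assumption, a rough figure for CRT timings rather than anything from NVIDIA:

```python
def ramdac_mhz_needed(width, height, refresh_hz, blanking_overhead=0.25):
    """Approximate pixel clock (MHz) a RAMDAC must drive for a mode.
    blanking_overhead is an assumed ~25% ballpark for CRT blanking."""
    return width * height * refresh_hz * (1 + blanking_overhead) / 1e6

# All three modes fall comfortably under a 350MHz RAMDAC
for mode in [(2048, 1536, 60), (1920, 1440, 75), (1920, 1200, 85)]:
    print(mode, round(ramdac_mhz_needed(*mode)), "MHz")
```

Even the most demanding mode listed, 1920x1440 at 75Hz, works out to roughly 260MHz by this estimate, well within NV34's 350MHz RAMDACs.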

Now that we've gone over the easy stuff, it's probably a good idea to pause and take a deep breath. Things are about to get messy.

Deciphering the pipeline mess
Lately, a bit of a fuss has been made over the internal structure of NV30's pixel pipelines and how many pixels the chip is capable of laying down in a single clock cycle. NV30's internal layout is unconventional enough to confuse our trusty graphics chip chart, which only works with more traditional (or at least more clearly defined) graphics chip architectures.

What do we know about NV30 for sure? That it can render four pixels per clock for color+Z rendering, and eight pixels per clock for Z-rendering and stencil, texture, and shader operations. Only newer titles that use features like multi-texturing and shader programs will be able to unlock NV30's ability to render eight pixels per clock cycle. In fact, even in id's new Doom game, NV30 will only be rendering eight pixels per clock "most" of the time. That "most" is straight from NVIDIA, too.
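To see what those per-clock figures mean in fill-rate terms, here's a quick sketch. The 500MHz core clock is purely a placeholder for illustration, since, as noted above, NVIDIA isn't disclosing final clock speeds:

```python
def fill_rate_mpix_per_s(core_clock_mhz, pixels_per_clock):
    """Theoretical fill rate in megapixels per second."""
    return core_clock_mhz * pixels_per_clock

clock = 500  # hypothetical core clock, chosen only for illustration
print(fill_rate_mpix_per_s(clock, 4))  # color+Z rendering:        2000
print(fill_rate_mpix_per_s(clock, 8))  # Z/stencil/texture/shader: 4000
```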

NV31's mystery-shrouded internals

If that explains NV30, what about NV31 and NV34? According to NVIDIA, both NV31 and NV34 have four pixel pipelines, each of which has a single texture unit. A 4x1-pipe design makes the chips similar to ATI's Radeon 9500, but comparing NV31 and NV34 with NV30 is more complicated. You didn't think you were going to get off easy this time, did you?

Because NVIDIA has explicitly stated that NV31 and NV34 are 4x1-pipe designs, it's probably safe to assume that there are no situations where either chip can lay down more than four textures in a single clock cycle. It doesn't look like there are any situations where NV31 or NV34 can lay down more than four pixels per clock cycle, either.

According to NVIDIA, NV31 will be roughly half as fast as NV30 in situations where NV30 can lay down eight pixels per clock (Z-rendering and stencil, texture, and shader operations). Part of that speed decrease will come from the lack of a second texture unit per pixel pipeline, but NV31 will also be slower because it has "less parallelism" in its programmable shader than NV30. NVIDIA isn't saying NV31 has half as many shaders as NV30 or that its shader is running at half the speed of NV30's, just that the shader has "less parallelism." If NV31's performance is tied to the amount of parallelism within its shader, a betting man might wager that NV31 achieves "roughly half" the speed of NV30 when dealing with shader operations because NV31's programmable shader has roughly half the parallelism of NV30's.

Like NV31, NV34's pixel pipelines have half as many texture units as NV30's, and its programmable shader has "roughly half" as much parallelism. NV31 and NV34 have more in common with each other than they do with NV30, but at least partially because of its lack of color and Z compression, NV34 won't be quite as fast as NV31. According to NVIDIA, NV34's performance is very similar to NV31's in situations where NV30 is capable of rendering four pixels per clock and about 10% slower than NV31 in situations where NV30 would be capable of rendering eight pixels per clock. Those comparative performance estimates refer to non-antialiased scenes; all bets are off when antialiasing is enabled.
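One way to keep NVIDIA's relative claims straight is to tabulate them, normalized to NV30 at equal clock speeds. The exact factors below are my reading of "roughly half" and "about 10% slower," not official numbers, and the sketch assumes NV31 keeps pace with NV30 in the four-pixel cases, which the comparison implies but NVIDIA doesn't state outright:

```python
# Relative speed at equal clock, normalized to NV30 = 1.0.
# "four_px" covers situations where NV30 renders 4 pixels/clock;
# "eight_px" covers the 8-pixel/clock cases.
relative_speed = {
    "NV30": {"four_px": 1.0, "eight_px": 1.0},
    "NV31": {"four_px": 1.0, "eight_px": 0.5},        # "roughly half" of NV30
    "NV34": {"four_px": 1.0, "eight_px": 0.5 * 0.9},  # ~10% behind NV31
}
print(relative_speed["NV34"]["eight_px"])  # 0.45
```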

Of course, these relative performance claims for NV30, NV31, and NV34 assume that the chips are running at identical clock speeds, which certainly won't be true for all cards based on the chips and may not even be true for any. Additionally, any manufacturer's performance claims should be taken with a grain of salt, at least until independent, verifiable benchmarks are published.

Now that we know about the chips, let's move on to the cards they'll be riding on.