The GeForce 4 MX/Go looks to become the most maligned chip that NVIDIA has ever released. While this may be the truth, it is also unfortunate, as the NV-17 chip is actually quite an impressive piece of engineering. The reasons for both arguments are legion, and they all have their merit, but the overriding truth of this is that NVIDIA has released an almost pristine piece of technology that will never receive its just due.

I'm not sure what he means by "pristine" in this context (it doesn't seem fitting, since the chip's 3D pipelines and T&L engine are recycled from the GeForce2 MX), but obviously he likes the chip.
Who are these critics with whom Josh so strongly disagrees? Well, there's me, for instance. My GeForce4 article didn't have nice things to say about the GF4 MX, because this uber-GeForce2 chip lacks NVIDIA's excellent programmable pixel and vertex shaders. That's not a major problem for a very low-end value chip, but the $179 GF4 MX 460 cards are decidedly something more than low-end products. I wrote:
What doesn't make sense to me is why in the world NVIDIA is introducing this product, with this 3D rendering pipeline, at the beginning of 2002. One would expect a "GeForce4 MX" to include a cut-down version of the GeForce3/4 rendering pipeline, perhaps with two pixel shader/rendering pipes and a single vertex shader. Instead, we're getting a card that's incapable of taking advantage of all of the new 3D graphics programming techniques NVIDIA pioneered with the GeForce3.

...and Josh writes:
To be a full Pixel and Vertex shading part, the NV-17 would have weighed in at around 43 million transistors with two rendering pipelines and a single vertex shader. This was entirely unacceptable, and so NVIDIA was forced to only provide acceleration for certain pixel shader functions, but leave some of the more "expensive" functions out. It also couldn't support 4 pixel pipelines with all of the defined features, nor could it have a full vertex shading unit. So the great compromise between cost, features, and performance was struck with the NV-17, and work began in earnest.

And thus my complaints are answered. (Josh is a good guy, and after hanging out with him at Comdex, I consider him a friend. However...)
I assume Josh's estimate of the transistor budget came from NVIDIA, so we'll take it as reasonably accurate to say that a chip with all of NV17's features plus vertex and pixel shaders would be about that big. And granted, 43 million transistors is a big chip for a budget part. But I have a couple of objections to Josh's argument.
One, looking at the chip diagram, NV17 dedicates an awful lot of real estate to that big ol' Accuview AA unit, a second RAMDAC, and a full MPEG2 decoder. Those are neat features, but the NV17's set of compromises isn't necessarily the right or only way. I'm all for good AA, but I'd rather have organic, programmable graphics with a few rough edges than previous-gen graphics with smoothed-out edges. Or heck, axe the fixed-function T&L unit entirely, integrate a pair of pixel shader pipes, and emulate vertex shaders in software if you must. Pixel shaders can't really be emulated.
There were other ways.
Two, Josh seems not to understand that the NV17's 3D core was lifted from the GF2 MX. The "limited" pixel shader support that does exist in the GF4 MX is nothing more than the GeForce2's (and GF2 MX's) register combiners. To talk about NV17's 3D core as a design forged in compromise, as if it were a new design intended to meet certain goals, is misleading. Yes, NV17 runs at a higher clock speed and has a much better memory controller, but the 3D core ain't new.
Finally, there's this argument:
The days of the big jumps are now over, and we have to accept this sad fact. No longer can we look forward to releases such as the Voodoo 2, which promised twice the speed of the Voodoo Graphics, or thinking of upgrading our Riva 128s to the new TnT boards that were speed demons and had countless advanced features. Perhaps one day a massive breakthrough will occur and we can see those gains again, but at the rate these incremental advances are coming, that is a very unlikely possibility.

All I can say here is: apparently Josh wasn't around a year ago, when the GeForce3 arrived. And he apparently missed it two years ago, when the first GeForce chip was introduced. Both of those chips redefined the landscape in 3D graphics, and both were much more important than the move from Voodoo Graphics to Voodoo 2. If revolutionary innovation has died in 3D graphics, its death is a very recent one, dating to the introduction of the GeForce4 MX.