For more than a year, the GeForce4 MX and GeForce4 Ti 4200 chips have occupied the mainstream and performance segments of NVIDIA's product catalog, but they're being put out to pasture in favor of new additions to the GeForce FX line. In 2002, the GeForce4 Ti 4200 received glowing accolades from reviewers and held the PC graphics price/performance crown for the vast majority of the year; many of us will be sad to see the product taken out behind the barn. The GeForce4 Ti 4200 will be replaced by NVIDIA's new GeForce FX 5600 Ultra, which has a tough act to follow.
The GeForce4 MX, on the other hand, wasn't favored by many, and its lack of hardware pixel and vertex shaders isn't likely to be missed by anyone. The GeForce4 MX's replacements, the NV34-powered GeForce FX 5200 and 5200 Ultra, have rather small shoes to fill, so it should be easy for them to impress.
What are the capabilities and limitations of NVIDIA's new NV31 and NV34 graphics chips and the new GeForce FX cards they'll be rolling out on? Do these new products share enough technology with NVIDIA's high-end NV30-based products to be worthy of the GeForce FX name, or is NVIDIA still keeping the mainstream a generation behind? Read on to find out.
Cinematic computing across the line
In a bold move that lays waste to NVIDIA's much-criticized "MX" philosophy of introducing new low-end graphics chips a generation behind the rest of its lineup, NVIDIA's new NV31 and NV34 chips both support Microsoft's latest DirectX 9 spec and even offer a little extra functionality above and beyond DirectX 9's official requirements. Here's a quick rundown of the features shared by NV30, NV31, and NV34.
Will NV3x's support for pixel shader programs more complex than even DirectX 9 requires go unused? Maybe not. In this .plan update, id Software programmer John Carmack acknowledges that he has already hit the R300's limits:
"For developers doing forward looking work, there is a different tradeoff -- the NV30 runs fragment programs much slower, but it has a huge maximum instruction count. I have bumped into program limits on the R300 already."

Games that stay within the DirectX 9 spec won't exercise the NV3x's longer pixel shader programs, but some developers will probably take advantage of extra-long shaders where the hardware supports them (see the sketch below).
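For the curious, here's a minimal sketch of how a developer might discover these per-chip limits at runtime through OpenGL's ARB_fragment_program extension, the interface Carmack's work targets on this class of hardware. The extension tokens and entry point are real; the surrounding scaffolding (how the function pointer gets resolved, the hypothetical `print_fragment_program_limits` helper) is illustrative.

```c
/* Illustrative sketch: query fragment program instruction limits.
 * Assumes the glGetProgramivARB entry point has already been resolved
 * via wglGetProcAddress or an extension-loading library. */
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_fragment_program tokens */

extern PFNGLGETPROGRAMIVARBPROC glGetProgramivARB;  /* resolved elsewhere */

void print_fragment_program_limits(void)
{
    GLint total, alu, tex;

    /* Maximum instructions the driver accepts in one fragment program */
    glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                      GL_MAX_PROGRAM_INSTRUCTIONS_ARB, &total);
    /* Separate caps on arithmetic and texture instructions */
    glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                      GL_MAX_PROGRAM_ALU_INSTRUCTIONS_ARB, &alu);
    glGetProgramivARB(GL_FRAGMENT_PROGRAM_ARB,
                      GL_MAX_PROGRAM_TEX_INSTRUCTIONS_ARB, &tex);

    printf("max instructions: %d (ALU: %d, TEX: %d)\n", total, alu, tex);
}
```

A DirectX 9-class part in the R300's mold would report values near the PS 2.0 baseline here, while NV3x-class hardware would advertise a far larger instruction budget, which is exactly the gap Carmack describes bumping into.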
Unfortunately, it's hard to compare the NV3x's pixel shader precision directly with the R300's. The R300 supports only one level of floating-point pixel shader precision, which, at 96 bits, falls between the NV3x's 64- and 128-bit modes. Based on the results of early reviews, it looks like NV30's performance at 128-bit pixel shader precision is a little slow, but the chip can sacrifice precision to improve performance.
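Those totals simply reflect four color components (RGBA) stored at 16, 24, or 32 bits of floating-point precision apiece:

$$4 \times 16 = 64 \text{ bits (NV3x FP16)}, \qquad 4 \times 24 = 96 \text{ bits (R300 FP24)}, \qquad 4 \times 32 = 128 \text{ bits (NV3x FP32)}$$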
The above features are all key components of NVIDIA's CineFX engine, which means that NV31 and NV34 are both prepared for the "Dawn of cinematic computing" that NVIDIA has been pushing since its NV30 launch. This catchphrase, of course, refers to really, really pretty visual effects that should be easy for developers to create using the additional flexibility offered by NV3x's support for complex shader programs and high-precision color modes. That's the theory, anyway. NVIDIA will need to equip a large chunk of the market with CineFX-capable products before developers start targeting the technology, but bringing CineFX to the masses is what NV31 and NV34 are all about.