JUST ABOUT ACCORDING TO schedule, NVIDIA has unveiled a top-to-bottom refresh of its entire desktop graphics product line. The new NVIDIA chips, dubbed GeForce4 Ti and GeForce4 MX, bring with them a number of new features and better performance, which is always a good thing. However, they do little to advance the state of the art in 3D graphics, and the GeForce4 Ti hasn't unambiguously recaptured the technology lead from ATI's Radeon 8500.
As always, the GeForce4 chips have been launched with much fanfare (NVIDIA knows how to throw a mean party) and with a torrent of new "marketing terms" to help describe the chip's technology to the public. And as always, our analysis of the GeForce4 will go beyond the marketing terms to give you the skinny on the GeForce4 scene. Read on to find out what's new (and what's not) in NVIDIA's latest GPUs.
The modular approach
NVIDIA's rise to the top in graphics has been fueled by the company's aggressive six-month product development cycle. Typically, NVIDIA launches a truly new product once a year, with a minor refresh (usually involving higher clock speeds) at six months. One reason this approach has been successful for NVIDIA is that its chip designs are modular, so functional units on a chip can be reused as needed. Lately, NVIDIA has taken this approach to an extreme, integrating and interchanging a number of technologies across the GeForce3, the Xbox GPU, and the nForce core logic chipset. Doing so has allowed the company to introduce a number of extremely complex chips in relatively short order.
By my watch, it's time for NVIDIA to launch another truly new product. Instead, NVIDIA has elected to introduce two brand-new chips, and though they share some technology, they're fundamentally very different from one another. Naturally, the GeForce4 MX is the new low-end chip, and the GeForce4 Ti is the new high-end chip. These new chips do incorporate some new and improved functional units, but they're not what you might be expecting from NVIDIA a year after the launch of the GeForce3.
We'll look at each chip in some detail to see which bits have been recycled from previous products. Before we dive in deep, however, we'd better pull out the trusty ol' chip chart to see how these chips stack up in terms of basic pixel-pushing power.
Here's how the hardware specs match up:
| Chip | Core clock (MHz) | Pixel pipelines | Peak fill rate (Mpixels/s) | Texture units per pixel pipeline | Peak fill rate (Mtexels/s) | Memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s) |
|---|---|---|---|---|---|---|---|---|
| Radeon 64MB DDR | 183 | 2 | 366 | 3 | 1098 | 366 | 128 | 5.9 |
| GeForce3 Ti 200 | 175 | 4 | 700 | 2 | 1400 | 400 | 128 | 6.4 |
| GeForce3 Ti 500 | 240 | 4 | 960 | 2 | 1920 | 500 | 128 | 8.0 |
| GeForce4 MX 460 | 300 | 2 | 600 | 2 | 1200 | 550 | 128 | 8.8 |
| GeForce4 Ti 4600 | 300 | 4 | 1200 | 2 | 2400 | 650 | 128 | 10.4 |
Now remember, as always, that the fill rate (megapixels and megatexels) numbers above are simply theoretical peaks. The other peak theoretical number in the table, memory bandwidth, will often have a lot more to do with a card's actual pixel-pushing power than the fill rate numbers will. ATI and NVIDIA have implemented some similar tricks to help their newer chips make more efficient use of memory bandwidth, so newer chips will generally outrun older ones, given the same amount of memory bandwidth.
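Those theoretical peaks fall straight out of the clocks and widths in the chart. As a quick sanity check, here's a minimal sketch of the arithmetic (the function names are ours, not NVIDIA's; memory clocks are the effective DDR rates listed in the table):

```python
def peak_fill_rate_mpix(core_mhz, pipelines):
    # Peak pixel fill rate: each pipeline emits one pixel per clock.
    return core_mhz * pipelines

def peak_fill_rate_mtex(core_mhz, pipelines, tex_units):
    # Peak texel rate: each pipeline applies tex_units textures per clock.
    return core_mhz * pipelines * tex_units

def peak_bandwidth_gbps(mem_mhz_effective, bus_bits):
    # Effective (DDR) memory clock in MHz times bus width in bytes,
    # divided by 1000 to convert MB/s to GB/s.
    return mem_mhz_effective * (bus_bits // 8) / 1000

# GeForce4 Ti 4600: 300 MHz core, 4 pipes, 2 texture units, 650 MHz DDR, 128-bit bus
print(peak_fill_rate_mpix(300, 4))        # 1200 Mpixels/s
print(peak_fill_rate_mtex(300, 4, 2))     # 2400 Mtexels/s
print(peak_bandwidth_gbps(650, 128))      # 10.4 GB/s
```

Run the same functions against any row of the chart above and you'll reproduce its fill rate and bandwidth figures, which is a useful reminder that these are back-of-the-envelope ceilings, not measured throughput.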
In fact, the chart above so thoroughly fails to capture the actual facts of the matter that we'll augment it with another chart showing a few key features of most newer GPUs.
| Chip | Vertex shaders | Pixel shaders? | Textures per rendering pass | Z compression? | Hardware occlusion culling? |
|---|---|---|---|---|---|
That's by no means everything that's important to know about these chips, but it's what I could squeeze in with confidence about the precise specs. Also, the implementations of many of these features vary, so the fact that both the GeForce4 and the Radeon 8500 have "hardware occlusion culling" doesn't say much all by itself. The GF4's culling might be much more effective, or vice versa.
Still, this chart is revealing. As you can see, there are a few surprises for those of us familiar with the GeForce3. The GeForce4 Ti includes a second vertex shader unit, while the GeForce4 MX has no vertex shader at all.
What does it all mean? Let's take it one chip at a time.