By that standard, NVIDIA's GeForce 6 series graphics processors have had plenty of success. Since the introduction of the GeForce 6800 Ultra last April, NVIDIA has rolled out a top-to-bottom rework of its entire GPU lineup, with the exception of the integrated graphics core in its nForce chipsets, and we spotted early versions of the new C51G chipset at Computex, complete with GeForce 6-class integrated graphics. The scope of the product refresh is impressive, but the real news is the potency of those products. In nearly every segment, from uber-high-end setups down to cheapo $59 graphics cards, NVIDIA has contended closely with ATI for market leadership. And in some cases, like the $199 sweet spot for gamers' graphics cards, the GeForce 6 series has been the uncontested leader. Cards based on the NV4x series of chips have packed a wallop in terms of pixel processing power, enabling new sorts of effects in real time that game developers have only just begun to employ.
Now, NVIDIA is back with a new high-end chip, known by the code-name G70, that promises significantly more power than the GeForce 6800 Ultra. To add to the intrigue, the G70 chip is closely related to the RSX graphics processor that NVIDIA is developing for the PlayStation 3. Can NVIDIA duplicate the success of the NV40 series by following it up with a worthy successor? Can the GeForce 7800 GTX really deliver on NVIDIA's claims of up to twice the shader power of the GeForce 6800 Ultra? And if you run a pair of GeForce 7800 GTX cards together in an SLI config, will the fabric of time and space warp? Keep reading for some answers.
Inside the G70 GPU
Despite NVIDIA's protestations to the contrary, the G70 architecture is clearly derived from the NV40. Not that there's anything wrong with that. The NV40 and G70 both have a full implementation of Shader Model 3.0, the programming model for Windows graphics that looks like it will be the standard until Microsoft's Longhorn OS arrives. Shader Model 3.0 packs plenty of capability and precision for real-time graphics use in the next year or two, at least, and ATI's upcoming R520 chip is rumored to support SM3.0, as well.
In order to understand how the G70 differs from and improves upon the NV40 architecture, let's take a look at a simplified block diagram of the G70 design.
Not only does G70 have more vertex units, but those units can complete more operations per clock than the NV40's. NVIDIA says it has tweaked the vertex shader units so that they now can process MADD (multiply-add) operations in a single clock cycle. These tweaked vertex units are purportedly up to 30% faster in scalar math ops, as well.
Across the middle of the block diagram are the G70's pixel shader units, arranged in groups of four. This grouping of pixel shaders into "quads" is not a change from NV40; this diagram is simply more accurate than the ones NVIDIA supplied to us at the time of our GeForce 6800 Ultra review. The shader units in each quad share some on-chip resources, including an L2 texture cache. We have known about the grouping of shader pipes into quads for some time now (ATI does it, too), but the really interesting thing in the diagram above is the 2x2 arrangement of pixel shader units, with two of them in line behind two others. This arrangement isn't really a faithful representation of the flow of data inside the chip; the G70's "quad" pipelines actually operate on four fragments concurrently and in parallel.