
Preview: NVIDIA's nForce2 chipset

If at first...

NVIDIA'S FIRST FORAY into the PC chipset market was quite a ride. The company sauntered into town on a sterling reputation based on its dominance of the graphics market, announced the product, and unveiled a features list a mile long. nForce would hit the scene with dual banks of DDR memory, NVIDIA's own integrated graphics, a high-speed HyperTransport link, and Dolby Digital audio. Folks looked at the situation and said, "Wow. NVIDIA is going to take over chipsets, too." It was, they said, inevitable.

But a funny thing happened on the way to the inevitable. A few things, actually. NVIDIA thought a lot of its new chipset and priced it accordingly. Mobo makers scratched their heads. Then, NVIDIA wouldn't say the chipset was late, but I would: the chipset was late. What's worse, during the wait, VIA unveiled its screaming-fast KT266A chipset and took the Socket A market by storm. When the nForce finally arrived, it was overmatched: expensive, slower, and less available than the KT266A. Only MSI had a board ready to roll at launch, and to this day, only a handful of manufacturers offer nForce-based products. The nForce didn't exactly do to the chipset market what GeForce did to the graphics world.

However, the nForce was by no means an outright failure. Core logic chipsets are not easy to make, and most informed onlookers were impressed with NVIDIA's ability to put together a reasonably stable, working chipset its first time out. Despite a few minor bumps in the road, the nForce hasn't suffered any major incompatibilities or nervous breakdowns. Its graphics were, for the chipset market, quite good, as everyone expected. And its overall performance was quite respectable last time we rounded up all the Socket A contenders. In fact, Compaq, HP, NEC, and Micron all built systems around the nForce, though most enthusiasts weren't too interested.

Now it's time for a second attempt. Clearly, NVIDIA has learned some lessons from nForce, and the nForce2 looks likely to make deeper inroads into the Socket A market. With upgraded graphics, dual banks of DDR400 memory, AGP 8X, and a host of other new features, nForce2 looks ready to run with the big dogs. Read on to see exactly what NVIDIA has in store.

The nForce2 IGP

The north bridge
First things first: let's tackle NVIDIA's terminology. Like Intel, NVIDIA has an aversion to calling its north bridge chips "north bridge" and its south bridge chips "south bridge." The whole bridging thing must sound so... so undignified, so simplistic, so Ethernet, so PCI-derived, so last century that it's just unbearable. I suppose. Whatever the reason, the nForce2 still has a chip that fulfills the traditional north bridge functions, including a CPU interface, memory controller, and AGP port. Likewise, there's a chip that handles I/O devices like any good south bridge would.

NVIDIA's north bridge chips are called "nForce2 IGP" and "nForce2 SPP". IGP stands for Integrated Graphics Processor, and SPP stands for System Platform Processor. As I understand it, the IGP is the version of nForce2's north bridge chip with a built-in graphics processor, and the SPP is the version of the north bridge chip without a GPU.

So you got yer SPP, and you got yer IGP. The IGP has an SPP in it, but we won't talk about that.

nForce2 isn't a complete redesign, but for the north bridge, it nearly is. Among the changes:

  • A totally revamped memory controller — nForce2 sports a completely rearchitected memory controller. Like nForce, it can address two banks of DDR memory, which makes it unique in the Socket A world. But this time out, the chipset supports DDR400 memory speeds, for a total of 6.4GB/s of peak memory bandwidth. The Athlon's 266MHz front-side bus will prevent it from using all of that bandwidth, but the extra throughput should be helpful for both the IGP's onboard graphics and, to a lesser degree, for the AGP port.

    Perhaps even more importantly, nForce2's memory controller will incorporate three address control lines—one for each DIMM slot. The original nForce had only two address control lines, so DIMMs 2 and 3 had to share. NVIDIA claims the addition of a third address control line will improve both performance and stability, eliminating the need for the confusing "super-stability mode" in the first nForce. Also, nForce2 can address up to 3GB of memory, or 1GB per DIMM, which is twice what nForce could handle.

    nForce2 also includes a beefed-up version of NVIDIA's DASP, or Dynamic Adaptive Speculative Pre-Processor. DASP is essentially a memory prefetch mechanism tied to a small "L3 cache" buffer, much like the hardware prefetch mechanism incorporated into the Athlon XP. nForce2's second-gen DASP implementation holds more data and is, mysteriously, "more aggressive" than the original.

  • AGP 8X — NVIDIA claims it will have the first AGP 8X-capable chipset on the market, which is a bold claim, since I have two chipsets already here in Damage Labs that claim to support AGP 8X. One of them, the VIA P4X333, I can even talk about. Of course, the P4X333's status as "on the market" is mighty questionable. Regardless, NVIDIA seems confident its experience in graphics will give it the edge in implementing AGP 8X in its chipsets.

  • Upgraded graphics — The IGP version of nForce2 eschews nForce's GeForce2 MX-derived graphics core for another GeForce2 MX-derived graphics core, the NV17—also known as the GeForce4 MX. You can read all about the NV17 core right here if you want detailed info. To summarize, the NV17 is a GeForce2 MX 3D graphics core with a number of enhancements. NV17 is faster than the original GeForce2 MX because it incorporates a more advanced crossbar memory controller and bandwidth-saving techniques like fast Z clear and occlusion culling. It has a DirectX 7-class hardware T&L engine and NVIDIA register combiners, which are sort of proto-pixel shaders, but NV17 lacks both the vertex shaders and true pixel shaders required to run DirectX 8-class apps optimally.

    Still, GeForce4 MX-class graphics integrated into a chipset should be good enough to lead the industry. The nForce2 can allocate up to 128MB of frame buffer memory to the built-in graphics core. It can't dynamically partition more RAM as needed like Intel's 845G, but nForce2 can dynamically allocate memory bandwidth as needed to keep the graphics pipeline fed. NV17 also packs an MPEG2 decoder, so nForce2 machines will be able to play back DVDs without taxing the CPU.

    NVIDIA's NV17 graphics core, which is essentially what's integrated into nForce2

  • Improved display support — NV17 also brings much-improved display capabilities to nForce2 IGP. Dual integrated RAMDACs allow nForce2 to support two simultaneous analog displays, and dual TMDS transmitter interfaces mean nForce2 could support a pair of digital flat panels. Most likely, mobo makers will opt to include some combination of VGA and DVI outputs augmented by DVI-to-VGA port converters, as do video card makers. They might also opt to include an S-Video out port, since NV17 has a built-in TV encoder, as well.

    NVIDIA will also support a low-cost "Digital display card" that can ride in the AGP slot and provide DVI-out ports for the built-in graphics core, much like Intel does with the 845G.

    NVIDIA's nView software suite will tie it all together with robust support for multiple, independent display resolutions and refresh rates.

    NVIDIA's concept motherboard sports dual VGA ports
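The bandwidth figures quoted in the memory-controller bullet above fall out of simple arithmetic: peak bandwidth is the effective transfer rate times the bus width (times the channel count). Here's a quick sketch of the math; the function name is ours, not NVIDIA's.

```python
def peak_bandwidth_gbs(mtransfers_per_sec, bus_width_bytes, channels=1):
    """Peak bandwidth in GB/s (1 GB/s = 1000 MB/s, as chipset marketing counts it)."""
    return mtransfers_per_sec * bus_width_bytes * channels / 1000

# Two 64-bit (8-byte) channels of DDR400, i.e. 400 MT/s effective:
memory = peak_bandwidth_gbs(400, 8, channels=2)

# The Athlon XP's 266 MT/s, 64-bit front-side bus:
fsb = peak_bandwidth_gbs(266, 8)

print(f"Memory: {memory} GB/s, FSB: {fsb} GB/s")
```

The gap between the two results is why the CPU alone can't soak up all 6.4GB/s: the front-side bus tops out at roughly 2.1GB/s, leaving the rest for the integrated graphics and the AGP port.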

That's pretty much it for the north bridge. The IGP and SPP will be pin-compatible, so mobo makers will be able to vary designs only slightly and get mobos with and without graphics onboard. For what it's worth, I asked NVIDIA point blank, and the SPP chip is not an IGP chip with the graphics disabled. TSMC will be fabbing two distinct chips for NVIDIA.

NVIDIA says this time around its focus will be on the SPP, because systems with discrete graphics represent a larger portion of the opportunity in the Socket A market. NVIDIA also says SPP-based boards should be price-competitive with the Taiwanese competition.