Nvidia’s GeForce 8600 series graphics cards

I DON’T ACTUALLY have to write these reviews, you know. If I get enough caffeine into my bloodstream, letters begin to swim across the page, form into words, and a stream of trenchant observations interlaced with bad jokes just kind of assembles itself in front of me. At least, that’s what I think happens. After staying up ’til 4:00 AM to push one of these babies out of the door, I have little or no recollection of the process by which they get written. In fact, I was surprised to learn the other day that I reviewed the Radeon X1650 XT last October. Can’t say I recall that one.

Anyhow, before I forget, the prompter of today’s stimulant-enhanced activities is Nvidia’s launch of a new line of DirectX 10-capable mid-range graphics cards, known as the GeForce 8600 series. In fact, Nvidia is setting loose a whole range of products, from the bargain-bin GeForce 8300 GS to the GeForce 8600 GTS, which merits at least six shots of espresso, by my reckoning. I’ve spent the better part of the past week testing out the GeForce 8600 GTS and its little brother, the GT, against their predecessors and natural competitors from the Radeon camp.

So how well does the unified architecture behind the killer GeForce 8800 translate into a mid-range GPU? The answer’s slipped my mind, my hands are shaking, and I need to visit the men’s room. But I think I’ve stashed some charts and graphs in the following pages, so let’s see what they have to say.

Inside the G84 GPU
The GeForce 8 line of GPUs started out with the high-end GeForce 8800, which we reviewed back at its introduction. I’m about to lay a whole bunch of graphics mumbo-jumbo on you that’s predicated on knowledge of the G80 GPU, so you might want to read that review if you haven’t yet.

Of course, since everyone always listens to suggestions like that one, I won’t have to tell you that the G80 is an all-new graphics processor with a unified shader architecture that’s designed to live up to the requirements of the DirectX 10 graphics programming interface built into Windows Vista. Nor will I have to explain that the G80 uses a collection of stream processors (SPs) to handle both pixel and vertex processing, allocating computational resources as needed to meet the demands of the scene being drawn. You’ll not need to be reminded that the G80 SPs are scalar in nature—that they each operate on a single pixel component, not on the usual four components in parallel. And you’ll already be well aware that the G80 produces higher-quality pixels generally, from its 32-bit floating-point precision per pixel component to the wretched excess of texture filtering capacity Nvidia has deployed to deliver angle-independent anisotropic filtering by default. The G80’s excellent coverage-sampled antialiasing will be old hat, as well, so you won’t be impressed to hear that it delivers up to 16X sample quality with minimal slowdowns.

So I might just as well skip ahead and say that the G84 graphics processor that powers the GeForce 8600 series is based on this same basic technology, with the same capabilities, only scaled down and tweaked to better suit graphics cards far less expensive than $600 behemoths like the GeForce 8800 GTX.

Indeed, the G84 is down to a more manageable 289 million transistors, by Nvidia’s estimates, which is well under half the count of the staggering 680-million-transistor G80. G84s are manufactured by TSMC on an 80nm fab process, and by my shaky measurements, the chips are roughly 169 mm². That number gives the G84 an unmistakably upper-middle-class pedigree. For reference, its predecessor, the G73 GPU powering the GeForce 7600 series, is approximately 125 mm², while the AMD RV570 chip inside the Radeon X1950 Pro is about 230 mm².


Block diagram of the G84. Source: NVIDIA.

A single SP cluster. Source: NVIDIA.

This scaled-down G80 derivative packs only two partitions of 16 stream processors, for a total of 32 SPs onboard. That’s a precipitous drop from the 128 SPs in the G80, to say the least. Nvidia, though, has made provisions to keep the G84’s performance acceptable in its weight class. The texturing capacity of the G84’s SP partitions has been beefed up, so each partition’s texture processor can handle eight texture addresses per clock, up from four on the G80. That gives the G84 the ability to handle a total of 16 texture address ops per clock, although the ratio of texture addressing to filtering capacity is altered. (The texture filtering capacity of each partition remains the same as on the G80, at eight bilinear filtered texels per clock.)

Nvidia claims the other performance tweaks in the G84 include improved stencil cull performance and the ever-informative “various low-level architectural tweaks,” whatever those may be.

If you’re attuned to more traditional graphics processors, the G84’s 32 stream processors may sound like a lot. After all, the G73 chip in the GeForce 7600 has 12 shader processors and the higher-end G71 has 24. But keep in mind, as I’ve mentioned, that the G84’s SPs are scalar, so they only operate on a single pixel component at once, while the vector units in most GPUs process four components together. One could justifiably argue that the G84’s 32 SPs are the equivalent of eight traditional shader units—not very impressive. To make up the difference, the G84 is banking on SP clocks roughly twice the typical frequencies of previous-gen GPUs, along with a more efficient overall architecture.
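To put rough numbers on that argument, here’s a quick back-of-the-envelope sketch in Python. It treats one traditional vec4 shader unit as four scalar lanes—a simplification for illustration only, not a claim about how either architecture actually schedules shader code—and plugs in the clock speeds quoted elsewhere in this review.

    # Rough per-clock shader component throughput: 32 scalar SPs at a high
    # shader clock versus 12 vec4 units at a conventional core clock.
    g84_sps = 32                 # scalar stream processors (GeForce 8600)
    g84_sp_clock_hz = 1.45e9     # stock GeForce 8600 GTS shader clock
    g73_vec4_units = 12          # pixel shader units (GeForce 7600)
    g73_clock_hz = 560e6         # GeForce 7600 GT core clock

    g84_rate = g84_sps * g84_sp_clock_hz           # ~46.4 billion components/s
    g73_rate = g73_vec4_units * 4 * g73_clock_hz   # ~26.9 billion components/s
    print(f"G84 advantage: {g84_rate / g73_rate:.1f}x")  # roughly 1.7x

Crude as it is, the math shows why the G84’s 32 scalar SPs aren’t as anemic as they might sound: the higher shader clock alone puts its raw component throughput ahead of the GeForce 7600 GT’s pixel shaders, before any architectural efficiency gains enter the picture.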

Beyond the SPs, the G84 has eight raster operators (or ROPs), so it can output a maximum of eight pixels per clock to memory. That doesn’t make for impressive pixel fill rate numbers, but it should suffice. Texturing and shading are the more likely constraints these days. The two ROP partitions on the G84 each have a 64-bit path to memory, yielding a combined 128-bit memory interface—a standard-issue config for this class of GPU from Nvidia but only a third the width of the 384-bit memory bus of the G80.
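To see what the narrower bus means in raw terms, here’s a minimal bandwidth calculation. The 8600 GTS figure doubles the 1000MHz memory clock from the spec table below to get its effective DDR rate; the GeForce 8800 GTX comparison assumes that card’s stock 1.8GHz effective memory clock.

    # Peak memory bandwidth = bus width (in bytes) x effective memory clock
    def bandwidth_gbps(bus_width_bits, effective_clock_mhz):
        return bus_width_bits / 8 * effective_clock_mhz / 1000  # GB/s

    print(bandwidth_gbps(128, 2000))  # GeForce 8600 GTS: 32.0 GB/s
    print(bandwidth_gbps(384, 1800))  # GeForce 8800 GTX: 86.4 GB/s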

Another improvement over the G80 is the G84’s new “VP2” video processing unit, which includes hardware to accelerate more portions of the HD video decoding task. Nvidia says the G84’s VP2 has “full acceleration for the entire H.264 decode chain,” although such pronouncements are notoriously slippery. H.264 decoding involves many stages, and some chores will almost certainly fall to the CPU. Nonetheless, the G84 has new logic to assist with decoding H.264’s context-adaptive encoding schemes and with decryption of 128-bit AES copy-protected content. These abilities will certainly be a welcome addition to this mid-range GPU, which is likely to find its way into systems that lack the CPU horsepower to handle high-def video processing on their own.

Unlike the G80, the G84 doesn’t require a separate, external display chip; the G84 has its display output logic built in, and it’s capable of driving a pair of dual-link DVI connections simultaneously, each at a maximum 2560×1600 resolution. HDCP support is also included, since so many of us fear for the safety of the content we’ve purchased.

 

The cards
The G84 GPU begins its career strapped to one of two different graphics cards, the GeForce 8600 GT and its brawnier companion, the 8600 GTS. We have examples of each on hand from XFX, and they look like, well, graphics cards—mid-range ones, to be exact, with modest single-slot coolers.


XFX’s GeForce 8600 GTS


The 8600 GTS sports a single PCIe aux power connector


XFX’s 8600 GT rides on a smaller board and needs no extra power

Both cards sport a pair of dual-link DVI connectors, but only the GTS boasts HDCP support.

The official speeds and feeds on the 8600 series look like so:

                    GPU    Core clock   SP clock   Memory clock   Memory      Price
                           (MHz)        (MHz)      (MHz)          interface   range
GeForce 8600 GTS    G84    675          1450       1000           128 bits    $199-229
GeForce 8600 GT     G84    540          1190       700            128 bits    $149-159

Nvidia’s partners have some leeway to improvise on this front, as is their custom. XFX will be selling three different variants of the GeForce 8600 GTS, ranging from a stock-clocked model at $199 to a version that has a 730MHz core, 1.566GHz SPs, and 1.13GHz memory for $239. Similarly, they’ll have a stock-speed 8600 GT for $149 and a hotter model for $169 that packs a 620MHz core, 1.355GHz SPs, and 800MHz memory. Other Nvidia board partners look to have similar plans, and Gigabyte has even cooked up 8600 GT and GTS cards with passive cooling. The first GeForce 8600 cards should be available now, with broader availability by the end of the month.

An even smaller DX10 GPU? Yep, meet G86
So you’ve heard the spiel on the G84 GPU and the GeForce 8600 lineup, and I know what you’re thinking. You’re thinking, “That’s all well and good, but $149 for a graphics card is kind of steep. What is Apple going to put into its dual-Xeon Mac Pro?” For that, we have the smaller, cheaper G86 GPU.

You can think of the G86 as a G84 with one of its SP clusters removed. The G86 has only a single SP cluster, but that SP cluster shares the G84’s capacity for eight texture address ops per clock. Also present are eight ROPs, the improved video decoding logic, and the same 128-bit memory interface.

The cluster-ectomy has dropped the G86 to a total of 210 million transistors (which yields the interesting bit of info that an SP cluster costs about 79 million transistors, since the G84 is 289 million). Like the G84, it’s produced on TSMC’s 80nm fab process. We don’t yet have a G86 in our possession, so I can’t give you a die size measurement. The G86 is a separate chip, though, not just G84 silicon with one of its SP clusters deactivated.

The G86 will power a whole host of low-end video cards that looks like so:

                    GPU    Core clock   SP clock   Memory clock   Memory      Price
                           (MHz)        (MHz)      (MHz)          interface   range
GeForce 8500 GT     G86    450          900        400            128 bits    $89-129
GeForce 8400 GS     G86    450          900        400            64 bits     OEM only
GeForce 8300 GS     G86    450          900        400            64 bits     OEM only

As noted, the 8300 GS and 8400 GS are low-cost products intended for extremely cost-conscious PC makers, while the GeForce 8500 GT at $89 ought to reach down to the very bottom of the retail graphics card market. The 8300 GS will even have its VP2 video processing logic disabled to underscore its bargain-bin status.

 

Test notes
This review was a complicated beast to put together for a whole host of reasons, not least of which is the fact that we decided to make the leap to Windows Vista—the 64-bit version, no less—for our graphics test platforms. We held off on making this move for quite a while out of concern over the Vista driver situation, but after checking with both Nvidia and AMD, we decided the time had come. Both companies assured us that their Vista drivers were up to it, and both claimed that 32-bit and 64-bit Vista driver development happens pretty much concurrently.

Turns out that’s mostly true. We were able to conduct our testing without a major catastrophe, and most of the games we’d chosen to use installed and ran reasonably well on Vista x64. However, as we worked through the testing process, weaknesses in 64-bit Vista support became apparent, especially on the Nvidia side of things. The version of the nTune utility that we downloaded from Nvidia’s website claimed to work with Vista x64, but it wouldn’t install and run properly, leaving us unable to overclock the GeForce 8600 cards. (Though we did later get a beta version we haven’t yet had time to try.) We found out that the new video processing capabilities of the 8600 series aren’t yet supported in Vista x64, either. And we ran into a number of other minor catches that we’ll address in the following pages. We chose to test with Vista x64 because, well, that’s the version of Windows we’d want to install on a new PC for ourselves. Unfortunately, at least one of the two major graphics vendors may not be entirely ready to meet us there.

You’ll also see some things in this review that are artifacts from a change we made partway through the testing process. Initially, we’d planned to test the GeForce 8600 GTS against the similarly priced GeForce 7900 GS and Radeon X1950 Pro in both single and dual-card configurations. But then a GeForce 8600 GT showed up on our doorstep this past Friday, unexpectedly early, and we decided to test it, as well, against the GeForce 7600 GT and Radeon X1650 XT. Thus, we abandoned our plans for multi-GPU action in order to make room for the 8600 GT and competitors. However, by then we’d already chosen the display resolutions and quality settings at which we would be testing, having defined those with the higher-end cards and dual-GPU configs in mind. As a result, the lower-end cards had to struggle through some games at quality settings that are a bit above their pay grade. That’s what happens when you zig rather than zag sometimes.

Moving to Vista did give us a nice opportunity to revamp our test platforms. Our nForce 4 SLI motherboards were getting a little long in the tooth, so we decided to upgrade to the newer nForce 680i SLI. XFX recently began selling motherboards based on Nvidia’s excellent nForce 680i SLI reference design, and they agreed to provide us with this board for our test systems.


Our new Vista x64 test rig: nForce 680i SLI boards from XFX with 4GB of Corsair Dominators

One of the things testing in Vista x64 allows us to do is take advantage of more RAM, and some newer games like Supreme Commander seem to appreciate having more than 2GB on hand. With that in mind, Corsair was kind enough to supply us with four 1GB Dominator DIMMs for our testbeds. Thanks to XFX and Corsair for the support. We should be ready for R600 now!

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor Core 2 Extreme X6800 2.93GHz
System bus 1066MHz (266MHz quad-pumped)
Motherboard XFX nForce 680i SLI
BIOS revision P26
North bridge nForce 680i SLI SPP
South bridge nForce 680i SLI MCP
Chipset drivers ForceWare 15.00
Memory size 4GB (4 DIMMs)
Memory type 2 x Corsair TWIN2X2048-8500C5D DDR2 SDRAM at 800MHz
CAS latency (CL) 4
RAS to CAS delay (tRCD) 4
RAS precharge (tRP) 4
Cycle time (tRAS) 18
Command rate 2T
Hard drive Maxtor DiamondMax 10 250GB SATA 150
Audio Integrated nForce 680i SLI/ALC850 with Realtek R1.64 drivers
Graphics Radeon X1650 XT 256MB PCIe
with Catalyst 7.3 drivers
Radeon X1950 Pro 256MB PCIe
with Catalyst 7.3 drivers
EVGA e-GeForce 7600 GT 256MB PCIe
with ForceWare 158.14 drivers
XFX GeForce 7900 GS 480M Extreme 256MB PCIe
with ForceWare 158.14 drivers
XFX GeForce 8600 GT 620M 256MB PCIe
with ForceWare 158.14 drivers
XFX GeForce 8600 GTS 730M 256MB PCIe
with ForceWare 158.14 drivers
OS Windows Vista Ultimate x64 Edition
OS updates

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults.

The test systems’ Windows desktops were set at 1600×1200 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Pixel-filling power
We’ve already talked about GeForce 8600 GT and GTS specifications, but it’s sometimes enlightening to do some basic math to see what those specs are likely to mean for performance. Here’s a look at the 8600 series’ specifications along with their closest competitors’. Notice that I’ve included separate entries for the higher-clocked XFX cards. Our review samples are these faster, more expensive models, so keep that in mind as you see our test results.

Also, these numbers matter in determining overall performance, but all of them—including memory bandwidth—are growing less important as games make more extensive use of programmable shading power. (If you’d like to check the arithmetic behind the peak figures, there’s a short sketch after the table.)

                            Core clock   Pixels/   Peak fill rate   Textures/   Peak fill rate   Effective memory   Memory bus     Peak memory
                            (MHz)        clock     (Mpixels/s)      clock       (Mtexels/s)      clock (MHz)        width (bits)   bandwidth (GB/s)
Radeon X1650 XT             575          8         4600             8           4600             1350               128            21.6
Radeon X1650 Pro            600          4         2400             4           2400             1400               128            22.4
GeForce 7600 GT             560          8         4480             12          6720             1400               128            22.4
GeForce 8600 GT             540          8         4320             16          8640             1400               128            22.4
XFX GeForce 8600 GT 620M    620          8         4960             16          9920             1600               128            25.6
GeForce 8600 GTS            675          8         5400             16          10800            2000               128            32.0
XFX GeForce 8600 GTS 730M   730          8         5840             16          11680            2260               128            36.2
GeForce 7900 GS             450          16        7200             20          9000             1320               256            42.2
Radeon X1950 Pro            575          12        6900             12          6900             1380               256            44.2
XFX GeForce 7900 GS 480M    480          16        7680             20          9600             1400               256            44.8
Radeon X1900 XT             625          16        10000            16          10000            1450               256            46.4
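
Here’s that promised sketch: a minimal bit of Python that reproduces the table’s peak numbers from the basic specs. Pixel and texel fill rates are just units times core clock, and memory bandwidth is bus width in bytes times the effective memory clock; the two example cards are pulled straight from the table above.

    # Peak rates derived from a card's basic specs
    def peak_rates(core_mhz, pixels_per_clk, texels_per_clk, mem_mhz_effective, bus_bits):
        fill_mpixels = core_mhz * pixels_per_clk                 # Mpixels/s
        fill_mtexels = core_mhz * texels_per_clk                 # Mtexels/s
        bandwidth_gb = bus_bits / 8 * mem_mhz_effective / 1000   # GB/s
        return fill_mpixels, fill_mtexels, bandwidth_gb

    print(peak_rates(675, 8, 16, 2000, 128))   # GeForce 8600 GTS: (5400, 10800, 32.0)
    print(peak_rates(575, 12, 12, 1380, 256))  # Radeon X1950 Pro: (6900, 6900, 44.16)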

The GeForce 8600 GT tucks right into its place next to the Radeon X1650 XT and GeForce 7600 GT without rocking the boat. At stock speeds, it has the same memory bandwidth as the 7600 GT, but it has more texturing capacity than either of its rivals.

The GeForce 8600 GTS, on the other hand, is priced to challenge the Radeon X1950 Pro and GeForce 7900 GS, but both of those cards have about 10GB/s more memory bandwidth than a stock GeForce 8600 GTS, thanks to their 256-bit memory interfaces. The 8600 GTS can more than hang with them on texturing capacity, but we still have a mismatch of sorts in hardware.

How do these numbers play out in a synthetic fill rate test?

The GeForce 8600 series cards tend to underachieve a bit here, with neither reaching its theoretical peak fill rate, even with multitexturing. Then again, as I said, shaders are the big constraint these days. Perhaps the 8600s can redeem themselves with their performance in games.

 

S.T.A.L.K.E.R.: Shadow of Chernobyl
We tested S.T.A.L.K.E.R. by manually playing through a specific point in the game five times while recording frame rates using the FRAPS utility. Each gameplay sequence lasted 60 seconds. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
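If it helps to see that reduction spelled out, here’s the gist of it in Python. The frame-rate values are hypothetical, purely to illustrate how the five FRAPS sessions boil down to the two numbers we chart.

    from statistics import mean, median

    # One average FPS and one lowest FPS per 60-second session (made-up values)
    session_averages = [41.2, 39.8, 42.5, 40.1, 41.9]
    session_lows     = [24, 31, 27, 26, 29]

    reported_average = mean(session_averages)  # charted as the average frame rate
    reported_low     = median(session_lows)    # median of the lows blunts outliers
    print(reported_average, reported_low)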

For this test, we set the game to its “medium” quality settings at 1280×1024 resolution. Antialiasing was not enabled.

I should mention, too, that S.T.A.L.K.E.R. still appears to be somewhat buggy. We found on the Radeons that the game would suddenly slow to a crawl and refuse to speed up again. When that happened, we had to throw out the numbers for that test run, exit the game, and start over again in order to get good results. I’d pin the blame on AMD, but Nvidia warned us about issues with S.T.A.L.K.E.R., too.

Nevertheless, this is a good-looking game with HDR lighting, some nice shader effects, and lots of vegetation everywhere.

Things start out well for the GeForce 8600 cards, as even the 8600 GT outperforms the Radeon X1950 Pro in this brand-new game. Impressive.

Supreme Commander
Here’s another new game, and a very popular request for us to try. Like many RTS and isometric-view RPGs, though, Supreme Commander isn’t exactly easy to test well, especially with a utility like FRAPS that logs frame rates as you play. Frame rates in this game seem to hit steady plateaus at different zoom levels, complicating the task of getting meaningful, repeatable, and comparable results. For this reason, we used the game’s built-in “/map perftest” option to test performance, which plays back a pre-recorded game.

Another note: the frame rates you see below look pretty low, but for this type of game, they’re really not bad. I found the Radeon X1950 Pro to be playable at 1600×1200, for instance, even though it only averaged 17.5 FPS. Frame rates during actual gameplay are similar to the numbers from the performance test, and they’re still acceptable. This is simply different from an action game, where always-fluid motion is required for smooth gameplay.

The GeForce 8600 GT is at the top of its class here, well ahead of the Radeon X1650 XT and GeForce 7600 GT, as the 8600 GTS duels with the Radeon X1950 Pro for the top spot.

 

Battlefield 2142
We tested this one with FRAPS, much like we did S.T.A.L.K.E.R. (and if I have to type S.T.A.L… ugh, never mind).

The Radeons take this one from the 8600s, though it’s a tight race in both classes. In a particular bit of ownage, the 8600 GTS’s low frame rate beats out the 7900 GS’s average.

Half-Life 2: Episode One
The Source game engine uses an integer data format for its high-dynamic-range rendering, which allows all of these cards to combine HDR rendering with 4X antialiasing.

Check this out: the 8600 GTS runs with the Radeon X1950 Pro at 1280×1024 and 1600×1200, but it wilts at the three-megapixel 2048×1536 resolution. That’s probably the result of the GTS’s lower memory bandwidth becoming a constraint. Nevertheless, the GeForce 8600s do run this game nicely at 1600×1200 with 4X antialiasing and 16X aniso enabled.

 
The Elder Scrolls IV: Oblivion
We turned up all of Oblivion’s graphical settings to their highest quality levels for this test. The screen resolution was set to 1280×1024, with HDR lighting enabled. 16X anisotropic filtering was forced on via the cards’ driver control panels.

We strolled around the outside of the Leyawiin city wall, as shown in the picture below, and recorded frame rates with FRAPS. This area has loads of vegetation, some reflective water, and some long view distances.

When is getting the best frame rate not a win? When the card doesn’t run the game very well. FRAPS didn’t seem to catch it, but all of the Nvidia cards we tested had performance issues in Oblivion. The game would hesitate momentarily as we walked through the vegetation, seemingly at each point where there was a level-of-detail change. This game uses an awful lot of dynamic LOD scaling, so that means lots of quick pauses and hiccups. The GeForce 8600s have enough shader performance to suffer from this issue and still produce good frame rate numbers, probably due in part to their ability to dedicate lots of computing power to vertex processing in this crazy-detailed area.

We have seen GeForce 7600 GT and 7900 GS cards run Oblivion quite smoothly in the past, so I expect this is some sort of problem with Nvidia’s Vista x64 driver.

Rainbow Six: Vegas
This game is notable because it’s the first game we’ve tested based on Unreal Engine 3. As with Oblivion, we tested with FRAPS. This time, I played through a 90-second portion of the “Dante’s” map in the game’s Terrorist Hunt mode, with all of the game’s quality options cranked.

I’m starting to think the real loser here is the GeForce 7900 GS, which gets humiliated by the GeForce 8600 GT once more. For its part, the GeForce 8600 GTS can’t quite catch the Radeon X1950 Pro, but it’s very close once again.

 

3DMark06

The 8600s turn in a commanding performance in 3DMark, as they did in some of the more shader-intensive games. The GTS’s lead over the Radeon X1950 Pro narrows considerably at 2048×1536, but crazy high resolutions like that are typically the domain of video cards more expensive than any we’re testing here.

The 8600 GT, meanwhile, is just a slam-dunk in its price class, as these results help confirm.

The 8600 GTS trails the 7900 GS and X1950 Pro in 3DMark’s simple pixel shader benchmark, reminding us of the G84 GPU’s limitations. That reminder, though, is promptly counterbalanced by an out-of-this-world performance from the 8600 cards in 3DMark’s two vertex shader tests. The G84 GPU’s unified architecture allows it to dedicate the lion’s share of its shader processing power to vertex processing as needed, giving it a big edge over the older DX9 chips.

3DMark’s particle simulation gives us a sense of how these GPUs might handle one type of physics processing task, and again, the 8600s excel. Meanwhile, the Radeons can’t run this test because they lack support for vertex texture fetch.

 

Coverage sampled antialiasing performance
We’ve compared the basic performance of the GeForce 8600 series to its closest competition, but we haven’t yet said too much about image quality and some of the G84 GPU’s more intriguing features. That’s in part because you can read about image quality and the like in my GeForce 8800 review. What you saw there from the high-end GeForce 8, GeForce 7, and Radeon X1000 GPUs also applies to their lower-end derivatives. I checked it out, and the GeForce 8600 cards appear to have the same angle-independent anisotropic filtering capabilities as the GeForce 8800.

But I do want to take a second to see how coverage-sampled antialiasing has translated to the G84. CSAA offers 16X image quality with very little performance drag. Here’s a quick look at performance scaling with the various AA modes these GPUs offer.

The GeForce 8600 GTS delivers the same frame rates with 16X CSAA that the Radeon X1950 Pro does in its 6X AA mode, despite the fact that 16X CSAA achieves discernibly smoother edges.

Oh, and I’m not sure why the 8600 GT confounds expectations by being faster in 8xQ mode than in 16X CSAA. The GTS doesn’t do that. I did re-test the GT several times, and the results didn’t change.

 

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement.

The idle measurements were taken at the Windows desktop. The cards were tested under load running Oblivion at 1280×1024 resolution with 16X anisotropic filtering. We loaded up the game and ran it in the same area where we did our performance testing.

The G84 is an efficient architecture in more ways than one. It performs well, and it does so without drawing too much power.

Noise levels and cooling
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 14″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the Zalman CNPS9500 LED we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Notice that the GeForce 7 cards don’t look so good here, especially at idle. That’s because their coolers appeared to be running at full tilt whether the system was running a game or sitting idle at the Windows desktop. I even tried disabling Vista’s Aero look, but it didn’t seem to help. Looks to me like we found another bug in Nvidia’s Vista x64 drivers.

At any rate, the 8600 cards come out looking pretty good, with the GTS tying the Radeon X1950 Pro. None of these cards are as quiet as the high-end cards with massive dual-slot coolers, though.

 
Conclusions
Like the G80 on which it’s based, the G84 GPU is a formidable piece of technology. The G80’s unified shader architecture has scaled down quite gracefully to this welterweight GPU, bringing with it the higher image quality, efficient performance per die area and per watt, and DirectX 10-class features of its much larger sibling.

As a result, the GeForce 8600 GT is without question a nice advance over the two incumbent offerings in its price class, the GeForce 7600 GT and Radeon X1650 XT. The 8600 GT we tested is a hopped-up version from XFX that’s clocked somewhat higher than stock and lists for $169. Nevertheless, this card so decisively outperformed the two DX9 cards that there’s no doubt the stock-clocked version of the 8600 GT is the best option at $149. This is the kind of progress we like seeing from one generation to the next in the GPU arena, and Nvidia has delivered once again.

That said, the GeForce 8600 GTS faces much stiffer competition in the form of the GeForce 7900 GS and, especially, the Radeon X1950 Pro. The GeForce 8600 GTS is a very good product, but it’s fighting above its natural weight class when it takes on these competitors, both of which have 256-bit paths to memory. The 8600 GTS variant we tested did largely hold its own against the Radeon X1950 Pro in terms of performance, even scoring clear victories in S.T.A.L.K.E.R. and 3DMark06, but the X1950 Pro had an edge overall. That’s true even though we used the “overclocked in the box” version of the 8600 GTS from XFX, a card that lists for $239. The Radeon X1950 Pro can be had for around $179 at online vendors if you shop around.

Of course, the X1950 Pro isn’t DirectX 10 compliant, and once DX10 apps arrive in force, DX9 cards may begin to feel old very quickly. DX10 is a clean break from the past, and if you want to run DX10 apps, you’ll need to have a card that’s compliant. But one doesn’t wish to be held hostage to the DX10 conversion, forced to cough up more money and to sacrifice DX9 performance in order to get into the club. That’s what the Radeon X1950 Pro versus GeForce 8600 GTS tradeoff feels like right now, which is unfortunate.

Nvidia itself offers a way out of this dilemma in the form of the GeForce 8800 GTS 320MB, a vastly more powerful graphics card that costs only 60 bucks more than the XFX GeForce 8600 GTS we tested. Pay the extra 60 bucks, fer goshsakes. You’ll be getting a much better value.

Of course, the GeForce 8600 GTS needs nothing more than a price cut to make it more compelling. I wouldn’t be surprised to see that happen fairly soon, like when AMD’s new mid-range DX10 offerings arrive. 
