Nvidia’s GeForce 9600 GT graphics processor

For a while there, trying to find a decent DirectX 10-capable graphics card for somewhere around two hundred bucks was a tough assignment. Nvidia had its GeForce 8600 GTS, but that card didn’t really perform well enough to measure up to similarly priced DX9 cards. On the Radeon side of things, AMD had, well, pretty much nothing. You could buy a cheap, slow DX10-ready Radeon or a faster one with a formidable price tag. Between them, crickets chirped as tumbleweeds blew by.

Happily, the GPU makers saw fit to remedy this situation, and in the past few months, we’ve gained an embarrassment of riches in video card choices between about $170 and $250, including the screaming GeForce 8800 GT and a pair of solid values in the Radeon HD 3850 and 3870. Now, that embarrassment is becoming positively scandalous, as Nvidia unveils yet another new GPU aimed at graphics cards below the $200 mark: the GeForce 9600 GT.

Where does the 9600 GT fit into the daunting mix of video cards available in this price range? How does it match up with the Radeon HD 3850 and 3870? Why is this new GPU the first installment in the GeForce 9 series? We have no idea about that last one, but we’ll try to answer those other questions.

The GeForce 9600 GT laid bare

Welcome the new middle management

Let’s get this out of the way at the outset. Nvidia’s decision to make this new graphics card the first in the GeForce 9 series is all kinds of baffling. They just spent the past few months introducing two new members of the 8-series, the GeForce 8800 GT and the confusingly named GeForce 8800 GTS 512, based on a brand-new chip codenamed G92. The G92 packs a number of enhancements over older GeForce 8 graphics processors, including some 3D performance tweaks and improved HD video features. Now we have another new GPU, codenamed G94, that’s based on the same exact generation of technology and is fundamentally similar to the G92 in almost every way. The main difference between the two chips is that Nvidia has given the G94 half the number of stream processor (SP) units in the G92 in order to create a smaller, cheaper chip. Beyond that, they’re pretty much the same thing.

So why the new name? Nvidia contends it’s because the first product based on the G94, the GeForce 9600 GT, represents such a big performance leap over the prior-generation GeForce 8600 GTS. I suppose that may be true, but they’re probably going to have to rename the GeForce 8800 GT and GTS 512 in order to make their product lineup rational again. For now, you’ll just want to keep in mind that when you’re thinking about the GeForce 8800 GT and the 9600 GT, you’re talking about products based on two chips from the same generation of technology, the G92 and G94. They share the same feature set, so choosing between them ought to be a simple matter of comparing price and performance, regardless of what the blue-shirts haunting the aisles of Best Buy tell you.

Not that we really care about that stuff, mind you. We’re much more interested in the price and performance end of things, and here, the G94 GPU looks mightily promising. Because Nvidia has only excised a couple of the SP clusters included in the G92, the G94 retains most of the bits and pieces it needs to perform quite well, including a 256-bit memory interface and a full complement of 16 ROP units to output pixels and handle antialiasing blends. Yes, the G94 is down a little bit in terms of shader processing power and (since texture units are located in the SPs) texture filtering throughput. But you may recall that the GeForce 8800 GT is based on a G92 with one of its eight SP clusters disabled, and it works quite well indeed.

Here’s a quick look at the G94’s basic capabilities compared to some common points of reference.

                   ROP output        Bilinear filtering   Bilinear FP16 filtering   Stream        Memory interface
                   (pixels/clock)    (texels/clock)       (texels/clock)            processors    (bits)
Radeon HD 38×0     16                16                   16                        320           256
GeForce 9600 GT    16                32                   16                        64            256
GeForce 8800 GT    16                56                   28                        112           256
GeForce 8800 GTS   20                24                   24                        96            320

The 9600 GT is potent enough to match up well with the GeForce 8800 GT and the Radeon HD 3850/3870 in most categories. Even the older G80-based GeForce 8800 GTS fits into the conversation, although its capacities are almost all higher. As you know, the RV670 GPU in the Radeons has quite a few more stream processors, but Nvidia’s GPUs tend to make up that difference with higher SP clock speeds.

In fact, the GeForce 9600 GT makes up quite a bit of ground thanks to its clock speeds. The 9600 GT’s official “base” clock speeds are 650MHz for the GPU core, 1625MHz for the stream processors, and 900MHz (1800MHz effective) for its GDDR3 memory. From there, figuring out the GPU’s theoretical potency is easy.

                   Peak pixel      Peak bilinear       Peak bilinear FP16    Peak shader     Peak memory
                   fill rate       texel filtering     texel filtering       arithmetic      bandwidth
                   (Gpixels/s)     rate (Gtexels/s)    rate (Gtexels/s)      (GFLOPS)        (GB/s)
Radeon HD 3870     12.4            12.4                12.4                  496             72.0
GeForce 9600 GT    10.4            20.8                10.4                  312             57.6
GeForce 8800 GT    9.6             33.6                16.8                  504             57.6
GeForce 8800 GTS   10.0            12.0                12.0                  346             64.0

As expected, the 9600 GT trails the 8800 GT in terms of texture filtering capacity and shader processing power, but it has just as much pixel fill rate and memory bandwidth as its big brother. More notably, look at how the 9600 GT matches up to the GeForce 8800 GTS, a card that was selling for $400 less than a year ago.

Making these theoretical comparisons to entirely different GPU architectures like the RV670 is rather tricky. On paper, the 9600 GT looks overmatched versus the Radeon HD 3870, even though we’ve given the GeForce cards the benefit of the doubt here in terms of our FLOPS estimates. (Another way of counting would cut the GeForces’ FLOPS count by a third.) We’ll have to see how that works out in practice.
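For the curious, here’s a minimal sketch of the arithmetic behind those peak figures, using the 9600 GT’s base clocks. The three-FLOPS-per-SP-per-clock counting (a multiply-add plus a multiply) is the benefit-of-the-doubt method reflected in the table; dropping the extra multiply is the counting that would cut the number by a third.

```python
# Back-of-the-envelope peak rates for the GeForce 9600 GT at Nvidia's base clocks.
# Counting 3 FLOPS per SP per clock (multiply-add plus multiply) is the generous
# method used above; counting 2 (multiply-add only) trims the figure by a third.

core_mhz = 650            # GPU core clock
shader_mhz = 1625         # stream processor clock
mem_mhz_effective = 1800  # GDDR3 effective data rate

rops = 16                 # pixels output per clock
bilinear_per_clock = 32   # bilinear texels filtered per clock
fp16_per_clock = 16       # bilinear FP16 texels filtered per clock
stream_processors = 64
mem_bus_bits = 256

pixel_fill = rops * core_mhz / 1000                        # 10.4 Gpixels/s
texel_rate = bilinear_per_clock * core_mhz / 1000          # 20.8 Gtexels/s
fp16_rate  = fp16_per_clock * core_mhz / 1000              # 10.4 Gtexels/s
gflops     = stream_processors * shader_mhz * 3 / 1000     # 312 GFLOPS
bandwidth  = (mem_bus_bits / 8) * mem_mhz_effective / 1000 # 57.6 GB/s

print(pixel_fill, texel_rate, fp16_rate, gflops, bandwidth)
```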

Incidentally, the 9600 GT’s performance will be helped at higher resolutions by a feature carried over from the G92: improved color compression. All GeForce 8-series GPUs compress color data for textures and render targets in their ROP subsystems in order to save bandwidth. The G92 and G94 have expanded compression coverage, which Nvidia says is now sufficient for running games at resolutions up to 2560×1600 with 4X antialiasing.

The chip

Like the G92 before it, the G94 GPU is manufactured on a 65nm fabrication process. That leaves AMD with something of an edge, since the RV670 is made using a smaller 55nm process. Nvidia estimates the G94’s transistor count at 505 million, versus 754 million for the G92. AMD seems to count a little differently, but it estimates the RV670 at a sinister 666 million transistors.

Here’s a quick visual comparison of the three chips. By my measurements, the G94 is approximately 240 mm², quite a bit smaller than the G92 at 324 mm² but not as small as the RV670 at 192 mm². Obviously, the G94 is very much in the same class as the RV670, and it should give Nvidia a much more direct competitor to AMD’s strongest product.

The cards

I’ve already told you about the 9600 GT’s basic specs, but the products you’ll see for sale will have a few other common parameters. Among them is this very nice development: cards will come with 512MB of GDDR3 memory, standard. I’m pleased to see this memory size becoming the new baseline for enthusiast-class products. Not every game at every resolution requires more than 256MB of memory, but a mid-range card with 512MB is a much nicer compromise, especially given RAM prices these days. On top of that, running two GPUs in SLI makes a lot more sense with 512MB of memory than it does with 256MB, where you’re facing a serious mismatch in GPU horsepower and available memory.

Most 9600 GTs will sport a single-slot cooler similar to the one on the 8800 GT. Nvidia rates board power consumption at 95W, so the 9600 GT requires the help of a single six-pin PCIe aux power connector. And, as we’ve hinted, prices should slot in just below the 8800 GT at about $169 to $189, according to Nvidia.

Of course, there will be a range of GeForce 9600 GT cards on tap from various board makers, and many of them will be clocked at higher frequencies than Nvidia’s defaults. That’s decidedly the case with the 9600 GT we have in the labs for review, Palit’s flamboyant GeForce 9600 GT Sonic.

Do you find Palit’s palette palatable?

This puppy’s dual-slot cooler is shrouded in bright Lego/Tonka yellow, which pretty effectively broadcasts that this isn’t a stock-clocked creation. In fact, Palit has turned up the GPU core clock to 700MHz, the shader clock to 1.75GHz, and the memory to 1GHz.

Also, there’s an atomic frog. With, I think, a flamethrower. I’ve learned a lot about video cards over the years, but some things I will never fully understand.

Anyhow, Palit has loaded this thing up with more ports than the Pacific Rim. Check it out:

There are two dual-link DVI outputs, an HDMI out, a DisplayPort connector, and an optical S/PDIF output. You’ll need to connect an audio source to the card’s two-pin internal plug (using the supplied internal audio cable) in order for HDMI audio and the S/PDIF output to work, since unlike the RV670, the G94 GPU lacks an audio device. Still, that’s a very impressive complement of ports—the most complete I’ve seen in a video card in some time. You’ve gotta give these guys credit for making something different here.

Unfortunately, different isn’t always better, as we found out when we tried to plug the six-pin portion of our adaptable eight-pin PCIe aux power lead into the card. Regular six-pin-only connectors work fine, but the eight-pin one didn’t fit.

Such problems could be resolved fairly easily by removing the shroud altogether, which exposes the card’s nifty cooler and would probably lead to better cooling anyhow. All other things being equal, I’d prefer a cooler designed to exhaust warm air from the case, like most dual-slot coolers do these days. However, I have to admit that this cooler did a fine job on our test card and made very little noise. This puppy has it where it counts, too, with a copper base connected to copper heatpipes routed into aluminum fins. The fan is speed controlled, and although it can be quite noisy when first booting or in a pre-boot environment, it’s impressively quiet in Windows—even when running a 3D application or game.

Palit’s festival of transgression against the reference design continues at the board level, where the firm has replaced the standard two-phase power with a three-phase design, intended to enhance board longevity and overclocking potential. Again, we like the initiative, and we’ll test the board’s overclocking headroom shortly.

This card ships with a copy of Tomb Raider Anniversary, which apparently isn’t a bad game, as hard as I find that to believe. Palit says the card’s MSRP is $219, but it’s currently selling for $209 on Newegg. Obviously, you’re paying extra for all of the bells and whistles on this card, which take it nearly into 8800 GT territory. Palit has a more pedestrian model selling for $179, as do a number of other board makers, including XFX and Gigabyte. MSI even has one with a 700MHz GPU core selling at that price.

The competition hits the juice

AMD has no intention of ceding ground to Nvidia in this portion of the market without a fight, which is good news for you and me. In order to counter the 9600 GT, AMD and the various Radeon board makers have recently slashed prices and juiced up their Radeon HD 3850 cards. Clock speeds are up, and many of the boards are now equipped with 512MB of GDDR3 memory, as well.

Diamond kindly agreed to send us its latest 3850 for comparison to the 9600 GT. This card is emblematic of the new wave of Radeon HD 3850s. It’s clocked at 725MHz with 512MB of GDDR3 memory running at 900MHz, and it’s selling for $169.99 right now. Those clock speeds put it darn near the base clocks of the Radeon HD 3870, although as we learned in our recent video card roundup, 3870 clocks are making something of a northward migration, as well.

Diamond isn’t alone in offering this class of product. In fact, we paired up the Diamond card with a similarly clocked HIS Hightech TurboX 512MB card for CrossFire testing.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core 2 Extreme X6800 2.93GHz (both systems)
System bus: 1066MHz (266MHz quad-pumped) (both systems)
Motherboard: Gigabyte GA-X38-DQ6 (single cards and CrossFire) / XFX nForce 680i SLI (SLI)
BIOS revision: F7 (X38) / P31 (680i)
North bridge: X38 MCH / nForce 680i SLI SPP
South bridge: ICH9R / nForce 680i SLI MCP
Chipset drivers: INF update 8.3.1.1009 and Matrix Storage Manager 7.8 (X38) / ForceWare 15.08 (680i)
Memory size: 4GB (4 DIMMs) in both systems
Memory type: 2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 18
Command rate: 2T
Audio: Integrated ICH9R/ALC889A with RealTek 6.0.1.5497 drivers (X38) / integrated nForce 680i SLI/ALC850 with RealTek 6.0.1.5497 drivers (680i)

Graphics (single cards and CrossFire, Gigabyte X38 system):
  Diamond Radeon HD 3850 512MB PCIe with Catalyst 8.2 drivers
  Dual Radeon HD 3850 512MB PCIe with Catalyst 8.2 drivers
  Radeon HD 3870 512MB PCIe with Catalyst 8.2 drivers
  Dual Radeon HD 3870 512MB PCIe with Catalyst 8.2 drivers
  Radeon HD 3870 X2 1GB PCIe with Catalyst 8.2 drivers
  Palit GeForce 9600 GT 512MB PCIe with ForceWare 174.12 drivers
  GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers
  EVGA GeForce 8800 GTS 512MB PCIe with ForceWare 169.28 drivers
  GeForce 8800 Ultra 768MB PCIe with ForceWare 169.28 drivers

Graphics (SLI, XFX nForce 680i SLI system):
  Dual Palit GeForce 9600 GT 512MB PCIe with ForceWare 174.12 drivers
  Dual GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers

Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x86 Edition
OS updates: KB936710, KB938194, KB938979, KB940105, KB945149, DirectX November 2007 Update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Sizing up the new guy

We’ve already talked some about the 9600 GT’s theoretical capabilities. Here’s a quick table to show how it compares with a broader range of today’s video cards, including the juiced-up Diamond Radeon HD 3850 512MB card we’re testing. I’ve included numbers for the Palit card at its higher clock speeds, as well.

                          Peak pixel     Peak bilinear       Peak bilinear FP16    Peak memory    Peak shader
                          fill rate      texel filtering     texel filtering       bandwidth      arithmetic
                          (Gpixels/s)    rate (Gtexels/s)    rate (Gtexels/s)      (GB/s)         (GFLOPS)
GeForce 9600 GT           10.4           20.8                10.4                  57.6           312
Palit GeForce 9600 GT     11.2           22.4                11.2                  64.0           336
GeForce 8800 GT           9.6            33.6                16.8                  57.6           504
GeForce 8800 GTS          10.0           12.0                12.0                  64.0           346
GeForce 8800 GTS 512      10.4           41.6                20.8                  62.1           624
GeForce 8800 GTX          13.8           18.4                18.4                  86.4           518
GeForce 8800 Ultra        14.7           19.6                19.6                  103.7          576
Radeon HD 2900 XT         11.9           11.9                11.9                  105.6          475
Radeon HD 3850            10.7           10.7                10.7                  53.1           429
Diamond Radeon HD 3850    11.6           11.6                11.6                  57.6           464
Radeon HD 3870            12.4           12.4                12.4                  72.0           496
Radeon HD 3870 X2         26.4           26.4                26.4                  115.2          1056

Now the question is: how do these theoretical numbers translate into real performance? For that, we can start with some basic synthetic tests of GPU throughput.

The single-textured fill rate test is typically limited by memory bandwidth, which helps explain why the Palit 9600 GT beats out our stock GeForce 8800 GT. The multitextured test is more generally limited by the GPU’s texturing capabilities, and in this case, the 8800 GT pulls well away from its upstart sibling. The 9600 GT easily outdoes the Radeon HD 3850 and 3870, though, which is right in line with what we’d expect.

3DMark’s two simple pixel shader tests show the 9600 GT at the back of the pack, again as we’d expect. Simply put, shader arithmetic is the place where Nvidia has compromised most in this design. Whether or not that will really limit performance in today’s games is an intriguing question. We shall see.

Among the GeForce 8 cards, these vertex shader tests appear to track more closely with shader clock speeds than with the total shader power of the card. I don’t think that’s anything worth worrying about.

However, have a look at the difference in scores between the Radeon HD 3850 and 3870 in the simple vertex shader test. This is not a fluke; I re-tested several times to be sure. The 3850 is just faster in the simple vertex shader test—at least until you get multiple GPUs involved. After consulting with AMD, I believe the most likely explanation for the 3870’s low performance here is its use of GDDR4 memory. GDDR4 memory has a transaction granularity of 64 bits, while GDDR3’s is half that. In certain cases, that may cause GDDR4 memory to deliver lower performance per clock, especially if the access patterns don’t play well with its longer burst length. Although this effect is most pronounced here, we saw its impact in several of our game tests, as well, where the Radeon HD 3850 turned out to be faster than the 3870, despite having slightly slower GPU and memory clock frequencies.
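As a simplified, hypothetical illustration of that granularity effect (the request size below is invented for the example, not measured), a fetch always moves a whole number of minimum-size transactions, so the coarser granularity can waste bandwidth on small or oddly sized accesses:

```python
# Simplified illustration of memory transaction granularity (not a measurement):
# a request that isn't a multiple of the minimum transaction size still moves a
# whole number of transactions, so GDDR4's 64-bit granularity can waste more
# bandwidth than GDDR3's 32-bit granularity on fine-grained access patterns.
import math

def bits_moved(request_bits, granularity_bits):
    transactions = math.ceil(request_bits / granularity_bits)
    return transactions * granularity_bits

request = 96  # hypothetical 96-bit fetch
for name, granularity in (("GDDR3", 32), ("GDDR4", 64)):
    moved = bits_moved(request, granularity)
    print(f"{name}: {moved} bits moved for {request} useful bits "
          f"({request / moved:.0%} efficient)")
```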

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. Since this is a high-end graphics card we’re testing, we enabled 4X antialiasing and 16X anisotropic filtering and turned up the game’s texture and image quality settings to their limits.

We’ve chosen to test at 1680×1050, 1920×1200, and 2560×1600—resolutions of roughly two, three, and four megapixels—to see how performance scales. I’ve also tested at 1280×1024 with the 9600 GT and its closest competitors here, since some of them struggled to deliver completely fluid frame rates at 1680×1050.

The 9600 GT delivers jaw-dropping performance in CoD4, keeping frame rate averages above 40 FPS even at 1920×1200 resolution. That’s astounding, because we’re talking about a great-looking modern game running with 4X antialiasing, 16X aniso filtering, and peak quality options. In fact, the 9600 GT shadows the GeForce 8800 GT, trailing it by only a few frames per second.

Note that in this game, I’ve also provided results for a stock-clocked GeForce 9600 GT (at 650/900MHz), for comparison. Although dropping from the Palit card’s frequencies to Nvidia’s reference clocks puts a little more distance between the 8800 GT and the 9600 GT, it doesn’t drop the 9600 GT into Radeon territory. Both the 3850 and 3870 are consistently slower.

Sadly, just when you need SLI, at 2560×1600, it fails. The story is the same on both the 9600 GT and the 8800 GT. My best theory: they may be running out of video memory, which would explain the big drop in performance.

Enemy Territory: Quake Wars

We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though. Shadows, soft particles, and smooth foliage were enabled. Again, we used a custom timedemo recorded for use in this review.

The 9600 GT’s strong performance continues here, where it again finds itself between the GeForce 8800 GT and the Radeon HD 3870. Dropping down to Nvidia’s base clock frequencies brings the 9600 GT closer to the 3870, but it’s still a tick faster. These are very minor differences of a few frames per second, though, to keep things in perspective.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested Episode Two with the in-game image quality options cranked, with 4X AA and 16X anisotropic filtering. HDR lighting and motion blur were both enabled.

The 9600 GT’s advantage over the Radeon HD 3850 and 3870 carries over to Episode Two. And two 9600 GTs in SLI can deliver playable frame rates at 2560×1600 with 4X AA and 16X aniso, believe it or not.

Crysis

I was a little dubious about the GPU benchmark Crytek supplies with Crysis after our experiences with it when testing three-way SLI. The scripted benchmark does a flyover that covers a lot of ground quickly and appears to stream in lots of data in a short period, possibly making it I/O bound—so I decided to see what I could learn by testing the 9600 GT and its closest competitors with FRAPS instead. I chose to test in the “Recovery” level, early in the game, using our standard FRAPS testing procedure (five sessions of 60 seconds each). The area where I tested included some forest, a village, a roadside, and some water—a good mix of the game’s usual environments.

Please note that all of the results you see below for the Radeons come from a newer graphics driver, version 8.451-2-080123a, than the ones we used for the rest of our tests. This newer driver improved Crysis performance noticeably. These driver enhancements for Crysis should be available to the public soon in Catalyst 8.3.

The cards tend to cluster together at 1280×800, and multi-GPU rendering doesn’t seem to help performance much at all. A low of 23 frames per second isn’t too bad, when you think about it, and I’d classify any of these cards as running Crysis at playable speeds at this resolution. Obviously, there’s very little difference between them.

At 1680×1050, the field begins to separate just a little, while CrossFire and SLI actually start to help. The 9600 GT is technically faster than the Radeons, but not by enough to matter much.

Unreal Tournament 3

We tested UT3 by playing a deathmatch against some bots and recording frame rates during 60-second gameplay sessions using FRAPS. This method has the advantage of duplicating real gameplay, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
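Here’s a quick sketch of how those reported numbers come together, assuming the per-session averages are simply averaged; the frame-rate values are made up for illustration.

```python
# A sketch of how the UT3 numbers are reduced: per-session averages are averaged,
# while the per-session lows are collapsed to a median so a single outlier can't
# drag the reported low down. (Frame-rate values are made up for illustration.)
from statistics import mean, median

session_averages = [61.2, 58.7, 63.4, 60.1, 59.8]  # five 60-second FRAPS runs
session_lows     = [41, 22, 39, 43, 40]            # 22 is the outlier

print("reported average:", round(mean(session_averages), 1))  # 60.6 FPS
print("reported low:", median(session_lows))                  # 40 FPS, not 22
```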

Because UT3 doesn’t support multisampled antialiasing, we tested without AA. Instead, we just cranked up the resolution to 2560×1600 and turned up the game’s quality sliders to the max. I also disabled the game’s frame rate cap before testing.

The UT3 results pretty much confirm what we’ve seen elsewhere. Any of these cards can, incredibly, run UT3 well enough at this resolution, but you’ll probably want to drop down to 1920×1200 for the best experience with either of the Radeon HD cards. Their performance seemed a little choppy at times to me. And, heh, you’ll probably have to drop down a little bit to match your monitor’s native resolution.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running UT3 at 2560×1600 resolution, using the same settings we did for performance testing.

Note that the SLI configs were, by necessity, tested on a different motherboard than the single cards, as noted in our testing methods section.

Whoa. The 9600 GT draws even less power under load than the Radeon HD 3850. That makes it one heck of an energy efficient GPU, given its performance.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the stock Intel cooler we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

Unfortunately—or, rather, quite fortunately—I wasn’t able to reliably measure noise levels for most of these systems at idle. Our test systems keep getting quieter with the addition of new power supply units and new motherboards with passive cooling and the like, as do the video cards themselves. I decided this time around that our test rigs at idle are too close to the sensitivity floor for our sound level meter, so I only measured noise levels under load.

Like I said, Palit’s cooler is nice and quiet. Of course, it doesn’t hurt to have a GPU that draws so little power (and thus generates little heat) onboard.

GPU temperatures

Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. In the case of multi-GPU configs, well, I only got one number out of CCC. I used the highest of the numbers from the Nvidia monitoring app. These temperatures were recorded while running UT3 in a window.

Yeah, those numbers seemed wrong to me at first, too. I tried again a few times, and the GPU temps never really got any higher. Palit’s copper heatsink with heatpipes may be overkill for this GPU. Then again, it’s the kind of overkill I like.

Overclocking

So how far will this little GPU go? Finding out was surprisingly easy using Palit’s little overclocking tool. I just kept nudging the sliders upward until something went sideways and caused a Vista display driver crash in my 3D test app. Then I backed off a little bit and watched for visual artifacts. In the end, the 9600 GT was stable with an 810MHz GPU core and a 1.9GHz shader clock.

Turning up the memory clock didn’t work out so well, though. Even going to 1050MHz produced immediate visual corruption and a system lock. I gave up and ran a few tests at my known-good overclocked speeds.

A little overclocking gives the 9600 GT enough of a boost to surpass the 8800 GT—just barely. I’m curious to see how much the more pedestrian 9600 GTs out there will overclock. I suspect Palit’s spiffy cooler and three-phase power have given this card more headroom than most.

Conclusions

The pattern in our performance testing was unmistakable: the GeForce 9600 GT is just a little bit slower than a GeForce 8800 GT and a little bit faster than the Radeon HD 3850 and 3870. Of course, that statement needs some qualification, since we tested the 8800 GT and HD 3870 at bone-stock clocks, while the 9600 GT and HD 3850 we tested were both overclocked considerably. But the basic trends we spotted were consistent, even when we reduced the 9600 GT card to Nvidia’s base clock speeds. The 9600 GT also impressed us with the lowest power draw under load of any card we tested and very low noise levels—despite its amped-up clock speeds.

I’m struggling to figure out what’s not to like here. One could argue that the practical performance difference between the Radeon HD 3850 512MB and the GeForce 9600 GT in our testing was largely nil. Both cards ran most games well at common resolutions like 1680×1050, even with quality levels cranked up. Image quality between the two was comparable—and uniformly good. When there was a performance difference between them, it was usually fairly minor.

This is true enough, but the performance differences were large enough in Call of Duty 4 and Half-Life 2: Episode Two to distinguish the 9600 GT as the better choice.

One could also argue that the 9600 GT’s strong performance today may give way to relatively weaker performance down the road. If game developers shift their attention to using more and more complex shaders, the 9600 GT could end up running tomorrow’s games slower than the Radeon HD 3850, which clearly has more shader processing power.

This is a possibility, I suppose.

But at the end of the day, that just means there are a couple of very good choices in the market right now. The GeForce 9600 GT is one heck of a deal in a graphics card at around $179, and that’s something we like very much. If you haven’t upgraded for a while and don’t want to drop a lot of cash when doing so, it’s hard to go wrong with the 9600 GT. This card should offer roughly twice the performance of a DX9-class graphics card like the GeForce 7900 GS or the Radeon X1950 Pro, based on what we’ve seen. If you want to spend a little more and get a GeForce 8800 GT, you’ll get some additional future proofing in the form of shader power and just a little bit higher frame rates in today’s games. Whether that’s worth it will depend on your budget and your priorities. I will say this though: if there has ever been a better time to upgrade, I sure as heck don’t remember it.

Comments closed
    • Hance
    • 12 years ago

    well i decided to jump on the 9600GT bandwagon. Hopefully this will get my framerate out of the teens in SupCom

    • FubbHead
    • 12 years ago

    It seems the 9600GT uses the PCI-express frequency as reference, instead of a crystal of it’s own. So if the PCIe frequency is above specs (above 100MHz) you will overclock the card. I didn’t see that in the review, but maybe I missed it.

    http://forums.guru3d.com/showthread.php?t=254944

      • lethal
      • 12 years ago

      Very interesting. IIRC, there was that link boost thing on some nvidia mobos that OC the PCI-E bus to 125 Mhz, which would provide a 25% OC without the user knowing about it. It makes sense then that it matches the 8800GT when the core is running at over 800 Mhz and the shaders at close to 2000 Mhz O.o . It wouldn’t surprise me that this may cause some instability on some systems though, like some people are reporting.

    • sigher
    • 12 years ago

    NOOOO not the US quarter again 😮

    😉

      • derFunkenstein
      • 12 years ago

      it’s cheaper to use a quarter than other currency. 😉

    • liquidsquid
    • 12 years ago

    Well, I have the same problem as these guys: After about 1 hour of gaming, blank screen, hard reboot required.

    http://forums.nvidia.com/lofiversion/index.php?t60826.html

    Temps OK, power supply more than adequate, etc. It is mentioned the firmware has a known issue and we have to wait for weeks to get it fixed!?!? WTH? Good thing it works fine with my CAD software, or I would be hosed. Did Damage labs see any of this while playing? -Mark

      • no51
      • 12 years ago

      sounds like you’re experiencing what i’m getting. except my card is a g92.

    • willyolio
    • 12 years ago

    given the power draw and temperature, it looks like they could have easily made this card a single-slotter. this would be especially important for a compact gaming rig (mATX) where a 2-slot cooler automatically means you’ve lost an extra 25% of your expansion slots.

      • spiritwalker2222
      • 12 years ago

      All of these cards are single slot, except for the one they reviewed.

      I like this card so much that I just picked one up for $175. Instead of the 8800GT.

    • derFunkenstein
    • 12 years ago

    wrong comments area…oops

    • swaaye
    • 12 years ago

    Well it goes to show how aggressive NV and ATI are trying to be in the mid-range segment. It also could mean that none of these cards will last long as “the bargain”. 8800GT and 38×0 just got demoted… when will 9600GT’s time come? 🙂

    • liquidsquid
    • 12 years ago

    Installed mine last night, all I can say is “wow” at the performance increase from my 6600 card. Maxed out all the games I own (all three) and took a look. Now I know what I was missing! Not to mention my UT2K3 scores went way up, so it wasn’t entirely my bad skills making me stink.

    One problem I had… Under UT2K3 the fan never spun faster than idle, but the temperature soared up, which caused the card to crash the computer. Not pleasant. Turns out I didn’t have the latest drivers on the installation disk, but I haven’t re-tested with the new drivers yet.

    Needless to say, I am pleased as it is one heck of a jump from the 6600 and the borrowed 7600. During idle it is as cool as a cucumber, and for that alone I am pleased.

    -LS

    • matnath1
    • 12 years ago

    Back with a vengeance after the flu Aye Scott???? No sign of it here great work as usual..

    I have a few things to bring up..

    I’d love to see minimum power supply requirement numbers. You used to include that back in the day.

    Second… The 9600 GT is replacing the 8600 GT Correct????? Than why is the card TWICE THE SIZE and now requires a molex power connector and quite probably a 450 Watt PSU???? I have a second rig with an 8600 GT that is using a 300 Watt PSU… It’s too much of a pain to upgrade to a 9600 since now I’d have to buy a larger power supply rather than simply swapping cards. Sorry but Nvidia messed this up In my opinion. The 8600 GT may be half the card performance wise but it’s much more convenient to install.

    It would have been nice to see benchies with todays Fastest 8600 GT’s whose clocks have risen CONSIDERABLY since techreport’s last 8600 review. My 8600 GT has a core of 680 mhz and Memory of 1600mhz! I wanted to see how much more performance the 9600 GT provides VS the 8600 GT and weigh wheather it’s worth it to go through all of the above installation and expsense issues. (I’d have to spend another $50 or $60 on a Power supply on TOP of the card and than worry about potentially screwing up the power supply surgery).

    • mattthemuppet
    • 12 years ago

    interesting to see how the 3850 power draw has creeped up to 3870 levels with increased clocks and memory (which seems to pretty much turn it into a 3870) – one of the main draws of the 3850 for me was it’s low power consumption in idle. Now there’s not much point other than price for going with the 3850 over the 9600GT. Hmm.

      • Dagwood
      • 12 years ago

      What I lernt by readin tek repurt:

      –When a 3850 is overclocked and has 512MB of DRAM, and duel slot cooler, it *[

    • Hance
    • 12 years ago

    My 7800GTX cried as I read this review

    • Sargent Duck
    • 12 years ago

    I’m really impressed by this.

    I’ve been gaming less and less which means my video card has been idling more and more.

    So for me the big thing has been power consumption first, performance second. I really liked the 3870 because of this, but man, this card is the card to get. Amazing power consumption, and for all intensive purposes, 8800GT performance. Hmmmm, the ‘ol 7900GT may be up for sale soon…

      • Vrock
      • 12 years ago

      It’s “all intents and purposes” not “all intensive purposes”. That really bugs me.

        • danny e.
        • 12 years ago

        maybe he meant “intensive” purposes rather than some meaningless jargon

        • willyolio
        • 12 years ago

        unless by “intensive purposes” he just means things like gaming… or GPGPU applications that are GPU-intensive…

        but it’s still bad grammar in that case.

    • derFunkenstein
    • 12 years ago

    This thing was WAY closer to the 8800GT in performance than I initially thought it’d be…I guess 112 stream processors is overkill today, though as you note in the conclusion, that may change in the future. Very impressive for a $170 card…I’d expect the 8800GT to get the boot when the 9800GTX comes out, as it’s really not any faster than the 9600GT for most folks.

    • albundy
    • 12 years ago

    hopefully the 9800 series will be like night and day just like when the 8800 followed the 8600 series. I do like the fact that the power is better optimised. Nobody really wants to run a 1kw psu. whats next? 10600GT?

      • UberGerbil
      • 12 years ago

      1066, the Norman Conquest Edition.

        • crazybus
        • 12 years ago

        With the Bayeux Tapestry emblazoned on the heatsink shroud? I’d dig it.

      • provoko
      • 12 years ago

      Like all companies, they’ll go with the letter X. I’m dreading that horrid day.

        • green
        • 12 years ago

        there’s still the off chance they could get hex with Ax00

    • DrDillyBar
    • 12 years ago

    I’d be interested in seeing a little more information about how smoothly the card functions when added to a HTPC setup and paired with a TVTuner of some sort. There’s always a lot of focus on establishing the performance of the GPU in gaming scenerios, but sometimes I’m more interested in the other features of the cards reviewed. Sure it plays Quake2 at a billion FPS, but that is not always the /[

      • UberGerbil
      • 12 years ago

      Yes, for the cards that land lower on the power/heat/noise scale, and are thus obvious candidates for HTPC applications, that would be interesting. But you don’t want to have the review too distracted by issues involving separate hardware like TV tuners — that’s a subject (and an excellent one, hint hint) for a separate review/round-up entirely. But video playback quality and features certainly deserve a page in a GPU review.

      • Usacomp2k3
      • 12 years ago

      Other than driver incompatibility, I don’t see where the video card makes a difference. The Intel Extreme onboard will show video off a tuner card just as well as a 9600gt. (connections aside). There is no decoding that needs done in terms of H.264 or anything like that. Blu-Ray decoding is a completely different matter, but those are usually touched upon in other reviews: ยง[<https://techreport.com/articles.x/12956/7<]ยง

    • elmopuddy
    • 12 years ago

    Thanks for the great review, I am buying a replacement for my ailing 7900GTX.. I was torn between a 8800GT and GTS, but now I don’t know..

    edit: 20$ CDN diff between the Palit Sonic 9600 and their stock clock 8800GT, damn!

    EP

    • Vrock
    • 12 years ago

    Wow, it looks like price/performance is making somewhat of comeback.

    It’s a shame I don’t play many PC games anymore. And the ones I do play alot of work perfectly on a 3dfx Voodoo 3.

    • Kurotetsu
    • 12 years ago

    Well crap. Got an 8800GT for $250, and now you’re telling me this card performs practically equal to it? And for $70 less?

    *headbutts desk repeatedly*

      • edub82
      • 12 years ago

      I know exactly how you feel. If I had waited just one more measly month, I could have saved myself 50-60 dollars. (picked up an 8800gt for $240 late january)

        • indeego
        • 12 years ago

        When in doubt, blame the housing market <.<

      • paulWTAMU
      • 12 years ago

      Hey me too! Still, I’ve enjoyed my card for the last two months so no real complaints 🙂

    • emorgoch
    • 12 years ago

    Thank you for including the 8800GT and GTS 512 in the collection. Anandtech’s review only included results from the 3850, 3870, and the 8600, so you missed that last piece showing the price/performance ratio with the 8800 series.

      • enzia35
      • 12 years ago

      I think it was nice to include the 8600gt in the Anand review. That’s what I’m stuck with.

    • MadManOriginal
    • 12 years ago

    Damage you need to do some digging and figure out how with so many fewer shaders this GPU stays so close to the 8800GT and even GTS at times. Is the G90 series architecture just not bound by shader operations as much and it’s just memory bandwidth that holds it back? Have the shaders been improved that much? Do current games just not need the shader power? The last few GPU articles have been lacking in the excrutiating details that used to be included, I think you said once that NV ‘is being tight-lipped’ so please hit up your anonymous sources 🙂

    I like the low power draw, I’m a little irked about getting an 8800GTS recently though because the specs for this card before it was released didn’t make it seem near as impressive as the real results.

      • marvelous
      • 12 years ago

      Games just aren’t shader bound with these Geforce based on SP. 64SP is enough for most games out there.

      • Damage
      • 12 years ago

      I’m not sure how much more digging you want me to do. The article discusses the architectural similarities between the 9600 GT and 8800 GT and the (very simple) differences. It walks you through the shader GFLOPS numbers and all of the other key theoretical specs. Then it shows you basic tests of those things, where the 9600 GT is nearly as fast as the 8800 GT, except in texture filtering and shading. Then you see gaming performance, where they are very similar.

      The obvious conclusion is that the games we tested aren’t especially shader-bound on a 9600 GT. Apologies if I didn’t make that obvious enough.

      I don’t think we’re seeing any “secret sauce” from Nvidia here, and having discussed this with them, I don’t expect the 174.x drivers to bring major performance gains to G92 when they’re released for it.

      Food for thought: If games were primarily shader bound, I think the RV670 would look relatively stronger than it does. I’ve heard the arguments about RV670 scheduling being more difficult than G8x/G9x, but I don’t think that’s a big factor. AMD has even said it’s breaking out shaders into serial sequences and running them down the fifth ALU of its superscalar execution units when needed. In both theory and practice, RV670 has more shader power than G94 (and matches up very well against the 8800 GT). I just don’t think that matters right now, probably due to its texture filtering constraints and, most importantly, the current GPU usage model in games.

        • marvelous
        • 12 years ago

        Damage your article was clear as day and more detailed than usual. It’s just that guys who aren’t too familiar with reading numbers they might have a hard time figuring out the differences.

          • MadManOriginal
          • 12 years ago

          Ok, thanks for calling me stupid..?

        • MadManOriginal
        • 12 years ago

        So the conclucion is that with sufficient SPs that pixel fillrate and memory bandwidth are the keys? Too bad NV didn’t include more ROPs in the G92 instead of more shaders then. Looks like excess shaders mean nothing unless PhysX on the graphics card appears in games next month.

        I’m not ungrateful for the article but when the 8800 debuted you had a heck of an article about the architectural details with all sorts of nice diagrams in addition to numbers and clear statements about why the performance was how it was. Perhaps that isn’t necessary since G9x are just tweaked G8x. I guess one thing that wasn’t clear was which 8800GTS you were talking about initially but looking at the charts and the ‘used to be $400’ one can tell it’s the old GTS. If there is any special info you can find out from NV that would be great 😉 You say it’s fundamentally the same as the G92 though, I guess NV is just continuing their tradition of cannabalizing their own higher cards. Would brief tests with even higher AA show anything or not because NV does AA in the ROPs? Also something that wasn’t clear is whether you’re using DX9 or DX10 in each game. And does Vista x64 change performance at all?

    • Madman
    • 12 years ago

    It’s good to see greener cards coming out 🙂

    • flip-mode
    • 12 years ago

    I LOVE THE PICTURE STRIP ABOVE THE COMMENTS!!!!

    • Gerbil Jedidiah
    • 12 years ago

    Very Nice Review!

    I’m surprised by those SLI numbers. The pervading logic up until recently is that it’s not worth it to SLI a cheaper card, but the 9600GT scales nicely and compares well to more expensive competition.

      • Flying Fox
      • 12 years ago

      Buying 2 cards immediately always bring you the benefit of SLI. The problem with the 9600GT and its price segment are usually with people who subscribe to the “buy one get another one (much) later” principle, which makes SLI not worth it anymore.

    • Mithent
    • 12 years ago

    That thing about GDDR4 memory interested me – does that mean that GDDR4 memory is actually worse than GDDR3 for practical purposes?

    • Prototyped
    • 12 years ago

    versus 8800 GS?

    🙁

    • DaveJB
    • 12 years ago

    Does it seem weird to anyone else that ATI had a 9600 way back in 2002?

      • Fighterpilot
      • 12 years ago

      Yeah..I had one…an ATI 9600XT 256MB…that was a damn good card too:)

      • echo_seven
      • 12 years ago

      I noticed this too, and, if one of the two companies doesn’t change their branding scheme soon, they’ll be repeating each other’s model numbers for the foreseeable future…

        • kvndoom
        • 12 years ago

        Hee hee, WTB Geforce 9800 Pro.

        • DaveJB
        • 12 years ago

        That’s a thought… when ATI eventually does launch R700, we might end up with Radeon 4400, Radeon 4600 and Radeon 4800! All they’d need to do then is come up with a Radeon 4200 for laptops (or HTPCs, or something), and they’ll have copied the entire GeForce 4 Ti family!

      • zgirl
      • 12 years ago

      No lots of people had them. Was a terrific card for the price. Mine is still running in the PC my wife uses.

      • Meadows
      • 12 years ago

      Does anyone care?

        • DaveJB
        • 12 years ago

        You’ve seen way too many fanboy flamewars. I wasn’t saying “ZOMG!! NVIDIA HAVE STOLEN ATI’S PRODUCT NAMES OMG!!!!! >(” more like “Hey, that’s a weird co-incidence; NVidia’s releasing a decent mid-range product called the 9600, almost 5 years to the day after ATI released a decent mid-range product called the 9600.”

      • Delphis
      • 12 years ago

      No, I have an ATI 9500 Pro in my windows machine still … maybe it IS time for me to upgrade 😀 … what better than to keep the model number similar, albeit for the ‘other team’.

      I would like to know how much faster an ATI 3870 or NVidia 9600 GT would be compared to the Radeon 9500 Pro I’m used to. It’s hard trying to find comparable benchmarks.

        • lethal
        • 12 years ago

        Depending on what resolutions/settings you are using, and what games are you playing, it would probably range from much to OMG!11!eleventy!!!1 faster =P. 6 years is really a lot of time when it comes to GPUs.

        • SecretMaster
        • 12 years ago

        I finally replaced my 9500pro back in the summer on my old rig. The fan had died out so I decided its time to upgrade. I replaced it with a 7600GT

        You want some quantifiable comparison? Well lets put it this way. Before with the 9500Pro I could run Oblivion at low-medium quality at 1280×1024 resolution. Now I can max out everything at 1280×1024.

        And this was with just a 7600GT. I’d reckon that the 9600GT is at least 3 times faster than the 7600GT, with lower power consumption, noise levels, and GPU temps.

          • Delphis
          • 12 years ago

          Thanks for that… that does help quite a lot. I do indeed use 1280×1024 on a nice looking NEC LCD. All to often I see many cards going neck and neck at the high end but it’s hard to understand what that would mean for my own use.

          I’ve just been looking over the TR system builder report for what motherboard they chose.

          Thanks guys.

            • SecretMaster
            • 12 years ago

            I should mention though that I also upgraded the CPU and Mobo because my 9500Pro was AGP and nearly every offering on the market is PCI-E. Of course there are still AGP contenders out there but I decided to the whole shebang in terms of upgrading. But still, my point still stands. CPU’s really don’t factor in terms of how the game looks visually.

            Also note that when I say I can max out everything in Oblivion, that is without LOD enabled. I personally prefer to not see the whole gameworld. I tried maxing out everything in Oblivion with LOD and it isn’t quite up to snuff I think. It wasn’t a slideshow, but it also wasn’t creamy smooth.

            But still, I think a 3870, hell even a 3850 should perform considerably better than my 7600GT, and my 7600GT was a huge difference.

      • willyolio
      • 12 years ago

      i still have one.

    • insulin_junkie72
    • 12 years ago

    The 9600GT + a passive Accelero S1 slapped on there would be an excellent combo, it appears.

      • yehuda
      • 12 years ago

      Yeah. Even though the 9600GT is not as efficient as 3870 at idle (it’s about halfway from 8800GT), its smaller power envelope will make passive cooling easier.

        • insulin_junkie72
        • 12 years ago

        The Accelero S1 can cool a 8800GT passively (with half-way decent case airflow). The 9600GT would be no sweat, by comparison (I’m assuming Nvidia didn’t do something goofy like change the mounting holes on the 9600GT vs. the 8800GT).

    • Fighterpilot
    • 12 years ago

    That’s a damn good new card,the performance at such low power consumption and noise levels is amazing.
    At that price 9600GT is the best choice out there right now.

      • VILLAIN_xx
      • 12 years ago

      i concur, Nvidia has a decent mainstream GPU on their hands.

      :o)

    • slot_one
    • 12 years ago

    Very nice card. I’m seriously thinking of getting one for my HTPC. Add a little gaming performance to the movie rig, you know?

    Having said that, I think that “8700GT(S)” would’ve been a better name for this card. NVIDIA calling this card a 9600GT is reminiscent of the time Intel called the Katmai a Pentium III when it was, in fact, just a P2 core with SSE tacked on. Coppermine was the *real* P3. NVIDIA’s next mid-range card will be the *real* 9600GT. 🙂

    • b_naresh
    • 12 years ago

    Damn! That kind of performance coupled with such low power draw and load temps is simply awesome!
    I game at 1680*1050 with my 8800GT. At that resolution, there should no discernible difference between 8800GT and 9600GT yet the 9600GT simply blows away the 8800GT when you compare those load temps!

      • Meadows
      • 12 years ago

      Look at the bright side, you may use more detail with your card.
      Or antialias, depending on the bottleneck.

    • tfp
    • 12 years ago

    l[

      • Damage
      • 12 years ago

      Bingo!

        • tfp
        • 12 years ago

        And the 3850 is king vs 3870? At the egg I see about a 15 dollar difference between the two cheapest of both, and in almost all cases 3870 is faster. I guess I’m confused at how the 3850 is King.

        Maybe I just need some sleep.

        Nice review and good timing BTW.

          • Damage
          • 12 years ago

          Ah. Well, the 3850 has moved so close to the 3870, they’re pretty close to interchangeable. And I didn’t want people bugging me about my little summary having the 3870 in there. Can’t win, though.

            • tfp
            • 12 years ago

            It doesn’t matter I just didn’t understand, I do now thanks.

          • Dagwood
          • 12 years ago

          I think the idea is that yesturday only the 3850 was under 200 bucks, now all three are under 200 and the 9600 is the fastest.

          On a side note: this is some serious math for me. 64 = 96 of old, and nearly equal to 112. Any chance of a more indepth how in the hell does this all work kinda explanation? I’ve read most of the articles but I seem to be missing something.

          [edit:] nice job, definately the best review on the cards. Worth the wait

            • Damage
            • 12 years ago

            Two things. One, just look at clock speed. I convert the SP count and the SP clock into GFLOPS in the review. The 9600 GT’s higher SP clock gives it 312 GFLOPS to the old 8800 GTS’s 346 GFLOPS.

            Two, not every game is limited primarily by shader arithmetic, so the lower FLOPS count may not matter much.

            • bogbox
            • 12 years ago

            And third drivers optimization , and probably the most important part:))
            Now Nvidia is smart enough to make the 9600 gt perform excellent in the main games the you review . The big question is how will perform in the future ( will Nv offer support for it later ? )
            r[

            • Meadows
            • 12 years ago

            Drivers aren’t of utmost importance.
            And you’re making yourself ridiculous with your dreams of the GeForce 9800.

    • UberGerbil
    • 12 years ago

    Finally the people whining about not getting this review will shut up already.

    • enzia35
    • 12 years ago

    Absolutely amazing.

    • BoBzeBuilder
    • 12 years ago

    Those SLI numbers are seriously impressive, kudos to Nvidia for getting it right. Maybe people should stop calling it a gimmick since it could easily be seen as a logical upgrade path.

    And I wish techreport would include a 3dmark score at the default resolution for comparisons sake.

      • donkeycrock
      • 12 years ago

      funny you mention that, i was thinking they should take away 3dmark all together, its the only page i always skip over.

      • Flying Fox
      • 12 years ago

      Not for people who want to buy the 2nd card more than a year later and if Nvidia does indeed bring up their true next gen before that. Real next gen usually is doubled the previous gen with tons of new features. Getting the 2nd card won’t get you then. Plus if Nvidia is quick to EOL the SKU you are SOL, remember the 6800 and all its incompatible variants?
