Nvidia’s GeForce 8800 GTS 320MB graphics card

THE TEST RIGS HAVE been churning away in Damage Labs for days now, and your humble narrator is exhausted. The occasion that’s prompted all of this activity? The release of a 320MB version of the GeForce 8800 GTS. As you may know, the GeForce 8800 series first arrived in two flavors, the GTX version with 768MB of onboard memory and the GTS with 640MB. Neither card is what you’d call affordable, so Nvidia has fired up the favorite tool of chipmakers everywhere—the world’s smallest chainsaw—and shaved off half of the GTS’s memory in order to bring the price down. The result is the first DirectX 10-capable graphics card with a price tag of roughly three hundred bucks. ‘taint cheap, but it may be the best value in the GeForce 8800 lineup. We have tested the GTS 320MB against a stack of competitors, as is our custom. Has the world’s smallest chainsaw brought us another winner, or has it cut too deep this time? Let’s take a look.

XFX does the GTS with the XXX
We should probably start at the beginning with the G80 GPU that powers the GeForce 8800 lineup. This massive graphics chip has 680 million transistors, and its die area could practically be measured in square miles. If you’re unfamiliar with it, you should go read our review of the GeForce 8800 GTX in order to come up to speed. The G80 is the first PC graphics processor with a unified shader architecture and support for DirectX 10, and both of those things are critical buzzword phrases for the future of graphics marketing. Fortunately, we found the G80’s performance to be excellent, and we think it produces higher quality pixels than any other desktop graphics chip, as well.

The GeForce 8800 GTS started life as a product of the world’s smallest chainsaw when Nvidia deactivated some portions of the G80 GPU for the sake of product segmentation. In the GTS, the G80 has two of its eight clusters of stream processors and one of its six ROP partitions disabled. That leaves the GTS with a total of 96 stream processors and a 320-bit path to memory, down from 128 SPs and 384 bits in the GeForce 8800 GTX. Clock frequencies are also down in the GTS. The primary GPU clock, or core clock, is 500MHz. The stream processors run at 1.2GHz, and the card’s GDDR3 memory is clocked at 800MHz—or 1.6GHz effective, for those of you keeping score at home.

The big change in today’s new product is simply a reduction in memory size from 640MB to 320MB, nothing more. 320MB versions of the GTS have the same 320-bit path to memory as the 640MB versions. The smaller amount of onboard memory will affect performance, of course, but only in certain scenarios, like when applications need to store lots of data or when running at really high resolutions with lots of edge and texture antialiasing enabled. We’ll dig into that when we look at our test results.

But first, here’s a look at our test subject, the XFX GeForce 8800 GTS 320MB XXX edition.

Why is it called the XXX edition? Good question. I’ve spent hours with this card, and I think it’s just a tease.

Still, this tease has its virtues, including a higher out-of-the-box frequency than the usual GTS. The XXX edition’s core GPU clock is 580MHz, and its memory runs at 900MHz. As for its SPs, I don’t know the exact frequency, but Nvidia says SP clock speeds are tied to the core GPU clock, so the SP clock frequency should rise proportionately with core speed.
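If the SP clock really does track the core clock, a quick back-of-the-envelope estimate puts the XXX edition’s shader clock somewhere around 1.39GHz. This is just a sketch of that estimate, assuming strictly proportional scaling from the stock GTS clocks—not a confirmed spec:

```python
# Estimate the XXX edition's stream processor (SP) clock, assuming the
# SP clock scales proportionally with the core clock, as Nvidia suggests.
# The clocks below are published; the proportional scaling is our assumption.
STOCK_CORE_MHZ = 500   # stock GeForce 8800 GTS core clock
STOCK_SP_MHZ = 1200    # stock GeForce 8800 GTS stream processor clock
xxx_core_mhz = 580     # XFX XXX edition core clock

estimated_sp_mhz = STOCK_SP_MHZ * xxx_core_mhz / STOCK_CORE_MHZ
print(f"Estimated XXX SP clock: {estimated_sp_mhz:.0f}MHz")  # ~1392MHz
```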

XFX says the suggested retail price for the XXX edition is $334.99—or $335 to you and me. They’re also selling an ExTreme edition with 560/850MHz core and memory clocks for $310 and a bone-stock 500/800MHz version for $300. The core clock speeds for the XXX and ExTreme cards are actually higher than they are for the corresponding 640MB versions, raising the intriguing possibility that the card with less memory and a lower price could prove faster in some applications.

 

Test notes
Sizing up the closest competition for the GeForce 8800 GTS 320MB is a little bit tricky. Its predecessor at the $299-ish price point is the GeForce 7950 GT, and thus we’ve included it in our comparison. But AMD’s entry at this price point isn’t entirely clear.

Until recently, ATI offered the Radeon X1900 XT in a 256MB configuration, but those have been replaced by the Radeon X1950 XT 256MB, a card that sells for between roughly $259 and $289. The X1950 XT 256MB has the same 625MHz core GPU clock as the X1900 XT, but it has a faster 900MHz (1.8GHz effective) memory clock. We made arrangements to procure one of these cards for inclusion in this review, but it didn’t arrive here in time. In its stead, we’ve tested the older Radeon X1900 XT 256MB with its slower 725MHz memory clock, to give us some basis for comparison.

The more relevant comparison for XFX’s pricier XXX-edition card may be the Radeon X1950 XTX, which can be had for as little as $349-359, if you shop around.

You’ll see in the table below that the Radeon X1950 XTX was tested on a different motherboard than the rest of the cards. That’s because some of these results came from our article on GeForce 8800 SLI. However, we did test the Radeon X1950 XTX for power and noise on the same motherboard as the rest of the cards, and in fact, we tested the X1950 XTX in Oblivion and Rainbow Six: Vegas on the Asus P5N32-SLI SE Deluxe, as well.

Our lone GTS 320MB review unit is an XFX GeForce 8800 GTS XXX Edition card, so we’ve tested it at its default 580MHz/900MHz core and memory clocks. We also underclocked this card to the bone-stock 500MHz/800MHz frequencies that the cheaper $299 cards will have. The, err, porno edition card’s results are marked XXX, while the 500/800MHz config is simply labeled “GeForce 8800 GTS 320MB.”

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

| Component | System 1 | System 2 |
| Processor | Core 2 Extreme X6800 2.93GHz | Core 2 Extreme X6800 2.93GHz |
| System bus | 1066MHz (266MHz quad-pumped) | 1066MHz (266MHz quad-pumped) |
| Motherboard | Asus P5N32-SLI SE Deluxe | Asus P5W DH Deluxe |
| BIOS revision | 0305 | 0801 |
| North bridge | nForce4 SLI X16 Intel Edition | 975X MCH |
| South bridge | nForce4 MCP | ICH7R |
| Chipset drivers | ForceWare 6.86 | INF Update 7.2.2.1007, Intel Matrix Storage Manager 5.5.0.1035 |
| Memory size | 2GB (2 DIMMs) | 2GB (2 DIMMs) |
| Memory type | Corsair TWIN2X2048-8500C5 DDR2 SDRAM at 800MHz | Corsair TWIN2X2048-8500C5 DDR2 SDRAM at 800MHz |
| CAS latency (CL) | 4 | 4 |
| RAS to CAS delay (tRCD) | 4 | 4 |
| RAS precharge (tRP) | 4 | 4 |
| Cycle time (tRAS) | 15 | 15 |
| Hard drive | Maxtor DiamondMax 10 250GB SATA 150 | Maxtor DiamondMax 10 250GB SATA 150 |
| Audio | Integrated nForce4/ALC850 with Realtek 5.10.0.6200 drivers | Integrated ICH7R/ALC882M with Realtek 5.10.00.5345 drivers |
| Graphics | Radeon X1900 XT 256MB PCIe with Catalyst 7.1 drivers | Radeon X1950 XTX 512MB PCIe with Catalyst 7.1 drivers |
| | BFG Tech GeForce 7950 GT OC 512MB PCIe with ForceWare 93.71 drivers | |
| | GeForce 7900 GTX 512MB PCIe with ForceWare 93.71 drivers | |
| | XFX GeForce 8800 GTS 320MB XXX Edition PCIe with ForceWare 97.92 drivers | |
| | GeForce 8800 GTS 640MB PCIe with ForceWare 97.92 drivers | |
| | GeForce 8800 GTX 768MB PCIe with ForceWare 97.92 drivers | |
| OS | Windows XP Professional (32-bit) | Windows XP Professional (32-bit) |
| OS updates | Service Pack 2, DirectX 9.0c update (December 2006) | Service Pack 2, DirectX 9.0c update (December 2006) |

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults.

The test systems’ Windows desktops were set at 1280×960 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Fill rate and memory bandwidth
Before we get to the game benchmarks, here’s a quick overview of how the various cards we’ve tested compare in terms of fill rate (the ability to draw pixels and textured pixels on screen) and memory bandwidth. These numbers don’t always determine in-game performance these days because shader arithmetic capacity has become at least as important as fill rate.

| Card | Core clock (MHz) | Pixels/clock | Peak pixel fill rate (Mpixels/s) | Textures/clock | Peak texel fill rate (Mtexels/s) | Effective memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s) |
| GeForce 7950 GT | 550 | 16 | 8800 | 24 | 13200 | 1400 | 256 | 44.8 |
| BFG GeForce 7950 GT OC | 565 | 16 | 9040 | 24 | 13560 | 1430 | 256 | 45.8 |
| Radeon X1900 XT | 625 | 16 | 10000 | 16 | 10000 | 1450 | 256 | 46.4 |
| GeForce 7900 GTX | 650 | 16 | 10400 | 24 | 15600 | 1600 | 256 | 51.2 |
| Radeon X1950 XTX | 650 | 16 | 10400 | 16 | 10400 | 2000 | 256 | 64.0 |
| GeForce 8800 GTS | 500 | 20 | 10000 | 24 | 12000 | 1600 | 320 | 64.0 |
| GeForce 8800 GTS 320MB XXX | 580 | 20 | 11600 | 24 | 13920 | 1800 | 320 | 72.0 |
| GeForce 8800 GTX | 575 | 24 | 13800 | 32 | 18400 | 1800 | 384 | 86.4 |

The basic lesson here is that the GeForce 8800 GTS is a tremendously powerful graphics solution, especially in its higher-clocked “XXX edition” form. Although GTS 320MB cards start at $300, they compete more closely in terms of memory bandwidth with the Radeon X1950 XTX and GeForce 7900 GTX than they do with the incumbents in the $249-299 price range.

Note that the GeForce 8800 GTS 320MB results differ slightly from the GTS 640MB results. The 320MB card is a little slower in the single-textured fill rate test and a little faster at multitexturing. That’s unexpected in a synthetic test like this one that isn’t likely to run into a memory size constraint. We can probably place the blame for the difference at the feet of our BFG Tech GeForce 8800 GTS 640MB card, which shipped with a somewhat higher-than-stock 513MHz core clock and a lower-than-stock 792MHz memory clock.
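The peak figures in the table above fall out of simple arithmetic: pixel fill rate is core clock times pixels per clock, and memory bandwidth is the effective memory clock times the bus width in bytes. Here’s a quick sketch that reproduces the GTS 320MB XXX row:

```python
def peak_fill_rate_mpix(core_mhz, pixels_per_clock):
    """Peak pixel fill rate in Mpixels/s: core clock x pixels drawn per clock."""
    return core_mhz * pixels_per_clock

def peak_bandwidth_gbs(effective_mem_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: effective clock x bus width,
    divided by 8 bits per byte, then by 1000 to go from MB/s to GB/s."""
    return effective_mem_mhz * bus_width_bits / 8 / 1000

# GeForce 8800 GTS 320MB XXX: 580MHz core, 20 pixels/clock,
# 900MHz GDDR3 (1800MHz effective) on a 320-bit bus
print(peak_fill_rate_mpix(580, 20))   # 11600 Mpixels/s
print(peak_bandwidth_gbs(1800, 320))  # 72.0 GB/s
```

The same two functions reproduce every row of the table, including the GeForce 8800 GTX’s 86.4 GB/s from its 384-bit bus.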

 

Quake 4
In order to make sure we pushed the video cards as hard as possible, we enabled Quake 4’s multiprocessor support before testing. We used the game’s “playnettimedemo” to play back our gaming session with the game engine’s physics and game logic active.

If the 320MB GTS is going to run into a memory size limitation at higher resolutions and quality levels, it’s not apparent in Quake 4. Even at 2560×1600 with 8X aniso and 4X antialiasing, the GTS 320MB cards pretty much keep pace with their 640MB sibling. In fact, the XFX XXX edition is second only to the GeForce 8800 GTX here, outpacing the Radeon X1950 XTX and GeForce 7900 GTX.

 

F.E.A.R.
We’re using F.E.A.R.’s built-in “test settings” benchmark for a quick, repeatable comparison.

F.E.A.R. is a different story entirely from Quake 4. Once again, the GTS 320MB cards don’t appear to lose steam at higher resolutions, but this time, they’re just slower across the board than the 640MB card. Heck, the stock-clocked GTS 320MB is the slowest card here overall.

The GTS 320MB cards look to be bumping up against a memory size barrier, but curiously, the Radeon X1900 XT does pretty well, placing mid-pack at the two lower (but still relatively high) resolutions.

 

Half-Life 2: Episode One
The Source game engine uses an integer data format for its high dynamic range rendering, which allows all of the cards here to combine HDR rendering with 4X antialiasing.

Here’s something more like what we expected to see going into testing. The GeForce 8800 GTS 320MB cards perform at least as well as the 640MB version at 1600×1200, but as the display resolution climbs, they fall behind. The performance drag caused by having less memory really only becomes a concern at the monster four-megapixel resolution of 2560×1600, though. At 2048×1536, the GTS 320MB cards are faster than anything near the same price range.

 
The Elder Scrolls IV: Oblivion
We tested Oblivion by manually playing through a specific point in the game five times while recording frame rates using the FRAPS utility. Each gameplay sequence lasted 60 seconds. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
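The median-of-lows approach is easy to illustrate. This sketch uses made-up frame rates, not our actual test data, to show why the median resists a single outlier session better than the mean:

```python
import statistics

# Hypothetical low frame rates from five 60-second FRAPS sessions.
# One session caught a lucky stretch and recorded an unusually high low.
lows = [22, 19, 21, 35, 20]

# The mean gets dragged upward by the outlier session...
print("Mean of lows:  ", statistics.mean(lows))    # 23.4
# ...while the median reports a value typical of the other four runs.
print("Median of lows:", statistics.median(lows))  # 21
```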

We turned up all of Oblivion’s graphical settings to their highest quality levels for this test. The screen resolution was set to 1920×1200, with HDR lighting enabled. 16X anisotropic filtering and 4X AA were forced on via the cards’ driver control panels. Since the G71 GPU can’t do 16-bit floating-point texture filtering and blending in combination with antialiasing, the cards based on it had to sit out these tests.

For this test, we strolled around the outside of the Leyawiin city wall, as shown in the picture below. This area has loads of vegetation, some reflective water, and some long view distances.

We’re hitting some serious memory size limitations at these settings in Oblivion. The most obvious casualty is the Radeon X1900 XT 256MB, which suffers mightily. To AMD’s credit, the game didn’t crash; it just ran really slowly. The GeForce 8800 GTS 320MB fares well by comparison, but it’s much slower than its 640MB counterpart. Even the XXX edition falls behind.

I should say that when playing through this sequence, the GTS 320MB cards didn’t feel especially slow, despite their FPS lows in the teens and low twenties. Those lows seemed to come when the game engine decided to scale up the level of detail as we got closer to a group of trees or the like. Still, the 640MB card vastly reduced the severity of those hiccups.

Rainbow Six: Vegas
This game is a new addition to our test suite, notable because it’s the first game we’ve tested based on Unreal Engine 3. As with Oblivion, we tested with FRAPS. This time, I played through a 90-second portion of the “Dante’s” map in the game’s Terrorist Hunt mode, with all of the game’s quality options cranked.

The GTS 320MB is back in the saddle in Rainbow Six: Vegas, performing almost identically to the 640MB version and clearly outrunning any direct competitors.

Ghost Recon Advanced Warfighter
We tested GRAW with FRAPS, as well. We cranked up all of the quality settings for this game, with the exception of antialiasing, since the game engine doesn’t take to AA very well.

Admittedly, asking some of the less expensive cards to run this game at the settings we’ve used is a bit much. The game identified the Radeon X1900 XT as having only 256MB of memory and refused to set the display resolution to 2560×1600. Rather than fuss with the config files, we just decided to have the X1900 XT sit this one out.

With GRAW running at a higher resolution, the GTS 320MB configs are again feeling the strain. The net effect is that the GTS 320MB cards drop back into the pack with the Radeon X1950 XTX and the GeForce 7900 GTX, while the 640MB card pulls out ahead.

 

3DMark06

Unlike some of the games we’ve tested, 3DMark doesn’t mind the GTS 320MB’s missing memory. That leaves the GTS 320MB tracking right with the GTS 640MB, while the XXX edition is all alone in second place behind the GTX.

As expected, the synthetic vertex and pixel shader tests are unaffected by the GTS 320MB’s memory size.

 

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement.

The idle measurements were taken at the Windows desktop with SpeedStep power management enabled. The cards were tested under load running Oblivion using the game’s Ultra High Quality settings at 1920×1200 resolution with 16X anisotropic filtering. SpeedStep was disabled for the load tests.

Reducing the memory size doesn’t do much for power consumption. The 320MB version tracks right with the 640MB, and XFX’s XXX edition card draws even more power. The Radeon cards have lower power consumption at idle, but under load, they’re in the same neighborhood as the 8800 GTS.

Noise levels and cooling
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 14″ from the test system at a height even with the top of the video card. The meter was aimed at the very center of the test systems’ motherboards, so that no airflow from the CPU or video card coolers passed directly over the meter’s microphone. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including CPU and chipset fans. We had temperature-based fan speed controls enabled on the motherboard, just as we would in a working system. In all cases, we used a Zalman CNPS9500 LED to cool the CPU.

Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

We measured the cards at idle on the Windows desktop and under load while running Oblivion at 1920×1200 with 16X aniso.

Nvidia’s stock coolers for the GeForce 8800 series are very effective and very quiet, as the results indicate. None of these cards are noisy at idle, thanks to their speed-controlled fans. When running a game, though, several of them can assert themselves. The worst offender here is the GeForce 7950 GT, whose chintzy cooler doesn’t belong on any graphics card that costs more than $250.

 
Conclusions
I’m not entirely sure what to make of the GeForce 8800 GTS 320MB. We went into our testing expecting to see the GTS 320MB perform well at lower resolutions and struggle at higher ones, especially with antialiasing and anisotropic filtering at work. We certainly found results along those lines, but we also saw some counter examples. In Quake 4, for instance, the GTS 320MB didn’t struggle at all, even at 2560×1600 with 4X AA and 8X aniso. On the other hand, the GTS 320MB’s performance suffered in F.E.A.R. even at 1600×1200. So the actual impact of the GTS 320MB’s smaller amount of onboard memory seems to vary as much with the application as it does with the display resolution.

I think we can safely assume that those folks with massive monitors like the Dell 3007WFP would do well to stay away from the GTS 320MB. If you plan on running at really high resolutions, you’re definitely better off with a card that has at least 512MB of memory. Anyone looking to build an SLI setup on an installment plan will probably want to resist the GTS 320MB’s tempting price, as well. Adding a second card in SLI doesn’t increase the effective memory size of the graphics subsystem, and any solution with the power of two GeForce 8800 GTS GPUs will need more than 320MB in order to take full advantage of all of that power.

For the rest of us, the GTS 320MB is a tempting prospect, but it comes with some caveats. With the exception of F.E.A.R., XFX’s GeForce 8800 GTS 320MB XXX Edition outperforms the Radeon X1950 XTX pretty consistently at two and three megapixel display resolutions. The GTS also offers superior image quality and DirectX 10 support, and it costs a little less. Yet the ostensible future-proofing that comes from having DirectX 10 support is blunted somewhat by the possibility that most next-gen games will make use of larger textures, longer shaders, and more complex geometry—all of which require more graphics memory. This raises the question: will you be better off in the long run with a DX10 card with 320MB of memory or a DX9 card like the Radeon X1950 XTX with 512MB of memory? I’m not sure I can answer that. Ideally, Nvidia would offer a card in this price range with a little less GPU power and more memory, which would probably be a better tradeoff.

Then again, I’m probably overthinking it. Most of the time, having less video memory just means we’ll have to crank an in-game texture quality slider a little bit to the left or drop down to a lower degree of antialiasing. Graphics cards this capable typically don’t require harsh compromises to get acceptable performance. If you’re willing to accept a few compromises along those lines, the GeForce 8800 GTS 320MB looks like an excellent value. 
