The GeForce 9 series multi-GPU extravaganza

OK, I’ve been doing this since the last century, but I really have no idea how to frame this article. It might be a review of a new graphics card. Nvidia just recently introduced its GeForce 9800 GTX, and this is our first look at that card.

But we didn’t really stop there. We threw together two and then three 9800 GTX cards in order to see how they perform in some incredibly powerful and borderline ridiculous configurations. Then we totally crossed the line into crazy-land by doubling up on GeForce 9800 GX2 cards and testing the new generation of quad SLI, as well. What’s more, we tested against the previous generation of three-way SLI—based on the GeForce 8800 Ultra—and against the competing CrossFire X scheme involving three and four Radeons.

Most of you probably care incrementally less about these configurations as the GPU count—and price tag—rises. But boy, do we ever have a hefty amount of info on the latest GPUs compiled in one place, and it’s a pretty good snapshot of the current state of things. Keep reading if you’re into that stuff.

GT to the X

A new video card doesn’t come along every day. Seriously. About three Sundays ago, not a single product announcement hit my inbox. Most days, however, it seems that at least one new variant of an already known quantity hits the streets with some kind of tweak in the clock speeds, cooling solutions, product bundles, or what have you.

Such is the case—despite the GeForce 9-series name—with the GeForce 9800 GTX. This card is, ostensibly, the replacement for the older GeForce 8800 GTX, but it’s very, very similar to the GeForce 8800 GTS 512 released in December—same G92 GPU, same 128 stream processors, same 256-bit memory path, same PCI Express 2.0 interface, same 512MB of GDDR3 memory. Even the cooler has a similarly angled fan enclosure, as you can see by glancing at our trio of 9800 GTX cards pictured below.

GeForce 9800 GTX cards from Palit, BFG Tech, and XFX

Not that there’s anything wrong with that. The G92 is a very capable GPU, and we liked the 8800 GTS 512. Just don’t expect earth-shaking miracles in the move from GeForce series 8 to 9. In terms of specs, the most notable differences are some tweaked clock speeds. By default, the 9800 GTX ships with a 675MHz core, 1688MHz shader processors, and 1100MHz memory. That’s up slightly from the defaults of 650MHz, 1625MHz, and 970MHz on the 8800 GTS 512.

The GeForce 8800 GTS 512 (left) versus the 9800 GTX (right)

As is immediately obvious, however, the 9800 GTX departs from its older sibling in some respects. Physically, the GTS 512 is only 9″ long and has but one six-pin aux power plug and one SLI connector onboard. The 9800 GTX is larger at 10.5″ long and has dual six-pin power connectors and two SLI “golden fingers” interfaces along its top.

These dual SLI connectors create the possibility of running a triumvirate of 9800 GTX cards together in a three-way SLI config for bone-jarring performance, and we’re set to explore that possibility shortly.

Another feature new to the 9800 GTX is support for Nvidia’s HybridPower scheme. The idea here is that when the GTX is mated with a compatible Nvidia chipset with integrated graphics, the discrete graphics card can be powered down for everyday desktop work, saving on power and noise. Fire up a game, though, and the GTX will come to life, taking over the 3D graphics rendering duties. We like the concept, but we haven’t yet seen it in action, since Nvidia has yet to release a HybridPower-capable chipset.

The three 9800 GTX cards we have gathered here today present strikingly similar propositions. Prices for the BFG Tech card start at $329.99 at online vendors. In typical BFG style, there’s no bundled game, but you do get a lifetime warranty. XFX ups the ante somewhat by throwing in a copy of Company of Heroes and pledging to support its card for a lifetime, plus through one resale, for the same starting price of 330 bucks. The caveat with both companies is that you must register your card within 30 days after purchase, or the warranty defaults to a one-year term. Palit, meanwhile, offers a two-year warranty and throws in a copy of Tomb Raider Anniversary, for the same price. All three cards share the same board design, which I understand is because Nvidia exercises strict control over its higher-end products. That’s a shame, because we really like the enhancements Palit built into its GeForce 9600 GT and wouldn’t mind seeing them in this class of card, as well.

Another attribute all three cards share is Nvidia’s stock clock speeds for the 9800 GTX. That will impact our performance comparison, because the EVGA GeForce 8800 GTS 512 card we tested came out of the gate with juiced up core and shader clocks of 670MHz and 1674MHz, respectively, which put it very close to the 9800 GTX. That’s not to say 9800 GTX cards are all wet noodles. Already, BFG Tech has announced GTX variants ranging up to 755MHz core, 1890MHz shader, and 1150MHz memory frequencies. I’d expect similar offerings from some of the other guys soon, too.

Stacked

Setting up a three-way SLI rig with GeForce 9800 GTX cards isn’t all that different than it is with GeForce 8800 Ultras. For the 9800 GTX, we chose to upgrade from an nForce 680i SLI-based motherboard to a 780i SLI mobo in order to gain support for PCI Express 2.0. As we’ve explained, the nForce 780i SLI’s PCIe 2.0 is a little odd since it uses a PCI Express bridge chip, but Nvidia claims it should be adequate. The ideal configuration would probably be a board based on the nForce 790i SLI, but I didn’t have one of those handy.



You will need some specialized hardware in order to make a setup like this go. In addition to the motherboard and graphics cards, you’ll need a three-way SLI connector like the one pictured above, which you may have to order separately. This connector snaps into place atop all three cards, connecting them together. You’ll also need a power supply with six auxiliary PCIe power connectors and sufficient output to power the whole enchilada. I used a PC Power & Cooling Turbo-Cool 1200 that’s more than up to the task.

We used the same basic building blocks for our quad SLI test rig, but we swapped out the 9800 GTX cards for a pair of GeForce 9800 GX2s from Palit and XFX. We tested the XFX card in our initial GeForce 9800 GX2 review. As you may have learned from that article, each of these cards has two G92 GPUs on it. They’re also not cheap. The Palit card is currently going for $599 on Newegg, although there’s a $30 mail-in rebate attached, if you’re willing to jump through that particular flaming hoop.

Since the GX2 really packs ’em in, a quad SLI setup actually requires fewer power leads and occupies less slot space than a three-way config. Quad SLI also avoids the middle PCIe slot on the three-way-capable motherboards (nForce 680i, 780i, and 790i). That slot could be a bottleneck because its 16 lanes of first-gen PCIe connectivity hang off the chipset’s south bridge.
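For a sense of the bandwidth gap at that middle slot, here’s a quick back-of-the-envelope sketch. The per-lane figures are the standard PCIe rates after encoding overhead, not anything we measured on this board.

```python
# Rough per-direction bandwidth of a x16 slot. PCIe 1.x carries about
# 250MB/s per lane after 8b/10b encoding overhead; PCIe 2.0 doubles that.
def x16_bandwidth_gb_s(mb_per_lane):
    return 16 * mb_per_lane / 1000.0

print(f"x16 PCIe 1.x (south bridge slot): ~{x16_bandwidth_gb_s(250):.1f} GB/s per direction")
print(f"x16 PCIe 2.0 (primary slots):     ~{x16_bandwidth_gb_s(500):.1f} GB/s per direction")
```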

Of scaling—and failing

Multi-GPU schemes like SLI are always prone to fragility, and sometimes their performance simply doesn’t scale up well from one GPU to two, or from two to three or four. We’ve written about this many times before, most extensively in this section of our CrossFire X review, so I won’t cover that same ground once again. You’ll see this dynamic in effect in our performance results shortly.

We noticed some particular quirks of three- and four-way SLI in preparing this article, though, that bear mentioning here. One of those issues involves video memory. Very high performance graphics subsystems require lots of memory in order to work effectively at the extreme resolutions and quality levels they can achieve. That presents a problem for G92-based SLI because current GeForce 9800 GTX and GX2 cards come with 512MB of memory per GPU, and SLI itself eats up some video memory. In some cases, we found that the GeForce 8800 Ultra, with 768MB of RAM, performed better due to apparent video memory size limitations with G92-based SLI.
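To get a feel for why 512MB per GPU gets tight at these settings, here’s a rough, hypothetical render-target budget for 2560×1600 with 4X multisampling. The bytes-per-sample figures are our own assumptions, and the sketch ignores textures, shadow maps, and whatever the driver and SLI set aside on top.

```python
# Back-of-the-envelope render-target footprint at 2560x1600 with 4X MSAA.
# Assumes 32-bit color and 32-bit depth/stencil per sample (our assumption);
# ignores textures, shadow maps, compression, and driver/SLI overhead.
PIXELS = 2560 * 1600
MB = 1024 ** 2

color_msaa = PIXELS * 4 * 4 / MB   # 4 bytes per sample x 4 samples
depth_msaa = PIXELS * 4 * 4 / MB   # depth/stencil stored per sample as well
resolved   = PIXELS * 4 * 2 / MB   # resolved front and back buffers

total = color_msaa + depth_msaa + resolved
print(f"Render targets alone: ~{total:.0f}MB of each GPU's 512MB")
```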

In addition, we used the 32-bit version of Windows Vista on our GPU test platforms, largely because Nvidia’s 64-bit drivers have sometimes lagged behind their 32-bit counterparts on features, optimizations, or QA validation. Since we’re often testing pre-release hardware with early drivers, that was a real problem. However, choosing a 32-bit OS in this day and age has its own perils. As you may know, 32-bit versions of Windows have a total memory address space of 4GB. We installed 4GB worth of DIMMs in our test systems, but some of the OS’s total address space is reserved by hardware devices, including video cards. For example, we see a total of 3.58GB of physical memory available in Task Manager when we have a single GPU installed on our Gigabyte X38-DQ6-based system.

This limitation hasn’t been much of a problem for us in the past, but the problem grew more acute on our nForce 780i SLI-based test system. With a single GPU installed, that system showed only 2.8GB of physical RAM installed. With dual GPUs, the total dropped to 2.5GB, and then to 2.3GB with three GPUs. Our quad SLI system saw that number dip to 1.79GB, which is, well, uncomfortable. I doubt it had much impact on our performance testing overall, but it’s not a lot of headroom and one heck of a waste of RAM. The lesson: use a 64-bit OS with these exotic three- and four-way SLI setups. We’ll be moving our test platforms to Vista x64 soon.
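If you want to see the mechanics of that shrinkage, here’s a toy model: under a 32-bit OS, every card’s memory-mapped apertures have to fit into the same 4GB address space as system RAM. The reservation sizes below are made-up placeholders for illustration, not values read from these boards.

```python
# Toy model: visible RAM under 32-bit Windows shrinks as graphics cards are
# added, because each card's memory-mapped apertures claim address space
# below 4GB. Both reservation sizes here are hypothetical, not measured.
ADDRESS_SPACE_GB = 4.0
OTHER_DEVICES_GB = 0.4       # chipset, storage, USB, etc. (assumed)
PER_CARD_APERTURE_GB = 0.35  # frame-buffer window plus registers (assumed)

for cards in range(1, 4):
    visible = ADDRESS_SPACE_GB - OTHER_DEVICES_GB - cards * PER_CARD_APERTURE_GB
    print(f"{cards} card(s): ~{visible:.2f}GB of RAM visible to Windows")
```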

Behold the SLI stack

We ran into a couple of other little problems, as well. For one, screenshots captured on the Windows desktop with Aero enabled had big, black horizontal bands running across them when SLI was enabled with Nvidia’s latest 174.53 drivers. This is just a bug and not typical of the SLI setups we’ve seen in the past. Another likely bug was related to the “Do not scale” setting in Nvidia’s drivers that disables GPU scaling of lower resolutions up to the monitor’s native res. When we had that option enabled with three- and four-way SLI, 3D applications would start up with a black screen and the system would become unresponsive. We’d have to reboot the system to recover. Nvidia’s graphics drivers usually avoid such quirkiness, but right now, those two issues are very real.

Test notes

You can see all of our test configs below, but I’d like to make note of a few things. First, the GeForce 9600 GT card that we tested was “overclocked in the box” a little more fully than most (the core is 700MHz, while most cards are 650-675MHz), so its performance is a little bit higher than is typical. Meanwhile, we tested the GeForce 8800 GT and Radeon HD 3870 at their stock speeds, which are increasingly rare in this segment. Most shipping products have higher clocks these days.

Beyond that, we’re in pretty good shape. Our examples of the Radeon HD 3850 512MB and GeForce 8800 GTS 512MB are both clocked above the baseline frequencies by typical amounts, and most of the higher end cards tend to run close to their baseline clock speeds.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

All three systems shared these components:

Processor: Core 2 Extreme X6800 2.93GHz
System bus: 1066MHz (266MHz quad-pumped)
Memory size: 4GB (4 DIMMs)
Memory type: 2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 18
Command rate: 2T
Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x86 Edition
OS updates: KB936710, KB938194, KB938979, KB940105, KB945149, DirectX November 2007 Update

The three motherboard platforms differed as follows:

Gigabyte GA-X38-DQ6
- BIOS revision: F7
- North bridge: X38 MCH
- South bridge: ICH9R
- Chipset drivers: INF update 8.3.1.1009, Matrix Storage Manager 7.8
- Audio: integrated ICH9R/ALC889A with RealTek 6.0.1.5497 drivers

XFX nForce 680i SLI
- BIOS revision: P31
- North bridge: nForce 680i SLI SPP
- South bridge: nForce 680i SLI MCP
- Chipset drivers: ForceWare 15.08
- Audio: integrated nForce 680i SLI/ALC850 with RealTek 6.0.1.5497 drivers

EVGA nForce 780i SLI
- BIOS revision: P03
- North bridge: nForce 780i SLI SPP
- South bridge: nForce 780i SLI MCP
- Chipset drivers: ForceWare 9.64
- Audio: integrated nForce 780i SLI/ALC885 with RealTek 6.0.1.5497 drivers

Graphics configurations tested:
- Diamond Radeon HD 3850 512MB PCIe with Catalyst 8.2 drivers
- Dual Radeon HD 3850 512MB PCIe with Catalyst 8.2 drivers
- Radeon HD 3870 512MB PCIe with Catalyst 8.2 drivers
- Dual Radeon HD 3870 512MB PCIe with Catalyst 8.2 drivers
- Radeon HD 3870 X2 1GB PCIe with Catalyst 8.2 drivers
- Radeon HD 3870 X2 1GB + Radeon HD 3870 512MB PCIe with Catalyst 8.3 drivers
- Dual Radeon HD 3870 X2 1GB PCIe with Catalyst 8.3 drivers
- Palit GeForce 9600 GT 512MB PCIe with ForceWare 174.12 drivers
- Dual Palit GeForce 9600 GT 512MB PCIe with ForceWare 174.12 drivers
- GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers
- Dual GeForce 8800 GT 512MB PCIe with ForceWare 169.28 drivers
- EVGA GeForce 8800 GTS 512MB PCIe with ForceWare 169.28 drivers
- GeForce 8800 Ultra 768MB PCIe with ForceWare 169.28 drivers
- Dual GeForce 8800 Ultra 768MB PCIe with ForceWare 169.28 drivers
- Triple GeForce 8800 Ultra 768MB PCIe with ForceWare 169.28 drivers
- Dual GeForce 9800 GTX 512MB PCIe with ForceWare 174.53 drivers
- Triple GeForce 9800 GTX 512MB PCIe with ForceWare 174.53 drivers
- GeForce 9800 GX2 1GB PCIe with ForceWare 174.53 drivers
- Dual GeForce 9800 GX2 1GB PCIe with ForceWare 174.53 drivers

The Radeon configs ran on the Gigabyte X38-DQ6 board. The two- and three-way GeForce 9800 GTX configs and the quad SLI GeForce 9800 GX2 config ran on the EVGA nForce 780i SLI board, while the other SLI configs ran on the XFX nForce 680i SLI board.

Please note that we tested the single- and dual-GPU Radeon configs with the Catalyst 8.2 drivers, simply because we didn’t have enough time to re-test everything with Cat 8.3. The one exception is Crysis, where we tested single- and dual-GPU Radeons with AMD’s 8.451-2-080123a drivers, which include many of the same application-specific tweaks that the final Catalyst 8.3 drivers do.

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to what you get with no-name DIMMs.

Most of our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. The three- and four-way SLI systems required a larger PSU, so we used a PC Power & Cooling Turbo-Cool 1200 for those systems. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.


The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Doing the math

Nvidia’s G92 graphics processor is extremely potent, so the GeForce 9800 GTX is poised to become one of the fastest single-GPU graphics cards on the planet. When you start stacking those puppies up into multiples of two, three, and four, the theoretical peak throughput numbers start getting absolutely sick. Have a look.

(All figures are theoretical peaks; texel filtering rates are for bilinear filtering.)

Card                              Pixel fill   Texel        FP16 texel   Memory       Shader
                                  rate         filtering    filtering    bandwidth    arithmetic
                                  (Gpixels/s)  (Gtexels/s)  (Gtexels/s)  (GB/s)       (GFLOPS)
GeForce 8800 GT                   9.6          33.6         16.8         57.6         336
GeForce 8800 GTS                  10.0         12.0         12.0         64.0         230
GeForce 8800 GTS 512              10.4         41.6         20.8         62.1         416
GeForce 8800 GTX                  13.8         18.4         18.4         86.4         346
GeForce 8800 Ultra                14.7         19.6         19.6         103.7        384
GeForce 8800 Ultra SLI (x2)       29.4         39.2         39.2         207.4        768
GeForce 8800 Ultra SLI (x3)       44.1         58.8         58.8         311.0        1152
GeForce 9800 GTX                  10.8         43.2         21.6         70.4         432
GeForce 9800 GTX SLI (x2)         21.6         86.4         43.2         140.8        864
GeForce 9800 GTX SLI (x3)         32.4         129.6        64.8         211.2        1296
GeForce 9800 GX2                  19.2         76.8         38.4         128.0        768
GeForce 9800 GX2 SLI (x4)         38.4         153.6        76.8         256.0        1536
Radeon HD 2900 XT                 11.9         11.9         11.9         105.6        475
Radeon HD 3850                    10.7         10.7         10.7         53.1         429
Radeon HD 3870                    12.4         12.4         12.4         72.0         496
Radeon HD 3870 X2                 26.4         26.4         26.4         115.2        1056
Radeon HD 3870 X2 + 3870 (x3)     37.2         37.2         37.2         172.8        1488
Radeon HD 3870 X2 CrossFire (x4)  52.8         52.8         52.8         230.4        2112

You see, there are a lot of numbers there, and they become increasingly large as you add GPUs. Impressive.

One thing I should note: I’ve changed the FLOPS numbers for the GeForce cards compared to what I used in past reviews. I decided to use a more conservative method of counting FLOPS per clock, and doing so reduces theoretical GeForce FLOPS numbers by a third. I think that’s a more accurate way of counting for the typical case.
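To make the arithmetic behind the table concrete, here’s a short sketch that rebuilds the GeForce 9800 GTX row from its clock speeds and unit counts. The clocks come from earlier in the article; the 16 ROPs and 64 bilinear filtering units are G92’s unit counts rather than figures stated above, and the FLOPS line uses the conservative two-operations-per-clock counting just described (versus three, which is where the one-third reduction comes from).

```python
# Rebuilding the GeForce 9800 GTX row of the table above from its specs.
core_mhz, shader_mhz, mem_mhz = 675, 1688, 1100       # clocks from earlier in the article
rops, filter_units, sps, bus_bits = 16, 64, 128, 256   # G92 unit counts, 256-bit bus

pixel_fill = rops * core_mhz / 1000                 # Gpixels/s
texel_fill = filter_units * core_mhz / 1000         # Gtexels/s, bilinear
fp16_fill  = texel_fill / 2                         # FP16 filtering runs at half rate
bandwidth  = mem_mhz * 2 * (bus_bits // 8) / 1000   # GB/s; GDDR3 transfers twice per clock
gflops     = sps * shader_mhz * 2 / 1000            # 2 FLOPS per SP per clock (MAD only)

print(pixel_fill, texel_fill, fp16_fill, bandwidth, gflops)
# 10.8, 43.2, 21.6, 70.4, ~432: the same figures as the table above
```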

Even so, these numbers assume that each GPU can reach its theoretical peak and that we’ll see perfect multi-GPU scaling. Neither of those things ever really happens, even in synthetic benchmarks. They sometimes get close, though. Here’s how things measure up in 3DMark’s synthetic feature tests.

Performance in the single-textured fill rate test tends to track more closely with memory bandwidth than with peak theoretical pixel fill rates, which are well beyond what the graphics cards achieve. The GeForce 9-series multi-GPU configs are absolute beasts in multitextured fill rate, both in theory and in this synthetic test.

In both tests, the GeForce 9800 GTX almost exactly matches the GeForce 8800 GTS 512. Those aren’t typos—just very similar results from very similar products.

Obviously, the SLI systems have scaling trouble in the simple vertex shader test. Beyond that, the results tend to fit pretty well with the expectations established by our revised FLOPS numbers. I wouldn’t put too much stock into them, though, as a predictor of game performance. We can measure that directly.

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. Since these are high-end graphics configs we’re testing, we enabled 4X antialiasing and 16X anisotropic filtering and turned up the game’s texture and image quality settings to their limits.

We’ve chosen to test at 1680×1050, 1920×1200, and 2560×1600—resolutions of roughly two, three, and four megapixels—to see how performance scales. I’ve also tested at 1280×1024 with the lower-end graphics cards, since some of them struggled to deliver completely fluid frame rates at 1680×1050.

Apologies for the mess that is my GPU scaling line graph. All I can say is that I tried. You can see some of the trends pretty clearly, even with the mess.

The GeForce 9800 GTX tends to perform just a little bit better than the 8800 GTS 512, as one would expect given its marginally higher clock speeds. The result isn’t a big improvement, but it is sufficient to put the 9800 GTX in league with the GeForce 8800 Ultra, the fastest single-GPU card here.

AMD’s fastest card, the Radeon HD 3870 X2, is a dual-GPU job, and it’s quicker than the 9800 GTX. Costs more, too, so this is no upset.

Look to the results at 2560×1600 resolution to see where the multi-GPU configs really start to distinguish themselves. Here, two GeForce 9800 GTX cards prove to be faster than three Radeon HD 3870 GPUs and nearly as fast as four. However, the quad SLI rig gets upstaged by the three-way GeForce 8800 Ultra rig, whose superior memory bandwidth and memory size combine to give it the overall lead.

The frame rates we’re seeing here also give us a sense of proportion. Realistically, with average frame rates in the fifties, a single GeForce 9800 GTX will run CoD4 quite well at 1920×1200 with most image quality enhancements, like antialiasing and aniso filtering, enabled. You really only need multi-GPU action if you’re running at four-megapixel resolutions like 2560×1600, and even then, two GeForces should be plenty. The exception may be the GeForce 8800 GT and 9600 GT cards in SLI, whose performance tanks at 2560×1600. I believe they’re running out of video memory here, and newer drivers may fix that problem.

Enemy Territory: Quake Wars

We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though. Shadows, soft particles, and smooth foliage were enabled. Again, we used a custom timedemo recorded for use in this review.

I’ve excluded the three- and four-way CrossFire X configs here since they don’t support OpenGL-based games like this one.

The GeForce 9800 GTX again performs as expected, just nudging ahead of the 8800 GTS 512. This time, it can’t keep pace with the GeForce 8800 Ultra, though.

Among the Nvidia multi-GPU systems, the three-way GeForce 8800 Ultra setup again stages an upset, edging out both the three- and four-way G92 SLI rigs at 2560×1600. Also, once more, we’re seeing frame rates of over 70 FPS with two 9800 GTX cards, raising the question of whether three or more G92 GPUs offer tangible benefits with today’s games.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested Episode Two with the in-game image quality options cranked, with 4X AA and 16X anisotropic filtering. HDR lighting and motion blur were both enabled.

Our single-GPU results for the 9800 GTX continue the cavalcade of precisely fulfilled expectations established by the 8800 GTS 512. Unfortunately, that doesn’t make for very good television.

The pricey multi-GPU solutions offer some flash and flair, though, with the G92 SLI configs finally showing signs of life. They scale better than the GeForce 8800 Ultra three-way setup, vaulting them into the 90 FPS range. Back on planet Earth, though, most folks would probably be perfectly happy with the performance of two GeForce 9600 GTs here, even at our top resolution.

Crysis

I was a little dubious about the GPU benchmark Crytek supplies with Crysis after our experiences with it when testing three-way SLI. The scripted benchmark does a flyover that covers a lot of ground quickly and appears to stream in lots of data in a short period, possibly making it I/O bound—so I decided to see what I could learn by testing with FRAPS instead. I chose to test in the “Recovery” level, early in the game, using our standard FRAPS testing procedure (five sessions of 60 seconds each). The area where I tested included some forest, a village, a roadside, and some water—a good mix of the game’s usual environments.

Because FRAPS testing is a time-intensive endeavor, I’ve tested the lower-end graphics cards at 1680×1050 and the higher-end cards at 1920×1200, with the G92 SLI and CrossFire X configs included in both groups.

This is one game where additional GPU power is definitely welcome, and the dual 9800 GTX and GX2 configs seem to be off to a good start at 1680×1050. Median lows in the mid-20s in our chosen test area, which seems to be an especially tough case, tend to add up to a pretty playable experience overall.

However, the performance of the three- and four-way G92 SLI configs begins to go wobbly at 1920×1200, where we’d expect them to get relatively stronger. Heck, the three-way 9800 GTX setup has trouble at 1680×1050, even—perhaps a sign that that third PCIe slot’s bandwidth is becoming a problem. Now look what happens when we turn up Crysis‘ quality options to “very high” and enable 4X antialiasing.

Ouch! All of the G92-based configs utterly flounder. The quad SLI rig simply refused to run the game with these settings, and the others were impossibly slow. Why? I believe what’s happening here is the G92-based cards are running out of video RAM. The GeForce 8800 Ultra, with its 768MB frame buffer, fares much better. So do the Radeons, quite likely because AMD’s doing a better job of memory management.

To be fair, I decided to test the G92-based configs at “very high” with antialiasing disabled, to see how SLI scaling would look without the video memory crunch. Here’s what I found.

Even here, three- and four-way SLI aren’t appreciably faster than two-way, and heck, quad SLI is still slower. You’re really just as well off with two GPUs.

Unreal Tournament 3

We tested UT3 by playing a deathmatch against some bots and recording frame rates during 60-second gameplay sessions using FRAPS. This method has the advantage of duplicating real gameplay, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
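As a small illustration of how those numbers get boiled down, here’s a sketch using invented results for one hypothetical card: we take the mean of the five session averages and the median of the five session lows.

```python
# Summarizing five hypothetical FRAPS sessions (all numbers invented):
# the mean frame rate plus the median of the per-session lows, which
# blunts the effect of a single outlier session.
from statistics import mean, median

session_averages = [61.2, 59.8, 63.5, 60.1, 62.4]  # FPS, one value per 60-second session
session_lows     = [24, 31, 29, 28, 30]            # lowest FPS seen in each session

print(f"Average FPS:    {mean(session_averages):.1f}")
print(f"Median low FPS: {median(session_lows)}")
```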

Because UT3 doesn’t natively support multisampled antialiasing, we tested without AA. Instead, we just cranked up the resolution to 2560×1600 and turned up the game’s quality sliders to the max. I also disabled the game’s frame rate cap before testing.

I probably shouldn’t even have included these results, but I had them, so what the heck. Truth be told, UT3 just doesn’t need that much of a graphics card to do its thing, especially since the game doesn’t natively support antialiasing. With the median low frame rates at almost 30 FPS on a GeForce 9600 GT, the rest of the results are pretty much academic. In both the Nvidia and AMD camps, the three-way multi-GPU configs consistently outpace the four-way ones here, as we’ve seen in other games at lower resolutions, when the CPU overhead of managing more GPUs dominates performance.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running UT3 at 2560×1600 resolution, using the same settings we did for performance testing.

Note that the SLI configs were, by necessity, tested on different motherboards, as noted in our testing methods section. Also, the three- and four-way SLI systems were tested with a larger, 1200W PSU.

The GeForce 9800 GTX proves predictable again, drawing just a tad bit more power than the 8800 GTS 512. Shocking.

Notice how the 9800 GTX draws quite a bit less power, both at idle and under load, than the GeForce 8800 Ultra. That fact explains what we see with the multi-GPU configs: the G92-based options draw considerably less power than the GeForce 8800 Ultra-based ones. Oddly enough, the three- and four-way G92 SLI rigs draw almost exactly the same amount of power, both at idle and when loaded. The slightly lower core and memory clocks on the 9800 GX2, combined with the fact that only two PCIe slots are involved, may explain this result.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the stock Intel cooler we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.
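One thing to keep in mind when reading system-level figures like these: decibel levels don’t add linearly, so a component that is 10 dB quieter than the loudest source barely registers in the total. Here’s a quick sketch of the standard way sound pressure levels combine.

```python
# How noise sources combine: sound pressure levels add on a power basis,
# not linearly, so a quiet component barely moves a system-level reading.
from math import log10

def combine_db(*levels_db):
    return 10 * log10(sum(10 ** (level / 10) for level in levels_db))

print(f"{combine_db(45, 45):.1f} dB")  # two equal 45 dB sources -> ~48 dB
print(f"{combine_db(45, 35):.1f} dB")  # a source 10 dB quieter adds only ~0.4 dB
```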

Unfortunately—or, rather, quite fortunately—I wasn’t able to reliably measure noise levels for most of these systems at idle. Our test systems keep getting quieter with the addition of new power supply units and new motherboards with passive cooling and the like, as do the video cards themselves. I decided this time around that our test rigs at idle are too close to the sensitivity floor for our sound level meter, so I only measured noise levels under load.

The cooler on the 9800 GTX isn’t loud by any means, but it does have a bit of a high-pitched whine to it, and that shows up in our sound level meter readings. Nvidia may be taking a bit of a step back here with its reference coolers. The ones on the GeForce 8800 series were supremely quiet, and these newer coolers aren’t quite as good. That’s a shame.

Clustered at the bottom of the graph are the cards that required the 1200W power supply. That puppy, pardon my French, is freaking loud. Even at idle, the 9800 GTX three-way and GX2 four-way configs both registered over 50 dB, as did the three-way Ultra rig. Under load, we’re off to the symphony. Then again, did you really expect a quad SLI rig pulling over 500W at the wall socket to be quiet? Even a relatively quiet PSU would crank up its cooling fan when feeding a 500W system.

Conclusions

For most intents and purposes, Nvidia’s G92 is currently the finest graphics processor on the planet. That fact shined through as we tested the GeForce 9800 GTX as a single graphics card, and it consistently performed—as expected—better than almost any single-GPU card available. I say “almost” because the older GeForce 8800 Ultra outran it at times, but I say “available” tentatively, because the Ultra is beginning to look scarce these days. The Ultra has its drawbacks, too. It doesn’t support all of the latest features for HD movie playback, such as HDCP over dual-link DVI or H.264 decode acceleration. It draws a lot of power. And at its best, the 8800 Ultra cost about twice what the 9800 GTX now does. If you’re looking for a high-end graphics card and don’t want to go the multi-GPU route, the GeForce 9800 GTX is the way to go.

The G92 GPU’s sheer potency creates a problem for Nvidia, though, when it becomes the building block for three- and four-way multi-GPU solutions. We saw iffy scaling with these configs in much of our testing, but I don’t really blame Nvidia or its technology. The truth is that today’s games, displays, and CPUs aren’t yet ready to take advantage of the GPU power they’re offering in these ultra-exclusive high-end configurations. For the most part, we tested with quality settings about as good as they get. (I suppose we could have tested with 16X CSAA enabled or the like, but we know from experience that brings a fairly modest increase in visual fidelity along with a modest performance penalty.) In nearly every case, dual G92s proved to be more than adequate at 2560×1600. We didn’t have this same problem when we tested CrossFire X. AMD’s work on performance optimizations deserves to be lauded, but one of the reasons CrossFire X scales relatively well is that the RV670 GPU is a slower building block. Two G92 GPUs consistently perform as well as three or four RV670s, and they therefore run into a whole different set of scaling problems as the GPU count rises.

Crysis performance remains something of an exception and an enigma. We now know several things about it. As we learned here, the game doesn’t really seem to benefit from going from two CPU cores to four. We know that two G92s in SLI can run the game pretty well at 1920×1200 using its “high” quality settings. “Very high” is a bit of a stretch, and three- and four-way SLI don’t appear to be any help. We also know this game will bump up against memory size limits with 512MB of video memory, especially with GeForce cards and current drivers. Crysis would seem to be the one opportunity for the G92 SLI configs to really show what they can do, but instead, it’s the older GeForce 8800 Ultra with 768MB of memory that ends up stealing the show. The Ultra also proves to be faster in CoD4 and Quake Wars at 2560×1600, thanks to its larger memory size and higher memory bandwidth.

So G92-based three- and four-way SLI remains a solution waiting for a problem—one that doesn’t involve memory size or bandwidth limitations, it would seem.

Personally, I’m quite happy to see the pendulum swing this way, for GPU power to outdistance what game developers are throwing at it. Now, a $199 card like the GeForce 9600 GT can deliver a very nice experience for most folks. If you want more power, the 9800 GTX is a solid option that doesn’t involve the compatibility issues that SLI and CrossFire bring with them. Yet I also can’t wait to see what these sorts of high-end solutions could do when put to full and proper use. Unfortunately, we may have to wait until the current wave of console ports and cross-developed games passes before we find out.

Comments closed
    • ish718
    • 12 years ago

    Quad SLI= fail

    • swaaye
    • 12 years ago

    What bothers me the most about modern GPU coolers is how they change speed frequently while gaming. It even depends on what you are looking at in the game. My 8800GTX became very annoying in Oblivion, for example, because the noise level changed depending on the scenery. My Radeon 3850 was louder and changed RPM even more frequently. It sounded like I had a little hair drier in the PC.

    So, both cards got their coolers replaced with aftermarket solutions. The 8800GTX has a Thermalright HR-03 Plus and the 3850 has an Accelero S1. The Accelero S1 is incredible kit for $30, IMO. HR-03 may be incredibly effective, but it’s really too expensive (especially w/o the fan it requires). Both of these coolers dramatically outperform the OEM coolers with respect to thermals with only very slow fans, while also being basically inaudible. 5-volted 92 mm on the GTX and a slow (~800 RPM) 80 mm on the 3850.

    It bothers me to see the G80 cooler described as quiet, because it was anything but that in my opinion. Well, unless it’s idle, because then it is quite quiet as it is spinning very slowly. The cooler seems to run at 100% once the GPU reaches 85 C, which was what happened with most recent games.

    I’d like to see a more descriptive take on the sound of each card. Whether it’s high-pitched, low-pitched, and whether or not it changes speed a lot.

      • sigher
      • 12 years ago

      True on my ATI card a recent driver change also makes the fan spin up and down more often and it’s a bit annoying, and yeah you should judge the sound in the three ways mentioned, idle, revving behaviour, and what the sound is like when it’s running max.

    • Fighterpilot
    • 12 years ago

    Rather than get too pedantic about db levels why not trust in the TR guys to report their findings in a simple and easy to understand format that would have broad appeal?

    eg. Noise testing results:

    It was… (A) silent *passive cooling*
    (B) quiet
    (C) ok
    (D) noticable
    (E) noisy
    Isn’t that good enough?

      • rechicero
      • 12 years ago

      that could be OK. Not very scientific, but OK. In fact, what could be really great would be a chart with dB, but with colors meaning that, something like (values at random):

      Dark Green, silence [0-10 dB]
      Bright Green, almost silence [10-15 dB]
      Yellow Noticeable in a HTPC [15-20 dB]
      Orange Noticeable in a common PC [20-30 dB]
      Red Annoying in a common PC [30-45 dB]
      Dark red Vacuum cleaner [45+ dB]

    • mattthemuppet
    • 12 years ago

    be interesting to see how Hybrid graphics pans out, that’ll really be having your cake and eating it, at least from a power consumption point of view 🙂

    • rechicero
    • 12 years ago

    A good review, but (IMHO) I can’t understand the point in the noise analysis. I mean, you can do that in power consumption, that’s ok, power consumption is linear. If you change just the graphic card, the difference in PC is the difference between the two cards. That’s OK.

    But, in noise, 30dB + 10 dB doesn’t mean 40 dB. This noise analysis is only valid for that rig, in any other case, it means nothing. In fact, seeing the results, is obvious that there are two options: Most cooler are the same (+-1dB) or we are measuring the Intel Stock cooler under load. And a little explanation about what means the charts would be great too, dB are not watts and you can’t read a logarithmic chart as you read lineal chart.

    There are ppl out there (like me) who wants the better bang/buck/noise ratio (i.e. for gaming HTPC), and in this review (great in every other aspect), that part is seriously lacking. Is it so difficult to set a test rig all passive to do that tests? It would be a cheap rig (low-end micro, low end almost everything), and it would tell us something more relevant than Crossfire x4 or 2560×1600 test. Seriously, think about it. How many ppl might be considering Crossfire x4 and how many ppl might prefer a quieter card ?

    I think that would made this good review in an almost perfect one.

    Edit: Typos

      • Damage
      • 12 years ago

      Hmm. So I get that noise is not additive, but I’m not clear on how you get from there to “the data mean nothing.”

      The reason we started testing noise levels is because graphics card coolers became silly loud at one point and continue, to this day, to be one of the loudest components in a PC. When we test graphics cards, some are clearly louder than others and contribute to (indeed, in most cases, dominate) the noise emanating from a test system. The readings we get from our sound level meter tend to track well with our subjective impressions of the situation. Seems like that’s worthwhile data to have.

      On top of that, taking a system-level noise reading accounts for the fact that a particular video card config will, indeed, impact overall system noise in multiple ways. For instance, a card with more power draw or a multi-card setup will tax the power supply more, causing its fan to run louder. I can hear that happening in my test rigs as I use them and swap in different cards. Similarly, having two video cards installed may cause the north bridge fan on our SLI motherboard to work harder. Our results include those effects, and I’d say that’s worthwhile.

      In the extreme case, in this review, we even had to swap in a 1200W PSU for some configs, and it was noisy enough to really overwhelm the rest of the system. That’s an issue you’d have to contend with if you owned, say, a quad SLI rig.

      So we’re measuring overall system noise, using a sound level meter. The results aren’t perfect and pristine, and they may not track with your experience exactly. But they’re probably a better indicator of how using a certain video card would impact your system than somehow isolating the cooler alone. In addition to that, we’ve laid out the test configs for you in some detail and explained about variables like the larger PSU in some configs. You are, of course, free to take it or leave it. But calling the results meaningless seems like reaching.

        • rechicero
        • 12 years ago

        I said it meant nothing because when I read the data, I can’t know how noisy is the card. As I said, noise is not additive and, as you said, certain components, like huge PSU can overwhelm the rest of the system. Reading your cards, my only conclusion is “the stock cooler+PSU overwhelm the graphic card” in most cases. That info is better than nothing, right. But that’s only accurate with stock coolers, your micro and your PSU (and your case, if you use one).

        Your charts are great showing that the noise of a multicard rig is going to be overwhelmed by the PSU, that’s right. But, seriously, how many aftermarket coolers are out there compared with multicard rigs? (And if we talk about Crossfire x4… I’m not sure if anybody outside tech sites ever had one). How many HTPC compared with them? Do you really think is more important to talk about Croosfire x4 than talk about which cooler is noisier by itself? I don’t say your review is crap, I like it, I like techreport reviews, and because of that I’d like them to be even better than they are. Don’t you think the review would offer more relevant info with a realistic test of noise? And, don’t you think a review with more relevant info is a better one?

        The relative sound test that you’ve done is great for multicard rigs, but too limited for single card rigs, too limited for a lot of people like myself that want best performance with less noise. I know I could look for this data in other sites, but the thing is I like techreport…

        That’s all I say. You did a good job, and you could’ve done even better that way, that’s all. I just wanted to be constructive.

          • green
          • 12 years ago

          noise level of a single card is dependent on what cooler the manufacturer decided to use on it
          if they decide to use a 40mm fan running at 9000rpm expect it to be ridiculously noisy
          if they go with a 120mm fan (somehow) going at 1500rpm then expect the noise to be much lower
          and then there’s water cooling which is again a different kettle of fish
          or if they don’t put any form of mechanical based cooling on it the card is going to be dead silent
          the problem being all variations of the setup could be used on the same gpu

          so exactly what do you compare here?
          it wouldn’t exactly be fair to say a passive 9600gt beats a fan cooled hd3870 would it?
          yes it’s quieter, but use passive cooling on the hd3870 and you’re dead even again
          you’d need to get the same cooler on both cards and compare resultant temperature & noise levels
          but then you’ve defeated the whole point of the noise testing by using a non-standard/3rd party cooler on the card
          if you need a comparison of noise levels from different manufactures so you can find the quietest then TR is gonna need a lot more funding

          besides i think you’ve completely missed something here
          you keep talking about single card noise performance
          but you’ve put your comment on the wrong article
          “The GeForce 9 series *[

            • rechicero
            • 12 years ago

            Most cards use reference coolers, as you surely know, and most reviews compare reference models, as you surely know. Anyway, if you had a point, why test noise in the first time? But you surely know you’re wrong. You surely know that your point is like dismissing overcloking tests because every single CPU is different (and they are).

            And I was talking about how to make these reviews even better and why, didn’t mean to offend anyone. I doubt I did. Noise is logarithmic and not linear and you can’t treat those test as performance, or power or almost anything else. It was just a suggestion to make techreport reviews better. Where is the problem in that? It would be easy and cheap to do a real noise test, all you need is a passive cooled cpu, that’s all. In this review we can learn about the Intel stock cooler noise (~40 dB), and about 1200W PSU noise. I like to know it and it’s a good thing to know that there are more in a card noise than just the cooler, but, wouldn’t be great to know how noisy the coolers really are?

            I repeat, It’s a good review. But it could be great.

            • mattthemuppet
            • 12 years ago

            I don’t really see how the noise measurements can be realistically improved – as has already been noted there are large no. of variables at work, so the value of these data is really as Damage stated, as a real world /[

            • rechicero
            • 12 years ago

            But the point is that noise and everything else are different. If you use a stock cooler, because of the character of the noise, you’ll have most reference coolers within that 5%, even if you have one twice as noisier as another.

            This real world test makes as sense as testing CPUs at high-res and quality for games (real world). It makes sense? Yes. Does it offer real info about the CPUs? No. Does anybody test CPUs that way? Most ppl don’t and techreport, doesn’t.

            If you want to improve the noise measurement, do the same. Isolate. Build a rig that make little noise at load, install the card, take it to load, measure and it’s done. That’s all. About 10? minutes a test. It’s easy, it’s not time consuming and it’s more realistic. Of course, at the end of the day you must fudge the data. But you won’t have every single card and all x2 SLI or Crossfire sets within a 5% range, you’ll be able to tell the difference. Do you really think every single reference card out there makes the same noise (within 5%)? And the same if you use 2 of them? If the answer is “No”, then you agree with me.

            • green
            • 12 years ago

            overall i think you missed my point
            that’s a failure on my part on getting it across
            but i’ll happily clear it up now
            you are looking for bank/buck/noise as many people do
            this review is targeted at multi-gpu performance
            refer to “roundups” of cards instead

            personally i go for silentpcreview forums
            they usually have a thread going on best bang/noise card
            from which you can do a price lookup for bang/buck/noise

            in regards to what you wrote:

            cards using the reference cooler target neither performance nor low noise which is why there are different solution across a single gpu family (ie. passive, water, 3rd party).

            looking back, the main reason why noise tests are included these days are so that people know that if they buy the card it won’t sound like a vacuum cleaner (see FX5800). so no i don’t dismiss noise testing

            cpu overclocking is a difficult comparison. variations across different batches will affect headroom, lifespan, even idle temperature. but i know that if i get a q9300 that it comes with a stock cooler, and will run at a specific speed at a relative noise level, at a relative price regardless of which shop i go to as the chip it comes from one source.

            with a 9600gt i can get a stock speed, slightly overclocked, heavily overclocked, slightly under-clocked, or fairly under-clocked card with stock, passive, exotic, water, 3rd party, oversized, 2 part, or external cooler, and with standard, half, or double ram at varying speeds, price levels and performance, from a variety of sellers.

            i’m still trying to figure out whether or not to go with evga/xfx and their programs as i can’t remember the last time i had a video card die. or maybe waiting to see if i can get the passive 8800gt from sparkle. with the noise results from non-roundup reviews they don’t tell me X is louder than Y. they tells me X will likely be louder than Y no matter which price/bang card i choose.

            feedback is welcome.
            my issue is how much longer articles like this will take to do.
            i’m starting to find articles are becoming a little sparse here.

            • rechicero
            • 12 years ago

            You are right. You can’t test every single card out there but… at the end of the day, the vast majority of cards out there are reference models. And you’re right again with silentpcreviews… But the thing is I like techreport and their reviews are great. Why don’t offer some feedback to make them even better? That’s my point.

            And of course there’ll be other models, but I don’t ask Scott to review every single piece of hardware out there, I’m just suggesting a methodology to make better the noise tests they already do, with the same cards they are testing. Just 10 minutes more per card, that’s all.

            Anyway, it was just my 2 cents. If Scott thinks the idea have some merit, great. If not… his reviews are going to be great from almost every angle.

    • michael_d
    • 12 years ago

    Did you guys notice that in Crysis benchmarks increase in resolution and AA had very trivial impact on framerate in ATI Radeon cards?

    • Entroper
    • 12 years ago

    My first graphics card (not counting the S3 ViRGE) was a TNT2 Ultra. Faster than a lot of the first offerings that came before it.

    We now have graphics cards (in 4-way SLI) that are one thousand times faster (153 Gtexels/s vs. 166 Mtexels/s). Cripes!

    • ssidbroadcast
    • 12 years ago

    I just skimmed the review but was their any mention of nVidia’s wanky multi-monitor NON support in SLi? Is it still present? I remember being really impressed with ATi’s CF + Multi Monitor support features.

    • herothezero
    • 12 years ago

    Another set of nVidia benchmarks and I’m still glad I bought my 8800GTX.

    • slaimus
    • 12 years ago

    To prove your hunch about the Crysis memory limitation, how about testing some slower 1GB cards, the 1GB version of the 9600GT, 8800GT, 3870, or even a 2900XT?

      • Mikael33
      • 12 years ago

      You dont have to prove things that are obvious.

    • Valhalla926
    • 12 years ago

    Hmm, where’s the benchmark for the 8800GTS 512 in Crysis? Just for relative performance to the 9800GTX. Probably not to hot anyway, I’ll just knock a frame from the 9800GTX’s score.

    Speaking of Crysis, how badly did they write the code? There must be something horribly wrong with the source code for it to remain a slideshow, even with all this firepower.

      • Mikael33
      • 12 years ago

      Not all workloads easily scale via multi gpus

        • Valhalla926
        • 12 years ago

        I know, but it’s still discouraging.

    • Plazmodeus
    • 12 years ago

    Hey Wasson

    This is great review, but there’s a minute point that I’d be curious to have appended to the conclusion. The G92 is obviously the best GPU value these days, but are the 9x versions of it generally the best price/performance iteration of it? I’d be curious to hear whether you think that the 9800 or 9600 cards are a better bang/buck than the G92 based 8800GTS? A quick look at NCIX (I’m in Canada, so these are CDN prices) shows 9600GT at $170, 8800GT at ~$220, 8800GTS at ~$260, and 9800GTX at $340. To my mind, while that looks good for 9600GT, and the 8800’s, that seems to make 9800 GTX, which is almost $100 more for a paltry few more FPS, a sad deal. One must remember that the 9x cards have no new features over the 88 cards Whats your opinion on that?

      • Damage
      • 12 years ago

      Yep, 9600 GT and 8800 GT are better values than the GTX. Between the two, I’m torn on which is best. With the 9600 GT, you’re betting that shader power won’t matter much over the life of the card. That’s a little scary, since all of those cheap 3850 cards out there pack a lotta shader power. 8800 GT may be worth the extra.

    • bogbox
    • 12 years ago

    the 9800 gtx is the r[

    • l33t-g4m3r
    • 12 years ago

    yup, crazy-land is a fairly accurate description of where you would have to be, if you bought any of this stuff.

      • Steba
      • 12 years ago

      thats not true… the 9800 gtx is a good card for people who are still runing their x700’s or worse etc. if you already have an 8800 series there isnt any point to get the 9800 but for people who missed that boat, this is the best place to start.

        • l33t-g4m3r
        • 12 years ago

        I’m talking about Triple SLI.
        There are too many negatives to even think about considering it.
        Heck, I think regular SLI is pushing it, but at least regular SLI has a place for getting maximum res+AA+AF, and single sli cards can save space and a slot.
        On the other hand, Triple SLI is completely useless.
        You’d have to have a ridiculous setup to run 3 cards: 1000w psu, triple sli compatible mainboard, 64-bit vista(otherwise you’ll lose memory), case big enough to fit it in, decent cooling, and don’t forget the heat and noise.
        If regular SLI can’t help you, then you should just forget about it.
        Then again, this crap isn’t marketed for people that have brains…

    • leor
    • 12 years ago

    that’s a pretty awesome review. the product is of course about as underwhelming as it possibly could have been, but the review was great.

    If they’re going to call it the 9800GTX the least they could have done is include 1gb of RAM, SOMETHING to differentiate it from the 8800GTS 512.

    well thanks nvidia for the big yawn . . .

    • Thresher
    • 12 years ago

    It still seems to me that the HD3870X2 is the best single card solution and at the price, it’s really hard beat.

    My gaming rig has an 8800GTS (640) and my Mac has the X1900XT. While the 8800GTS is faster, the image quality on the ATI card seems to be better.

      • BabelHuber
      • 12 years ago

      I’m considering two 3850 512MB for my aging 975x. In have already seen them for ~€129.- per piece (including VAT). Sounds like a good deal for me…

      • l33t-g4m3r
      • 12 years ago

      If I was in the market for a dual-card setup, that’s probably what I’d get, but since my x1900 still can do the job, I’ll hold off until something better comes out.

      • Mikael33
      • 12 years ago

      9800gx2 much?? It’s faster, delivers more performance per watt(it draws less power) and it’s performance doesn’t vary as widely per application because of the R6XX’s unique architecture(superscaler VLIW and and it’s lower texture unit count not to mention x-fire fails more often to scale than SLI.
      In short I think the RXX series is a failure. It has potential but nvidia’s G8X and G9X seem far more efficient. It’s telling that AMD needed a dual gpu card to compete with nvidia’s single chip flagship 😉

        • Thresher
        • 12 years ago

        Price is the issue. For the money the 3870×2 is the better buy.

          • Mikael33
          • 12 years ago

          I disagree,crossfire breaks far to often and 4 way SLI is superior to 4 way xfire

            • crazybus
            • 12 years ago

            4 way SLI/Crossfire is ridiculous any way you look at it.

            • Mikael33
            • 12 years ago

            The point is that the 9800gx2 has a better upgrade path if you happen upon another 550-600+ for a second card.
            Currently 3 and 4 way crossfire isnt’ enabled for an opengl games and it takes 4 way crossfire to equal the performance of a single 9800gx2- a $559~ card while a 3870×2 in crossfire will run you $800~ (using neweg).
            Seems like the GX2 is the better deal 😉
            One thing crossfire does have going for it, however, is multi displays, amd gets major kudos for that.

          • crazybus
          • 12 years ago

          For that matter if you’ve gone the X38+ route then two 3850 512MB cards in Crossfire is not that bad either. Heck there’s finally a legitimate use for the 975X’s two PCIe x8 slots.

      • Mikael33
      • 12 years ago

      Oh and I forgot- image quality?? Nvidia has better anistropic filtering than the OP’s x1900 and better fsaa. Are you not enabling the high quality preset, which disables the “optimizations” which hurt image quality?

    • kuraegomon
    • 12 years ago

    When can we see some results with SLI and the 790i chipset? It’s got a couple of features designed precisely to address SLI scaling issues. I’m not knocking the 780i, just saying that the _real_ SLI performance story should be on the 790i. That direct GPU-to-GPU data transfer should make a real difference at the highest resolutions. And if it doesn’t, I’d like to know …

    Are the boards just not stable enough yet?

      • Damage
      • 12 years ago

      We may move to the 790i soon, but I wouldn’t get too excited about it. You forget: the 780i SLI also has direct GPU-to-GPU transfers:

      https://techreport.com/articles.x/13790

    • Dposcorp
    • 12 years ago

    Great review TR. Not only fun to read, but now with very nice hardware porn pics 🙂

    • Krogoth
    • 12 years ago

    Excellent review.

    It still points out the sheer absurdity of SLI/CF solutions. They only make somewhat sense at uber-high resolutions that require 30″ LCD monitors. (or if you managed to find a quality 22″ CRT). The more important thing is that there is hardly anything on the PC worth playing that truly needs their power. I am not going to into to the difficulties to get stable and efficient drivers under these solutions.

    The law of diminishing returns is so darn painful. Why spend anywhere from $499 to over a grand to get a 20-30% gain over single-card solutions? It makes the high-end PS3 and 360 Elite look mighty attractive.

    I am holding onto my aging X1900 XT until there is something that utterly blows it out of the water without requiring a multi-card solution.

    • Shining Arcanine
    • 12 years ago

    My passively cooled XFX GeForce 7950 GT still runs games well for me. Someone call me when Nvidia releases something that is truly groundbreaking, such as a passively cooled card that matches the performance of this generation’s graphics cards, supports native double-precision arithmetic, sips power at idle like my Dell laptop’s GeForce 7900 GS does, and has good driver support on all platforms like things were before Vista. When they do all of this, I will be interested.

    • Track
    • 12 years ago

    How are we supposed to trust this review if the CPU being used is only a Dual-Core? You can’t test 4 GPU cores released in 2008 with 2 CPU cores released in 2006..

      • Swampangel
      • 12 years ago

      Sure you can trust it.

      If you see a bunch of cards from budget to extreme high-end all clustered around a certain maximum framerate, then you’re limited by something other than the graphics card.

      That didn’t happen in any of the tests here, even at “lower” resolutions. All of these games are bottlenecked by 3D performance, not any other element of the test system.
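
      To put that sanity check in concrete terms (a rough sketch with made-up framerate numbers and a hypothetical looks_platform_limited helper, not data from this review), a few lines of Python can flag the kind of clustering that points to a platform limit rather than a GPU limit:

      # Sketch only: hypothetical framerates, not results from this article.
      # If cards of very different classes all bunch up near one framerate
      # ceiling, something other than the GPU is probably the limiter.

      def looks_platform_limited(fps_by_card, spread_threshold=0.10):
          """True if the fastest and slowest cards land within ~10% of each other."""
          fastest = max(fps_by_card.values())
          slowest = min(fps_by_card.values())
          return (fastest - slowest) / fastest <= spread_threshold

      clustered = {"9600 GT": 118.0, "9800 GTX": 121.0, "9800 GX2": 123.0}
      spread    = {"9600 GT": 34.0, "9800 GTX": 47.0, "9800 GX2": 71.0}

      print(looks_platform_limited(clustered))  # True  -> likely CPU/platform-bound
      print(looks_platform_limited(spread))     # False -> the GPU is doing the limiting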

        • Mikael33
        • 12 years ago

        What’s the point of testing with a quad-core CPU when two of the cores will be idle during gaming?
        Even Crysis, which was supposed to scale to four cores, doesn’t show jack going from a dual-core to a quad-core.

    • MattMojo
    • 12 years ago

    So, an underpowered card with less memory that runs just a tad faster than its previous generation is an advancement? I’ll keep Nvidia’s last real accomplishment, my two liquid-cooled 8800 Ultras from BFG, thank you.

    Wake me when they actually make something worth my attention and money.

    Mojo

      • eitje
      • 12 years ago

      okay, sleeping beauty…

      • A_Pickle
      • 12 years ago

      Seriously. I’m inclined to agree. This isn’t anywhere near the performance jump from, say, the 7900 GTX to the 8800 GTX. That was nearly double.

      This? This is absurd. The 9000 series is, in many cases, a smidgen SLOWER than comparable cards…

        • Mikael33
        • 12 years ago

        they have far higher performance per watt

          • A_Pickle
          • 12 years ago

          When the 8800 debuted over the 7900, we saw performance double. Now we’re seeing the 9800 ACTUALLY LOSE to the 8800 in some benchmarks. That’s not something to be proud of. Performance per watt should come via fine-tuning a product over time… kind of like how old GPU architectures didn’t change names when they underwent revisions or die shrinks.

          But I guess that’s okay. I mean, it’ll be fine as long as the GPU companies release more power-efficient GPUs that perform exactly like the previous generation, right?

            • Mikael33
            • 12 years ago

            AMD did the same thing with the 3000 series; it’s just a tweak of the 2900 XT with half the RAM and memory bus width.
            Nvidia had no reason to make a part that was much faster; there was no competition from AMD.
            By the way, their next-gen part (as in, this isn’t next-gen… as in, no crap it isn’t much faster than the 8800 series) is apparently coming as early as summer.
            PS: A lot of people, myself included, knew the 9800 GTX was just going to be a slightly faster 512MB GTS, so it wasn’t really a surprise.

            • SecretMaster
            • 12 years ago

            Yes, but comparatively speaking, the leap from the 2000 series to the 3000 series was a significant one. There wasn’t much in terms of performance gains, but the vastly reduced power draw and noise levels were definitely noteworthy. I don’t think you can really compare those two generations of ATI cards to this rehash that is the 9800 GTX.

            • crazybus
            • 12 years ago

            Huh? They’re actually quite similar, the major difference being that the R600-to-RV670 release gap was shorter than the G80-to-G92 one.

            Both are built on a smaller process that uses significantly less power than the previous one and should give far greater yields per wafer than the previous high end.

            Both offer similar performance to the previous top end part.

            Both integrate hardware video decoding support lacking in the previous version.

            Both use a narrower memory bus than the previous part, giving less bandwidth but potentially lower production costs.

            The RV670 has the advantage of DX10.1 support, but that is largely offset by the G92’s better performance per watt.

            Sure, the performance delta from G80 to G92 is disappointing, but when you consider the phenomenal performance-per-dollar ratio of the current midrange, it’s hard to say this is a bad time to be in the market for a video card.

            • MattMojo
            • 12 years ago

            The problem here is that they used the same chip as the previous-gen card (the GTS) and then rebranded it as a whole new product line. We thrashed AMD for their poor performance when they touted their products as next-gen, when all they did was almost bring them up to par with Nvidia’s offerings.

            I started my thread because Nvidia is sitting on their backside and basking in the glow of their last accomplishment (the 8800). My point was that I usually buy the next best thing as soon as it’s released, but not this time, because this is not the next best thing.

            Mojo

            • SecretMaster
            • 12 years ago

            All I’m saying is that there was actually a relatively justifiable reason for upgrading from a 2000 series to a 3000 series. I can’t say that about the 9800GTX.

            I’m not disputing the physical changes made to the cards themselves. Regardless of whether the cards are revisions of previous architectures, the 3000 series had noteworthy gains, whereas the 9800 doesn’t. Even Damage was surprised by the revisions in the 3870/50.

            • DrDillyBar
            • 12 years ago

            Or a 1×00 to a 3×00

            • MattMojo
            • 12 years ago

            I agree completely. Although more work is needed by AMD before we can count them in the lineup again, I think.

            Mojo

    • deruberhanyok
    • 12 years ago

    The picture at the top of page 2 is ludicrous in so many ways…

    And the memory footprint on 32-bit is equally hilarious. And the Crysis numbers, oh god, that’s just sad.

    Great writeup guys, thanks!

    • Jigar
    • 12 years ago

    It’s good to see my two 8800 GTs still do the job…

      • MadManOriginal
      • 12 years ago

      ‘Still’? That makes it sound like you’ve had them for a long time. How long have you had them, 5 months or so maybe?

    • d0g_p00p
    • 12 years ago

    The more I read about benchmarking Crysis with these insane GPU and CPU combos and still seeing this horrible performance, the more I am inclined to think that it’s awful, sloppy code.

    I mean, come on. Crysis looks about as good as many current titles graphically, and most of those titles run perfectly well on a single midrange card (8800 GT, 9600 GT) at high resolutions with good framerates (Bioshock, CoD4, etc.). That Crysis cannot even break 40 FPS with quad SLI and a $1K+ video setup is just pathetic.

    No wonder Intel could not even run the FC2 demo at a decent frame rate at GDC with the best hardware money can buy. No wonder there is no 360 port of this game.

      • lethal
      • 12 years ago

      How many times… Far Cry 2 is NOT developed by Crytek; the license is owned by UBISOFT, and the current developer is Ubisoft Montreal. And there will be X360 and PS3 versions. Then again, these are the same people that made Assassin’s Creed, so don’t expect anything remotely scalable on the PC.

    • yogibbear
    • 12 years ago

    Not happy nvidia/amd-ati.

    I want 20 x SLI.

    Damn YOU!

    • nanite
    • 12 years ago

    Thanks for including Quake Wars in your benchmarks.
    I wonder if you enabled multi-core CPU support in your ETQW benchmarks?
    cvar command:
    seta r_useThreadedRenderer “2”

    The ETQW community is not the biggest, but it’s going strong:
    http://community.enemyterritory.com/forums/
    http://community.enemyterritory.com/forums/showthread.php?t=24609

    • Jive
    • 12 years ago

    I would like to know what your energy bill is like when you run these tests? 😀

      • kvndoom
      • 12 years ago

      He has a secret tunnel to his neighbors’ basement.

    • Joerdgs
    • 12 years ago

    Conclusion: 9600GT SLI is the way to go!

      • marvelous
      • 12 years ago

      No, a single powerful card is the way to go with Crysis. It just doesn’t scale properly.

        • Joerdgs
        • 12 years ago

        I’m looking more at the overall result than ‘just Crysis’. The 9600GT scales perfectly almost everywhere. And they’re cheap too.

        • Krogoth
        • 12 years ago

        Ahem, Crysis runs fine. It’s called turning down some settings, not enabling AA/AF, and not running at an ultra-high resolution.

        OMFG, what a revolutionary idea!

    • Flying Fox
    • 12 years ago

    Man Crysis still kind of sucks with all those GPUs. What kind of spaghetti code did those guys write?

      • UberGerbil
      • 12 years ago

      It must be spaghetti shader code, which is kind of an achievement in itself.

      But it’s probably just them trying to pack 10 pounds of crap in a 5 pound bag.

    • Damage
    • 12 years ago

    Please, uh, hose it:

    http://slashdot.org/firehose.pl?op=view&id=619144

    Thanks. I think. Oh, also, please go here…

    https://techreport.com/articles.x/14524

    …and click the ‘digg’ button at the top of the page. Thanks.

      • DrDillyBar
      • 12 years ago

      I can diggit, but I ain’t no Hoser 😉

      • cygnus1
      • 12 years ago

      You know I DUGG that!

    • UberGerbil
    • 12 years ago

    Great, now I have this in my head
    http://www.youtube.com/watch?v=3BiuttQl0xM&feature=related
    (At least it’s not this: http://www.youtube.com/watch?v=JZdUfqZ0grA&feature=related )

    • Usacomp2k3
    • 12 years ago

    Make that 5.

    • willyolio
    • 12 years ago

    thank you for taking the time and showing us some of crazyland. what’s really funny to me, though? in 20 years, we’ll probably have this kind of power in an integrated chipset (or even part of a CPU/GPU fusion system-on-a-chip).

      • UberGerbil
      • 12 years ago

      20 years? It’ll be way less than that. Your integrated chipset today can run rings around my top-of-the-line Voodoo 3 from 1998.

        • willyolio
        • 12 years ago

        good point. it took 5 years for the 780G to outperform a top-of-the-line card. it’ll probably only take 10 before the next integrated chipset can take on today’s quad-SLIs.
