There’s more to it than that, of course. These are highly sophisticated graphics products we’re talking about here. There’s a new cooler involved. Oh, and a new silicon revision, for you propellerheads who must know these things. And most formidable of all may be the new price tag. But I’m getting ahead of myself.
Perhaps the most salient point is that Nvidia has found a way to squeeze even more performance out of its G80 GPU, and in keeping with a time-honored tradition, the company has introduced a new top-end graphics card just as its rival, the former ATI now owned by AMD, prepares to launch its own DirectX 10-capable GPU lineup. Wonder what the new Radeon will have to contend with when it arrives? Let’s have a look.
It’s G80, Jim, but not as we know it
For us, the GeForce 8800 is familiar territory by now. We’ve reviewed it on its own, paired it up by twos in SLI for killer performance, and rounded up a host of examples to see how they compared. By and large, the GeForce 8800 Ultra is the same basic product as the GeForce 8800 GTX that’s ruled the top end of the video card market since last November. It has the same 128 stream processors, the same 384-bit path to 768MB of GDDR3 memory, and rides on the same 10.5″ board as the GTX. There are still two dual-link DVI ports, two SLI connectors up top, and two six-pin PCIe auxiliary power connectors onboard. The feature set is essentially identical, and no, none of the new HD video processing mojo introduced with the GeForce 8600 series has made its way into the Ultra.
Yet the Ultra is distinct for several reasons. First and foremost, Nvidia says the Ultra packs a new revision of G80 silicon that allows for higher clock speeds in a similar form factor and power envelope. In fact, Nvidia says the 8800 Ultra has slightly lower peak power consumption than the GTX, despite having a core clock of 612MHz, a stream processor clock of 1.5GHz, and a memory clock of 1080MHz (effectively 2160MHz since it uses GDDR3 memory). That’s up from a 575MHz core, 1.35GHz SPs, and 900MHz memory in the 8800 GTX.
Riding shotgun on the Ultra is a brand-new cooler with a wicked hump-backed blower arrangement and a shroud that extends the full length of the board. Nvidia claims the raised fan allows the intake of more cool surrounding air. Whether it does or not, it's happily not much louder than the excellent cooler on the GTX. Unfortunately, though, the longer shroud will almost certainly block access to SATA ports on many of today's port-laden enthusiast-class motherboards.
If you dig the looks of the Vader-esque cooling shroud and want the bragging rights that come with the Ultra’s world-beating performance, you’ll have to cough up something north of eight hundred bucks in order to get it. Nvidia expects Ultra prices to start at roughly $829, though they may go up from there depending on how much “factory overclocking” is involved. That’s hundreds of dollars more than current GTX prices, and it’s asking quite a lot for a graphics card, to say the least. I suppose one could argue it offers more for your money than a high-end quad-core processor that costs 1200 bucks, but who can measure the depths of insanity?
The Ultra's tweaked clock speeds do deliver considerably more computing power than the GTX, at least in theory. Memory bandwidth is up from 86.4GB/s to a stunning 103.7GB/s. Peak shader power, if you count all programmable shader ops, is up from 518.4 to 576 GFLOPS, or from 345.6 to 384 GFLOPS if you don't count the MUL instruction that the G80's SPs can co-issue in certain circumstances. The trouble is that "overclocked in the box" versions of the 8800 GTX are available now with very similar specifications. Take the king of all X's, the XFX GeForce 8800 GTX XXX Edition. This card has a 630MHz core clock, 1.46GHz shader clock, and 1GHz memory. That's very close to the Ultra's specs, yet it's selling right now for about $630 at online vendors.
So the Ultra is, and this is very technical, what we in the business like to call a lousy value. Flagship products like these rarely offer stellar value propositions, but those revved-up GTX cards are just too close for comfort.
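Those peak figures fall straight out of the clock speeds, by the way. Here's a quick back-of-the-envelope sketch in Python, using the specs quoted above, to show where the numbers come from:

```python
# Peak shader throughput: 128 SPs, each capable of a MADD (2 flops)
# plus, in some circumstances, a co-issued MUL (1 more flop) per clock.
def peak_gflops(sp_count, sp_clock_ghz, count_mul=True):
    flops_per_clock = 3 if count_mul else 2
    return sp_count * sp_clock_ghz * flops_per_clock

# Peak memory bandwidth: GDDR3 transfers data on both clock edges,
# so the effective transfer rate is twice the memory clock.
def mem_bandwidth_gbps(mem_clock_mhz, bus_width_bits):
    return 2 * mem_clock_mhz * (bus_width_bits / 8) / 1000

print(peak_gflops(128, 1.5))                    # 8800 Ultra: 576.0
print(peak_gflops(128, 1.5, count_mul=False))   # without the MUL: 384.0
print(round(mem_bandwidth_gbps(1080, 384), 1))  # 8800 Ultra: 103.7 GB/s
print(round(mem_bandwidth_gbps(900, 384), 1))   # 8800 GTX: 86.4 GB/s
```

The same arithmetic with the GTX's 1.35GHz shader clock yields the 518.4 and 345.6 GFLOPS figures above.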
The saving grace for this product, if there is one, may come in the form of hot-clocked variants of the Ultra itself. Nvidia says the Ultra simply establishes a new product baseline, from which board vendors may improvise upward. In fact, XFX told us it has plans for three versions of the 8800 Ultra, two of which will run at higher clock speeds. Unfortunately, none of the board vendors we asked could yet tell us likely clock speeds or prices, so we'll have to watch and see what they deliver.
We do have a little bit of time yet on that front, by the way, because 8800 Ultra cards aren’t expected to hit online store shelves until May 15 or so. I expect some board vendors haven’t yet determined what clock speeds they will offer.
In order to size up the Ultra, we’ve compared it against a trio of graphics solutions in roughly the same price neighborhood. There’s the GeForce 8800 GTX, of course, and we’ve included one at stock clock speeds. For about the same price as an Ultra, you could also buy a pair of GeForce 8800 GTS 640MB graphics cards and run them in SLI, so we’ve included them. Finally, we have a Radeon X1950 XTX CrossFire pair, which is presently AMD’s fastest graphics solution.
Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.
Our test systems were configured like so:
| | nForce 680i SLI system | 975X CrossFire system |
|---|---|---|
| Processor | Core 2 Extreme X6800 2.93GHz | Core 2 Extreme X6800 2.93GHz |
| System bus | 1066MHz (266MHz quad-pumped) | 1066MHz (266MHz quad-pumped) |
| Motherboard | XFX nForce 680i SLI | Asus P5W DH Deluxe |
| North bridge | nForce 680i SLI SPP | 975X MCH |
| South bridge | nForce 680i SLI MCP | ICH7R |
| Chipset drivers | ForceWare 15.00 | INF update 126.96.36.1990, Matrix Storage Manager 6.21 |
| Memory size | 4GB (4 DIMMs) | 4GB (4 DIMMs) |
| Memory type | 2 x Corsair TWIN2X2048-8500C5D DDR2 SDRAM at 800MHz | 2 x Corsair TWIN2X2048-8500C5D DDR2 SDRAM at 800MHz |
| CAS latency (CL) | 4 | 4 |
| RAS to CAS delay (tRCD) | 4 | 4 |
| RAS precharge (tRP) | 4 | 4 |
| Cycle time (tRAS) | 18 | 18 |
| Hard drive | Maxtor DiamondMax 10 250GB SATA 150 | Maxtor DiamondMax 10 250GB SATA 150 |
| Audio | Integrated nForce 680i SLI/ALC850 with Microsoft drivers | Integrated audio with Microsoft drivers |
| Graphics | GeForce 8800 Ultra 768MB PCIe with ForceWare 158.18 drivers; GeForce 8800 GTX 768MB PCIe with ForceWare 158.18 drivers; Dual BFG GeForce 8800 GTS 640MB PCIe in SLI with ForceWare 158.18 drivers | Radeon X1950 XTX 512MB PCIe + Radeon X1950 CrossFire with Catalyst 7.4 drivers |
| OS | Windows Vista Ultimate x86 Edition | Windows Vista Ultimate x86 Edition |
Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support easily outclass what you'd get with no-name DIMMs.
Our test systems were powered by OCZ GameXStream 700W power supply units. Thanks to OCZ for providing these units for our use in testing.
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults.
The test systems’ Windows desktops were set at 1600×1200 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.
We used the following versions of our test applications:
- Rainbow Six: Vegas 1.04
- Battlefield 2142 1.2
- Supreme Commander 3223
- The Elder Scrolls IV: Oblivion 1.2
- S.T.A.L.K.E.R.: Shadow of Chernobyl 1.0001
- Half-Life 2: Episode One with trdem2 demo
- FutureMark 3DMark06 Build 1.1.0
- FRAPS 2.8.2
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
We've talked a little bit about shader power and memory bandwidth, but let's pause here to look at pixel and texel throughput alongside memory bandwidth. Shader power matters more and more in newer games, but these old-school metrics still dictate part of the performance picture. As expected, the Ultra's at the top of the heap in nearly every measure.
| | Core clock (MHz) | Pixels/clock | Peak pixel fill rate (Mpixels/s) | Textures/clock | Peak texel fill rate (Mtexels/s) | Effective memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s) |
|---|---|---|---|---|---|---|---|---|
| GeForce 7950 GT | 550 | 16 | 8800 | 24 | 13200 | 1400 | 256 | 44.8 |
| Radeon X1900 XT | 625 | 16 | 10000 | 16 | 10000 | 1450 | 256 | 46.4 |
| GeForce 7900 GTX | 650 | 16 | 10400 | 24 | 15600 | 1600 | 256 | 51.2 |
| Radeon X1950 XTX | 650 | 16 | 10400 | 16 | 10400 | 2000 | 256 | 64.0 |
| GeForce 8800 GTS | 500 | 20 | 10000 | 24 | 12000 | 1600 | 320 | 64.0 |
| GeForce 8800 GTX | 575 | 24 | 13800 | 32 | 18400 | 1800 | 384 | 86.4 |
| GeForce 8800 Ultra | 612 | 24 | 14688 | 32 | 19584 | 2160 | 384 | 103.7 |
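The fill rate and bandwidth figures in the table are simple products of the other columns. A minimal Python sketch of the arithmetic, using the Ultra's numbers:

```python
# Pixel/texel fill rate is units-per-clock times core clock (in MHz,
# which conveniently yields Mpixels/s or Mtexels/s directly).
def fill_rate_mps(core_clock_mhz, units_per_clock):
    return core_clock_mhz * units_per_clock

# Bandwidth is effective memory clock times bus width in bytes.
def bandwidth_gbps(effective_mem_clock_mhz, bus_width_bits):
    return effective_mem_clock_mhz * (bus_width_bits / 8) / 1000

print(fill_rate_mps(612, 24))               # Ultra pixel fill: 14688
print(fill_rate_mps(612, 32))               # Ultra texel fill: 19584
print(round(bandwidth_gbps(2160, 384), 1))  # Ultra bandwidth: 103.7
```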
Yep, the 8800 Ultra has about twice the memory bandwidth of the GeForce 7900 GTX, believe it or not, and it leads in the other categories, including pixel fill rate, texturing capacity, and “impresses the chicks.” The only way any existing solution could keep up would be in a multi-GPU configuration. We can measure several of these capabilities via synthetic benchmarks, to see if the Ultra lives up to its potential.
The Ultra comes very near to its theoretical peak for multitexturing, as do the other solutions we’ve pitted against it.
S.T.A.L.K.E.R.: Shadow of Chernobyl
We tested S.T.A.L.K.E.R. by manually playing through a specific point in the game five times while recording frame rates using the FRAPS utility. Each gameplay sequence lasted 60 seconds. This method has the advantage of simulating real gameplay quite closely, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we've included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we've reported the median of the five low frame rates we encountered.
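The reduction described above, averaging the per-run averages and taking the median of the per-run lows, is simple to express. Here's a minimal Python sketch; the frame rates are hypothetical, for illustration only:

```python
import statistics

# Five 60-second FRAPS runs: each yields an average frame rate and a
# low frame rate. These values are made up, purely for illustration.
runs = [
    {"avg": 58.2, "low": 41},
    {"avg": 61.0, "low": 44},
    {"avg": 57.5, "low": 29},  # an outlier low that the median discounts
    {"avg": 59.9, "low": 43},
    {"avg": 60.3, "low": 45},
]

avg_fps = sum(r["avg"] for r in runs) / len(runs)
median_low = statistics.median(r["low"] for r in runs)

print(round(avg_fps, 1))  # 59.4
print(median_low)         # 43
```

Note how the one anomalous low of 29 fps barely affects the reported median low, which is the point of using the median rather than the mean.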
For this test, we set the game to its "maximum" quality settings at 2560×1600 resolution. Unfortunately, the game crashed on both GeForce and Radeon cards when we set it to use dynamic lighting, so we had to stick with its static lighting option. Nevertheless, this is a good-looking game with some nice shader effects and lots of vegetation everywhere.
The Ultra flies through S.T.A.L.K.E.R., as do the GTX and the pair of 8800 GTS cards in SLI. You’d be hard pressed to tell the difference between any of these three solutions by the seat of your pants. Only the poor Radeon X1950 XTX CrossFire setup struggles here, showing its age.
Supreme Commander

Here's another new game, and a very popular request for us to try. Like many RTS and isometric-view RPGs, though, Supreme Commander isn't exactly easy to test well, especially with a utility like FRAPS that logs frame rates as you play. Frame rates in this game seem to hit steady plateaus at different zoom levels, complicating the task of getting meaningful, repeatable, and comparable results. For this reason, we used the game's built-in "/map perftest" option to test performance, which plays back a pre-recorded game.
Another note: the frame rates you see below look pretty low, but for this type of game, they’re really not bad. We’ve observed frame rates in the game similar to the numbers from the performance test, but they’re still largely acceptable, even at higher resolutions. This is simply different from an action game, where always-fluid motion is required for smooth gameplay.
The Ultra snags the top spot here, aided by the fact that the 8800 GTS in SLI doesn’t appear to scale to two cards well in this game. The median low frame rate numbers, meanwhile, are kind of all over the map, which just shows how variable they are in Supreme Commander, for whatever reason.
Battlefield 2142

We tested this one with FRAPS, much like we did S.T.A.L.K.E.R. In order to get this game to present any kind of challenge to these cards, we had to turn up 16X anisotropic filtering, 4X antialiasing, and transparency supersampling (or the equivalent on the Radeons, "quality" adaptive AA). I'd have run the game at 2560×1600 resolution if it supported that display mode.
BF2142 looks gorgeous and runs well on any of these configs, but the 8800 Ultra looks best, plays best, and turns in the fastest average and low frame-rate numbers.
Half-Life 2: Episode One
This one combines high dynamic range lighting with 4X antialiasing and still has fluid frame rates at very high resolutions.
The Ultra is juuust barely edged out by a pair of Radeon X1950 XTXs in CrossFire here, but it’s extremely close. The GTX again shadows the Ultra, running just behind it.
The Elder Scrolls IV: Oblivion

We turned up all of Oblivion's graphical settings to their highest quality levels for this test. The screen resolution was set to 1920×1200 resolution, with HDR lighting enabled. 16X anisotropic filtering was forced on via the cards' driver control panels. We tried enabling 4X antialiasing, as well, but got inconsistent results from Nvidia's current 158.18 drivers for Vista x86. Antialiasing only worked intermittently, and we haven't yet found a consistent work-around or fix. As a result, we've tested without AA.
We strolled around the outside of the Leyawiin city wall, as shown in the picture below, and recorded frame rates with FRAPS. This area has loads of vegetation, some reflective water, and some long view distances.
Grabbing a pair of GTSes will buy you more performance in Oblivion than the Ultra.
Rainbow Six: Vegas
This game is notable because it’s the first game we’ve tested based on Unreal Engine 3. As with Oblivion, we tested with FRAPS. This time, I played through a 90-second portion of the “Dante’s” map in the game’s Terrorist Hunt mode, with all of the game’s quality options cranked. The game engine isn’t compatible with multisampled antialiasing, so we couldn’t enable AA.
This Xbox 360 port will tax any current video card at this resolution, but the Ultra once again comes out ahead of the pack.
3DMark06

The GTS SLI rig scales up to two cards nicely in 3DMark, allowing it to take top honors. The Ultra is all alone in second place, running ahead of the neck-and-neck GeForce 8800 GTX and Radeon X1950 XTX CrossFire.
Through the remainder of 3DMark’s synthetic tests, the Ultra proves again to be just a little faster than the GeForce 8800 GTX.
Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement.
The idle measurements were taken at the Windows desktop. The cards were tested under load running Oblivion at 1920×1200 resolution with 16X anisotropic filtering. We loaded up the game and ran it in the same area where we did our performance testing.
The GeForces were all measured on the same motherboard, but we had to use a different board in order to run the Radeon X1950 XTX in CrossFire, so keep that in mind.
The power consumption numbers we observed in our test scenario aren’t quite what we expected, given Nvidia’s claims about the Ultra’s peak power use being lower than the GTX’s. However, power use can vary from one scenario to the next, and it’s possible the Ultra’s peak power use is still lower, depending on how one tests it. We’ve found our test scene from Oblivion to be very power intensive, and it’s a good real-world test, for what it’s worth.
The other thing to note here is that, for all its speediness, the Ultra draws substantially less power than the dual-GPU solutions that offer similar performance.
Noise levels and cooling
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 14″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.
You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the Zalman CNPS9500 LED we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.
The Ultra carries on the GTX’s tradition of excellent acoustics. Amazingly, the fastest video card on the market is also one of the quietest.
Conclusions

What's there to say that hasn't been said? The GeForce 8800 Ultra's clock speeds are a little bit higher than the 8800 GTX's, and as a result, it performs somewhat better. That's more than sufficient to make this the new Fastest Single Video Card on the Planet. Perhaps the best thing one could say for the Ultra is that Nvidia didn't blunt the GTX's virtues (which include gorgeous image quality, fairly reasonable power draw numbers, and whisper-silent cooling) in order to get more performance.
I also prefer the Ultra to the option of running two GeForce 8800 GTS cards in SLI, for a variety of reasons. The 8800 GTS SLI config we tested was faster than the Ultra in some cases, but it was slower in others. Two cards take up more space, draw more power, and generate more heat, but that’s not the worst of it. SLI’s ability to work with the game of the moment has always been contingent on driver updates and user profiles, which is in itself a disadvantage, but SLI support has taken a serious hit in the transition to Windows Vista. We found that SLI didn’t scale well in either Half-Life 2: Episode One or Supreme Commander, and these aren’t minor game titles. I was also surprised to have to reboot in order to switch into SLI mode, since Nvidia fixed that issue in its Windows XP drivers long ago. Obviously, Nvidia has higher priorities right now on the Vista driver front, but that’s just the problem. SLI likely won’t get proper attention until Nvidia addresses its other deficits compared to AMD’s Catalyst drivers for Vista, including an incomplete control panel UI, weak overclocking tools, and some general functionality issues like the Oblivion AA problem we encountered.
The holder of the graphics performance crown is rarely available for $88.88 at Wal-Mart, but the Ultra's value proposition is more suspect than usual for a top-end part, not because it breaks new ground in graphics card pricing, which it does, but because there are GTX cards already available with strikingly similar clock speeds for about $200 less. That fact tarnishes the performance crown this card wears, in my view. I expect the Ultra to make more sense as a flagship product once we see (if we see) "overclocked in the box" versions offering some nice clock speed boosts above the stock specs. GeForce 8800 Ultra cards may never be killer values, but at least then they might justifiably command their price premiums.
We’ll be keeping an eye on this issue and hope to test some faster-clocked Ultras soon.
When we do, we may be testing them alongside the Ultra’s intended prey, cards based on AMD’s upcoming R600 GPU. Stay tuned.