Nvidia’s GeForce GTX 285 and 295 graphics cards

Scott Wasson, Former Editor-in-Chief

Yeah, I’ve been playing with the very latest video cards for weeks now, using the newest games and a screaming new test rig. Life’s rough, you know. But now I have just a few hours to write an entire review that’s destined to be packed with more GPU goodness than six regular ones. I’ve been mainlining caffeine ’til I can barely sit still to type, and each key-press seems to be registering ttthhrreeeeee tttiiimmmeesss. So let’s skip the pleasantries and get down to business. The primary subjects of our attention today are the GeForce GTX 285 and 295, two cards that are both based on the new 55nm version of Nvidia’s GT200 graphics processor, the 1.4 billion-transistor monster behind the GeForce GTX lineup.

The move from a 65nm fab process to a 55nm one promises additional goodness from the B-step version of the GT200, affectionately known by some as the GT200b. The goodness comes mainly in the form of reduced power consumption and heat production, but in turn, those things can lead to increased performance headroom. That is, Nvidia can turn up the clock speeds without resorting to quad-slot cooling or an external power supply. Good times, indeed.

Other than the die shrink, the GT200b is more or less unchanged from the GT200. That means it still has 240 gizzywhatchits and 512 hoozydoers or whatever, just like the old version. I dunno, man, go read my GeForce GTX 280 review if you want info on the GPU chiclets and stuff. It’s all in there. What has changed is clock speeds—in the case of the new single-GPU flagship, the GeForce GTX 285—and, thanks to better thermals, the ability to shoehorn two of these GPUs into a single card—that’s the GTX 295.

The GTX 285: GT200b undiluted
Let’s start our whirlwind tour of the zillion graphics cards in this little roundup with the GeForce GTX 285. This puppy more or less mirrors the GeForce GTX 280 that it succeeds, with a gig of RAM onboard and an all-around resemblance to its elder sibling, right down to the 10.5″ board length and the plastic cooler shroud.

Yep. Uh huh.

The 285 should outperform the 280, though, thanks to a bump in clock speeds in all three dimensions: the main GPU clock is up to 648MHz, the shaders are up to 1476MHz, and the memory is up to 1242MHz—or 2484 MT/s, since it’s of the GDDR3 variety and since doing the unit conversion for you makes us sound smart. We’ll talk about what the clock speed changes mean to theoretical performance capacity shortly.
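Since we've done the unit conversion for you, we may as well show our work. Here's a quick back-of-the-envelope sketch of the arithmetic (the function name is ours, not anything from a spec sheet):

```python
# GDDR3 is double data rate, so effective transfers happen at twice the
# memory clock; peak bandwidth is the transfer rate times the bus width
# in bytes. (Illustrative sketch only.)
def gddr3_bandwidth(mem_clock_mhz, bus_width_bits):
    """Return (effective MT/s, peak bandwidth in GB/s)."""
    mt_per_s = mem_clock_mhz * 2                    # two transfers per clock
    gb_per_s = mt_per_s * (bus_width_bits / 8) / 1000
    return mt_per_s, gb_per_s

# GTX 285 reference memory clock, 512-bit bus:
print(gddr3_bandwidth(1242, 512))  # 2484 MT/s and roughly 159 GB/s
```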

Before we go there, though, we should note the other positive effect of the move to a 55nm chip: this card has two six-pin PCIe power plugs, and that’s it. The GTX 280’s second connector requires an eight-pin plug, an annoying requirement that, for many, would involve either a plug adapter or a new PSU.

Cards like this one should become available today, and Nvidia tells us it expects street prices to be around $379. This particular GTX 285, as you can see, comes from Asus, and it’s clocked a little higher than stock with a 670MHz GPU core, 1550MHz shaders, and 1300MHz memory. The folks at Asus don’t bundle their card with too many frills, but they do include a DVI-to-HDMI adapter, and happily, they provide three years of warranty coverage with no product registration required. Too many board vendors have taken to requiring online registration in order to get warranty coverage beyond a year, a trend I’m pleased to see Asus avoiding.

Ganging up: The GTX 295

Inside of this quintessentially Cash-meets-Vader cooling shroud is a pair of circuit boards, each of which has mounted on it a GT200b GPU and associated memory. Like so:

A mock cutaway of the GeForce GTX 295. Source: Nvidia.

This is how Nvidia likes to do its multi-GPU graphics cards, dating all the way back to the GeForce 7950 GX2. Somewhere in there is a PCI Express switch chip that allows two GPUs to share a single PCIe x16 slot.

What you get from the GTX 295 is not quite as much computing power as a pair of GeForce GTX 285 cards. All 240 stream processors are intact on each GPU, but one ROP partition is deactivated. As a result, each GPU can output 28 pixels per clock rather than 32, with a corresponding drop in antialiasing power. Also, because ROP partitions have memory controllers in ’em, each GPU has a 448-bit aggregate path to memory and 896MB of RAM instead of a full gigabyte. Many of these numbers will no doubt sound familiar if you know about the GeForce GTX 260, which also has one of its ROPs deactivated.
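Those per-GPU numbers fall straight out of disabling one of the GT200b's eight ROP partitions, each of which contributes four pixels per clock, a 64-bit slice of the memory bus, and 128MB of the RAM. A minimal sketch of that arithmetic (the per-partition constants are inferred from the totals above):

```python
# Per-partition figures inferred from the full GT200 spec:
# 8 partitions -> 32 pixels/clock, 512-bit bus, 1024MB.
PARTITIONS_TOTAL = 8
PIXELS_PER_PARTITION = 4
BUS_BITS_PER_PARTITION = 64
MB_PER_PARTITION = 128

active = PARTITIONS_TOTAL - 1  # GTX 295: one partition disabled per GPU
print(active * PIXELS_PER_PARTITION)    # 28 pixels per clock
print(active * BUS_BITS_PER_PARTITION)  # 448-bit memory path
print(active * MB_PER_PARTITION)        # 896MB per GPU
```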

Why the gimpy GPU? Many reasons: to keep chip yields up and costs down, to reduce the power envelope, to make everything fit into a small space, or because it somehow makes sense. Yes, it’s a little weird that the GTX 295 has 1792MB of GDDR3 memory onboard. Doesn’t really matter, though, at the end of the day. The GTX 295 will be plenty potent with a pair of GPUs in this configuration—to say the least.

Yep, the 295 does indeed require an 8-pin PCIe power connection in addition to a 6-pin. That’s pretty much a foregone conclusion. The connector you might not have been expecting, though, is the HDMI port on the back (or is it front?) of the card, which replaces the analog video output. Only makes sense, I suppose. The other connector of note is a single SLI interface, which protrudes from the top of the card, threatening quad SLI.

Despite its dually nature, the GTX 295 is still only 10.5″ long. Well, “only” is a strong word, but let’s say the 295 is no longer than the 285 or the Radeon HD 4870 X2, which is nice. GTX 295 cards like this one seem to be selling right now for a prevailing price of $499. That’s a whole heck of a lot of money, and you’ll presumably be using it with a pricey 30″ LCD monitor or something similar. Otherwise, there’s little point to a video card that costs half a grand.

Not that Nvidia isn’t trying, of course. The firm helpfully points out that the GTX 295’s dual GPUs can either pair up for peak graphics power or be split so that one card handles graphics duties and the other does PhysX calculations. We’re still a fair distance from the day when an entire GT200b GPU is best used for physics, though. We do have a copy of Mirror’s Edge for the PC, which has been fortified with additional PhysX vitamins and minerals, and we plan to have a look at it very soon.

Make room for the big dawg

In addition to testing the new GeForces, we’ve gathered together quite a few intriguing Radeon cards. Many Radeon board vendors have been building custom versions of the 4800-series cards, and several of them are represented here today.

Ladies and gentlemen, this honkin’ contraption is our very first triple-wide video card, the Palit Revolution 700 Deluxe. Yes, it’s a Radeon HD 4870 X2 with a little something extra. Let us marvel at its… girth.

Palit says it chose this triple-slot cooling design in order to keep the card quiet, and that approach seems to have worked. To my ears, this card does indeed sound quieter than the regular dual-slot versions of the 4870 X2. And before you write it off as a gimmick, consider that this card takes up one less slot than a CrossFire pair of Radeon HD 4870s. In its own, large way, it somehow makes sense, like Kim Kardashian in a two-piece.

But it does kinda dwarf a regular old Radeon HD 4870 X2, even though both cards are 10.5″ long.

Palit has made use of all that expansion plate real estate by equipping the Revolution 700 with an output port quad-fecta: DVI, VGA, HDMI, and DisplayPort are all represented. Included in the box is an HDMI-to-DVI adapter, too—a crucial bit for many of us.

Amazingly enough, though, Palit has chosen to stick with the Radeon HD 4870 X2’s stock clock speeds. Surely they could have gotten an extra 50MHz out of that honking cooler, no?

Nevertheless, the Revolution 700 is a nice example of a card maker going out on a limb to offer something a little bit different. Unfortunately, it doesn’t yet appear to be available at online retailers in the U.S. We received this example of the Revolution 700 a few weeks back, and at that time, Palit told us to expect street prices around $540. I would kind of expect that plan to be modified by the arrival of the GTX 295 and by the price cuts some vendors have made on regular ol’ 4870 X2s based on AMD’s reference design. Many of those X2 cards are now selling for $449, and most currently have a $50 mail-in rebate attached, as well. We’ll have to see where prices are when the Revolution 700 arrives in volume, which I believe should happen any day now.

Sapphire’s gem: the 4850 X2
Speaking of creative interpretations of the Radeon HD 4800 series, get a load of this thing:

That’s Sapphire’s Radeon HD 4850 X2, and as far as I know, it’s totally unique. No other vendor seems to offer a dual-GPU version of the Radeon HD 4850, and this board is apparently Sapphire’s own custom design. Perhaps that explains why the board itself is so very long—an eighth beyond 11 inches, in fact, longer than even Palit’s triple-slot monstrosity. You’ll want to check very carefully to see whether there’s room inside your PC before ordering one of these.

Sapphire’s 4850 X2 is even longer than a 4870 X2—or a GeForce GTX 295

In fact, the little heatsink pictured above protrudes from the back of the card, grabbing even more space. And yes, the card requires both a 6-pin and an 8-pin PCIe auxiliary power connection.

For all of its quirks, the 4850 X2 has some pretty obvious attractive attributes. Among them is the price-performance prospect of such a thing. Right now, the 4850 X2 is selling for $299 at Newegg, which puts it at roughly the same price as a couple of Radeon HD 4850s. And since its GPUs and memory run at stock speeds, it is very much like a couple of those. But this X2 has one big advantage: it sports 2GB of GDDR3 memory, or a gigabyte per GPU, double the amount on most 4850 cards. And in this performance class, when you’re looking to run the latest games at the highest resolutions and quality settings, a gig is what you want. You’ll see what I mean when we get to our benchmarks.

Asus custom coolers get caught in the CrossFire
The Radeon HD 4850 was an instant hit around these parts, but the first implementations caused us to raise (or is it singe?) an eyebrow over their thermals. Their single-slot coolers were never very beefy, and AMD tuned their fan speed control points pretty aggressively for good acoustics. The result was GPU temperatures approaching 90°C under load and idle temperatures around 80°C. Nothing to be worried about, according to AMD, but those temps were high enough to prompt us to look forward to 4850 cards with aftermarket cooling.

Asus’ EAH4850 TOP looks to be just what we had in mind. This card has a custom-designed dual-slot cooler, and its GPU and memory clocks are tweaked up to 680MHz and 1050MHz, respectively, for a little extra performance. The EAH4850 TOP sells for $164.99 at Newegg right now, with $30 mail-in rebate (for those who enjoy the stimulating combination of paperwork and games of chance that is the MIR.) Like all Asus video cards, the EAH4850 TOP has a three-year warranty with no registration required.

Asus was nice enough to supply us with a pair of these cards for some CrossFire testing, and interestingly enough, we seem to have caught them in a board design transition. As you can see, the components are placed differently on the two cards. They have the same GPU and memory clock speeds, and should be functionally identical. For what it’s worth, the card with the row of capacitors running across the back appears to be the newer layout, while the other one follows AMD’s original reference design.

We should go ahead and address an issue we found with these cards up front, though. You’ll see in our acoustic and temperature testing that the Asus 4850 was among the quietest and coolest cards we tested in a single-card configuration, with GPU temperatures under load that are quite dramatically reduced versus the stock Radeon HD 4850. But look closely at that cooler design, and you’ll notice that it has a fan, not a blower, onboard. That little fan collects air from above itself and pushes that air down over the cooler’s metal fins, an arrangement that’s very effective in a single-card setup. But if you place another video card in the slot directly adjacent, as happens in CrossFire configurations on many motherboards, then Asus’ cooler becomes starved for air, and GPU temperatures begin to climb.

We first noticed this problem during our performance testing, when our test system would lock up at random. There wasn’t any particular pattern to it, except that we could run a game for a while without issue, but eventually, inevitably, the screen would go black and the system would lock. Once we started troubleshooting with an eye toward a thermal issue, the problem became clear almost immediately. We didn’t even have to make use of both GPUs. So long as a second card was nestled up against the Asus 4850, the Asus would overheat in a matter of minutes. You could watch it happening. Temperatures would rise, the card’s fan speed would peak, and GPU temps would continue climbing. Eventually, within five to ten minutes, the temperature would climb past the 100°C mark, and shortly thereafter, the screen would go blank. Bam. Game over.

We tried everything we could looking for a fix. Swapping the two cards, which do after all have different PCB designs, was no help. Our Gigabyte EX58-UD5 motherboard has a rather large south bridge cooler that could obstruct airflow, so we tried testing on an Asus P5E3 Premium, as well—same problem. Asus even sent us a matched pair of EAH4850 TOP cards based on the new PCB design, but they showed the exact same behavior. In fact, I gave up on CrossFire altogether and mounted the EAH4850 TOP in the P5E3 Premium with a Radeon X1950 card adjacent to it, and the 4850 still overheated.

More alarmingly, all of this happened on our open-air test bench, where ambient temperatures are much lower than inside of the average PC.

We were in communication with Asus throughout this troubleshooting process, and at first, their R&D department said it couldn’t duplicate the problem. After a little more testing, though, the company changed its tune. Asus now says it has made a revision to its card design, and it expects to be shipping the revised cards going forward. We hope to get our hands on one for testing soon.

In the meantime, you’ll want to avoid buying this card, especially if you plan to run it in CrossFire or with a larger expansion card of any sort in the adjacent slot. There are plenty of other options. AMD’s Radeon HD 4850 reference design may run a little hot, but at least it seems to be properly engineered.

Here’s another Asus Radeon card with a gorgeous custom cooler, the EAH4870 DK 1G. “DK” stands for Dark Knight, and I was shocked to find a picture of a dude in black armor holding a lance on the front of the box. Where’s Batman?

That disappointment aside, the 4870 DK is a nice example of a Radeon HD 4870 with 1GB of GDDR5 memory. You’ll find it for $249.99 at Newegg, with a $20 mail-in rebate. No, it doesn’t run at higher clock speeds than AMD’s reference designs, but when we were gathering cards for use in this article, we had a very difficult time finding the combination of 1GB of RAM and “overclocked” speeds in a Radeon HD 4870.

Sadly, that sweet-looking custom cooler on the EAH4870 DK 1G has the same basic fan design as the Asus 4850. Like the 4850, the EAH4870’s cooler is very effective in a single-card setup. In fact, it had by far the lowest GPU temperature we measured when used alone. But like the 4850, the EAH4870 suffocates when a card is placed in the slot next to it, blocking air to the cooling fan. The 4870 isn’t as quick to heat up as the 4850, but the card still generates heat faster than the cooler is capable of removing it, so it’s headed for the same destination.

Moving to a different motherboard than our EX58-UD5 helped somewhat by slowing the rate at which the card overheats. Still, we saw the EAH4870’s MEMIO sensor report temperatures in excess of 116°C in testing on the Asus P5E3 Premium, and getting there took only about 30 minutes of continuous GPU use. We found we could almost reverse the equation by using Asus’ SmartDoctor utility to set the fan speed manually at 100%—but doing so was dreadfully loud, as the fan reached a speed it apparently won’t hit when relying on the card’s automatic speed control.

The bottom line is that this cooler is unsuitable for use in any situation where a second card obstructs its fan from taking in air. We don’t yet have word from Asus on how it plans to address this problem with this product, either. Again, I’d advise you to seek out an alternative 4870 card. In this case, AMD’s stock cooler is really quite good.

If you’ve purchased either of these cards and are experiencing system lock-ups or heat-related GPU problems, I’d suggest contacting Asus for a warranty replacement immediately. They should take good care of you. If they give you any trouble about it, feel free to drop us a line at [email protected]. I’m not sure we can do anything to help, but we’d like to know if these issues aren’t being addressed properly.

The problems we encountered with these cards aren’t unique to Asus, either. Sapphire’s 4850 X2 cooler is a bit clumsy and loud, to name just one other example. A number of board vendors have latched on to the idea of offering their own coolers, and they seem to target two things: the cooler must be a custom design, and it must use multiple slots. Those things alone aren’t necessarily good, though. I’m perplexed why anyone would offer a dual-slot cooler that isn’t expressly designed to direct hot air out the back of the PC. But many do.

AMD and Nvidia clearly invest considerable engineering effort in their stock coolers, to make sure they have good acoustics and tolerate a wide range of conditions. Look at the way the blower on the GTX 285 card is angled slightly; that’s so it can take in air even if another card is right next to it. If a board vendor doesn’t intend to invest sufficient engineering effort into building an even better cooler than the stock one, then why bother?

So who competes with whom?
We have an awful lot of results from different video cards in the following pages, and you probably want to know which cards compete with which others. In some cases, we can give a fairly straightforward answer. For instance, the Radeon HD 4850 generally matches up against the GeForce 9800 GTX+. Prices for both are around $150 right now. Similarly, the Radeon HD 4870 1GB matches up pretty directly against the GeForce GTX 260 at somewhere near 250 bucks, and the Radeon HD 4870 X2 is the closest rival to the GeForce GTX 295 at between $450 and $500.

However, in all of the cases I’ve cited above, AMD’s pricing generally undercuts Nvidia’s. We found this to be true back in November, when we did a pretty extensive analysis of video card pricing, and although prices have dropped considerably since then, the general trend holds. In some cases, like the Radeon HD 4870 X2 versus the GeForce GTX 295, the price difference is substantial—$50 to start, more if you factor in mail-in rebates on the 4870 X2.

Other cards have no clear direct competitor. The GeForce GTX 285 falls into this category. At between $379 and $400, its closest competition from the Radeon camp is either the 4850 X2 at $300 or the 4870 X2 at $450. And none of this is taking rebates into account, which can cloud the picture even more while also ripping some people off. (I love rebates.)

Now that you’re as confused as I am, we can proceed to our performance results.

Test notes
We conducted our tests on a brand-new GPU test rig, which looks like so:

That’s a Gigabyte EX58-UD5 motherboard, 6GB of Corsair DDR3 memory rated for operation at up to 1600MHz, and lurking under the cooler is a Core i7-965 Extreme processor. The cooler itself is impressive, innit? It’s a Thermaltake V1 AX, and it’s nice and quiet, which gives us a little more room for acoustics testing.

All in all, this test system is incredibly fast, and at long last, we’re able to test both CrossFire and SLI on the same motherboard. Ahhh.

The detailed info about our test systems is shown in the table below. One thing I do want to point out is that we’re using the version of the GeForce GTX 260 with 216 SPs, as opposed to the older version with 192. Nvidia says it’s now shipping all new GTX 260s with 216 SPs, so any 192-SP stragglers still on the market should be older cards.

Also, please note that several of the cards we tested have higher clock speeds than the baselines established by AMD and Nvidia. The EVGA GeForce GTX 260 Core 216 card we tested runs faster than stock, as do the Asus GTX 285 and the Asus Radeon HD 4850. These are real, shipping products, and they are warranted by their makers at their given clock frequencies.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core i7-965 Extreme
System bus: QPI 4.8 GT/s
Motherboard: Gigabyte EX58-UD5
BIOS revision: F3
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update, Matrix Storage Manager
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8D at 1333MHz
CAS latency (CL): 8
RAS to CAS delay (tRCD): 8
RAS precharge (tRP): 8
Cycle time (tRAS): 24
Command rate: 2T
Audio: Integrated, with Realtek drivers

Graphics cards tested:

- Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- Dual Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- Dual Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- EAH4870 DK 1G Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- EAH4870 DK 1G Radeon HD 4870 1GB PCIe + Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- Radeon HD 4850 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- Revolution R700 Radeon HD 4870 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
- GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers
- Dual GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers
- Palit GeForce 9800 GX2 1GB PCIe with ForceWare 180.84 drivers
- GeForce GTX 260 Core 216 896MB PCIe with ForceWare 180.84 drivers
- GeForce GTX 260 Core 216 896MB PCIe + Zotac GeForce GTX 260 (216 SPs) AMP²! Edition 896MB PCIe with ForceWare 180.84 drivers
- GeForce GTX 280 1GB PCIe with ForceWare 180.84 drivers
- GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers
- Dual GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers
- GeForce GTX 295 1.792GB PCIe with ForceWare 181.20 drivers

Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x64 Edition
OS updates: Service Pack 1, DirectX November 2008 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Specs and synthetics
Before we get to play any games, we should stop and look at the specs of the various cards we’re testing. Incidentally, the numbers in the table below are derived from the observed clock speeds of the cards we’re testing, not the manufacturer’s reference clocks or stated specifications.

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear texel filtering (Gtexels/s) | Peak bilinear FP16 texel filtering (Gtexels/s) | Peak memory bandwidth (GB/s) | Peak shader arithmetic, single-issue (GFLOPS) | Peak shader arithmetic, dual-issue (GFLOPS) |
|---|---|---|---|---|---|---|
| GeForce 9500 GT | 4.4 | 8.8 | 4.4 | 25.6 | 90 | 134 |
| GeForce 9600 GT | 11.6 | 23.2 | 11.6 | 62.2 | 237 | 355 |
| GeForce 9800 GT | 9.6 | 33.6 | 16.8 | 57.6 | 339 | 508 |
| GeForce 9800 GTX+ | 11.8 | 47.2 | 23.6 | 70.4 | 470 | 705 |
| GeForce 9800 GX2 | 19.2 | 76.8 | 38.4 | 128.0 | 768 | 1152 |
| GeForce GTX 260 (192 SPs) | 16.1 | 36.9 | 18.4 | 111.9 | 477 | 715 |
| GeForce GTX 260 (216 SPs) | 17.5 | 45.1 | 22.5 | 117.9 | 583 | 875 |
| GeForce GTX 280 | 19.3 | 48.2 | 24.1 | 141.7 | 622 | 933 |
| GeForce GTX 285 | 21.4 | 53.6 | 26.8 | 166.4 | 744 | 1116 |
| GeForce GTX 295 | 32.3 | 92.2 | 46.1 | 223.9 | 1192 | 1788 |
| Radeon HD 4650 | 4.8 | 19.2 | 9.6 | 16.0 | 384 | — |
| Radeon HD 4670 | 6.0 | 24.0 | 12.0 | 32.0 | 480 | — |
| Radeon HD 4830 | 9.2 | 18.4 | 9.2 | 57.6 | 736 | — |
| Radeon HD 4850 | 10.9 | 27.2 | 13.6 | 67.2 | 1088 | — |
| Radeon HD 4870 | 12.0 | 30.0 | 15.0 | 115.2 | 1200 | — |
| Radeon HD 4850 X2 | 20.0 | 50.0 | 25.0 | 127.1 | 2000 | — |
| Radeon HD 4870 X2 | 24.0 | 60.0 | 30.0 | 230.4 | 2400 | — |

Theoretically, the GeForce GTX 285 has an edge in almost every category over any other single-GPU graphics card, with the exception being the Radeon HD 4870’s slightly higher peak shader compute rate. In fact, the GTX 285 even beats the Radeon HD 4850 X2 at theoretical fill and texture filtering rates.

Interestingly, the GTX 295 has higher peak fill and filtering rates than the 4870 X2, but the 4870 X2 has slightly higher memory bandwidth and decisively more shader arithmetic potential. Then again, we’ve found that current GPU architectures don’t always behave as their specifications might suggest. Perhaps some synthetic benchmarks will help sort things out for us.

None of the cards reach anything close to their theoretical peak color fill rates in this synthetic test. We’ve found that memory bandwidth often tends to be the limiting factor here. The results do seem to line up accordingly, roughly speaking, with the GTX 285 ahead of both the Radeon HD 4870 and the 4850 X2, while the GTX 295 trails the 4870 X2.

As far as I can tell, the units for 3DMark’s texture fill rate test are just completely horked, and Futuremark has shown no interest in fixing the problem. We’ll probably switch to a different benchmark utility eventually as a result, but for now, these numbers should still serve for relative comparisons. What these results show us is that the Radeon HD 4800 series has a substantial advantage over the current GeForces in delivered texture fill and filtering rates. The GeForce GTX 285 scores just a little lower than the Radeon HD 4850, and the GTX 295 is behind the 4850 X2.

In spite of the Radeons’ theoretical advantage in arithmetic rates, the GeForces tend to be faster in three of the four 3DMark shader tests. In the GPU particles and GPU cloth tests, a second GPU doesn’t seem to help and sometimes even hurts performance. That’s the nature of the beast with multi-GPU schemes. Not all games or applications scale well across multiple graphics processors.

Far Cry 2
We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

This is one game that will punish even the latest video cards at relatively common resolutions—and we weren’t even using the “Ultra high” quality settings. Notice how, even at 1680×1050, the Radeon HD 4870 with 512MB of memory is slower than the 4870 1GB. That effect is magnified as the resolution increases, and the 512MB cards become completely useless at 2560×1600—even the relatively powerful configs, like dual Radeon HD 4870 512MB cards in CrossFire.

As for the two cards we’re ostensibly reviewing, well, the GTX 285 is indeed the fastest single-GPU solution, and two of them in SLI are faster than anything else we tested. Yet a single GTX 285 turns out to be slower than the (less expensive) Radeon HD 4850 X2, so the ol’ value proposition looks shaky. Meanwhile, the GTX 295 nearly averages 60 FPS at 2560×1600, easily ahead of the 4870 X2 but only a tad faster than two 4870 1GB cards in CrossFire.

Left 4 Dead
We tested Valve’s zombie shooter using a custom-recorded timedemo from the game’s first campaign. We maxed out all of the game’s quality options and used 4X multisampled antialiasing in combination with 16X anisotropic texture filtering.

Yeah, so pretty much any of these cards will run Left 4 Dead fluidly at the highest resolution we can manage. That kind of makes fine-grained analysis seem superfluous, but we can still see clear performance differences at 2560×1600, for what it’s worth.

Call of Duty: World at War
We tested the latest Call of Duty title by playing through the first 60 seconds of the game’s third mission and recording frame rates via FRAPS. Although testing in this manner isn’t precisely repeatable from run to run, we believe averaging the results from five runs is sufficient to get reasonably reliable comparative numbers. With FRAPS, we can also report the lowest frame rate we encountered. Rather than average those, we’ve reported the median of the low scores from the five test runs, to reduce the impact of outliers. (Jeez, dude, you have a religion degree. Quit talking like that.) The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
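For clarity, here's a minimal sketch of that aggregation (the names and sample numbers are hypothetical): we average the per-run averages, but take the median of the per-run lows, so a single outlier run doesn't drag down the reported minimum.

```python
from statistics import mean, median

def summarize_runs(runs):
    """Average of per-run average FPS; median of per-run minimum FPS."""
    avg_fps = mean(r["avg"] for r in runs)
    low_fps = median(r["low"] for r in runs)
    return avg_fps, low_fps

# Five hypothetical FRAPS runs; the third has an outlier low.
runs = [{"avg": 62, "low": 41}, {"avg": 60, "low": 44}, {"avg": 63, "low": 12},
        {"avg": 61, "low": 43}, {"avg": 59, "low": 40}]
print(summarize_runs(runs))  # average 61, median low 41; the outlier 12 is ignored
```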

Since this game isn’t too hard on the GPU, we tested with all of its visual quality options at their highest settings and the screen res at 2560×1600. Why such a high resolution? Well, this is a review of video cards that cost upwards of 300 bucks. You don’t need an expensive video card for lower resolutions.

Nvidia pretty much cleans up here, with the GeForce 9800 GTX+ outperforming even the Radeon HD 4870 1GB. Both the GTX 285 and 295 perform relatively well.

Fallout 3
This is another game we tested with FRAPS, this time simply by walking down a road outside of the former Washington, D.C. We used Fallout 3‘s “Ultra high” quality presets, which basically means every slider maxed, along with 4X antialiasing and what the game calls 15X anisotropic filtering.

Here’s another case where cards with 512MB of memory tend to suffer, especially the GeForces. Frame rates for the 9800 GX2 and the single and dual 9800 GTX+ configs were all over the place. They would sometimes decline from run to run until we had to restart the game. I wouldn’t call Fallout 3 playable (at these extreme settings) on any of them.

The new GeForces play Fallout 3 just fine at these settings. However, they don’t really separate themselves from their competition. The Radeon HD 4850 X2 is faster than the GeForce GTX 285, and the GeForce GTX 295 seems to run into the same performance wall as all of the faster configs we tested.

The performance bottleneck at work here is the game’s level-of-detail mechanism making changes as you cross the terrain. The saw-tooth pattern in the frame-by-frame data is caused by those LOD adjustments happening every so often. Frame rates recover, and then another one hits. Of course, when the lowest troughs are at 60 FPS, it’s barely perceptible as you play.

Dead Space
This is a pretty cool game, but it’s something of an iffy console port, and it doesn’t allow the user to turn on multisampled AA or anisotropic filtering. Dead Space also resisted our attempts at enabling those features via the video card control panel. As a result, we simply tested Dead Space at a high resolution with all of its internal quality options enabled. We tested at a spot in Chapter 4 of the game where Isaac takes on a particularly big and nasty, er, bad guy thingy. This fight is set in a large, open room and should tax the GPUs more than most spots in the game.

Wow, uh, it’s a clean sweep for Nvidia. Doesn’t get much more definitive than that. Thing is, if you look at the numbers, even the slowest Radeon runs the game just fine, with its lowest frame rate above 30 FPS. AMD apparently has some driver work to do here, since performance doesn’t scale well at all with multiple GPUs.

Then again, because of the game’s control problems at high frame rates, the GeForce GTX 285 SLI was the least playable config we tested. That can be remedied with a tweak in the Nvidia control panel, though.

Crysis Warhead
This game is sufficient to tax even the fastest GPUs without using the highest possible resolution or quality setting—or any form of antialiasing. So we tested at 1920×1200 using the “Gamer” quality setting. Of course, the fact that Warhead apparently tends to run out of memory and crash (with most cards) at higher resolutions is a bit of a deterrent, as is the fact that MSAA doesn’t always produce the best results in this game. Regardless, Warhead looks great on a fast video card, with the best explosions in any game yet.

Some of the multi-GPU configs turn in decent average frame rates here, but their median lows are kind of iffy. Those configs include the GeForce 9800 GTX+ in SLI, the Radeon HD 4850 in CrossFire, and the GeForce 9800 GX2. Those lows are around 18 to 22 FPS, which is a little dicey. The common thread? They all have 512MB of memory, which seems to be a bit constraining.

Of course, the GeForce GTX 285 and 295 both handle this game without trouble, with the GTX 285 sitting at the top of the heap among single-GPU configs and the GTX 295 turning in the highest average frame rate overall.

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at 2560×1600 resolution, using the same settings we did for performance testing.

The GT200 GPU has the widest dynamic range in power use we’ve ever seen, and the 55nm version of the GT200 continues that tradition. At idle, the GeForce GTX 285 draws less power than the Radeon HD 4850, remarkably enough. Even the dual-GPU GTX 295 consumes less power at idle than a single Radeon HD 4870 512MB.

When running a game, the story changes, as the GeForce GTX cards’ power use ramps up. Still, the GTX 285 draws less power than the card it replaces, the GTX 280. Disabling that ROP partition and keeping the clock speeds down on the GTX 295 seems to have had the intended effect, too: it pulls less juice than the Radeon HD 4870 X2, by over 50W.

Noise levels
We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because we measured the noise level of the entire system. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

This is our first time out with a new and improved sound meter that has a lower sound floor and seems to give more precise measurements than the old one. As a result, we’ve returned to measuring noise levels at idle as well as under load.

At idle, the GTX 295 produces a little more hiss than most cards, by a noticeable amount. That’s a little surprising given the low noise levels achieved by a number of cards with similar power draw at idle. When running a game, the GTX 295 is again among the loudest cards we tested. By contrast, the GTX 285 is a pretty good acoustic citizen, very much like the GeForce GTX 280 was before it.

The loudest card of the bunch, clearly, is Sapphire’s Radeon HD 4850 X2. The dual fans on its cooler tend to spin pretty quickly, especially at idle, where its noise levels are simply annoying. When we asked Sapphire about this behavior, they offered us a pair of BIOS updates (the card has two BIOSes) intended to lower fan noise levels. We flashed both BIOSes following Sapphire’s instructions, rebooted, and BAM! We’d bricked the card. I mean, it’s fallen, and it can’t get up. No POST, nada. I was going to take Sapphire to task for not providing the updated BIOS files on its website, but I can understand why the company might not want consumers attempting to flash the card’s BIOSes. Bad things.

The other config that’s noisy at idle is the Radeon HD 4870 1GB in CrossFire. It’s loud because the primary card in the pair, the Asus EAH4870 DK 1G, is being starved for air by the second card, and the cooler has ramped up its fan speed as a result. You can see that the Asus card alone is fairly quiet at idle, at 40.3 dB. Adding a second card to the mix creates problems, though.

The dual Asus 4850s, meanwhile, remain nice and quiet as the interior card in the pair overheats.

GPU temperatures
I used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, I recorded temperatures on the primary card.
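GPU-Z can write its sensor readings to a log file as it runs, and pulling the peak temperature out of such a log is straightforward. This sketch assumes a CSV-style log with a temperature column; the exact column header (“GPU Temperature [°C]” here) varies between GPU-Z versions, so treat it as a placeholder:

```python
import csv

# Sketch: find the peak GPU temperature in a GPU-Z-style sensor log.
# Assumes a CSV log with a header row; the temperature column name
# ("GPU Temperature [°C]") is a placeholder and may differ by version.

def peak_temperature(path, column="GPU Temperature [°C]"):
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        temps = [float(row[column]) for row in reader
                 if row.get(column, "").strip()]
    return max(temps)
```

Running this against the log from each load test yields the peak figures discussed below.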

Nvidia seems to be pretty consistent, allowing GPU temperatures to creep into the mid-80s under load, and the new GTX 285 and 295 cards fall right in line. Both the Palit and Sapphire X2 cards keep temperatures much lower than AMD’s own stock coolers do, which is nice.

The shame of it all is that those Asus coolers are reasonably quiet and very effective in a single-card configuration. Look at the single-card temperature on that 4870 1GB! However, with dual cards installed, the Asus coolers essentially fail to do their jobs. These peak temperatures near 100°C were logged just before the system locked up.

Conclusions
The GeForce GTX 285 spent much of its time in our tests being hounded by less expensive multi-GPU solutions, including the GeForce 9800 GX2 and especially the Radeon HD 4850 X2, which has enough memory to really present a challenge. Still, the GTX 285 is the fastest single-GPU video card on the planet, and that counts for a fair bit. When multi-GPU solutions stumble, as they sometimes inescapably do, the GTX 285 should still perform as expected. Impressively, the GTX 285 has the lowest power draw at idle of any card in the field, and its acoustics and power consumption under load are both quite acceptable for this class of graphics card. The GTX 285’s status as the fastest single GPU card makes it an ideal building block for a fast SLI setup, too—the dual GTX 285 config outstripped everything else we tested.

The GTX 285’s $379 price tag makes it undeniably a luxury-priced item in a time when most games simply don’t require the fastest possible video card. The fact that you can buy a Radeon HD 4870 1GB or a GeForce GTX 260 for substantially less money—well over a hundred bucks—may be rather off-putting since the performance delta simply isn’t that huge, especially in terms of real-world playability. But the GTX 285 has essentially captured the single-GPU performance crown, and I suppose there is a price premium to be attached to that.

Speaking of performance crowns, if there’s a dual-slot, dual-GPU, single-card performance crown, then the GeForce GTX 295 has snagged that one. The GTX 295 is an even more extreme solution than the 285, obviously, but it’s the class of the exotics. You really shouldn’t even consider buying one of these right now unless you plan on forking out for a 30″ monitor, as well. But if you’re going to go all out, the GTX 295 isn’t a bad way to go. Its noise levels are a little high compared to that freakish triple-slot Palit 4870 X2, but no worse under load than a pair of GTX 285s.

As for the Radeons, AMD appears to have given its partners more freedom than ever before to come up with their own custom board and cooling designs—enough rope, in fact, to hang themselves. Asus seems to have obliged. The custom coolers on both its 4850 and 4870 cards have severe thermal problems with an adjacent card installed, as we’ve chronicled.

Sapphire has produced more mixed results with its 4850 X2. On one hand, the card is too long and too loud to appeal to most people. I really can’t recommend it as it is. On the other hand, the card’s basic hardware formula is dynamite. The price-performance mix makes a lot of practical sense, as witnessed by the fact that this $299 card was such a menace to the GeForce GTX 285. Sapphire told us at CES that it plans on re-launching the 4850 X2 with some fixes in place soon. If they can lower its noise levels, that would be nice. If they can also reduce the length of the card, then we’d really be talking. We’ve heard rumors that AMD might decide to introduce its own 4850 X2 to combat the GeForce GTX 285. Sounds like a plan to me.

The best custom design we saw from a Radeon board vendor is Palit’s Revolution 700. This thing’s cooler is huge, but it’s also effective and fairly quiet. We’ll have to see how Palit prices this card when it arrives on U.S. shores in volume, but this is one bit of freelancing we can appreciate on its merits.

Scott Wasson Former Editor-in-Chief

Scott Wasson is a veteran in the tech industry and the former Editor-in-Chief at Tech Report. With a laser focus on tech product reviews, Wasson's expertise shines in evaluating CPUs and graphics cards, and much more.