Paired up: The latest GPUs

Usually, when we’re putting together an article like this one, it has an overriding theme. Either we’re reviewing a particular video card, or we’re conducting a comparative review of similar products. We do have elements of the latter in this article, but we don’t have a single, underlying theme other than this: like high-school seniors in summer camp, the graphics cards in question here have paired up. We have CrossFire setups for the latest Radeons, including the HD 6850 and 6870, and we have SLI configs for the latest GeForces, including the brand-spanking-new GeForce GTX 580.

If that’s not a coherent enough theme for you, well, our apologies. We’ve watched too much cable TV to make sense consistently any more. We simply had a few questions we wanted to see answered in the wake of our GeForce GTX 580 review. We wanted to know how that card performs in SLI, because after all, the fastest single GPU money can buy has tremendous potential as a building block of a multi-GPU setup. We also wondered whether a pair of relatively inexpensive video cards, like the Radeon HD 6850 or the GeForce GTX 460 768MB, might not be a viable alternative to a single high-end card like the GTX 580. In order to make sense of those comparisons, we were itching to do one of our patented value analyses on the various single- and multi-GPU offerings.

Undaunted by the multi-faceted nature of our task, we gathered up a bundle of intriguing new video cards and set to testing. We’ve included everything from a single Radeon HD 6850 to dual GeForce GTX 580s in SLI, along with many notable configs in between, for a total of 16 different combinations. The result is, for better or worse, what you’ll see below and on the following pages. Perhaps it will clarify some things for you, in spite of our best efforts.

Some new cards in the deck

In order to answer the questions we posed above, we naturally had to add some new video cards to our stable. That included, of course, a pair of GeForce GTX 580s.

We decided to go asymmetrical for our GTX 580 SLI config, kinda like the chick I dated in high school who had different colored eyes.

The first card in the pair comes from Zotac, and it’s a nicely bestickered version of Nvidia’s reference design, right down to the GTX 580’s standard core and memory clock speeds. This card is currently selling for $529.99 on Newegg, or 30 bucks over Nvidia’s suggested e-tail price, as are many other GTX 580 cards. Zotac covers its GTX 580 with a two-year warranty by default, but the term extends to “lifetime” if you register. Beyond that, this card’s primary distinguishing feature is a bundled copy of Prince of Persia: The Forgotten Sands. We’ve not yet played the game, but it has a semi-decent 75 rating on Metacritic, along with a less promising user score of 5.3. Still, the bundled game might be worthy of note, since otherwise, most GTX 580 cards are essentially the same.

Not that there’s anything wrong with that. In fact, we quite like the GTX 580 reference design, and we’ve found Nvidia’s stock cooler to be relatively quiet and effective.

Our second GTX 580 card comes from Asus, whose approach to the whole branding thing involves admirable restraint. You’ll find no atomic frogs or rocket-propelled fairy godmothers with magic swords on the Asus card itself, just a spare, bold sticker bearing the card maker’s name. Under the hood, though, this GTX 580 has been tweaked slightly: the core clock is 782MHz, up 10 whole megahertz from Nvidia’s default. In spite of the extra juice, Asus’ GTX 580 is going for $524.99 at Newegg, five bucks less than the Zotac, though without the bundled game. Asus covers its video cards with a three-year warranty, and happily, no registration is required to get the full warranty term.

Together, these wonder twins make a slightly asymmetrical $1054.98 graphics powerhouse. By rights, we should have tested this monster SLI setup against a similarly beastly config from AMD involving a pair of Radeon HD 5970 cards. However, we opted to test a pair of Radeon HD 5870s instead, as a sort of consolation prize, for various reasons—mainly because AMD is achingly close to rolling out its brand-new 6900-series GPUs, and we’ll be testing those shortly. Those will surely be the GTX 580’s real competition.

Ever since we published our review of the Radeon HD 6800 series, we’ve been hearing from a certain segment of the AMD fan base who desperately wants us to test a version of the Radeon HD 6850 that runs at higher clock speeds than AMD’s defaults. That’s only fair, they reason, since we also tested some of the many GeForce GTX 460 cards that run at higher-than-base clock frequencies. We were determined to accommodate these requests, a task we found was easier said than done. The Radeon HD 6850 has apparently been quite the hot product, as evidenced by rising prices and spotty availability.

Fortunately, Asus stepped in with a pair of its EAH6850 DirectCU cards for us to test. These have a 790MHz GPU core clock—15MHz higher than stock—and are presently listed for $199.99 at Newegg. That’s not a huge bump in clock speed, but it’s about par for the course among Radeon HD 6850s. You will find a few with 820MHz core clocks, but those run $10-20 more than the Asus model we’ve tested—and all are quite a bit above AMD’s original $179 suggested price.

At its initial suggested price, the Radeon HD 6850 was set to do battle against the 768MB version of the GeForce GTX 460. However, as the Radeon’s price has risen, the GTX 460 768MB’s price has remained steady at $169.99. The Asus GTX 460 768MB card we’re testing runs at a 700MHz core clock with 3680 MT/s memory, up from reference speeds of 675MHz and 3600 MT/s. The 6850 is now more direct competition for the 1GB version of the GeForce GTX 460, but we have included the 768MB cards for comparison nonetheless. Both the 6850 and the GTX 460 768MB are intriguing candidates for affordable multi-GPU mayhem, and the 768MB card is the more affordable of the two.

The Asus 6850 and GTX 460 768MB cards share another attribute, as well: Asus’ fancy DirectCU cooler that, per its name, routes a pair of copper heatpipes directly over the surface of the GPU. I don’t want to give away too much, but this little innovation may be something more than marketing hype, as we’ll soon see.

We should set the table for our testing by reminding you that we’ve been rather down on the recent trend toward fan-based GPU coolers that don’t pipe warm air directly out of an expansion slot opening. Such coolers are typically quiet in single-card configs, but they don’t fare well when another video card (or any other expansion card) is nestled up against the fan, blocking its airflow. If this Asus cooler performs well in SLI and CrossFire, it will be an exception to the recent trend.

Our testing methods

Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in those cases, you’re seeing the results from a single, representative pass through the test sequence.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.
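
For the curious, the number crunching behind those Fraps and scripted results is nothing exotic. Here’s a minimal sketch in Python, with made-up frame rates standing in for actual test output:

```python
from statistics import mean, median

# Hypothetical per-run average frame rates for one card in one game,
# standing in for the numbers Fraps reports after each 60-second run.
fraps_runs = [57.3, 58.1, 55.9, 57.8, 56.4]   # five runs per card

# Averaging the five runs helps counteract run-to-run variability.
print(f"Fraps result: {mean(fraps_runs):.1f} FPS (mean of five runs)")

# The scripted, repeatable tests are run at least three times,
# and the median result is what lands in the charts.
scripted_runs = [62.4, 61.9, 62.6]
print(f"Scripted result: {median(scripted_runs):.1f} FPS (median of three runs)")
```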

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
Motherboard: Gigabyte EX58-UD5
North bridge: X58 IOH
South bridge: ICH10R
Memory size: 12GB (6 DIMMs)
Memory type: Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings: 8-8-8-24 2T
Chipset drivers: INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio: Integrated ICH10R/ALC889A with Realtek R2.51 drivers

Graphics:

  • Asus Radeon HD 5870 1GB with Catalyst 10.10c drivers
  • Asus Radeon HD 5870 1GB + Radeon HD 5870 1GB with Catalyst 10.10c drivers
  • Asus ROG Matrix Radeon HD 5870 2GB with Catalyst 10.10c drivers
  • Radeon HD 5970 2GB with Catalyst 10.10c drivers
  • Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
  • Dual Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
  • XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
  • Sapphire Radeon HD 6870 1GB + XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
  • Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
  • Dual Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
  • MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz with ForceWare 260.99 drivers
  • MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz + EVGA GeForce GTX 460 FTW 1GB 850MHz with ForceWare 260.99 drivers
  • Galaxy GeForce GTX 470 1280MB GC with ForceWare 260.99 drivers
  • GeForce GTX 480 1536MB with ForceWare 260.99 drivers
  • GeForce GTX 580 1536MB with ForceWare 262.99 drivers
  • Zotac GeForce GTX 580 1536MB + Asus GeForce GTX 580 1536MB with ForceWare 262.99 drivers

Hard drive: WD RE3 WD1002FBYS 1TB SATA
Power supply: PC Power & Cooling Silencer 750W
OS: Windows 7 Ultimate x64 Edition with DirectX runtime update June 2010
Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead 2 at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead 2 because we’ve found that the Source engine’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Power consumption

We’ll kick off our testing with power and noise. Notice that the cards marked with asterisks in the results below have custom cooling solutions that may perform differently than the GPU maker’s reference solution.

Turn your attention first to our comparison between the pairs of sub-$200 video cards, the 6850 and GTX 460 768MB, versus a single GeForce GTX 580. The GTX 580 consumes a little less power at idle, but when running a game, the 580 pulls more juice than either the 6850 CrossFireX config or the GTX 460 768MB SLI. The multi-GPU solutions don’t look too bad on that front, at least.

As for the GTX 580 SLI rig, well, that requires considerably more power than anything else we tested, a hefty 575W at the wall socket under load. That’s a lot, but as we’ll soon see, such power draw may be appropriate to the performance achieved.

Noise levels and GPU temperatures

Let’s focus first on the Asus 6850 and GTX 460 768MB cards with DirectCU coolers. In single-card configs, they achieve among the lowest noise levels and GPU temperatures we measured. They have substantially less power to dissipate in the form of heat than some of the higher-end solutions, but still—this is an impressive performance from that DirectCU cooler. I prefer the tuning of the 6850 to the GTX 460 768MB; the 6850’s GPU temperatures are a little higher—though still lower than most—and it’s measurably quieter under load. That difference comes to the fore when we add a second video card next door. The GTX 460 768MB works hard to keep the primary GPU’s temperature relatively low, at 73°C, and that shows on the decibel meter, where only the GTX 460 1GB cards are louder. The 6850 CrossFireX pairing, meanwhile, maintains decent temperatures but registers substantially lower noise levels.

Happily, the bottom line here is that Asus’ DirectCU cooler is excellent when the adjacent slot is unobstructed and, crucially, still adequate in multi-GPU configurations. With similar thermal loads, though, the 6850’s fan control tuning is quite a bit more acoustically friendly.

Meanwhile, the GeForce GTX 580 has a very good cooler of its own, and as a result, it’s quieter than both the GTX 460 768MB SLI and 6850 CrossFireX setups. Adding a second GTX 580 results in slightly higher noise levels and GPU temperatures, but heck, the 580 SLI setup isn’t much louder than a single Radeon HD 6870.

Pixel fill and texturing performance

I’ll own up to the fact that the next few pages are graphics egghead type stuff, and some of you may want to skip directly to the game tests. There’s no shame in doing so, mainly because no one will know.

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear integer texel filtering rate (Gtexels/s) | Peak bilinear FP16 texel filtering rate (Gtexels/s) | Peak memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- |
| GeForce GTX 460 768MB | 16.8 | 39.2 | 39.2 | 88.3 |
| GeForce GTX 460 1GB 810MHz | 25.9 | 47.6 | 47.6 | 124.8 |
| GeForce GTX 470 GC | 25.0 | 35.0 | 17.5 | 133.9 |
| GeForce GTX 480 | 33.6 | 42.0 | 21.0 | 177.4 |
| GeForce GTX 580 | 37.1 | 49.4 | 49.4 | 192.0 |
| Radeon HD 5870 | 27.2 | 68.0 | 34.0 | 153.6 |
| Radeon HD 6850 | 25.3 | 37.9 | 19.0 | 128.0 |
| Radeon HD 6870 | 28.8 | 50.4 | 25.2 | 134.4 |
| Radeon HD 5970 | 46.4 | 116.0 | 58.0 | 256.0 |

These figures aren’t destiny for a video card. Different GPU architectures will deliver on their potential in different ways, with various levels of efficiency. However, these numbers do matter, especially among chips with similar architectural DNA.

I’ve not included numbers in the table above for multi-GPU configurations, but you can essentially double the figures for dual-GPU solutions, at least in theory. A single GTX 580 has roughly twice the potential of the GTX 460 768MB in both pixel fill rate (including antialiasing power) and memory bandwidth, so those cases would be an even match. In the other categories, both related to texture filtering, two GTX 460 768MB cards should theoretically be superior. Against the 6850, a single GTX 580 doesn’t come close to twice the peak capacity in any category except FP16 texture filtering rates, where the GF110 GPU’s new secret sauce gives it well over double the potential. Two 6850s should be substantially ahead of a single GTX 580 in every other category.

Apply that doubling principle to dual GTX 580s, and you’ll see they should soundly eclipse any other solution we’ve tested in terms of pixel fill rate, FP16 filtering rate, and memory bandwidth.
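
To make the doubling exercise concrete, here’s a quick back-of-the-envelope sketch that doubles the single-card peaks from the table above and lines them up against a lone GTX 580. Real multi-GPU scaling is never this tidy, of course; this only shows where each pairing should lead or trail on paper.

```python
# Single-card theoretical peaks from the table above:
# (pixel fill Gpixels/s, INT8 filtering Gtexels/s, FP16 filtering Gtexels/s,
#  memory bandwidth GB/s)
peaks = {
    "GeForce GTX 580":       (37.1, 49.4, 49.4, 192.0),
    "GeForce GTX 460 768MB": (16.8, 39.2, 39.2,  88.3),
    "Radeon HD 6850":        (25.3, 37.9, 19.0, 128.0),
}
labels = ("pixel fill", "INT8 filtering", "FP16 filtering", "bandwidth")

single_580 = peaks["GeForce GTX 580"]
for card in ("GeForce GTX 460 768MB", "Radeon HD 6850"):
    doubled = [2 * value for value in peaks[card]]  # idealized dual-GPU scaling
    print(f"\nDual {card} vs. a single GTX 580:")
    for label, dual_value, single_value in zip(labels, doubled, single_580):
        print(f"  {label:>14}: {dual_value:6.1f} vs. {single_value:6.1f}")
```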

This color fill rate test tends to be limited by memory bandwidth more than anything else, and so the dual GTX 580s in SLI take the top spot, more or less as expected. Notice, also, how well the 6850 CrossFireX setup fares, thanks to its copious memory bandwidth. The GTX 460 768MB, with one of the GF104 chip’s four memory controllers and ROP partitions disabled, simply lacks the bandwidth to keep pace.

We’ve shunned 3DMark’s texture fill test recently because it doesn’t involve any sort of texture filtering. That’s tragic and sad, since texture filtering rates are almost certainly more important than sampling rates in the grand scheme of things. Still, this is a decent test of FP16 texture sampling rates, so we’ll use it to consider that aspect of GPU performance. Texture storage is, after all, essentially the way GPUs access memory, and unfiltered access speeds will matter to routines that store data and retrieve it without filtering.

Given that Nvidia’s most powerful GPUs are much larger chips, AMD’s unfiltered sampling rates are impressively high—witness the much cheaper and smaller Radeon HD 6870 matching the GTX 580. The gap between the GTX 460 768MB and the 6850, however, is quite small.

Here’s a more proper test of texture filtering, although it’s focused entirely on integer texture formats, not FP16. Texture formats like these are still widely used in games.

This test isn’t compatible with SLI, so we’ve left out the multi-GPU results for everything but the dual-GPU, single-card Radeon HD 5970.

Happily, after struggling in the dark for a while, we finally have a proper test of FP16 filtering rates, courtesy of the guys at Beyond3D.

The contest between the GTX 460 768MB and the Radeon HD 6850 in the three sets of results above is instructive. For simple integer, bilinear filtering, the 6850 has a slight edge. However, as the filtering challenge steps up to higher anisotropy levels and then to FP16 texture formats, the GTX 460 768MB pulls ahead. That’s generally the way of things with the current GPU architectures from AMD and Nvidia.

That trend is much more dramatic in the case of the GTX 580, whose substantial memory bandwidth allows it to take good advantage of its prodigious capacity for FP16 filtering. If you doubled the FP16 filtering scores for the GTX 460 768MB and the 6850, as one might do for SLI and CrossFireX, the GTX 580 would still be faster.

Shader and geometry processing performance

| | Peak shader arithmetic (GFLOPS) | Peak rasterization rate (Mtris/s) | Peak memory bandwidth (GB/s) |
| --- | --- | --- | --- |
| GeForce GTX 460 768MB | 941 | 1400 | 88.3 |
| GeForce GTX 460 1GB 810MHz | 1089 | 1620 | 124.8 |
| GeForce GTX 470 GC | 1120 | 2500 | 133.9 |
| GeForce GTX 480 | 1345 | 2800 | 177.4 |
| GeForce GTX 580 | 1581 | 3088 | 192.0 |
| Radeon HD 5870 | 2720 | 850 | 153.6 |
| Radeon HD 6850 | 1517 | 790 | 128.0 |
| Radeon HD 6870 | 2016 | 900 | 134.4 |
| Radeon HD 5970 | 4640 | 1450 | 256.0 |

The second stage of our graphics-egghead challenge involves a couple of related aspects of GPU performance, computational power and geometry throughput. As you can see in the numbers above, we have some wide divergences between the two major GPU architectures in this generation. AMD’s Radeons have much higher peak theoretical shader throughput, and Nvidia’s GeForces can, in theory, achieve substantially higher rates of triangle rasterization. To take things even further down the egghead hole (wait, what?), current Nvidia GPUs should be capable of higher geometry processing rates at other stages in the DirectX 11 pipeline, too, but those are a bit harder to quantify than the rasterization rate.

As with the stage-one egghead stuff, the peak rates here are only a small part of the story. We’ll want to measure the chips’ actual throughput in various ways to get a clearer sense of their true performance.

The first tool we can use to do so is ShaderToyMark, a pixel shader test based on six different effects taken from the nifty ShaderToy utility. The pixel shaders used are fascinating abstract effects created by demoscene participants, all of whom are credited on the ShaderToyMark homepage. Running all six of these pixel shaders simultaneously easily stresses today’s fastest GPUs, even at the benchmark’s relatively low 960×540 default resolution.

Sadly, this test isn’t multi-GPU compatible, but we don’t need to guess whether a pair of sub-$200 video cards would beat out a GTX 580 here: the 580 is more than twice as fast as either the 6850 or the GTX 460 768MB, so its lead is secure. The GTX 580’s total package of shader execution efficiency, memory bandwidth, and cache size gives it an uncontested win. The GTX 460 768MB is also quicker than the 6850 here, despite having a much lower theoretical FLOPS peak.

Up next is a compute shader benchmark built into Civilization V. This test measures the GPU’s ability to decompress textures used for the graphically detailed leader characters depicted in the game. The decompression routine is based on a DirectX 11 compute shader. The benchmark reports individual results for a long list of leaders; we’ve averaged those scores to give you the results you see below.

This test does work with multiple GPUs, but adding a second graphics chip only provides a minor boost in performance here.

Finally, we have the shader tests from 3DMark Vantage.

Clockwise from top left: Parallax occlusion mapping, Perlin noise, GPU cloth, and GPU particles

The first two tests measure pixel shader performance, and the Radeons excel. The Perlin noise test looks to be a case where the Radeons actually achieve something close to their peak arithmetic rates, as the 6850 beats out the much larger GeForce GTX 480.

These two tests involve simulations of physical phenomena using vertex shaders and the DirectX 10-style stream output capabilities of the GPUs. The tables turn in dramatic fashion here, as the GeForces dominate. Adding a second video card isn’t much help in this test, and may be a hindrance, as it is in the case of the GeForce GTX 580.

Geometry processing throughput

The most obvious area of divergence between the current GPU architectures from AMD and Nvidia is geometry processing, which has become a point of emphasis with the advent of DirectX 11’s tessellation feature. We can measure geometry processing speeds pretty straightforwardly with a couple of tools. The first is the Unigine Heaven demo. This demo doesn’t really make good use of additional polygons to increase image quality at its highest tessellation levels, but it does push enough polys to serve as a decent synthetic benchmark.

We can push into even higher degrees of tessellation using TessMark’s multiple detail levels.

These results offer us two clear outcomes. One, doubling up on GPUs can nearly double your geometry processing throughput, so long as the driver can properly balance the load between the GPUs, as it does here. Two, the Radeons are a bit slower than the competing GeForces at lower tessellation levels, and they fall far behind once the polygon detail gets too high. That’s an architectural difference that may or may not matter, depending on how game developers make use of tessellation.

HAWX 2

We already commented pretty extensively on the controversy surrounding tessellation and polygon use in HAWX 2, so we won’t go into that again. I’d encourage you to read what we wrote earlier, if you haven’t yet, in order to better understand the issues. We have included scores from the HAWX 2 benchmark in our tests below for your information, but be aware that this test’s results are the subject of some dispute.

Clearly, the large number of polygons HAWX 2 is pumping out plays to the GeForces’ current architectural strengths. With that said, dual Radeon HD 6850s achieve very nice frame rates here at the highest display resolution you’re likely to encounter on a single display.

Lost Planet 2

Our next stop is another game with a built-in benchmark that makes extensive use of tessellation, believe it or not. We figured this and HAWX 2 would make a nice bridge between our synthetic tessellation benchmarks and the rest of our game tests. This one isn’t quite so controversial, thank goodness.

This benchmark emphasizes the game’s DX11 effects, as the camera spends nearly all of its time locked onto the tessellated giant slug. We tested at two different tessellation levels to see whether it made any notable difference in performance. The difference in image quality between the two is, well, subtle.

The Radeon HD 6850 CrossFireX pair performs quite well, shadowing the GeForce GTX 580. Meanwhile, the GTX 460 768MB SLI setup runs into problems, likely because it’s running low on local memory. The effective memory size of a multi-GPU pairing is actually a little bit less than that of a single card, because memory can’t be shared between the two cards and multi-GPU synchronization requires some additional overhead. At very high resolutions like this one—the sort, you know, for which you’d want a multi-GPU rig—768MB of memory appears to be rather cramped.

Civilization V

In addition to the compute shader test we’ve already covered, Civ V has several other built-in benchmarking modes, including two we think are useful for testing video cards. One of them concentrates on the world leaders presented in the game, which is interesting because the game’s developers have spent quite a bit of effort on generating very high quality images in those scenes, complete with some rather convincing material shaders to accent the hair, clothes, and skin of the characters. This benchmark isn’t necessarily representative of Civ V’s core gameplay, but it does measure performance in one of the most graphically striking parts of the game. As with the earlier compute shader test, we chose to average the results from the individual leaders.

This may be the clearest case in a real game where the formidable peak arithmetic rates of today’s Radeons actually pay off.

Another benchmark in Civ V focuses, rightly, on the most taxing part of the core gameplay, when you’re deep into a map and have hundreds of units and structures populating the space. This is when an underpowered GPU can slow down and cause the game to run poorly. This test outputs a generic score that can be a little hard to interpret, so we’ve converted the results into frames per second to make them more readable.

The GTX 580 SLI setup struggles a bit here, only delivering a couple more frames per second than a single GTX 580. Meanwhile, the GTX 460 768MB SLI config looks to be running low on memory again. The Radeons have no similar issue with multi-GPU scaling, and the dual 6850 setup proves nearly as fast as one GTX 580.

StarCraft II

Up next is a little game you may have heard of called StarCraft II. We tested SC2 by playing back a match from a recent tournament using the game’s replay feature. This particular match was about 10 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps.

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

Generally speaking, the Radeons tend to outperform similarly-priced GeForces in SC2. The GTX 460 768MB card, in both single and dual configs, really struggles once again.

Battlefield: Bad Company 2

BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Here’s a case where the GTX 460 768MB doesn’t appear to suffer too much as a result of its lower video memory size, but the relative performance results look familiar: dual GTX 460 768MB cards in SLI aren’t quite as fast as a single GTX 580, but dual 6850s are. And dual GTX 580s in SLI are the fastest of all the solutions tested.

Metro 2033

We decided to test Metro 2033 at multiple image quality levels rather than multiple resolutions, because there’s quite a bit of opportunity to burden these GPUs simply using this game’s more complex shader effects. We used three different quality presets built into the game’s benchmark utility, with the performance-destroying advanced depth-of-field shader disabled and tessellation enabled in each case.

The Radeons become a little stronger as the detail level increases in this game. In the end, the GTX 580 and the 6850 CrossFireX solutions are essentially tied, while the GTX 460 768MB SLI setup once more lags behind. The GTX 580 SLI rig remains in a class of its own.

Aliens vs. Predator

AvP uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling along the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards.

At this point in the review, you’re growing veeery sleepy, as you realize the results from one game to the next all follow the same basic trends. Sip some coffee and try to pay attention, slacker.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2’s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

DiRT 2: DX11

Once more, the GTX 460 768MB SLI setup performs quite well at 1920×1080 but runs out of room at 2560×1600, especially with DirectX 11.

The value proposition

By now, we’ve craftily lulled you into a benchmark results trance. We’ll try to awaken you with a look at the value propositions presented by each of these solutions. We’ll start by compiling an overall average performance index, based on the highest quality settings and resolutions tested for each of our games, with the notable exception of the disputed HAWX 2. We’ve excluded directed performance tests from this index, and for Civ V, we included only the “late game view” results.

The GTX 460 768MB SLI’s struggles at higher resolutions really hurt it in our overall index, an unavoidable consequence of our decision to emphasize high resolutions. Keep in mind that, at lower resolutions, the 768MB cards often perform relatively well.

With this performance index established, we can consider overall performance per dollar by factoring price into the mix. Rather than relying on list prices, we grabbed our prices off of Newegg. Because of the holiday sales going on, we decided to grab our prices last week, right before the Black Friday/Cyber Monday sales took hold and warped things a bit. Generally, for graphics cards with reference clock speeds, we simply picked the lowest priced variant of a particular card available. For instance, that’s what we did for the GTX 580. For the cards with custom speeds, such as the Asus GTX 460 768MB and 6850, we used the price of that exact model as our reference.

| AMD card | Price | Nvidia card |
| --- | --- | --- |
| | $169.99 | GeForce GTX 460 768MB |
| Radeon HD 6850 | $189.99 | |
| | $214.99 | GeForce GTX 460 1GB 810MHz |
| Radeon HD 6870 | $249.99 | |
| Radeon HD 5870 | $279.99 | |
| | $299.99 | GeForce GTX 470 |
| | $399.99 | GeForce GTX 480 |
| | $499.99 | GeForce GTX 580 |
| Radeon HD 5870 2GB | $499.99 | |
| Radeon HD 5970 | $499.99 | |

A simple mash-up of price and performance produces results like these:

Generally, the cheaper solutions tend to do best in terms of raw performance per dollar, but the 6850 CrossFireX config lands in third place, surprisingly enough.
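
The math behind that mash-up is simple enough. Here’s a minimal sketch: the card prices come from the table above, but the performance index values are placeholders rather than our measured results.

```python
# Card prices are from the table above; the performance index values are
# placeholders, not our measured results.
configs = {
    "Radeon HD 6850":             (33.0, 189.99),
    "GeForce GTX 460 768MB":      (28.0, 169.99),
    "Radeon HD 6850 CrossFireX":  (60.0, 2 * 189.99),
    "GeForce GTX 460 768MB SLI":  (46.0, 2 * 169.99),
    "GeForce GTX 580":            (61.0, 499.99),
    "GeForce GTX 580 SLI":        (95.0, 2 * 499.99),
}

# Rank by average FPS per dollar of graphics hardware, best value first.
for name, (fps, price) in sorted(configs.items(),
                                 key=lambda kv: kv[1][0] / kv[1][1],
                                 reverse=True):
    print(f"{name:27} {fps / price:.3f} FPS per dollar")
```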

We can get a better sense of the overall picture by plotting price and performance on a scatter plot. On this plot, the better values will be closer to the top left corner, where performance is high and price is low. Worse values will gravitate toward the bottom right, where low frame rates meet high prices.

The various solutions tend to cluster along a line extending from the bottom left to the top right, which suggests prices are usually pretty rational and track fairly well with performance. We do have some stand-outs among the crowd, including, most prominently, our Radeon HD 6850 CrossFireX setup. The 6870 CrossFireX setup looks strong, too, as does the GeForce GTX 460 1GB SLI config. The weakest values of the group include the single and dual-GPU versions of the GeForce GTX 460 768MB, for reasons we’ve already discussed, and the pricey, older Asus ROG Matrix Radeon HD 5870 2GB card. The GTX 580 SLI config isn’t a great value, but it is truly in a class by itself, in terms of both performance and price, among the contenders here.

Another way we can consider GPU value is in the context of a larger system purchase, which may shed a different light on what it makes sense to buy. These multi-GPU solutions are relatively expensive, so we’ve paired them with a proposed system config that’s very similar to the hardware in our testbed system.

| Component | Item | Price |
| --- | --- | --- |
| CPU | Intel Core i7-960 | $579.99 |
| Cooler | Thermaltake V1 | $51.99 |
| Motherboard | Gigabyte GA-X58A-UD5 | $289.99 |
| Memory | 6GB Kingston HyperX DDR3-1600 | $104.99 |
| Storage | Western Digital Caviar Black 1TB | $89.99 |
| | Asus DRW-24B1ST | $19.99 |
| Audio | Asus Xonar DG | $29.99 |
| PSU | PC Power & Cooling Silencer Mk II 750W | $119.99 |
| Enclosure | Corsair Graphite Series 600T | $159.99 |
| Total | | $1,446.91 |

That system price will be our base. We’ve added the cost of the video cards to the total, factored in performance, and voila:

Pairing a nice system with a powerful graphics solution is, in fact, quite rational from a price-performance perspective. We can even justify the GTX 580 SLI setup, since it vastly improves performance without doubling the system cost. The Radeon HD 6850 CrossFireX solution again looks quite good here, although dual GTX 460 1GB cards do, too.
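
Folding the rest of the system into the price side of the equation is a one-line change. Another rough sketch, again with placeholder performance numbers:

```python
BASE_SYSTEM = 1446.91  # the Core i7-960 build priced above, sans graphics

# Placeholder performance index values again; only the card prices are real.
solutions = {
    "Radeon HD 6850 CrossFireX": (60.0, 2 * 189.99),
    "GeForce GTX 580":           (61.0, 499.99),
    "GeForce GTX 580 SLI":       (95.0, 2 * 499.99),
}

for name, (fps, gpu_price) in solutions.items():
    total = BASE_SYSTEM + gpu_price
    # Even doubling the graphics budget doesn't come close to doubling the
    # total system cost, which is why big SLI rigs can still look rational.
    print(f"{name:27} ${total:,.2f} total, {1000 * fps / total:.1f} FPS per $1,000")
```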

Of course, the results would look different with a less expensive system, so your mileage may vary.

Conclusions

We came into this review with multiple purposes, and so we leave it confused and bewildered. Or maybe not entirely. I believe we’ve established several things, so let’s consider them one by one.

First, the easiest call we can make is that the Radeon HD 6850 CrossFireX setup does indeed offer performance comparable to a single GeForce GTX 580 at a lower price. That combo adds up to a very nice value proposition, provided you are willing to sacrifice four expansion slots’ worth of space in your PC for graphics cards—and provided you’re willing to live with the occasional compatibility and performance pitfalls one will inevitably encounter with a multi-GPU solution.

Not that we’d buy a pair of 6850s right now, honestly. The Radeon HD 6900 series should be hitting the market any day now, and we’re not entirely comfortable recommending a dual-6850 purchase with higher-end single cards so close.

Fickle, aren’t we? But the market’s changing, folks.

Next, we’ve pretty much established that the GeForce GTX 460 768MB’s smaller memory size can be a hindrance, especially in SLI where one would expect to be able to run games at higher resolutions and detail levels. With that said, our extensive use of the 2560×1600 resolution really hurt the GTX 460 768MB. In the cases where we tested at multiple resolutions, dual GTX 460 768MB cards in SLI proved to be just slightly slower than a GeForce GTX 580—for substantially less money. If you know for certain you want to drive a two-megapixel display and nothing more, a couple of GTX 460 768MB cards might not be a bad proposition. Even so, you may find your performance hindered by the smaller amount of video RAM at higher quality levels, as we did at 1920×1080 in Metro 2033.

Finally, the performance of two GeForce GTX 580 cards in SLI is well and truly bitchin’. The power draw and noise levels are at least reasonable for this class of setup, too. There are few substitutes for a GTX 580 SLI rig at this very moment, although as we’ve noted, some potential competition from AMD is imminent. Beyond that, an intriguing mystery package arrived unexpectedly in Damage Labs just today. Watch this space for even more of the same, hopefully with a tighter focus, in the coming weeks.

Comments closed
    • rUmX
    • 9 years ago

    Surprised the 460 SE SLI was not tested. Too bad. I would have really enjoyed the beating it would have taken. But it wouldn’t take last place since that’s reserved for the pathetic <1GB card from team green.

    • can-a-tuna
    • 9 years ago

    You should disqualify nvidia’s HAWX results:

    http://www.hardwarecanucks.com/news/video/amd-nvidia-spat-msaa-cheating-hawx-2/

      • PixelArmy
      • 9 years ago

      Or… just do some /[

    • Silus
    • 9 years ago

    Nice review Scott!

    Although I would never run SLI or Crossfire setups (I prefer single card setups), those dual 6850s and GTX 460s 1 GB do indeed look awesome from a price/performance perspective.

    And that mystery package is no doubt the GTX 570 🙂

    • muyuubyou
    • 9 years ago

    TL;DR: no

    • flip-mode
    • 9 years ago

    Wow, um, I just noticed some price changes since you priced the cards:

    The GTX 470 you linked to is now $70 more expensive, at $369.
    5870 has gone up by $10 to $289.
    GTX 480 has gone up by $40 to $439.

    Happy Holidays?

    The good news is there is a GTX 470 for $255 now:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814125321&cm_re=GTX_470-_-14-125-321-_-Product

    • Chrispy_
    • 9 years ago

    I really like the SLI tests hammering the GTX460 768 here.

    You wouldn’t /[

    • KoolAidMan
    • 9 years ago

    I would like to know what sort of performance increase you generally see in Starcraft 2 with all settings being the same but with FSAA disabled.

    Thanks

    • Jambe
    • 9 years ago

    Gosh darn reply. Borked.

    • can-a-tuna
    • 9 years ago

    l[

      • Waco
      • 9 years ago

      I can’t believe people still think that TR has a bias towards any CPU/GPU/[insert component here] brand in particular.

        • mdk77777
        • 9 years ago

        q[

          • flip-mode
          • 9 years ago

          Are you seriously so damn personally offended that Nvidia has the fastest GPU, and that pairing two of them together results in the fastest multi-GPU, and that the fastest always costs the mostest? How do you feel about Intel’s Core i7 CPUs and the fact that Intel is asking $1000 for one small piece of silicon, in contrast to Nvidia asking $499 for an entire video card with over a gig of fast memory and an enormous piece of silicon and a fairly technologically advanced heatsink and so on? You’re not at all upset by the continued high prices of 5870s, 5850s, 6870s, and 6850s? Go back under your bridge.

          If it’s not the Nvidia fanboys making me want to buy Radeons, it’s the Radeon fanboys making me want to buy Geforces.

            • mdk77777
            • 9 years ago

            No, I am not offended. I just like to see statistics that mirror reality.
            truncating the bottom 99% of a result to show the “huge difference” in the remaining 1% is absurd. See my following post on the guru3d testing. Yes the 580 SLI wins, but the more realistic analysis of value for performance gain is presented. I have no problem with anyone spending as much at they want to get the very best.

            • Voldenuit
            • 9 years ago

            The take home lesson is that SLI and Crossfire have the biggest payoff at high resolutions (assuming the card has enough RAM, which the GTX 460 768 MB doesn’t).

            That’s not news to anyone that’s been following tech for the past few years.

            • Voldenuit
            • 9 years ago

            It’s a conspiracy, I tells ya!

          • Lans
          • 9 years ago

          Wow what a long saga! I guess I’ll reply to this one as there is something I want to reply to most directly. And that is I personally like how Hexus.net does the value calculations, one example:

          http://www.hexus.net/content/item.php?item=27307&page=18 but there is always problems with reducing a whole data set into a single number. I also thinks good and healthy to allow readers to provide constructive criticism but I agree with others that your criticism is so far unsubstantiated and thus not very constructive. For example, if you bother to check:

          GTX 480/470 (March 31, 2010): http://www.techreport.com/articles.x/18682/4
          HD 6870/6850 (October 25, 2010): http://www.techreport.com/articles.x/19844/7
          This article (November 30, 2010): http://www.techreport.com/articles.x/20043/2

          The system components and games are largely the same. There is minor changes over time but I don't think it'll hold much water to argue for pre-determined results. One very good reason why to keep system and benchmarks the same is so you can have comparable results. While I do understand that people who might be interested in GTX 460 SLI/HD 68x0 CF might want to a system more like this, for instance: http://www.techreport.com/articles.x/19868/6 I certainly would like to see 3 systems (low, mid, and high) for all benchmarks because I want more information to make trade off when building systems but that triples the work for each review...

      • PixelArmy
      • 9 years ago

      This post pretty much confirms that TR quote.

      (https://techreport.com/articles.x/19934/8). The answer is BOTH! Either way you’re going to be called biased. So you might as well just include more info. Alternatively they can just keep removing benchmarks until we have nothing to read about. Hell, let’s remove all the DX11 games.

      • mdk77777
      • 9 years ago

      http://www.guru3d.com/article/geforce-gtx-580-sli-review/6

      The 580 sli still wins every test, but the margin is much smaller. Hence the average user quickly sees that the price to achieve the improved performance is disproportionate to the improvement. The performance index that all other computations are based on obviously favors the very highest end cards. Using this as the starting index skews all subsequent tests. Really, there are over a dozen ways this testing was intentionally skewed. No, not stupid, just able to analyze the very basics of statistics and how they can be manipulated. If you don't care how someone determines value, well I have some stocks that I would like to sell you.

        • mdk77777
        • 9 years ago

        q[

          • flip-mode
          • 9 years ago

          If I was Scott I’d simply ban you. It’s one thing to disagree with his testing methods and an entirely different thing to make unfounded accusations, but such is the behavior of rabid fanboys. I guess part of the price Scott pays for running a public website with fully disclosed testing methods and then daring to publish any kind of analysis or render any kind of opinion is that he has to not only put up with, but let his website host very highly biased and groundless attacks such as yours.

            • mdk77777
            • 9 years ago

            If he were honest, he would just admit that this is what the article set out to prove.

            Again, I have no problem with someone trying to prove a particular point of view.

            I just think it should be clearly stated.

            Banning someone for showing skewed methods and biased methods would only confirm my point and not refute it.

            Really, nothing rabid about my statements at all. (I have made no unfounded attack on either Scott or you)

            I have only pointed out the methods that were used to achieve the desired result.

            Why have you not made any factual statement about the actual tests instead of an ad hominem attack?

            • flip-mode
            • 9 years ago

            If you were supplying anything other than opinion, i.e., if you were making specific claims, then it would be possible for me to respond in kind. I can respond with an opposing opinion, which I essentially have done, but what specific points have you made?

            q[

        • paulWTAMU
        • 9 years ago

        what other settings justify multi-GPU setups?? There’s not a lot between 1920×1200 and 2560×1600…and most mid range cards can handle most games at 1920…this is a niche test but an interesting one. But that’s just the nature of the beast; how many people spend 500+ dollars on GPUs to begin with?

    • phez
    • 9 years ago

    I’m not really sure how useful this article is if you only test just a uber high resolution that most people don’t run at.

    The 768mb 460, while on its way out, can still hold its own at lower resolutions; considering its price point. But this article makes it look totally worthless even in SLI.

      • rUmX
      • 9 years ago

      Is there any point in buying a gaming card with 768mb of VRAM in 2010 (soon to be 2011)?

    • Arag0n
    • 9 years ago

    From my point of view the CPU used in the system build it’s out of place.

    1st, you won’t buy the best CPU for not the best GPU. So, if someone buys a radeon 6850 will surely buy some CPU in the range of 180-260$.

    2nd, if you buy a 580 SLI system, you need better CPU, adding that difference to the system total price and not just the GPU price.

    3th, as far as more constrained is the GPU in the system (extremly high resolutions), less important becomes the CPU. So if the main point of a SLI 580 it’s to get acceptable frame-rates at 4MP displays, and not as much frames as you can at 2MP or less resolutions, maybe the CPU doesn’t needs to be so good neither.

    So, in conclusion, you need to pair a system that it’s properly fitted for every GPU. Where the CPU doesn’t become a big constraint for the card performance, but you aren’t expending a lot of money into CPU horsepower that goes god-knows where. This should be the proper way to compare systems and not just setting up an standart high-end, high-priced system and fit every card inside to see what’s the result.

      • Voldenuit
      • 9 years ago

      I would like to see midrange card reviews paired with slower CPUs* to see what the scaling and (more importantly) real world performance will be like.

      How many people run a GTX 460 768MB on a Corei7 965?

      Thought so.

      * When reviewed separately, not as part of a comparison article such as today’s.

        • green
        • 9 years ago

        l[

          • StuG
          • 9 years ago

          Actually alot of people I know have extremely high end i5’s with the middle ground GPU’s. They can justify a strong CPU as it lasts them 3-5 years easy, where as a GPU will only last them 2ish and they can get really good details with a middle ground GPU that they upgrade every year or so.

            • Arag0n
            • 9 years ago

            But from my point of view that it’s not the reason to pair a high-end CPU with almost any GPU. I would agree with your friends point of view because I usually do the same but I never buy the high end but I also sell on eBay and buy again my graphic card every 1-2 years. I get subsidized the 60-75% of the price of my next card and I can keep the specs up without breaking my bank account.

            But the point is, that with the proper build, you can do the same with the CPU. That’s why I choosed AMD, because I was sure that by the time beeing I was going to be able to upgrade my system to at the next generation, and then the same things come. You can have a middle-pack CPU that you can sell later to buy a newer one with discount without having to change the cooler, the motherboard or anything.

            Every 4-5 years I need to do the big step and sell motherboard-ram-cpu anyways. My worst choice was when I paired a Pentium D socket 775 with DDR2 long ago and by god knows why, Intel breaked compatibility inside the same socket to newer CPU’s, and I couldn’t upgrade from my Pentium D to a Q6600. That’s why I moved to AM2+ with a Phenom 9950 that should be updated in this year or next to a newer Phenom II x6 or similar.

      • Meadows
      • 9 years ago

      3th?

    • kilkennycat
    • 9 years ago

    Dual GTX560’s with 1GB (or greater) should prove a very interesting future comparison…. but I’m getting ahead of myself………..

      • Voldenuit
      • 9 years ago

      Can I borrow your DeLorean for the weekend? ^_~

    • Voldenuit
    • 9 years ago

    If you’re testing an overclocked card, it should be named as such in the graphs. So that ASUS EAH6850 DirectCU card should be labelled as ‘Radeon HD 6850 790 MHz’ in the graphs.

    Also, what’s with all the asterixes? They only mean something if it’s easy to figure out – say, with a legend? You should have a legend with each graph saying ‘* – custom cooler’ for comprehension’s sake.

      • flip-mode
      • 9 years ago

      page 3, paragraph 1, sentence 2

        • Voldenuit
        • 9 years ago

        Yes, I know it’s there (otherwise I wouldn’t have been able to say what legend I recommended, right?).

        My point was that having to comb the text to find the meaning of the graphs is counter-productive to their informative nature. It would have been trivial to add a single line of information to a handful of graphs, and made them more legible as a result.

    • flip-mode
    • 9 years ago

    q[

      • Goty
      • 9 years ago

      I don’t see why they don’t just overclock the cards manually. The 460s and the 6800s all overclock exceptionally well and exceptionally easily, so there really isn’t any reason not to do things this way. The argument that whatever overclocks achieved by the reviewer aren’t necessarily representative of the average obtainable by the consumer is weak at best and nonsense at worst, unless they’re being sent “special” samples by manufacturers.

      Honestly, out of the people who read enthusiast hardware sites regularly, who is going to be stupid enough to pay the premiums for an overclocked card when you can obtain substantially better overclocks on your own in just a few minutes anyway?

        • HurgyMcGurgyGurg
        • 9 years ago

        Well the only valid argument for overclocked cards is that they in general have to represent good silicon. You can also get stuck with a card that won’t push 20 MHz over stock.

        They tend to have better coolers too.

          • Goty
          • 9 years ago

          Paying for the better cooler is all well and good, but paying for any kind of overclock is just moronic. If a card with an aftermarket cooler

          As for cards that overclock either exceptionally well or exceptionally bad, those are going to be outliers and actually less representative of the cards available to consumers on average than the likely result achievable by any reviewer.

        • Ryu Connor
        • 9 years ago

        Why do people have such a hard time with such a simple concept?

        This is OC 101 stuff and people just fail at it hard.

          • flip-mode
          • 9 years ago

          Failures abound. Others fail to recognize simple concepts like added value. An extra 20% clockspeed for 13% more money? Bigwin. That’s what the GTX 460 1GB FTW gives you.

          How bout an extra 2% clockspeed for an extra 11%? Failhard. That’s what Asus’ Radeon gives you (just looking at the OC, not the other aspects of the card such as the nice cooler).

           Let’s not even get into the fact that even stock clocked 6850s are all running 10% over MSRP right now, and the fact that the GTX 460 FTW yields 6870 performance for $50 less than the typical going rate of any 6870 that’s in stock.

          The other point that I’m making is that calling 15 MHz on a reference 775 MHz an “overclock” is frickin pathetic. Never mind the whole “manual or factory overclock” argument – 15 MHz qualifies only as a marketing gimmick, not a respectable factory overclock. Heck.

            • Goty
            • 9 years ago

            I understand your argument that the overclock on the 6850 is basically worthless, but you’re missing the point of my rebuttal: EVERY factory overclock is worthless. You mention 20% greater performance for only 13% more money, which would be a great deal if there was no other way for me to get the same improvement, but I can get that same 20% increase in performance for FREE from the vast majority of stock-clocked cards.

            • flip-mode
            • 9 years ago

            So your opinion that EVERY factory overclock is worthless carries more weight than someone else’s opinion that 20% more performance, GUARANTEED AND WARRANTIED, is worth 13% more price? Um, OK, I guess I hadn’t looked at it from that perspective, but now that I do, I can see that that you’ve hit on a real nugget of wisdom there!

            • Goty
            • 9 years ago

            Your unwarranted and childish condescension aside, I’ve never seen an RMA refused for a card that was overclocked within reason and with the exception of products that are never released in any meaningful quantities, you can’t buy factory overclocked cards that deviate significantly from the average achievable clockspeeds.

          • Goty
          • 9 years ago

          What exactly am I missing? Please inform me. From my standpoint, paying for extra performance that I’m almost guaranteed to be able to get on my own is something only a moron would do, so the inclusion of any kind of factory overclocked card /[

            • NeelyCam
            • 9 years ago

            Now you’re just getting arrogant.

            I think the overclock+warranty for a bit of extra cash makes more sense. Then again, I’m willing to pay a bit extra for more reliability and less hassle. The time when I spent hours looking for a site that sells something $10 cheaper than newegg is past. Time is money. And so is peace of mind.

            • Ryu Connor
            • 9 years ago

            It’s really hard to respond to this question. Not because there isn’t an answer, but because my natural reaction is to mock you. That isn’t conducive; you’ll just turtle up and refuse to listen. Of course I must weigh if you’re genuinely receptive to the truth. Sometimes people get these ideas in their head – that aren’t right – and even if I present it with a cherry on top you’ll refuse to accept it. The reasons for that can range from pride to stupidity.

            The real short of it is that you don’t understand overclocking. You’re also applying the term overclocking to something the vendors aren’t doing. Factory overclocking is a nice slang term that should have never crawled into the vernacular.

            To distill further:

            Overclocks are not guaranteed.

            Do not confuse that to mean you cannot warranty a card that fails that you had OC’d. That means that given clock speeds beyond the vendor range is not guaranteed.

            You have not been in this industry or overclocking very long if you haven’t seen why that’s a bad thing.

            1. I’ve seen working OC’s fail due to platform changes.

            2. I’ve seen working OC’s fail due to thermal changes outside a specific range (such as from dust build up or changes in air flow).

            3. I’ve seen working OC’s fail and no matter what you do your only solution for stability again is to drop back to the stock clock.

            4. I’ve seen lemon OC’s. I’ve seen the infamous Celeron 300a not make it to 450. Just because you’ve seen a hundred other people hit the speed doesn’t mean that your chip will. You just might get the one crappy die from the wafer.

            None of the above examples represent dead cards. You can’t send it back and say, I want a new one even though it works. You’re stuck with this card that is slower than you want.

            Meanwhile that extra 13% that netted you a chip that is *binned* at 20% faster, you can RMA it from not working at that clock speed anymore. You can RMA it for crashing at that speed. You’re buying a guarantee of that clock and of stability at that speed. Real overclocking does not provide that, period. There is no defensible discussion that can be had against that. It’s a cold hard fact that can’t be refuted except with irrationality.

            Generally at this point people start to play daft to support their position, like claiming they were only arguing that a dead OC’d card can receive an RMA. Nobody is fooled by that. Most people also go well out of their way to avoid discussing the reality of stability and clock-speed promises under varying operating conditions. Worse yet, some resort to anecdotes in an effort to dodge the points above. Nobody who understands overclocking is that stupid. I would suggest not making that argument, unless ignorant and daft is really the sort of name you want for yourself around here.

            This is why I get agitated on this subject. This is literally OC 101; these are basic concepts the new generation of enthusiasts should know. Overclocking has become so easy compared to the old days that nobody understands the basics anymore. It’s a crying shame, and I’m tired of reading the immense ignorance of it all. When challenged on it, people just show their butts instead of admitting they were wrong to begin with. You can’t be a pundit and expect to be taken seriously if your entire platform is fundamentally wrong.

            • Jambe
            • 9 years ago

            I think you’re being a bit pedantic about it. One needn’t understand or appreciate overclocking to make an informed decision about which graphics card to buy. Call it a banana if you want to: if you see a card that is guaranteed to be x percent faster than its normal brethren, and that extra speed is a relative bargain and/or is attractive from a longevity point of view, then you’ll get that card. All one needs to figure this out is middle-school algebra (or a helpful website, friend, relative, etc.); a rough sketch of that arithmetic follows at the end of this comment.

            There are two points of view being contrasted here (variably), but neither is bad or wrong.

            Side 1: I’m relatively tech savvy and can OC and troubleshoot a GPU, so I’ll get a baseline model and OC it. If it OC’s and is stable, I’m all good. If it doesn’t OC, I’ll just have to live with it.

            Side 2: I’m not savvy enough to OC/troubleshoot (or I am but I don’t wanna take the time or worry about whether it’ll OC worth a hoot) so I’ll hunt Newegg for a good deal on a card that I’m certain will run at its rated speed (it may be “factory OC’d”, it may not).

            These viewpoints aren’t contradictory or even mutually exclusive (one might OC components in a PC or two at home but not those of a client, friend, or relative).
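
To make that “middle-school algebra” concrete, here is a minimal Python sketch of a performance-per-dollar comparison. The 20%-faster / 13%-pricier split comes from the exchange above; the absolute prices and card names are made-up placeholders, not figures from this review.

# Hypothetical numbers: a reference-clocked card vs. a factory-overclocked bin.
def perf_per_dollar(relative_perf, price):
    """Relative performance (reference card = 1.0) divided by price."""
    return relative_perf / price

reference  = {"name": "Reference card",  "perf": 1.00, "price": 200.0}
factory_oc = {"name": "Factory OC card", "perf": 1.20, "price": 226.0}  # +20% perf, +13% price

for card in (reference, factory_oc):
    value = perf_per_dollar(card["perf"], card["price"])
    print(f"{card['name']}: {value * 100:.2f} performance points per $100")

# If the factory-OC card's perf/$ comes out higher, the guaranteed clock is the
# better per-dollar buy; whether a DIY overclock beats both depends on the
# (unguaranteed) speeds your particular chip happens to reach.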

            • poulpy
            • 9 years ago

            Agreed, yet it invariably ends up in flamewars… http://bit.ly/foefo8 Peace, people, peace! Think of teh children FFS!!1!

            • Ryu Connor
            • 9 years ago

            I think what you’ve posted is pretty reasonable, as it assumes that people are making informed decisions: the person who realizes OC’s are inherently unpredictable takes the risk, and the people who can’t afford unpredictability go for a binned version.

            That’s not what’s coming out of people’s mouths, nor do they express any understanding of what you’re saying.

            They assume the latter option doesn’t exist and that the former has no unpredictability at all. That’s not being pedantic; that’s them being ignorant.

            • Chrispy_
            • 9 years ago

            Bingo.

            Being pedantic is a requirement when feeding the trolls. You should never feed the trolls in the first place, but once the flames start, feed them irrefutable facts backed by sound reasoning to put out the fire.

            Pedantic, yes.
            Necessary, yes.

            • flip-mode
            • 9 years ago

            +1 You’ve done a good job explaining things. Unfortunately, this topic will probably continue to come up in every single video card review.

            Historically, I think it’s unprecedented to see vendors clock their cards 20% higher than reference. Maybe it has happened before, but I don’t remember it. I suspect Nvidia had a hand in officially sanctioning such large clock-speed deviations. On one hand, cards like the GTX 460 FTW are distorting my judgment, making even 5% or 10% bumps seem weak; by that distorted standard, a respectable factory “overclock” should be at least 5 or 10 percent, and even 5 seems borderline shameful. On the other hand, the paltry 2% bump that the Asus card offers is truly weak in any realistic frame of reference. I’m more inclined to buy a reference-clocked card just because I find 2% mildly insulting. What saves the Asus card is that they really aren’t charging for it, considering the fancy cooler and the fact that ALL 6850s are currently selling at inflated prices.

            • Jambe
            • 9 years ago

            I just think the argument is a bit silly from either angle tbh.

            What’s the worst case scenario for a person like Goty? A card might not overclock /[

            • Goty
            • 9 years ago

            *facedesk*

            I really think the problem here is that you can’t understand even the most basic of counterarguments. Nothing you’ve said is incorrect; you just can’t seem to wrap your brain around my point of view.

            • flip-mode
            • 9 years ago

            That’s because he’s making logical arguments and you’re giving opinions. Nothing you’ve actually posted is anything other than opinion. He hasn’t shared much of an opinion of his own; he’s only shared observations of fact. Example, in paraphrase:

            You: “Factory overclocks are worthless”.

            Him: “Factory overclocks are warrantied at the vendor’s specified clockspeed”

            See? Yours is an opinion. His is a fact. Do you have any facts to share?

        • LawrenceofArabia
        • 9 years ago

        Since this is a website for PC enthusiasts, I really must agree with testing cards overclocked in addition to at the clocks they came with. Yes, there are many people who are going to buy these cards and not touch the clocks, so the cards that come higher-clocked from the factory should still be included. But both the Barts and GF104 GPUs have bunches of overclocking headroom available to the enthusiast user.

        I feel that performance data for user overclocks is just as important to the enthusiast as performance data at 2560×1600, a resolution that only said enthusiasts are really going to game at. I personally don’t expect Damage to spend weeks getting every last clock possible out of a card, but knowing things like what performance gains are available, how the coolers handle the increased load acoustically, or how well a card stays overclocked in a dual-GPU config would be valuable. Information like the Cypress overclocking ceiling I have been forced to gather from sources other than The Tech Report.

        No, overclocking isn’t guaranteed, but there is absolutely extra performance lurking in this generation of GPUs; show it to us as you would other aspects of the chips.

          • Voldenuit
          • 9 years ago

          q[

            • LawrenceofArabia
            • 9 years ago

            Read my post: I wasn’t talking about how high you can clock the cards. I was talking about expected performance gains and other factors like cooler noise and temperatures, which a review website is far better placed to measure because of the more scientific nature of its data.

      • can-a-tuna
      • 9 years ago

      That’s because TR wants you to think so. Read my post, sucker, and learn.

      • Krogoth
      • 9 years ago

      Again, why does it matter?

      Factory OC cards have always been a marketing scam. The only victims are epenis junkies with more money and time than sense. Factory OC cards typically fail at their “rated speed” after a year or so of use. You are then forced to revert to stock speeds or hope the vendor still has stock of your card and is willing to honor its “lifetime warranty”.

      The vast majority of video card buyers get stock units from both camps. The small minority that want an extra boost aren’t shy about doing manual OC’ing to get it. They are also aware of the risks involved and are willing to take them.

        • derFunkenstein
        • 9 years ago

        How is it a marketing scam? Do the advertised clocks not actually exist? Otherwise, you know exactly what you’re getting: a little faster for a little more money.

          • Krogoth
          • 9 years ago

          I have seen and heard too many horror stories from owners of factory OC cards. They are a risky buy if you want long-term usage. IMHO, they aren’t worth the $$$ and hassle for some extra epenis points.

          Lifetime warranty = lifetime of the product line. Factory OC parts are usually a limited batch, and the vendor will most likely be out of stock by the time your card starts to become flaky. They will likely honor your warranty by giving you a stock unit, or a newer equivalent if they feel generous. The worst case is the cold shoulder: “Sorry, we’re out of stock on that item.”

          They are nothing more than a marketing scam that lures epenis types into thinking they are getting a “bargain”.

        • dmjifn
        • 9 years ago

        > OC typically fail at their “rated speed” a year or so
        > after usage.

        Do you have more info on this? I’m curious because I bought an MSI GTX 460 OC card. I’m not expecting the whopping 50MHz OC to result in failure, but I also hadn’t heard of any warranty-fulfillment shenanigans caused by a factory OC.

          • flip-mode
          • 9 years ago

          No, he doesn’t, because he made it up. Seriously. That’s his shtick.

            • mdk77777
            • 9 years ago

            Read the FOLDING forums. Factory OC cards fail there on a fairly regular basis.
            It is an extreme use case, but it does show that factory OC cards fail much faster than non-OC cards.

            • flip-mode
            • 9 years ago

            Here’s what Krogy said: q[

            • Krogoth
            • 9 years ago

            Do some research. The results might shock you. There’s a fair share of such stories within the TR forums.

            • flip-mode
            • 9 years ago

            Don’t tell me to do your research for you. How about you do your own research before you start making claims? You made the claim, so unless you’ve totally made it up, link me some corroboration. Otherwise, you’re up to your usual nonsense: portraying your baseless opinions as common knowledge.

            • Ryu Connor
            • 9 years ago

            1. Folding forums are inherently a limited sample size.
            2. Folding forums are a limited sample whose members push their hardware 24/7 at 100% load.
            2a. Folding forums are a limited sample whose members push their hardware and sometimes cut corners (“crate” computing). If you can’t see how that might produce exaggerated failure numbers, I can’t help you.
            3. You’re culling data from folding forums based on memory, not from an empirical collection stored in a database or spreadsheet.
            3a. Even if you were empirically collecting the failure data, you still lack the success rates for the same model cards that do not fail!
            4. You can’t even officially detail why any card you’ve read about died. You don’t know the root cause. You have no way to determine whether the power supply was at fault or whether the system was exposed to conditions outside its specified operating ranges!

            Let it go, Krogoth and MDK. You’re just gonna end up like Goty, looking both daft and ignorant.

            • mdk77777
            • 9 years ago

            q[

            • flip-mode
            • 9 years ago

            Your take-my-word-for-it attitude does not alter the definition of the term “fact”, and your why-did-you-not-do-it-my-way attitude does not alter the definition of the term “bias”. Your pride and win-the-internet attitude make you look ridiculous. Any uninvolved party who reads this long exchange is probably going to think two things: “mdk77777 is an egotistical idiot” and “those guys arguing with mdk77777 are idiots for wasting their time”. I believe the latter is certainly closer to fact than supposition.

            Edit: and by the way, your presumption of his presumptions is very presumptuous. Post #111 holds plenty of water without making any of the presumptions that you presume were made. Ryu doesn’t have to presume anything about what cards you have used or whether you know the cause of failure for any particular card. All he has to do is state the obvious: you don’t have access to, or knowledge of, the failure data for the video card market as a whole and for the factory-overclocked video card market as a whole. You would need both of those data sets to credibly support any of the claims you have made. Your posts make semi-educated people cringe at how little credibility they carry.

            • mdk77777
            • 9 years ago

            Let me see.

            The only claim I made was that OC cards fail more often when folding.
            I also specifically claimed that this was an extreme application.

            I allowed the reader to understand from this that I was not implying all OC cards would fail, or that OC cards would fail in light gaming.

            What I do know is that when run 24/7/365 at very near their maximum ability, they fail much faster and much more often than non-OC cards.

            Now, the fact that you find this so hard to believe says much more about you than it does about me.

            You seem to think that the entire industry does not set standards for a reason, and that reliability and lifespan are not considerations in setting standard clocks.

            Your attitude seems to be that you know more than the people who actually make the chips.

            Amazing that you can so freely question my education and intelligence when you exhibit such a profound lack of knowledge about basic physics and economics.

            • flip-mode
            • 9 years ago

            q[

            • Lans
            • 9 years ago

            I think this “credible source” standard is too high for regular folks, and the only thing we can do is provide anecdotal sources (and hopefully an extremely large number of them). Then again, with lowered standards, all we can really judge is whether it’s credible to claim that factory OC’ed cards fail more often than stock/reference-clocked cards.

            So far there is some supposed anecdotal evidence (I didn’t check it out myself) that factory OC’ed cards fail quicker, but how much quicker (i.e., is it statistically significant?) and what is the sample size?

            I don’t view folding 24/7 as a truly niche use; the load isn’t too different from gaming, although the average gamer is maybe running 8 hours a day or less. Accounting for that, if the average warranty length is about 2 years (most are between 1 and 3 years), it wouldn’t be unusual for cards folding 24/7 to start failing at a higher rate after roughly 2 years × 1/3, or about 9 months (to be generous, maybe after 1 year of 24/7 heavy use). I’d expect manufacturers to set warranty terms based on the heaviest use seen by, say, 95% of users, to keep the number of cards they need to service under warranty at an acceptable rate.
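
As a rough editorial illustration of that back-of-the-envelope arithmetic, here is a minimal Python sketch. The 8-hours-per-day gaming figure and the 2-year warranty are the commenter’s assumptions, not data from this review.

GAMING_HOURS_PER_DAY = 8      # assumed heavy-gamer duty cycle
FOLDING_HOURS_PER_DAY = 24    # folding runs the card around the clock
WARRANTY_YEARS = 2            # assumed typical warranty length

# Total load-hours a heavy gamer accumulates over the warranty period.
warranty_load_hours = WARRANTY_YEARS * 365 * GAMING_HOURS_PER_DAY

# How long a 24/7 folding rig takes to rack up the same load-hours.
days_to_equal_wear = warranty_load_hours / FOLDING_HOURS_PER_DAY

print(f"Load-hours covered by the warranty assumption: {warranty_load_hours}")
print(f"A 24/7 folder reaches that in about {days_to_equal_wear / 30:.0f} months")
# => roughly 8 months, in the same ballpark as the 2 years x 1/3 (about 9 months)
#    figure in the comment above.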

            • Xaser04
            • 9 years ago

            The problem with this line of thought is that we don’t have a frame of reference. How many of these cards would have failed even if they had been running at reference clocks? Was it the factory overclock that caused the failure, or something else? What conditions or tests led to the failures?

            Unfortunately, due to the nature of the testing (you can’t re-run the exact same test if the card has already failed), all we can show is correlation, not causation.

    • ClickClick5
    • 9 years ago

    The question is: where will the 69xx series lie?

    • ssidbroadcast
    • 9 years ago

    “…kinda like the chick I dated in high school who had different colored eyes.” Really? I’m jealous. Nothing is cooler than having different colored iris’.

      • xtremevarun
      • 9 years ago

      So when the zombie virus apocalypse happens, keep this girl safe. She might be humanity’s only hope for a vaccine ;D

        • sweatshopking
        • 9 years ago

        a vaccine, and “THE BUSINESS”!!!

      • Meadows
      • 9 years ago

      It’s spelt “ires”.

        • poulpy
        • 9 years ago

        In which language exactly?
        Unless you tried, and probably failed, to “make a funny” in good old English it’s “irises” or “irides”..

          • ssidbroadcast
          • 9 years ago

          Irises or irides, it’s super cool. No higher barrier to entry when it comes to individuality. (Note: people who wear fake-colored contact lenses are total tools.)

            • Meadows
            • 9 years ago

            Differently coloured eyes are bad either way.

    • Krogoth
    • 9 years ago

    Damage, thanks for the review.

    Basically, single-card solutions make more fiscal sense for the most part if you are running at around 2 megapixels. You need to step up to 4 megapixels, along with heavy doses of AA/AF, to justify some kind of SLI/CF setup (see the quick megapixel rundown below). 6870, 6850, and 460 1GiB CF/SLI seemed to do the job for the most part without depleting the savings account.

    580 SLI is only good for bragging rights, if nothing else. A single 580 is practically good enough, unless you want high levels of AA/AF along with 4 megapixels.
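
Since the thread keeps leaning on the “2 megapixel” versus “4 megapixel” shorthand, here is a minimal Python sketch of that arithmetic. The resolutions are the ones discussed in these comments; the pixel-load comparison is simple multiplication, not benchmark data.

# Megapixel counts and relative pixel load for the resolutions in this thread.
resolutions = {
    "1680x1050": (1680, 1050),
    "1920x1080": (1920, 1080),
    "1920x1200": (1920, 1200),
    "2560x1600": (2560, 1600),
}

baseline = 1920 * 1200  # the ~2-megapixel class referred to above

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} MP, {pixels / baseline:.2f}x the pixel load of 1920x1200")

# 2560x1600 pushes roughly 1.8x the pixels of 1920x1200, which is why the
# multi-GPU configs only start to separate themselves at the highest resolution.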

    • moshpit
    • 9 years ago

    Hints on mystery package? I’m betting on an OC flavor of 580.

      • mczak
      • 9 years ago

      Nah, that should be the GTX 570. Apparently other sites have received them now too; the launch date is Dec. 7.
      Could be quite nice: 320-bit memory, 15 SMs, and clocks a bit higher than the GTX 480’s. It should perform quite close to a GTX 480, though I haven’t seen any pricing information yet.

      • d0g_p00p
      • 9 years ago

      Think higher man… What’s coming out from AMD in the next month or so?

    • elmopuddy
    • 9 years ago

    I almost bought a second 460 1GB when I upgraded my monitor to a Dell 27″, but decided on a 580. Very happy, and no worries about SLI/CrossFire issues.

      • Deanjo
      • 9 years ago

      Same here, plus the performance carries over to other OSes, where multi-card/multi-GPU setups are largely useless. Not to mention that a single GTX 580 uses less power than any of the multi-card setups and is a quieter solution.

        • mczak
        • 9 years ago

        Actually, a 6850 CF setup is so close in power draw (both idle and load) that which one draws less probably depends on sample variance. Noise also seems to be quite similar. So you basically trade the multi-GPU issues of an HD 6850 CF setup against the higher price of the GTX 580.

    • Mithent
    • 9 years ago

    Two Radeon 68xx cards are a pretty good value option at the moment, but I’m concerned about whether they’re the most future-resistant*. Will tessellation become widely used?

    Will be interesting to see how the Radeon 69xx cards fit into the mix.

    (* My Q6600 and 8800-series card have served me well for 3 years now; I still have no intention of changing the CPU, and only now am I getting tempted by graphics cards. That’s pretty future-resistant.)

      • Pettytheft
      • 9 years ago

      Tessellation is a way off. By the time most games are starting to use it, we’ll probably be two generations of cards past these. Disabling it doesn’t really change all that much in IQ anyway.

        • derFunkenstein
        • 9 years ago

        And 2-3 generations will be about halfway into the life of his proposed build, looking at his current hardware and how long he’s had it. Hence, his question makes a lot of sense.

      • Goty
      • 9 years ago

      It’s not a question about whether tessellation will be used, but whether or not tessellation will be used /[

      • mcnabney
      • 9 years ago

      It will be widely used when the next generation of consoles is released.

      • PixelArmy
      • 9 years ago

      Given that tessellation is touted as a major DX11 feature (along with DirectCompute and multi-threading), it should be widespread in DX11 games… Whether DX11 itself will be adopted is another story. IMO, it at least looks promising. Think of future games and Google whether there’s info on them being DX11.

        • poulpy
        • 9 years ago

        q[

          • PixelArmy
          • 9 years ago

          Not all, but a good percentage of the current games that use DX11 use tessellation: http://en.wikipedia.org/wiki/List_of_games_with_DirectX_11_support Just off the top of my head, Dirt 2, AvP, F1, Metro 2033, Lost Planet, Hawx 2, Stalker, and Civ V all use tessellation. You can probably find an article about what’s supported by each. The Battlefield games will use Frostbite, which uses it. I can’t imagine Crysis 2 not using it. Dirt 3 and Grid 2 will likely use it (given their predecessors already do). What games are you thinking of? Again, DX11 adoption might not seem that quick, but the games that do adopt it seem likely to use tessellation.

            • poulpy
            • 9 years ago

            Sorry, I wasn’t actually disagreeing with tessellation getting some attention; I was more commenting on the logic of the below:

            q[

            • PixelArmy
            • 9 years ago

            Poorly worded on my side. My points I guess were: 1) It’s a major DX11 feature. 2) If devs are using DX11 but not the other two major features, it’s /[

    • sweatshopking
    • 9 years ago

    bad link….?

    • cegras
    • 9 years ago

    Is there any justification for noise measurements from 10 feet? Normally a desktop will at most be within 5″ or less, whether it’s under the table or not.

    Can you compare these to results taken from a case in a ‘typical’ position?

      • Damage
      • 9 years ago

      You may want to revisit the age-old ” vs. ‘ abbreviation question.

        • cegras
        • 9 years ago

        Ah, sorry! I don’t normally think in imperial, but that was a bit of a careless oversight.

        I’m still surprised at the overall noise levels you are measuring though, considering it’s an open test bench.

    • Majiir Paktu
    • 9 years ago

    Am I behind the times with my 1680×1050 LCD, or are you guys being a bit absurd with resolutions higher than 1920×1200?

      • bthylafh
      • 9 years ago

      You probably need insanely high resolutions to really show the differences between the higher-end cards.

      This is especially true considering they were concentrating on SLI/Crossfire, which are a waste if you /[

        • LawrenceofArabia
        • 9 years ago

        Pretty much. Notice how, in the Dirt 2 DX11 graph at 1920×1080, the lowest FPS value was 47, on the 768MB GTX 460. 47 frames per second is still an excellent framerate by most standards. Dirt 2 isn’t the most demanding game tested, but it is hardly disappointing in graphical quality. I doubt any of the cards tested would drop below 20 fps at a 2-megapixel resolution, maybe not even below 30.

        For the vast majority, basically everyone who doesn’t own a 4MP display, a single reasonably high-end GPU will take care of all their gaming needs, keeping framerates at an average of over 60 fps and the minimum above 40. But multi-GPU setups are for enthusiasts, and you really need an enthusiast display to let them shine.

      • JustAnEngineer
      • 9 years ago

      1920×1080 is the target resolution for almost everything these days. If you’re running with less than this, you probably are slightly behind the times. 1920×1200 is a great resolution that provides more vertical space for text, but it’s becoming less common as 1920×1080 displays fill the shelves.

      Those of us with 2560×1600 displays really appreciate that Damage includes this resolution in nearly every review. Considering that 2560×1600 is the maximum resolution supported by the past several generations of graphics cards (Radeon x1### and newer), it’s a good top-end resolution to consider, too.

      Because he was testing SLI/Crossfire configurations, the high resolutions were needed to show a performance benefit. It’s exactly the sort of person who already dropped $1012 +tax on an UltraSharp U3011 monitor who would consider spending another $400 on graphics cards, so the high resolution matches up well with the configurations that were tested.

    • paulWTAMU
    • 9 years ago

    Those Civ scores…since when should a g-d TBS require 2 uber-cards to get 50 FPS??? That’s nucking futz

      • derFunkenstein
      • 9 years ago

      Look at the settings: 2560×1600, 8x MSAA. And while I don’t own the game (hoping for a Christmas present), it’s BEAUTIFUL.

        • paulWTAMU
        • 9 years ago

        Yeah, but it still has lower framerates than some first-person shooters.

        And the graphics… eh, they’re not good enough to justify the performance hit.

          • Voldenuit
          • 9 years ago

          Do you need 60 fps in a (slow-paced) strategy game? I’m sure it’s perfectly playable in the 20s to 30s.

            • paulWTAMU
            • 9 years ago

            It isn’t. It hangs like mad. I had to upgrade to a 460 1GB (on sale) to get playable framerates at 1680×1050.

            • Meadows
            • 9 years ago

            It’s not the pace, it’s the responsiveness.
