Nvidia’s GeForce GTX 570 graphics processor

The torrent of new hardware pouring into Damage Labs here at the end of 2010 is absolutely stunning. Just as we were putting the finishing touches on our massive multi-GPU mash-up last week, a package arrived containing yet another video card to review. Yet that card was not what we might have expected: a new Radeon based on the forthcoming Cayman GPU. Instead, we were greeted with a black-and-green box containing a pre-emptive strike from Nvidia in the form of the GeForce GTX 570.

The GTX 570 is based on a de-tuned version of the GF110 graphics chip that powered the GeForce GTX 580 into first place in the single-GPU performance sweeps last month. We pretty much expected a slightly slower variant of the GF110 to hit the market sooner or later, but I have to admit that I wasn’t entirely excited about the prospect. We already seemed to have our value leaders in the form of the Radeon HD 6850/6870 and higher-clocked variants of the GeForce GTX 460, and we had our performance leader in the GTX 580.

Heck, though, I just wasn’t thinking it through. Here’s the basic proposition for the GTX 570: performance equivalent to—or a little better than—the GeForce GTX 480, only with power draw and a price range similar to the GTX 470’s. If you’re familiar with the current GPU landscape, you’ll know that’s a very solid proposition indeed. We’re talking about one of the fastest graphics cards around for well under 400 bucks. Obviously, Nvidia wanted to get this puppy to market in time to greet the upcoming Radeons. Those Radeons could now have some very formidable competition with which to contend.

                            GPU clock  Shader  FP16 textures   ROP pixels/  Memory         Memory interface
                            (MHz)      ALUs    filtered/clock  clock        transfer rate  width (bits)
GeForce GTX 470             607        448     28              40           3.4 Gbps       320
GeForce GTX 480             700        480     30              48           3.7 Gbps       384
GeForce GTX 570             732        480     60              40           3.8 Gbps       320
GeForce GTX 580             772        512     64              48           4.0 Gbps       384

Let’s face it: video cards are multiplying like cockroaches these days, with different models coming out seemingly every week. When I was younger, smarter, and more wildly enthusiastic about, well, everything, I could quote specifications for each and every CPU and video card available almost instantly. These days, I’m lucky to remember to brush my teeth, so tools like the table above are a must. I’ve included both the GTX 400- and GTX 500-series cards because the silicon on which they’re based is incredibly similar. The GF110 chip in the 500 series has exactly the same number and configuration of functional units as the GF100 GPU in the 400 series. The major differences are lower power draw and doubled FP16 texture filtering rates in the GF110.

Nvidia has selectively hobbled the GF110 in creating the GTX 570, but the reductions aren’t terribly drastic. Only one of the chip’s 16 shader multiprocessors is disabled, as is one of its six memory controller/ROP partition combos. The GPU’s core clock speed is down 40MHz from its elder sibling’s, as is the clock frequency of its 1280MB of GDDR5 memory. The result, as I’ve mentioned, is a video card whose basic capacities match those of the GeForce GTX 480 pretty closely. We’ll explore the particulars a little further in the following pages.
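For the curious, here’s roughly how the peak rates quoted in the tables on the following pages fall out of those unit counts and clocks. This is a back-of-the-envelope sketch in Python, not anything from Nvidia’s spec sheets: Fermi’s shader ALUs run at twice the core clock and can retire two flops per clock, and the GF110 filters FP16 textures at full speed where the GF100 runs at half rate.

    # Back-of-the-envelope peak rates for a Fermi-derived card (a sketch,
    # not an official tool). Clocks in MHz, memory transfer rate in Gbps.
    def peak_rates(core_mhz, alus, tex_units, rops, mem_gbps, bus_bits,
                   half_rate_fp16=False):
        hot_ghz = 2 * core_mhz / 1000            # shader ALUs run at 2x core
        texel = tex_units * core_mhz / 1000      # bilinear integer texels
        return {
            "gflops": alus * 2 * hot_ghz,        # two flops per ALU per clock
            "int_gtexels": texel,
            "fp16_gtexels": texel * (0.5 if half_rate_fp16 else 1.0),
            "gpixels": rops * core_mhz / 1000,   # ROP pixel fill
            "mem_gbs": mem_gbps * bus_bits / 8,  # memory bandwidth
        }

    # GTX 570: 15 of 16 SMs enabled (480 ALUs, 60 texture units), five of six
    # ROP/memory partitions (40 ROPs, 320-bit bus), 732MHz core, 3.8 Gbps GDDR5.
    print(peak_rates(732, 480, 60, 40, 3.8, 320))
    # ~1405 GFLOPS, 43.9 Gtexels/s, 29.3 Gpixels/s, 152 GB/s

Run the GTX 480’s numbers through the same function with half_rate_fp16=True, and you’ll see where its 21.0 Gtexels/s FP16 figure in the tables ahead comes from.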

One happy consequence of these cuts is the fact that the GTX 570 only requires dual six-pin auxiliary power inputs, whereas the GTX 580 requires an extra-strength PSU with an eight-pin connector. Otherwise, the 570 looks very similar to its sibling, right down to the 10.5″ board length and dual-slot profile. Under that angular cooling shroud is a vapor chamber cooler similar to the one on the 580, but Nvidia tells us the 570’s cooler isn’t quite as beefy.

Aww, isn’t that cute? Our reference card from Nvidia has made friends with a retail GTX 570 from Zotac. The Zotac card arrived yesterday morning, a bit late in the review process, but we found time to drop it into an SLI pairing to run through a subset of our usual tests. You’ll see partial results for this config on the following pages, to give you at least a taste of the GTX 570’s multi-GPU performance.

This Zotac card runs at the same base clock frequencies as the reference model, and Nvidia expects cards like it to sell for $349.99 at online vendors, with availability starting today. We may see higher-clocked versions of the GTX 570 selling at a bit of a premium, as well.

Our testing methods

Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.
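In code terms, that aggregation is nothing fancy. Here’s a minimal sketch of the median-of-runs rule described above; the FPS numbers are placeholders for illustration, not results from our testing.

    from statistics import median

    # Placeholder per-run average FPS numbers, not measured results.
    scripted_runs = [61.2, 60.8, 61.5]           # three scripted passes
    fraps_runs = [55.1, 57.3, 54.8, 56.0, 55.9]  # five manual Fraps passes

    print(median(scripted_runs), median(fraps_runs))  # 61.2 55.9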

Our test systems were configured like so:

Processor        Core i7-965 Extreme 3.2GHz
Motherboard      Gigabyte EX58-UD5
North bridge     X58 IOH
South bridge     ICH10R
Memory size      12GB (6 DIMMs)
Memory type      Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings   8-8-8-24 2T
Chipset drivers  INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio            Integrated ICH10R/ALC889A with Realtek R2.51 drivers
Graphics         Asus Radeon HD 5870 1GB with Catalyst 10.10c drivers
                 Asus Radeon HD 5870 1GB + Radeon HD 5870 1GB with Catalyst 10.10c drivers
                 Asus ROG Matrix Radeon HD 5870 2GB with Catalyst 10.10c drivers
                 Radeon HD 5970 2GB with Catalyst 10.10c drivers
                 Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
                 Dual Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
                 XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
                 Sapphire Radeon HD 6870 1GB + XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
                 Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
                 Dual Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
                 MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz with ForceWare 260.99 drivers
                 MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz + EVGA GeForce GTX 460 FTW 1GB 850MHz with ForceWare 260.99 drivers
                 Galaxy GeForce GTX 470 1280MB GC with ForceWare 260.99 drivers
                 GeForce GTX 480 1536MB with ForceWare 260.99 drivers
                 GeForce GTX 570 1280MB with ForceWare 263.09 drivers
                 Zotac GeForce GTX 570 1280MB + GeForce GTX 570 1280MB with ForceWare 263.09 drivers
                 GeForce GTX 580 1536MB with ForceWare 262.99 drivers
                 Zotac GeForce GTX 580 1536MB + Asus GeForce GTX 580 1536MB with ForceWare 262.99 drivers
Hard drive       WD RE3 WD1002FBYS 1TB SATA
Power supply     PC Power & Cooling Silencer 750 Watt
OS               Windows 7 Ultimate x64 Edition with DirectX runtime update June 2010

Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead 2 at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead 2 because we’ve found that the Source engine’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Pixel fill and texturing performance

                            Peak pixel   Peak bilinear   Peak bilinear   Peak
                            fill rate    integer texel   FP16 texel      memory
                            (Gpixels/s)  filtering rate  filtering rate  bandwidth
                                         (Gtexels/s)     (Gtexels/s)     (GB/s)
GeForce GTX 460 768MB       16.8         39.2            39.2            88.3
GeForce GTX 460 1GB 810MHz  25.9         47.6            47.6            124.8
GeForce GTX 470 GC          25.0         35.0            17.5            133.9
GeForce GTX 480             33.6         42.0            21.0            177.4
GeForce GTX 570             29.3         43.9            43.9            152.0
GeForce GTX 580             37.1         49.4            49.4            192.0
Radeon HD 5870              27.2         68.0            34.0            153.6
Radeon HD 6850              25.3         37.9            19.0            128.0
Radeon HD 6870              28.8         50.4            25.2            134.4
Radeon HD 5970              46.4         116.0           58.0            256.0

Before we go off to the races, we should set a bit of context for our evaluation of the GeForce GTX 570. Right now, there’s really no video card from AMD that competes directly with the GTX 570 at $350. The closest rival of note is probably the Radeon HD 6870, but the 6870 is $100 cheaper and is based on a much smaller graphics chip. The 6870 doesn’t match up too poorly against the GTX 570 in terms of some of the peak rates in the table above, but that’s theory. In reality, Nvidia’s GPU architectures tend to achieve performance closer to their theoretical peaks most of the time.

Another possible competitor might be a couple of Radeon HD 6850 cards paired up in a CrossFireX config. Two of those would only cost a little more than a single GTX 570, so they might be considered a viable alternative, with the proper caveats about slot real-estate, power draw, and multi-GPU compatibility issues kept in mind. You could, in theory at least, double the 6850’s rates above for a CrossFire setup. That setup should outgun a single GTX 570 in every category except for FP16 texture filtering.
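Here’s that sanity check spelled out, using the peak rates from the table above. (A sketch only; real-world CrossFireX scaling never actually doubles anything.)

    # Doubling the 6850's theoretical peaks for CrossFireX and comparing them
    # against a single GTX 570, using the table above. Actual multi-GPU
    # scaling falls short of 2x.
    hd6850 = {"Gpixels/s": 25.3, "int Gtexels/s": 37.9,
              "FP16 Gtexels/s": 19.0, "GB/s": 128.0}
    gtx570 = {"Gpixels/s": 29.3, "int Gtexels/s": 43.9,
              "FP16 Gtexels/s": 43.9, "GB/s": 152.0}

    for key, rate in hd6850.items():
        cf = 2 * rate
        leader = "2x 6850" if cf > gtx570[key] else "GTX 570"
        print(f"{key}: 2x 6850 = {cf:.1f} vs GTX 570 = {gtx570[key]} -> {leader}")
    # FP16 filtering is the lone category where the single GTX 570 stays ahead.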

However, the real threat from AMD in the GTX 570’s price range will surely be a member of the soon-to-arrive Radeon HD 6900 series, so we won’t focus too closely on the AMD-versus-Nvidia angle today. That showdown is coming soon enough.

You may have noticed in the table above that the GeForce GTX 480 has one really noteworthy advantage over the 570: memory bandwidth. 3DMark’s color fill rate test tends to be bandwidth-bound more than anything, and so we have an expected result: the 570 can’t quite match the 480 in this test.

3DMark’s texture fill test doesn’t involve any sort of texture filtering. That’s unfortunate, since texture filtering rates are almost certainly more important than sampling rates in the grand scheme of things. Still, this is a decent test of FP16 texture sampling rates, so we’ll use it to consider that aspect of GPU performance. Texture storage is, after all, essentially the way GPUs access memory, and unfiltered access speeds will matter to routines that store data and retrieve it without filtering.

The GTX 570 edges out the 480 here, which is also according to script. AMD’s sampling rates, even on its smaller GPUs, are generally higher than Nvidia’s, though, as is evident.

The GTX 570 trails the Radeon HD 6870 in our simplest bilinear filtering test, but as the complexity of the filtering method increases and the texture format jumps up to 16 bits per color channel, the 570 rises through the ranks. The results of the FP16 filtering test are telling. At 810MHz, the much less expensive GeForce GTX 460 1GB card has a higher theoretical peak rate than the GTX 570. Yet the 570’s measured performance is substantially better, likely due to its greater memory bandwidth and the larger amount (128KB more) of L2 cache associated with its five memory controllers.

Shader and geometry processing performance

                            Peak shader  Peak           Peak
                            arithmetic   rasterization  memory
                            (GFLOPS)     rate           bandwidth
                                         (Mtris/s)      (GB/s)
GeForce GTX 460 768MB       941          1400           88.3
GeForce GTX 460 1GB 810MHz  1089         1620           124.8
GeForce GTX 470 GC          1120         2500           133.9
GeForce GTX 480             1345         2800           177.4
GeForce GTX 570             1405         2928           152.0
GeForce GTX 580             1581         3088           192.0
Radeon HD 5870              2720         850            153.6
Radeon HD 6850              1517         790            128.0
Radeon HD 6870              2016         900            134.4
Radeon HD 5970              4640         1450           256.0

By virtue of its higher clock speeds, the GTX 570 has somewhat better peak shader arithmetic and triangle rasterization rates than the GeForce GTX 480. As always, the vast SIMD arrays in the Radeon GPUs yield some eye-popping numbers for peak arithmetic rates. At the same time, Nvidia’s DX11 GPUs tend to have higher geometry throughput, as represented here by rasterization rates (though there’s really more to geometry processing than rasterization alone).

The first tool we can use to measure delivered pixel shader performance is ShaderToyMark, a pixel shader test based on six different effects taken from the nifty ShaderToy utility. The pixel shaders used are fascinating abstract effects created by demoscene participants, all of whom are credited on the ShaderToyMark homepage. Running all six of these pixel shaders simultaneously easily stresses today’s fastest GPUs, even at the benchmark’s relatively low 960×540 default resolution.

Up next is a compute shader benchmark built into Civilization V. This test measures the GPU’s ability to decompress textures used for the graphically detailed leader characters depicted in the game. The decompression routine is based on a DirectX 11 compute shader. The benchmark reports individual results for a long list of leaders; we’ve averaged those scores to give you the results you see below.

Finally, we have the shader tests from 3DMark Vantage.

Clockwise from top left: Parallax occlusion mapping, Perlin noise, GPU cloth, and GPU particles

Overall, the GTX 570 looks quite strong, as expected, trailing only the GeForce GTX 580 among the single-GPU configs. Obviously, the Radeons sweep the first two 3DMark Vantage shader tests, which are focused on pixel shaders and seem to map well to the wide SIMD machines AMD produces. Otherwise, though, Nvidia’s largest GPUs tend to outperform today’s smaller Radeons.

Geometry processing throughput

The most obvious area of divergence between the current GPU architectures from AMD and Nvidia is geometry processing, which has become a point of emphasis with the advent of DirectX 11’s tessellation feature. We can measure geometry processing speeds pretty straightforwardly with a couple of tools. The first is the Unigine Heaven demo. This demo doesn’t really make good use of additional polygons to increase image quality at its highest tessellation levels, but it does push enough polys to serve as a decent synthetic benchmark.

We can push into even higher degrees of tessellation using TessMark’s multiple detail levels.

GPUs based on Nvidia’s DirectX 11-class Fermi architecture tend to hold up well under the most demanding geometry processing loads, and the GTX 570 in particular pretty much aces these tests. Few modern games make use of tessellation to the degree that these synthetic benchmarks do, however.

HAWX 2

We already commented pretty extensively on the controversy surrounding tessellation and polygon use in HAWX 2, so we won’t go into that again. I’d encourage you to read what we wrote earlier, if you haven’t yet, in order to better understand the issues. We have included scores from the HAWX 2 benchmark in our tests below for your information, but be aware that this test’s results are the subject of some dispute. We’re keeping this one around mostly in anticipation of AMD’s Cayman architecture potentially making things interesting.

Lost Planet 2

Our next stop is another game with a built-in benchmark that makes extensive use of tessellation, believe it or not. We figured this and HAWX 2 would make a nice bridge between our synthetic tessellation benchmarks and the rest of our game tests. This one isn’t quite so controversial, thank goodness.

This benchmark emphasizes the game’s DX11 effects, as the camera spends nearly all of its time locked onto the tessellated giant slug. We tested at two different tessellation levels to see whether it made any notable difference in performance. The difference in image quality between the two is, well, subtle.

Given the results from our directed tests and, uh, whatever you make of HAWX 2, the results of our first undisputed game test aren’t exactly surprising. The noteworthy outcomes include the fact that the GTX 570 really is substantially faster than the GTX 460 1GB, even the 810MHz version we tested, and the Radeon HD 6870. If you’re stepping up to the 570, you really are stepping up. Also, going dual with a pair of 6850s will net you higher performance than the GTX 570 for nearly the same price—not really news given that we’ve found dual 6850s will outperform a GTX 580, as well.

Civilization V

In addition to the compute shader test we’ve already covered, Civ V has several other built-in benchmarking modes, including two we think are useful for testing video cards. One of them concentrates on the world leaders presented in the game, which is interesting because the game’s developers have spent quite a bit of effort on generating very high quality images in those scenes, complete with some rather convincing material shaders to accent the hair, clothes, and skin of the characters. This benchmark isn’t necessarily representative of Civ V‘s core gameplay, but it does measure performance in one of the most graphically striking parts of the game. As with the earlier compute shader test, we chose to average the results from the individual leaders.

We’ve not often seen the Radeons’ large theoretical edge in shader arithmetic performance translate into higher frame rates in real games, but I’d wager that’s what’s happening here. The GTX 570 ends up falling behind the Radeon HD 6870 as a result.

Another benchmark in Civ V focuses, rightly, on the most taxing part of the core gameplay, when you’re deep into a map and have hundreds of units and structures populating the space. This is when an underpowered GPU can slow down and cause the game to run poorly. This test outputs a generic score that can be a little hard to interpret, so we’ve converted the results into frames per second to make them more readable.
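The game doesn’t document exactly what that score represents, so take this as a purely hypothetical sketch of the sort of conversion involved: we’re assuming the score amounts to a count of frames rendered over a fixed-length run, and the function and numbers below are illustrative only.

    # Hypothetical sketch: assumes the benchmark's score is a frame count
    # over a fixed-length run. Illustrative only, not Civ V's actual output.
    def score_to_fps(score, run_seconds):
        return score / run_seconds

    print(score_to_fps(3600, 60.0))  # a 3600-frame score over 60s -> 60.0 fps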

Here’s our first look at two GTX 570 cards in SLI, by the way. In this particular game, AMD’s multi-GPU scaling appears to be superior to Nvidia’s. Even though a single 6870 isn’t nearly as quick as a GTX 570, two 6870s in CrossFireX are easily the fastest overall solution tested.

StarCraft II

Up next is a little game you may have heard of called StarCraft II. We tested SC2 by playing back a match from a recent tournament using the game’s replay feature. This particular match was about 10 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps.

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

This game is also, comparatively speaking, tough sledding for the GeForce cards. The GTX 570 doesn’t distinguish itself against the much older Radeon HD 5870, which is based on a smaller chip.

Battlefield: Bad Company 2
BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Boy, those older Radeons are hard to shake. The two 5870s we tested again bracket the GTX 570.

Metro 2033

We decided to test Metro 2033 at multiple image quality levels rather than multiple resolutions, because there’s quite a bit of opportunity to burden these GPUs simply using this game’s more complex shader effects. We used three different quality presets built into the game’s benchmark utility, with the performance-destroying advanced depth-of-field shader disabled and tessellation enabled in each case.

Among the single-GPU options, the GTX 570 comes out looking pretty good here, especially at the two higher image quality levels. The GTX 570 brings a marked improvement over a whole host of cheaper cards based on smaller chips, including the Radeon HD 5870 and 6870 and the GeForce GTX 460 1GB at 810MHz. The 470 arguably fits into that same group of slower solutions, although it’s not based on a smaller chip.

Aliens vs. Predator
AvP uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards.

The performance of the 570 is more or less up to expectations here, and the dual-GPU SLI rig scales up almost perfectly, as well.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2‘s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

DiRT 2: DX11

Our final game test is DiRT 2, and I had really expected to see a larger gap here between the GTX 570 and GTX 480 thanks to the GF110’s ability to filter FP16 texture formats at twice the rate of the GF100. Most likely, the GTX 570’s lower memory bandwidth holds it back a bit, though.

Power consumption

Now that the game tests are behind us, we can move on to power and noise. Notice that the cards marked with asterisks in the results below have custom cooling solutions that may perform differently than the GPU maker’s reference solution.

Remember what I said about the GTX 570’s power consumption being similar to the 470’s? Yeah, here’s why. Happily, these results are pretty decent, given the performance we’ve seen from the 570. We’re talking about a reduction of over 50W of total system power draw compared to the GeForce GTX 480, with extremely similar overall performance.

Noise levels and GPU temperatures

Nvidia’s custom cooler dissipates the heat produced by the GTX 570 relatively quietly while delivering middle-of-the-pack GPU temperatures. Much like the GTX 580, the cooler on the 570 sounds even better to my ears than its numbers on the sound level meter might suggest. The noise produced has a smoother, less unpleasant texture than what emanates from most other coolers in this class.

The value proposition

Now that we’ve stuffed you full of benchmark results, we’ll try to help you make some sense of the bigger picture. We’ll start by compiling an overall average performance index, based on the highest quality settings and resolutions tested for each of our games, with the notable exception of the disputed HAWX 2. We’ve excluded directed performance tests from this index, and for Civ V, we included only the “late game view” results.

With this performance index established, we can consider overall performance per dollar by factoring price into the mix. Rather than relying on list prices all around, we grabbed our prices off of Newegg where possible. The one exception was the GTX 570 itself, where we had to take Nvidia at its word about the card’s $349.99 suggested price. Here’s hoping that’s accurate!

Generally, for graphics cards with reference clock speeds, we simply picked the lowest priced variant of a particular card available. For instance, that’s what we did for the GTX 580. For the cards with custom speeds, such as the Asus GTX 460 768MB and 6850, we used the price of that exact model as our reference.

AMD card             Price    Nvidia card
                     $149.99  GeForce GTX 460 768MB
Radeon HD 6850       $189.99
                     $214.99  GeForce GTX 460 1GB 810MHz
Radeon HD 6870       $249.99
                     $259.99  GeForce GTX 470
Radeon HD 5870       $279.99
                     $349.99  GeForce GTX 570
                     $429.99  GeForce GTX 480
Radeon HD 5870 2GB   $499.99
Radeon HD 5970       $499.99
                     $509.99  GeForce GTX 580

A simple mash-up of price and performance produces these results:

The lower-priced solutions tend to bubble to the top whenever you look at raw price and performance like that.

We can get a better sense of the overall picture by plotting price and performance on a scatter plot. On this plot, the better values will be closer to the top left corner, where performance is high and price is low. Worse values will gravitate toward the bottom right, where low frame rates meet high prices.

Nvidia’s newest is actually pretty well positioned on the scatter plot, with only the mid-range multi-GPU solutions occupying obviously better real-estate. Note that the GTX 570 is a very straightforward improvement over the GeForce GTX 480, which has slightly lower performance yet costs quite a bit more.

Another way we can consider GPU value is in the context of a larger system purchase, which may shed a different light on what it makes sense to buy. The GTX 570 is definitely an enthusiast-type part, so we’ve paired it with a proposed system config that’s similar to the hardware in our testbed system but a little more economical.

CPU          Intel Core i7-950                        $294.99
Cooler       Thermaltake V1                           $51.99
Motherboard  Gigabyte GA-X58A-UD3R                    $209.99
Memory       6GB Corsair XMS3 DDR3-1333               $74.99
Storage      Western Digital Caviar Black 1TB         $89.99
             Asus DRW-24B1ST                          $19.99
Audio        Asus Xonar DG                            $29.99
PSU          PC Power & Cooling Silencer Mk II 750W   $129.99
Enclosure    Corsair Graphite Series 600T             $159.99
Total                                                 $1,061.91

That system price will be our base. We’ve added the cost of the video cards to the total, factored in performance, and voila:

Multi-GPU solutions occupy the top six slots in the bar chart once we factor in total system price, amazingly enough. In fact, the most expensive setup we tested, the GTX 580 SLI config, ties for the top spot. Clearly, adding a thousand-dollar system to the equation weights things in favor of performance more than price. In that context, the GTX 570 lands in the middle of the pack, just as it did when we factored in GPU price alone, even as the other solutions shift positions almost completely.

The scatter plot is a little less fickle, and it tells a similar story to the last one. The GTX 570 isn’t an outstanding value, but it’s pretty good, especially among the single-GPU options.

These results would look very different with a more or less expensive system, so your mileage may vary.
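To make that mileage concrete, the whole value exercise reduces to a few lines you can re-run against your own budget. This is our own reconstruction for illustration: the $1,061.91 base is the build total above, the $349.99 is the GTX 570’s suggested price, and the 60-FPS performance index is a placeholder, not one of our measured averages.

    # Performance per dollar, card-only and with a full system factored in.
    # A sketch for illustration; perf_index is a placeholder, not a measured
    # result from this review.
    SYSTEM_BASE = 1061.91  # proposed build total from the table above

    def value_metrics(perf_index, card_price, system_base=SYSTEM_BASE):
        return {
            "fps_per_dollar_card": perf_index / card_price,
            "fps_per_100_dollars_system": 100 * perf_index / (system_base + card_price),
        }

    # A hypothetical 60-FPS overall index on the $349.99 GTX 570:
    print(value_metrics(60.0, 349.99))

Swap in a cheaper or pricier system_base, and you can watch how heavily the system cost weights the results toward raw performance.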

Conclusions

The lowdown on the GeForce GTX 570 is pretty straightforward: performance on par with a GeForce GTX 480, power consumption and noise levels on par with a GTX 470, and a price tag of 350 bucks. The card’s stock cooler is nice and quiet, too.

We would have liked to see the GTX 570 separate itself from last year’s Radeon HD 5870 in several games where it simply could not, such as Bad Company 2 and StarCraft II. Still, in the overall picture, the GTX 570 is clearly a notch above cheaper solutions like the GeForce GTX 460 1GB and the Radeon HD 6870. At high resolutions and visual quality levels in some of the most demanding DX11 games, the GTX 570 cranks out appreciably higher frame rates. I’m not convinced making the leap up to a GTX 570 is worth doing if you’re running a single display that’s two megapixels or less—including the incredibly popular 1080p resolution. A single Radeon HD 6870 or GTX 460 is probably all you need for a couple of megapixels. If you’re planning on playing games on a four-megapixel monster like the 30″ Dell on our GPU test rig—and I highly recommend doing so, if you have the means—then the 570 is worthy of a long, hard look.

You can get higher performance for just a little more money out of a pair of Radeon HD 6850s in CrossFireX, and I suppose that’s AMD’s closest competing product offering right now. But again, the foibles of multi-GPU configs will be in play, as will the stark fact of looming obsolescence with the Radeon HD 6900 series imminent. We said a week ago that we wouldn’t pull the trigger on a pair of 6850s right now, and we remain in that holding pattern.

Our final take on the GTX 570 will have to remain a work in progress for the same reason. Stay tuned to this same channel for the next episode of GPU Wars, when the truth about the 2010 crop of graphics chips will finally be revealed.

Comments closed
    • End User
    • 9 years ago

    Thanks for the review. I’ve been waiting to replace my GTX 260 and that wait is almost over. I bought two GTX 570’s from NCIX and they are on their way.

    • honoradept
    • 9 years ago

    i believe that there already are 69xxs in damage lab, waiting to see the review before making a decision to marry one of them.

    • samuelmorris
    • 9 years ago

    It’s awkward having to consistently nitpick, but you guys really need to revise your power consumption test methodology. Some of the results are clearly wrong, and haven’t been spotted, or even commented on.

    The GTX480/580 figures are lower than they are most of the time, and the dual-GPU figures are laughable, the difference between one HD5870 and two DC-side works out at barely even 100W. Not much point testing power for two cards if the second card is idle…

      • flip-mode
      • 9 years ago

      I’m curious, could you elaborate more? You want power consumption tested where – at the wall, after the PSU, or just at the card?

      Regardless of where you test it, isn’t it really a matter of preference? None of those methods delivers “bad data”. I suppose I prefer the “at the wall” data because it tells me what size PSU I need for the whole system and it includes a “factor of safety” too due to the PSU consuming a little. The “at the card” method doesn’t really tell me anything that I can turn around and actually use – sure, it tells me the power consumption of the card exactly, but how does that help me select a PSU? How can I actually use that data?

      The nice thing about the internet is that you can get multiple perspectives of a thing pretty quickly. I know there are other sites out there that test “at the card” for power consumption – Xbit maybe or Hexus (?) – and, bringing in another conversation, Anand tests with Crysis. I think people should not expect to get every single angle on a product from one single site.

        • kamikaziechameleon
        • 9 years ago

        I thought they were fine, though your requests have some validity to them. I think they could put a little perspective on the respective systems these would be squeezed into.

          • samuelmorris
          • 9 years ago

          It calls into question whether the dual GPU systems are working effectively, when the extra card uses far fewer watts than it is known that they use.

    • ultima_trev
    • 9 years ago

    Great review, although such is expected of TR no doubt.

    This video card truly is the sweet spot, price-performance wise. I think I know what I’m gonna be spending my Xmas bonus on. So long CF HD 5850s, hello GTX 570!!

    • kilkennycat
    • 9 years ago

    Wednesday, December 8, 11:00AM Pacific Time

    FYI:
    Newegg is currently listing the eVGA Superclocked (797MHz vs 732MHz) GTX570 IN STOCK at $369 ($20 premium over the vanilla version) with $7.87 3-day shipping.

      • henfactor
      • 9 years ago

      Oh my, that is a decent overclock there, worth the $20 I’d say.

        • flip-mode
        • 9 years ago

        Yeah, does that not already top any overclock you can get on a 68xx card?

    • chip
    • 9 years ago

    The ‘Overall performance per dollar – full system prices’ scatterplot is redundant. None of the relationships between the cards are changed in a statistically meaningful way – adding $1000 to the price of everything just shifts them all a bit further away from 0.

      • khands
      • 9 years ago

      Actually it makes them more vertical and shows better (albeit in this day and age a bit high) overall system performance/$ instead of simply GPU performance/$, because by including an equal base price it skews the performance/$ spent percentage difference.

      • JustAnEngineer
      • 9 years ago

      Adding an extra cost to the charts is a way to try to justify the extreme expense of cutting edge hardware. If you just looked at the cost and performance of the graphics card by itself, you wouldn’t consider spending more than $500 on graphics cards.

        • paulWTAMU
        • 9 years ago

        [quote]you wouldn’t consider spending more than [s]$500[/s]$250-300 on graphics cards[/quote]

        Now it’s a true statement for most of us.
        You with your huge monitor *grumble grumble*

    • Sunburn74
    • 9 years ago

    Excellent looking card. Only problem is, gee a 5850 is all I need for my 1920×1080 monitor. Anything more is just overkill.

      • NarwhaleAu
      • 9 years ago

      Your maths is wrong. If you only got 3 FPS per $100, none of us would be playing games.

        • Meadows
        • 9 years ago

        Indeed, $333 for gaming at 10 fps. Yuck.

        • The Wanderer
        • 9 years ago

        Are you sure? I’ve looked back over the charts, and I don’t see anything wrong with his math offhand.

        Remember, that “FPS per $100” figure is for the entire system price. The 3.5 FPS/$100 is actually for a system costing over $2000, and getting an average of 74 FPS across TR’s benchmark suite; that’s 74 / (2081.89 / 100), which comes out to 3.55-something by my math. The figures for just the graphics cards in that particular example come to 74 / (1019.98 / 100), or just over 7.255.

        For comparison (in case it isn’t obvious), the reverse math is:

        3.5 * 2000 / 100 =
        3.5 * 20 =
        70

        which is a pretty respectable FPS rating, and a close match to the 74 FPS they started out with.

        If you do similar calculations for your own gaming computer, you might be surprised by the results you get… I’m thinking I might do something along those lines myself tonight, after I get out of work.

          • khands
          • 9 years ago

          Everyone was thinking you were just talking about per $100 /[

    • phez
    • 9 years ago

    I’m still not getting why there’s no consistency in your resolutions, why some games are tested only at uber-high resolutions while others are not. I understand you want to test the technology, but it’s not really providing us with useful consumer information.

    Considering the very polls here on TR, where most people were running on 1 monitor, most less than 1920-resolutions, most people in the market for $150-250 graphic cards; even the steam survey lists 1680 x 1050 as the most popular resolution.

    It would be nice to have a “more complete” review across different resolutions so we have a one-stop-shop to compare and decide what may be best for us. To be honest, I’ve been going to other websites for this information for some time now.

      • Buzzard44
      • 9 years ago

      Because all the cutting edge cards tested in this review would have >60 fps, and most people’s monitors are 60Hz, therefore that data is trivial.

    • glynor
    • 9 years ago

    Great article, Scott. And I loved the little Ferris reference in the conclusion!

    • End User
    • 9 years ago

    No GTX 570 SLI power consumption info?

    Edit: Ah, the second 570 arrived late. Never mind.

    • south side sammy
    • 9 years ago

    Why do you guys tend to leave the “older” games behind when you do these benchmarks ? I bet most of us, being familiar with 1 or 2 year old games only have those to use in comparison with all the cards being tested. I don’t think I own but one of those games. Not much for me to go on. What happened to Crysis and FarCry2 ? “OLD” games but they do put cards to the test.

      • flip-mode
      • 9 years ago

      Just substitute Metro 2033 for Crysis.

        • south side sammy
        • 9 years ago

        not the same.

          • Meadows
          • 9 years ago

          Of course not, because it’s better.

        • sigher
        • 9 years ago

        From Metro I get much more the impression that it suffers from poor design, adding tons of stuff that seems to have been put in only to tax the GPU rather than to make the game interesting or pretty, stuff they didn’t streamline either, whereas with games like Crysis it seems more that they just added a lot of new stuff.
        From my cursory observation.

        I’m not saying a lot of the stuff in Metro isn’t designed right too; it just seems like they finished it and then somebody came around and added stuff for the sole purpose of making it run slower.

      • EsotericLord
      • 9 years ago

      Newer games are of course going to stress newer cards more. If you are playing older games, you probably aren’t in the market for a new graphics card. Even if you are, pretty much any of the test cards will suit your needs.

      Still don’t have any games on my PC (besides Metro 2033) that tap out my OC’ed 460 1GB.

        • south side sammy
        • 9 years ago

        ummmm……… no they’re not. Too many are ported from consoles for one and even COD series are still DX9.

          • DancinJack
          • 9 years ago

          Farcry 2 would stress these cards? Meh, I guess at some ridiculously high resolution MAYBE. A GTX260 runs FC2 ~50fps @1920×1200 with 4xAA and 16xAF. Not much to discuss there.

          • Farting Bob
          • 9 years ago

          Yea, and yet COD looks pretty damn good, i certainly cant tell the difference between that and any DX10/11 FPS. Sure you might get slightly more leaves flapping on a tree or water might look slightly more watery but if it means dropping my FPS in half to achieve those differences ill stick to a DX9 game that my 2 year old card can handle just fine.

      • flip-mode
      • 9 years ago

      Why? What will Crysis tell you that these newer games don’t?

        • south side sammy
        • 9 years ago

          I have the game, that’s what. Not to mention it hammers cards in its own way, as does FarCry2.

          • flip-mode
          • 9 years ago

          Ah, that’s what I thought; TechReport should always include one of the games YOU have in their benchmark suite. Right. OK. Super.

            • south side sammy
            • 9 years ago

            Yeah, exactly. All upper mid range and high end video cards for the past 2 years too. That way I’ll know if it’s smart to buy another one or come to the realization that what they’re putting out isn’t any better than what I already have. Cool huh ?

            • flip-mode
            • 9 years ago

            q[

            • south side sammy
            • 9 years ago

            grammar 101 is on another site.

            • flip-mode
            • 9 years ago

            I know and I hate going there but your sentences aren’t even meeting the minimum that is required to communicate a thought. And you’re defending that too?

            • south side sammy
            • 9 years ago

            I’ve been up for 37 hours. It makes perfect sense to me.

            • DancinJack
            • 9 years ago

            Have you been stressing your graphics card with FC2 and Crysis all day?

            • Meadows
            • 9 years ago

            I hate to say it, but you’re a moron. You started a train of thought, and he connected to it. You said TR should *[

            • poulpy
            • 9 years ago

            q[

            • flip-mode
            • 9 years ago

            As petty as people get in these exchanges (not you poulpy) I wanted to avoid making assumptions. Meadows is plain wrong – the sentence was lacking much, that much is simple fact, and it’s odd to see the captain of the grammar nazis come to its defense. My apologies, Meadows, for attempting to “follow the train” of a horribly incomplete sentence on a site where people often misinterpret complete, well-formed sentences. Lollers.

            Anyway, something weird often happens when you try to talk reason to people in certain situations. SSS somehow thinks it’s reasonable that tech sites should replicate his specific computer configuration and his specific game suite when benchmarking. Of course that’s absurd and one feels rather silly to find one’s self explaining such an obvious thing.

            Peace out :D

            • Meadows
            • 9 years ago

            Of course it’s not reasonable, and he accordingly missed your sarcasm. I’m not sure how much more clear to put it, but you sarcastically remarked that “Suuure, TR should include your specific library of games”, then he took it seriously, and added the sentence “Yes, and certain videocards too.” (Not with these exact words, of course.) The sentence itself isn’t lacking details, it’s just that you somehow lost *[

            • flip-mode
            • 9 years ago

            You are the decider as to when grammar is important or not: I get it! My bad.

            • Meadows
            • 9 years ago

            No, I still don’t think you do.

            • flip-mode
            • 9 years ago

            Whatever you decide.

            • south side sammy
            • 9 years ago

            Sorry, not true. Ever notice when a new card or new group of cards comes out, the last generation seems to disappear into oblivion? Wouldn’t it be nice to see how your last-generation card performed against these new and “improved” cards…. or would they not perform any better? 2 FPS isn’t going to make me want to buy a new card, but if the public is not informed about how so-so the new cards are in comparison to 6-month-old cards…… shame on the review site.
            As far as the games go………. I don’t buy every game that comes out because it’s the next “greatest” thing…. I wouldn’t touch Metro 2033 because of the linear gameplay. I do have games that really test hardware (so what if they’re “last gen games”), they still hit cards hard in their own way, and it would be nice to see something that I, and I’m sure others, are familiar with.
            If this site or any other site is inclined to make positive reviews because of some kind of financial gain, it hurts the consumer… that’s you and me… (grammar police) And it really doesn’t let us know, to enough of a degree, whether the cards are worthy of purchase or if the companies are putting out so-so junk with a new label on it just to get our money. Be kinda nice to know.

            • cegras
            • 9 years ago

            How could you not understand what he was saying? Do you have any notion of context?

            • south side sammy
            • 9 years ago

            Sorry, next time I’ll look for the “crayon” font so you’ll be able to understand it better.

      • kamikaziechameleon
      • 9 years ago

      There are two older games that really stress the heck out of GPUs still: Crysis and GTA 4 Episodes from Liberty City.

      I can play plenty of newer games just fine, but those two knock my PC on its ass in a big way.

    • dpaus
    • 9 years ago

    l[

      • Damage
      • 9 years ago

        Um, did you read the next sentence? :)

        • dpaus
        • 9 years ago

        Oh, is /[

        • Silus
        • 9 years ago

        Nice review Scott!

        I’m wondering what happened to your supposed change in how you test power consumption, which was highlighted in the GTX 580 review, where different loads tell a different story.

        Is this intended for future reviews only, or is it dismissed entirely? Or was it used for this review’s power consumption tests?

          • Damage
          • 9 years ago

          That’s something I expect to change in the future, but we’re still building on the same set of results, so I haven’t re-tested with different procedures yet.

    • flip-mode
    • 9 years ago

    Nice performing card. The thought of this thing dipping below $300 is really exciting.

    This could be my next card… I’m only at 1600×1200, but looking at the Metro and AvP benchmarks, this card seems to be what is necessary to run demanding games for the next couple of years.

    I think the conclusion’s suggestion that this card is too much for lower resolutions could prove troublesome in the not-so-distant future. Lesser cards cannot run Metro 2033 or AvP with high settings even at low resolutions. I’m hoping more games are coming with such levels of visual goodness, and if that’s the case, then buying a lesser card than a GTX 570 means I potentially just wasted $250 to save $100. I’d like to think I’m getting wiser as I’m getting older, and my gut is telling me not to “lay up” on my next video card purchase.

    I’m hoping that Cayman kicks arse in big ways, if for no other reason than to knock down the price of this wonderful card. But I’d be completely thrilled if a Radeon 6950 outperformed this card by a smidge and was priced lower at the same time. AMD knows all it needs to know about the competition now, so it would be frustrating to see AMD come with unattractive price or performance.

    • 5150
    • 9 years ago

    GPU innovations seem anything but innovative anymore.

      • BlackStar
      • 9 years ago

      Large changes generally come every two generations: Nvidia 6x00 to 8x00 to GT2xx, or Ati 9x00 to X1xx0 to 4xx0 to 5xx0 (the latter being kind of an outlier).

      Ati 7xx0 series should be more interesting, as should Nvidia GT6xx series (unless they start renaming again… :/)

      • shank15217
      • 9 years ago

      Uhuh.. you’re blaming the wrong team.

    • KamikaseRider
    • 9 years ago

    Where are the 6900 series???? C’mon AMD, you can do it!

    I smell a really good fight between this and Cayman.

    • kamikaziechameleon
    • 9 years ago

    I’m surprised it isn’t one of the following:

    Cheaper

    or

    More powerful

    The lack of a distinguishing quality brings into question the relevance of this piece of hardware. I’m still happy with my choice of a 460, seeing as none of these offerings are dramatically dropping prices on existing stock.

      • Buzzard44
      • 9 years ago

      Power and noise reduction are what this card brings to the table.

      Not bad, not outstanding. I’m waiting to see how 69xx looks before I make any judgements. Also, 68xx CF still looks amazing compared to anything else currently out there.

        • BlackStar
        • 9 years ago

        Except, possibly, the 460 SLI configuration, which is great value for money.

          • Buzzard44
          • 9 years ago

          Yeah, the 460 SLI is a great value, but usually I see the 6850 CF pulling marginally ahead, and is slightly cheaper compared to the 810MHz 460s. Of course, in actual use I doubt you could tell the difference between 460s in SLI and 6850s in CF in a blind test, unless you knew which games the GeForces shined in and which games the Radeons did.

            • kamikaziechameleon
            • 9 years ago

            460 SLI is more appealing to me since Nvidia tends to be more vigilant on drivers, and they are so crucial to multi-GPU setups. ATI isn’t as good at maintaining older cards in CrossFire configs for newer games, or newer cards in older games, with regards to multi-GPU configs.

            I guess I do see some of the appeal with this card, but in the end I’m sad Nvidia didn’t drop the MSRP by 50 bucks. I think that would have given them a good competitive edge, especially when you consider how much raw power the 69XX cards should have out the gate. They need a next-gen card to capture the 200-300 price point. Right now this sits just above that, and it kinda doesn’t help offset the 6870 demand as it could.

      • khands
      • 9 years ago

      It will be interesting to see where the 6950 fits in here (I expect it to perform about the same, price will likely be the distinguishing factor).

    • BlackStar
    • 9 years ago

    Last year’s single-card performance for $350. Not too shabby, but I think I’ll wait to see how the 69x0 series matches up.

      • rUmX
      • 9 years ago

      I agree. Bring on Cayman!

    • TaBoVilla
    • 9 years ago

    nice power reduction there!

    this, the performance, and the $350 price tag makes it real tempting!

    • Usacomp2k3
    • 9 years ago

    Cool. Looks nifty.
    EDIT: It almost seems like this is a direct replacement for the 480. Wonder if they’ll sell what they have and not make any more.

      • sweatshopking
      • 9 years ago

      I don’t know why they would. same cost to produce, and crappier. Nobody should be buying the 480 anymore. you’d have to be dumb, or not a nerd.
