Nvidia’s GeForce GTX 560 Ti graphics processor

I don’t wish to alarm you, but two very powerful things are converging upon us simultaneously. First, we have the introduction of a new video card from Nvidia aimed smack dab at the soft, chewy center of the price-performance curve. Second, and even more consequentially, we’re on the cusp of a potentially huge upgrade cycle, prompted by one Miss Sandy Bridge, Intel’s sparkling new mid-range CPU. We recently polled you, our faithful readers, about your upgrade plans for Sandy Bridge, and more than a quarter of voters said they intend to upgrade either immediately or “soon.” Those are astonishing numbers, if you think about it.

Mash those two facts together, and you have an inevitable outcome: a number of interested parties would really like to sell you one of these new video cards—or something like it—to go with your new system. AMD, Nvidia, and their partners are pumping up their performance, hacking away at prices, and doing everything else they can to grab your attention. Happily, that means some really nice choices should soon be available to you via your favorite online retailer.

One of those choices is our headliner today, the GeForce GTX 560 Ti graphics card. And yes, the name ends in “Ti”—that’s not a typo. To decode it, look not toward the Texas-based chip company or the rapper. Instead, think periodic table. This is the GeForce GTX 560 “Titanium,” believe it or not, a name that hearkens waaaay back to 2001 and the GeForce3 Ti graphics card. (Yes, in a shocking example of career stagnation, I was reviewing graphics cards back then just as I am today.) We’ll explain the reasons behind this peculiar naming choice shortly, but first, let’s consider the revamped GPU that drives the GTX 560 Ti.

Please welcome the GF114

If you’ve been following the veritable truckload of new GPU releases over the past four months, you’ll know that Nvidia has been following up its famously late-to-market GeForce GTX 400-series graphics processors with a reworked GTX 500 series that’s arrived in a more timely fashion. The somewhat shaky GeForce GTX 480 gave way to the world-beating GTX 580, based on a very similar chip with higher clock speeds, more units enabled, and lower power consumption.

The GF114 hides out under a big metal cap

The GTX 560 Ti’s release follows the same basic template. The card is based on the GF114 graphics processor, a reworked version of the GF104 graphics processor that lies under the heatsink of every GeForce GTX 460. That reworking has involved tuning the chip’s design to better fit TSMC’s 40-nm fabrication process. To improve performance and lower power consumption, Nvidia has used faster transistors in the speed-sensitive paths on the chip while deploying low-leakage transistors elsewhere. Beyond those tweaks, the GF114’s architecture is essentially the same as the GF104’s, with no other notable changes. (The GF100-to-GF110 transition included an upgrade to the texture filtering hardware to allow full-rate filtering of FP16 formats, but the GF104 already had that capability.)

        ROP pixels/  Texels filtered/  Shader  Rasterized       Memory interface  Est. transistor   Approx. die  Fab
        clock        clock (int/fp16)  ALUs    triangles/clock  width (bits)      count (millions)  size (mm²)   process
GF114   32           64/64             384     2                256               1950              360          40 nm
GF110   48           64/64             512     4                384               3000              529*         40 nm
Barts   32           56/28             1120    1                256               1700              255          40 nm
Cayman  32           96/48             1536    2                256               2640              389          40 nm

*Best published estimate

At long last, Nvidia has relented from its policy of trying to keep die sizes obscured. The GF114’s die size is, officially, 360 mm². Chip size isn’t a terribly important metric for most folks to know, but it does give us a sense of what a GPU costs to manufacture. The GF114 looks to be just a little smaller than AMD’s Cayman chip but considerably larger than the Barts GPU used in the Radeon HD 6800 series.
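To see why die size hints at manufacturing cost, here's a back-of-the-envelope sketch using the standard dies-per-wafer approximation. It assumes 300 mm wafers and ignores yield, scribe lines, and per-wafer pricing, none of which come from this article, so treat the output as rough relative numbers rather than anything TSMC would quote.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic dies-per-wafer approximation: wafer area divided by die area,
    minus a correction for partial dies lost around the wafer's edge.
    Yield and scribe lines are ignored."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die areas (mm²) from the table above; the GF110 figure is the published estimate.
for chip, area in [("GF114", 360), ("GF110", 529), ("Barts", 255), ("Cayman", 389)]:
    print(f"{chip:7s} ~{dies_per_wafer(area):3d} candidate dies per 300 mm wafer")
```

By that crude measure, each wafer yields roughly 45% more Barts candidates than GF114 candidates, which is the sense in which a smaller die is cheaper to build.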

So why the Ti?

If you know the history of the GeForce GTX 460, then Nvidia’s decision to bring back the Titanium designation just might make some sense to you. The GTX 460 started life at a rather modest 675MHz clock speed last July, but later versions crept up to well in excess of 800MHz last fall when Nvidia needed an answer to the Radeon HD 6870.

The fact that such a wide range of performance was available under a single product name caused some consternation in various quarters, a problem that was compounded by video card makers’ tendency to refer to these higher-clocked parts as “overclocked,” a word that does not truly apply. After all, the chips have been through a rigorous binning process, qualified for the speed used, and shipped in a boxed product with a full warranty. Those cards are approximately as much overclocked as I am a potential Chippendale’s dancer, which is to say not at all.

Anyhow, the tweaks to the GF114 have given Nvidia some additional performance headroom in several ways. All units on the chip are enabled, whereas one of the eight SM cores on the GF104 is disabled in the GTX 460. Thus, clock for clock, the GTX 560 Ti has more shader and texturing power and more polygon throughput than the GTX 460. Also, clock speeds are up. The GTX 560 Ti’s stock baseline frequency is 822MHz, and its gigabyte of GDDR5 memory runs at 4 GT/s, versus 675MHz and 3.6 GT/s in the original GTX 460.

                     Peak pixel   Peak bilinear     Peak shader  Peak            Peak memory
                     fill rate    texel filtering   arithmetic   rasterization   bandwidth
                     (Gpixels/s)  rate (Gtexels/s)  (GFLOPS)     rate (Mtris/s)  (GB/s)
GeForce GTX 460 1GB  16.8         39.2              941          1400            88.3
GeForce GTX 560 Ti   26.3         52.6              1263         1644            128.3
GeForce GTX 570      29.3         43.9              1405         2928            152.0
GeForce GTX 580      37.1         49.4              1581         3088            192.0

All in all, the new card is a higher class of product than the GTX 460, and Nvidia wanted to make that clear. But apparently, you know, not too clear. Rather than grabbing the obvious next number in the series, say GeForce GTX 565, to indicate higher performance, Nvidia somehow decided to reach deep into its bag of tricks and dust off the Titanium name.

Which is, after all, shiny.

                     GPU clock  Shader  Textures        ROP pixels/  Memory transfer  Memory interface  Peak power  Suggested
                     (MHz)      ALUs    filtered/clock  clock        rate (Gbps)      width (bits)      draw        e-tail price
GeForce GTX 460 1GB  675        336     56              32           3.6              256               160W        $199.99
GeForce GTX 560 Ti   822        384     64              32           4.0              256               170W        $249.99
GeForce GTX 570      732        480     60              40           3.8              320               219W        $349.99
GeForce GTX 580      772        512     64              48           4.0              384               244W        $499.99

Interestingly enough, the GTX 560 Ti doesn’t directly replace the GTX 460. The 560 Ti will list for $249.99 at online retailers, while the 460 will soldier on at a lower price point with lower performance. My sense is that eventually Nvidia will introduce a GTX 560 non-Ti based on the GF114 that properly replaces the 460.

Although GTX 560 Ti clocks start out much higher than the 460’s, Nvidia claims it’s not just eating up overclocking margin because this is a substantially re-engineered product. In keeping with its usual practice, the company says it has left ample room for board makers to produce higher-clocked variants of the 560 Ti—and for enthusiasts to overclock their own cards, if they wish. Leaving such headroom has been part of Nvidia’s business model for some time now, and that tradition apparently continues.

Pictured above is a card based on Nvidia’s GeForce GTX 560 Ti reference design. Although it may look similar to the GTX 460, this is in fact a new, longer PCB (now 9″) attached to a larger, heavier cooler with a trio of heatpipes embedded. The output ports are standard Nvidia for this generation: two dual-link DVIs and one mini-HDMI connector. As you can see, the board requires two 6-pin aux power inputs. Max power draw is rated at 170W, and Nvidia recommends a 500W power supply unit.

We’d expect to see some cards based on this reference design selling at online retailers today for right around the $249.99 suggested price. As with the GTX 460, though, we can expect higher clocks and tremendous variety in board and cooling designs very soon. For instance….

Three kings

We have three different examples of retail versions of the GeForce GTX 560 Ti for comparison, from the three major enthusiast motherboard manufacturers, all with custom coolers and board designs and all clocked higher than Nvidia’s baseline frequency.

The first such board to arrive in Damage Labs was Gigabyte’s GTX 560 Ti SOC, and it’s pretty remarkable thanks to a dizzying 1GHz GPU clock and 4580 MT/s memory. Gigabyte has left Nvidia’s base 822MHz and 4008 MT/s speeds in the dust, yet this card is slated to sell for only $269.99 when it hits online retailers next week. Those clock speeds mean the SOC will threaten the product sitting at the next rung up Nvidia’s stack, the GeForce GTX 570, in a number of key graphics throughput rates.

Like most of these card makers, Gigabyte cites several factors that purportedly contribute to its product’s superiority over the average reference design. Those include higher-quality components, rigorous testing of chips during their sorting into different speed grades, a custom board design (9.5″ long, or half an inch beyond the reference card), and a bitchin’ cooler with quad heatpipes and dual fans.

In fact, Gigabyte’s next-generation “Windforce” cooler has fans angled slightly outward in a way that the firm claims reduces turbulence (and thus noise). We like the fact that the gap between the two fans ought to leave some room for air intake, even with a card installed in the adjacent slot. Many of these fan-based coolers perform poorly in SLI (or in the company of a TV tuner or sound card, for that matter).

The real secret to the SOC’s ridiculously high clock speeds, though, may be the component pictured above: a Proadlizer film capacitor from NEC/TOKIN. I don’t believe we’ve seen one of these on a graphics card before. It sits on the back side of the card right between the VRMs and the GPU and memory chips, purportedly providing “excellent noise absorption performance” and “high switching frequency.” The card’s six-phase power and other attributes may contribute to its ability to sustain higher frequencies, as well. Whatever the case, Gigabyte’s SOC is a pretty vivid illustration of a board maker taking the GTX 560 Ti to another level, and since it hit our labs first, we were able to run it through our full suite of performance tests.

Aesthetically, Asus’ GTX 560 Ti DirectCU II TOP is my favorite GTX 560 Ti card so far. Whereas Gigabyte’s effort comes from the heavy-metal-flames school of industrial design, Asus opts for the more understated red-and-black 1985 VW GTI approach. The matte paint and racing stripes work pretty well, in my opinion.

Asus’ cooler has one less heatpipe than Gigabyte’s, but all three of the pipes snake across the surface of the GPU, making direct contact, which purportedly results in better heat conduction. This card is unique among the group in sporting a metal brace running the length of the PCB to prevent warping. Asus is also proud of its selection of “super alloy” components for this board’s power delivery circuitry, which it claims provides superior performance and higher overclocking headroom than the reference design.

You may have to test that out for yourself, though, because the clock speeds on this card are a little more understated than the Gigabyte’s, with a 900MHz GPU core and 4.2 GT/s memory. Much like the Gigabyte, this card is priced at $269.99 and slated for availability at online retailers in 7-10 days. Asus also has a stock-clocked version of the GTX 560 Ti that should be available immediately for $249.99.

MSI’s choice of a brushed-metal-and-chrome approach rounds out our sampling of industrial design schools in PC hardware nicely. Nvidia’s reference card covers the black-and-neon theme, so I believe all we’re missing is a military/camo scheme to finish off all of the major schools.

Anyhow, MSI’s rendition of the GTX 560 Ti is adorned with the company’s familiar “Twin Frozr II” dual-fan cooler with quad heatpipes. This cooler performed well for us aboard MSI’s high-clocked (810MHz) version of the GTX 460, and we expect similarly good things here. Like the other guys, MSI touts its component selection—these are claimed to be “military class”—and custom cooler as sources of superiority to the poor, battered reference design.

At 880MHz and 4200 MT/s, this card is a little slower than the Asus and a lot slower than the Gigabyte, but MSI will only ask $259.99 for this offering—and it should be available immediately. Versions with 900 and 950MHz clock speeds, still at 4.2 GT/s memory, are planned, as well.

A pair of aces from AMD

You didn’t think AMD would let this momentous occasion pass without injecting a little excitement of its own into the conversation, did you? The Radeon guys have several responses to the GTX 560 Ti, all of which make things more interesting for us today.

The first response is, perhaps inevitably, a higher-clocked version of the Radeon HD 6870 with, you guessed it, a custom cooler that has dual fans and triple heat pipes. XFX’s Radeon HD 6870 Black Edition runs at 940MHz with 4.6 GT/s memory, up from the 6870’s stock speeds of 900MHz and 4.2 GT/s. Currently, it’s listed for $259.99 at Newegg, which is a little steep considering some other things we’re about to unload on you. There is a $30 mail-in rebate attached, for those who enjoy that peculiar form of abuse. (We’re not fans of mail-in rebates, whose perverse business model depends on keeping redemption rates low.) I believe this is the highest-clocked variant of the Radeon HD 6870 available, and it should provide a more suitable challenge for the GTX 560 Ti.

The next prong of AMD’s counterattack is a lower-priced variant of the Radeon HD 6950 whose memory size has been cut from 2GB to 1GB. This is perhaps the product that AMD should have introduced first, had it not been angling to press its advantage on multiple display support and Eyefinity gaming. One gigabyte of video RAM should be sufficient for the vast majority of users who have a single display with a resolution of 1920×1200 or less.

The card pictured above is just an engineering sample from AMD, but already, there’s a Sapphire-branded version of the Radeon HD 6950 1GB listed at Newegg for AMD’s suggested price of $259.99. If that price holds steady over time, the Radeon HD 6950 1GB should present formidable competition to higher-clocked versions of the GTX 560 Ti.

As if that weren’t enough, AMD is also dropping the suggested prices on a couple of other offerings. The stock Radeon HD 6870 is down to $219.99, and the Radeon HD 6950 2GB drops ten bucks to $289.99, with rebates adding potential savings beyond that. (Maybe.)

Along with the GTX 560 Ti’s introduction, these changes add up to a nice downward shift in overall pricing. As I said, GPU and video card makers want in on some of that sweet Sandy Bridge upgrade action. We’ll do our best to help you decide whether to upgrade by testing all of these cards against the incumbents in this price range, along with a couple of older GeForces that may be similar to a card you already own.

The first of those is an original GeForce 8800 GT 512MB. This card was a favorite of ours several years ago and remains in many enthusiasts’ systems today. We’ll let it be our proxy for cards of that era. If you own a GeForce 8800 GTS, GeForce 9800 GT, or a Radeon HD 3850 or 3870, you own roughly the same class of GPU.


Asus’ GTX 260 gives us our camo design theme FTW!

The second older card we’ve tested is the “reloaded” version of the GeForce GTX 260, introduced over two years ago. The GTX 260 had an unusually long run as an attractive video card option due to shortages of DX11-class GPUs and the stagnation of GPU requirements in PC games. If you own a GeForce GTX 280 or a Radeon HD 4870 1GB, you have the same basic class of GPU and can probably consider your need to upgrade based on this card’s performance.

These older DirectX 10 cards wouldn’t run every single game at the settings we used, so we occasionally had to leave them out.

Test notes

With its latest Catalyst 11.1a drivers, AMD has introduced some performance-tuning features to its user control panel, including an interesting new slider to limit tessellation levels used in DirectX 11 games. Since these options are becoming fairly complex, we’ll show you how we had the Radeons configured.

We believe these settings are the closest match to Nvidia’s defaults. We’ll have to play with the tessellation slider at a later date, when we have more time to devote to it. As the “AMD optimized” checkbox indicates, AMD intends to introduce profile-based tessellation reduction in future driver revisions (perhaps as a means of addressing things like the HAWX 2 controversy), but it tells us that no applications have been profiled yet.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor        Core i7-980X
Motherboard      Gigabyte EX58-UD5
North bridge     X58 IOH
South bridge     ICH10R
Memory size      12GB (6 DIMMs)
Memory type      Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings   8-8-8-24 2T
Chipset drivers  INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio            Integrated ICH10R/ALC889A with Realtek R2.51 drivers
Graphics         Asus Radeon HD 6850 1GB with Catalyst 11.1a drivers
                 Sapphire Radeon HD 6870 1GB with Catalyst 11.1a drivers
                 XFX Radeon HD 6870 Black Edition 1GB with Catalyst 11.1a drivers
                 Radeon HD 6950 1GB with Catalyst 11.1a drivers
                 Radeon HD 6950 2GB with Catalyst 11.1a drivers
                 Radeon HD 6970 2GB with Catalyst 11.1a drivers
                 Asus GeForce 8800 GT 512MB with ForceWare 266.58 drivers
                 Asus GeForce GTX 260 896MB with ForceWare 266.58 drivers
                 GeForce GTX 460 1GB with ForceWare 266.58 drivers
                 MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz with ForceWare 266.58 drivers
                 GeForce GTX 560 Ti 1GB with ForceWare 266.56 drivers
                 Gigabyte GeForce GTX 560 Ti 1GB SOC with ForceWare 266.56 drivers
                 Zotac GeForce GTX 570 1280MB with ForceWare 266.58 drivers
Hard drive       WD RE3 WD1002FBYS 1TB SATA
Power supply     PC Power & Cooling Silencer 750 Watt
OS               Windows 7 Ultimate x64 Edition, DirectX runtime update June 2010

Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence. (A minimal sketch of how those per-run numbers can be boiled down appears after this list.)
  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Battlefield: Bad Company 2 at a 1920×1080 resolution with 8X AA and 16X anisotropic filtering. We test power with BC2 because we think it’s a solidly representative peak gaming workload.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.
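Here is the minimal sketch promised above of how per-run Fraps numbers can be boiled down into the average and minimum frame rates we report. The file names and the one-FPS-sample-per-line format are hypothetical stand-ins, not Fraps’ native output, and combining the five runs via the median reflects our general median-of-runs practice rather than a specific published script.

```python
import statistics
from pathlib import Path

def summarize_run(path):
    """Read one run's per-second FPS samples (one value per line, hypothetical
    export format) and return that run's average and minimum frame rate."""
    samples = [float(line) for line in Path(path).read_text().split()]
    return statistics.mean(samples), min(samples)

def summarize_card(run_paths):
    """Five Fraps passes per card: report the median of the per-run averages
    and the median of the per-run minimums to damp run-to-run variability."""
    averages, minimums = zip(*(summarize_run(p) for p in run_paths))
    return statistics.median(averages), statistics.median(minimums)

# Hypothetical file names for one card's five Bad Company 2 passes.
runs = [f"bc2_gtx560ti_run{i}.txt" for i in range(1, 6)]
# avg_fps, min_fps = summarize_card(runs)
```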

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Key rates and performance in directed tests

                              Peak pixel   Peak bilinear      Peak bilinear     Peak shader  Peak            Peak memory
                              fill rate    integer filtering  FP16 filtering    arithmetic   rasterization   bandwidth
                              (Gpixels/s)  rate (Gtexels/s)   rate (Gtexels/s)  (GFLOPS)     rate (Mtris/s)  (GB/s)
GeForce GTX 460 1GB           16.8         39.2               39.2              941          1400            88.3
GeForce GTX 460 1GB 810MHz    25.9         47.6               47.6              1089         1620            124.8
GeForce GTX 560 Ti            26.3         52.6               52.6              1263         1644            128.3
GeForce GTX 560 Ti SOC        32.0         64.0               64.0              1536         2000            146.6
GeForce GTX 570               29.3         43.9               43.9              1405         2928            152.0
GeForce GTX 580               37.1         49.4               49.4              1581         3088            192.0
Radeon HD 6850                25.3         37.9               19.0              1517         790             128.0
Radeon HD 6870                28.8         50.4               25.2              2016         900             134.4
Radeon HD 6870 Black Edition  30.1         52.6               26.3              2106         940             147.2
Radeon HD 6950                25.6         70.4               35.2              2253         1600            160.0
Radeon HD 6970                28.2         84.5               42.2              2703         1760            176.0

The theoretical peak numbers in the table above will serve as a bit of a guide to what comes next. Different GPU architectures achieve more or less of their peak rates in real-world use, depending on many factors, but these numbers give us a sense of how the various video cards compare.
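If you want to check the peak figures in these tables yourself, they fall out of simple arithmetic on each card's spec sheet. The sketch below covers the Fermi-based GeForces, whose shader ALUs run at twice the core clock and execute two FLOPS (one fused multiply-add) per cycle; the Radeon math differs in that its ALUs run at the core clock and Barts and Cayman filter FP16 textures at half their integer rate. The function and argument names here are ours, purely for illustration.

```python
def geforce_peaks(core_mhz, alus, texture_units, rops, tris_per_clock,
                  mem_transfer_gts, mem_bus_bits):
    """Back-of-the-envelope peak rates for a Fermi-class GeForce. The shader
    ALUs run at twice the core clock and do one fused multiply-add (2 FLOPS)
    per cycle."""
    shader_mhz = core_mhz * 2
    return {
        "fill_gpixels_s":   rops * core_mhz / 1000,
        "filter_gtexels_s": texture_units * core_mhz / 1000,
        "shader_gflops":    alus * 2 * shader_mhz / 1000,
        "raster_mtris_s":   tris_per_clock * core_mhz,
        "bandwidth_gb_s":   mem_transfer_gts * mem_bus_bits / 8,
    }

# GeForce GTX 560 Ti: 822MHz core, 384 ALUs, 64 texture units, 32 ROPs,
# two triangles per clock, 4008 MT/s memory on a 256-bit bus.
print(geforce_peaks(822, 384, 64, 32, 2, 4.008, 256))
# Roughly 26.3 Gpixels/s, 52.6 Gtexels/s, 1263 GFLOPS, 1644 Mtris/s, and
# 128.3 GB/s, matching the table above.
```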

One match-up to watch: Gigabyte’s GeForce GTX 560 Ti SOC versus the much pricier GeForce GTX 570. The 560 Ti SOC is faster in all but two categories: rasterization rate and memory bandwidth. Could this version of the 560 Ti knock off its elder sibling?

This color fill rate test tends to be limited primarily by memory bandwidth rather than by ROP rates. That’s why the GTX 560 Ti SOC is a little slower than the GTX 570.

Despite a lower peak theoretical rate, the Radeon HD 6870 outperforms the GTX 560 Ti slightly in the integer filtering test. However, when we switch to FP16 texture formats, the GTX 560 Ti has a major advantage, both in theory and in delivered performance.

Notice the curious case of the Radeon HD 6970 here, which is slower than the 6950 in both tests. The cause of that anomaly is the PowerTune power capping feature in the Radeon HD 6900-series GPUs. The 6970, for whatever reason, is hitting its power cap and being limited more than the 6950. To demonstrate what happens without the power cap, I raised the PowerTune limit on the 6970 by 20% via AMD’s Overdrive utility. As you can see, that put the 6970 firmly out ahead of the 6950, as one would expect.

The first tool we can use to measure delivered pixel shader performance is ShaderToyMark, a pixel shader test based on six different effects taken from the nifty ShaderToy utility. The pixel shaders used are fascinating abstract effects created by demoscene participants, all of whom are credited on the ShaderToyMark homepage. Running all six of these pixel shaders simultaneously easily stresses today’s fastest GPUs, even at the benchmark’s relatively low 960×540 default resolution.

Up next is a compute shader benchmark built into Civilization V. This test measures the GPU’s ability to decompress textures used for the graphically detailed leader characters depicted in the game. The decompression routine is based on a DirectX 11 compute shader. The benchmark reports individual results for a long list of leaders; we’ve averaged those scores to give you the results you see below.

Finally, we have the shader tests from 3DMark Vantage.

Clockwise from top left: Parallax occlusion mapping, Perlin noise, GPU cloth, and GPU particles

Clearly, the question of which GPU architecture’s shader performance is better depends heavily on the sort of workload involved. To oversimplify a bit, we know that the more vertex-shader-intensive tests like 3DMark’s cloth and particles tests are generally dominated by the GeForces, while the Radeons perform better in the more pixel-shader-intensive tests.

Cross-brand comparisons are difficult for that reason, but interestingly enough, we have a couple of sibling rivalries worth watching. The aforementioned battle between the GTX 560 Ti SOC and the GTX 570 remains fairly tight, but the GTX 570 has a clear edge in most cases. Meanwhile, the Radeon HD 6870 Black Edition somehow outperforms the theoretically much superior Radeon HD 6950 in a couple of tests. We’d chalk that up to the PowerTune cap on the 6950, which the 6870 lacks—and which is much more likely to be an issue in these synthetic tests than when running a typical gaming workload.

We can measure geometry processing speeds pretty straightforwardly with the Unigine Heaven demo. This demo doesn’t really make good use of additional polygons to increase image quality at its highest tessellation levels, but it does push enough polys to serve as a decent synthetic benchmark.

Although the theoretical rasterization rates for the Radeon HD 6950 and the GeForce GTX 560 Ti are very similar, good tessellation performance involves much more than just rasterization—and Nvidia has an undeniable architectural advantage in this generation of GPU in terms of overall geometry processing throughput. These results reflect that.

F1 2010
F1 2010 steps in and replaces CodeMasters’ previous effort, DiRT 2, as our racing game of choice. F1 2010 uses DirectX 11 to enhance image quality in a few select ways. A higher-quality FP16 render target improves the game’s high-dynamic-range lighting in DX11. A DX11 pixel shader is used to produce soft shadow edges, and a DX11 Compute Shader is used for higher-quality Gaussian blurs in HDR bloom, lens flares, and the like.

We used this game’s built-in benchmarking facility to script tests at multiple resolutions, always using the “Ultra” quality preset and 4X multisampled antialiasing.

In formulating the testing regimen for this review, I vowed to focus on the wildly popular 1920×1080 resolution as much as possible. In this case, the GTX 560 Ti is generally a little slower than the stock Radeon HD 6870 at that resolution, as it is across the board. At the two lower resolutions, the GTX 560 Ti SOC looks lethal, outperforming the 6950 1GB, but that changes at 2560×1600.

The thing is, this is largely bickering over comparative numbers, which is helpful as a part of the overall performance picture, but consider this fact: even the slowest card in the test, the original GTX 460 1GB, delivers an average of 42 frames per second and a minimum of 39 FPS. In other words, any of these cards will run this game just fine, even at the “Ultra” quality preset we’re using.

Civilization V

In addition to the compute shader test we’ve already covered, Civ V has several other built-in benchmarks, including two we think are useful for testing video cards. One of them concentrates on the world leaders presented in the game, which is interesting because the game’s developers have spent quite a bit of effort on generating very high quality images in those scenes, complete with some rather convincing material shaders to accent the hair, clothes, and skin of the characters. This benchmark isn’t necessarily representative of Civ V‘s core gameplay, but it does measure performance in one of the most graphically striking parts of the game. As with the earlier compute shader test, we chose to average the results from the individual leaders.

The high-quality pixel shaders used in these leader scenes appear to map better to AMD’s shader architectures, both the older Barts vec5 and the newer Cayman vec4, than they do to Nvidia’s Fermi shaders.

Another benchmark in Civ V focuses, rightly, on the most taxing part of the core gameplay, when you’re deep into a map and have hundreds of units and structures populating the space. This is when an underpowered GPU can slow down and cause the game to run poorly. This test outputs a generic score that can be a little hard to interpret, so we’ve converted the results into frames per second to make them more readable.

When it comes to core gameplay, the GeForce cards tend to be a little faster than the Radeons in Civ V. For whatever reason, there’s very little separation between the 6870, 6950, and 6970, while the GeForces tend to scale up in performance as expected from one model to the next. Perhaps the large number of detailed models on screen is causing the Radeons to hit a geometry throughput bottleneck? That might also explain why the Radeons aren’t much faster here, at 1920×1080, than they were in our past tests of the same game at 2560×1600.

We’ve included the GeForce GTX 260 and 8800 GT in these results even though the game has to fall back to DirectX 10 in order to run on them. Obviously, even the GTX 260 is overmatched at these quality settings.

StarCraft II

Up next is a little game you may have heard of called StarCraft II. We tested SC2 by playing back an epic eight-player match using the game’s replay feature. This particular match was about 30 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps. After we’d captured all of the data, we decided to focus our sample period on the last 500 seconds of the game, when the action was most intense. We’ve focused our frame-by-frame graphs on an even smaller 200-second portion of that period in order to keep them readable.

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

Notice that we broke our rule and tested at 2560×1600 in this game. We did so because StarCraft II isn’t really GPU-bound with any of these newer graphics cards at 1920×1080. Only at 2560×1600 could we really stress these GPUs, although, sadly, going to this resolution caused the GeForce 8800 GT to hit the limits of its 512MB frame buffer.

The Radeons have a pretty pronounced advantage in this game, as they have every time we’ve tested SC2. Any of the newer cards would be a nice upgrade from the GeForce GTX 260, though.

Battlefield: Bad Company 2
BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

The GTX 560 Ti and the 6870 Black are in a dead heat here, while the GTX 560 Ti SOC is looking like a giant-killer by placing in between the Radeon HD 6970 and the GTX 570, both much more expensive cards.

Metro 2033

We decided to test Metro 2033 at multiple image quality levels rather than multiple resolutions, because there’s quite a bit of opportunity to burden these GPUs simply using this game’s more complex shader effects. We used three different quality presets built into the game’s benchmark utility, with the performance-destroying advanced depth-of-field shader disabled and tessellation enabled in each case.

We’ve included the two older, DX10-only cards here, even though they can’t handle DX11 tessellation, simply for comparison. Although they’re doing less work, they don’t gain any major advantage over the newer DX11 cards in terms of frame rates, as you’ll see.

In our past testing with this game, the trend has been that the Radeons grow relatively stronger as the quality level rises, and that trend appears to hold once more, though not in terribly pronounced fashion. Generally, the GTX 560 Ti matches the Radeon HD 6870 Black, while the GTX 560 Ti SOC closely shadows the 6950 1GB.

Aliens vs. Predator
AvP uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards.

The mini-trend of near parity between two pairs of competitors, the 6870 Black-GTX 560 Ti and the 6950 1GB-GTX 560 Ti SOC, continues here. The relative standings are almost identical, regardless of the display resolution.

Power consumption

Now for some power and noise testing. We’ve included all three of the GTX 560 Ti cards with custom coolers and clock speeds for comparison.

By the way, we’ve changed our workload for the “load” tests. This time around, we’re using Battlefield: Bad Company 2 at 1920×1080 to load up the system. We’ve found that our test system draws more power while running this game than most others, making it a solid choice for the job, but these results aren’t directly comparable to our past articles where we used Left 4 Dead 2.

With the exception of the much higher clocked Gigabyte SOC card, all of the systems with GTX 560 Ti cards draw less power at idle than those with Radeons installed. The situation is reversed when running a game, as the GTX 560 Ti-based systems require more power than the competition. XFX’s 6870 Black Edition is particularly efficient.

One slight puzzle here is the case of the Radeon HD 6950 1GB, which is hungrier for power than its 2GB variant. AMD tells us the 6950 1GB may use older, less dense DRAM chips built on a larger fabrication process, which could explain some of the higher power draw. Factor in things like VRM and PSU inefficiencies, and that may go a long way toward explaining the 18W difference in total system power draw. However, it’s possible our early review sample from AMD, which isn’t a finished, shipping product, has other issues.
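To illustrate how component-level differences get magnified at the wall, here's a tiny worked example. Every number in it is an assumption for the sake of the arithmetic, not a measurement of the 6950 1GB.

```python
# Hypothetical illustration of card-level power differences amplified at the wall.
card_delta_w = 14        # assumed extra draw from older, less dense DRAM (not measured)
vrm_efficiency = 0.85    # assumed on-card voltage regulation efficiency
psu_efficiency = 0.82    # assumed power supply efficiency at this load
wall_delta_w = card_delta_w / (vrm_efficiency * psu_efficiency)
print(round(wall_delta_w, 1))  # ~20W at the wall from ~14W on the card
```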

Another interesting item: notice how the GTX 560 Ti SOC and the GTX 570 have very similar power draw to go along with their fairly closely matched performance. Funny how two cards based on chips of different sizes can arrive at the same basic power-performance balance, isn’t it? Once you push past a certain point with higher clock speeds on a smaller chip, the exponential increases in power draw required to push further begin to cause problems. Gigabyte’s 560 Ti SOC isn’t at a bad place at all, but the GF114 looks to be near its practical limits here for a consumer product.
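The intuition behind that ceiling: dynamic power in CMOS logic scales roughly with frequency times the square of voltage, and squeezing out more megahertz usually requires more voltage. The sketch below is purely illustrative; the 10% voltage bump is a hypothetical figure, not anything Gigabyte has disclosed.

```python
def relative_dynamic_power(freq_scale, voltage_scale):
    """Rough CMOS dynamic-power model: P is proportional to f * V^2,
    with switched capacitance held constant."""
    return freq_scale * voltage_scale ** 2

# Hypothetical example: a 22% clock bump (822MHz -> 1GHz) paired with a 10%
# voltage increase lands at roughly 1.5x the stock dynamic power.
print(round(relative_dynamic_power(1000 / 822, 1.10), 2))
```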

Noise levels and GPU temperatures

The combination of noise levels and GPU temperatures offered by the GTX 560 Ti reference card looks nearly ideal, and the XFX 6870 Black Edition performs so much like it, we had to do a double-take. Both are tuned for low noise levels, obviously, yet they don’t allow GPU temperatures to rise very high at all, as these things go.

Clearly, the three big motherboard makers are tuning their thermal solutions with an eye toward keeping GPU temperatures low, even if that means higher noise levels under load. As we’ve said before, we think that’s unfortunate. We’d prefer a little tolerance for higher temperatures if it means the fans will be quieter under load.

Of those three, the most impressive cooling performance comes from the Gigabyte SOC card. Despite the fact that its 1GHz GPU has substantially more heat to dissipate (as its readings on the watt meter indicate), the Gigabyte cooler keeps the chip at 63° C while producing less noise than the stock GTX 570 and 6970 coolers, both respectably quiet solutions.

MSI’s twin-fan cooler also looks to be very potent without producing too much noise. We just wish we could trade an increase of 15° C or so in GPU temperature for whatever that would give us on the decibel meter. This could well be the quietest card of the group with different tuning. It’s still quite good as it is, though.

Asus’ DirectCU II cooler is the loudest in the entire bunch, despite the fact that several other products have more heat to dissipate. Rather than hiss innocuously like most cooling fans, the fans on the Asus cooler emit a mid-pitched whine that registers strongly on our decibel meter. That’s unfortunate, because we really liked Asus’ first-generation DirectCU cooler. This one evidently needs some work in order to be competitive.

Conclusions

As you’ve no doubt gathered, this is a ridiculously close contest between some formidable competitors. Sorting out which one is best isn’t going to be easy, but I have a few definite thoughts on these matters.

Let’s begin by going to our price-performance scatter plot, generated by averaging the frame rates across all six of our real-world gaming tests. We’ve used the results from the 1920×1080 resolution where possible, since that is our focus for this review. Our prices come from two sources. For any new products or cards with revised pricing, we’ve taken AMD and Nvidia at their word. For the rest, we’ve used the lowest prevailing price—that is, the lowest price with multiple listings—at Newegg. The one exception: cards like the 810MHz version of the GTX 460 1GB are unique, so we just used the exact listed price for such products. As always, mail-in rebates were not factored into the mix.
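For the curious, a scatter plot like ours is straightforward to assemble once you have one (price, average FPS) pair per card. The sketch below uses generic placeholder coordinates rather than our measured results, just to show the construction.

```python
import matplotlib.pyplot as plt

# Placeholder (price, average FPS) pairs; these are not measured results.
cards = {
    "Card A": (200, 40),
    "Card B": (250, 47),
    "Card C": (270, 51),
    "Card D": (350, 55),
}

fig, ax = plt.subplots()
for name, (price, fps) in cards.items():
    ax.scatter(price, fps)
    ax.annotate(name, (price, fps), textcoords="offset points", xytext=(5, 3))
ax.set_xlabel("Price (USD)")
ax.set_ylabel("Average FPS across the game tests")
ax.set_title("Price-performance scatter (placeholder data)")
plt.show()
```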

Generally, the best deals tend to gravitate toward the top left corner of the plot, while the worst deals will be closer to the lower right corner.

Now we have a nice visual on the price-performance equation, and I can distract you by talking about other things, as well. I’d like to start by explaining a practical reality for those folks using a single monitor at 1920×1200, 1920×1080, or less. For the most part, with today’s games, any of the products in the scatter plot above will serve you well. For our tests, we’ve intentionally chosen games with particularly graphics-intensive workloads, and we’ve tested them at peak quality levels with high degrees of antialiasing and the like. You saw how we had to bump StarCraft II up to 2560×1600 just to stress the GPUs. You should also know that we ruled out testing some really popular games, including the shooters Medal of Honor and Call of Duty: Black Ops, simply because they ran constantly at 90 FPS on most of these video cards.

Although the Radeon HD 6850 and GTX 460 1GB have frame-rate averages in the mid-30s in the scatter plot above, they’re going to be more than competent for most contemporary games at 1920×1080 with just a few minor compromises in image quality, such as dropping down from 8X to 4X antialiasing. Either of them would be a massive upgrade from a GeForce 8800 GT or something like it, as our tests made abundantly clear. They’re quite a bit more capable than a GeForce GTX 260, as well. You can spend less than $200 and do quite well for yourself in today’s market.

The Radeon HD 6870 is a great deal at $220, too, and kind of ruins the stock-clocked GTX 560 Ti’s coming-out party, given how close the two are in overall performance.

There’s something to be said for spending a little more to get the best option available, though, and in this broad price class, the best options are at $259 to $269, in my view. The higher-clocked variants of the GeForce GTX 560 Ti and the 1GB version of the Radeon HD 6950 stand out.

On the AMD side, stepping up from Barts to Cayman gets you higher geometry throughput and the new EQAA antialiasing modes. These additions have taken a bit of the shine off of the 6870, since they’re features that every DirectX 11 GeForce has included from the beginning. (EQAA mirrors Nvidia’s CSAA by producing higher quality edge antialiasing without much of a performance hit.) The 6950 1GB is a steal at $259. Unless you’re planning on driving a four-megapixel display or multiple monitors via Eyefinity, you’re not likely to miss the second gigabyte of video RAM in this version of the card, either.

For ten bucks more, Gigabyte’s SOC version of the GTX 560 Ti looks to be a singularly good deal, with even higher performance than the 6950 1GB in our overall index. The GTX 560 Ti SOC is a more polished product than our early 6950 1GB sample, with measurably lower noise levels. Going with a GeForce will get you even higher geometry throughput than Cayman, too, a feature that may have contributed to the 560 Ti’s performance advantage in Civilization V and could matter more in future games. Unfortunately, you’ll have to wait a week before this particular card becomes available, and if it proves too popular, it might be hard to find.

Had we fully tested the MSI version of the GTX 560 Ti at 880MHz and $259, it would probably sit right on top of the 6950 1GB in our price-performance scatter plot—and the MSI card is slated to become available today. All told, the MSI GTX 560 Ti “OC” and the Radeon HD 6950 1GB are incredibly close competitors, and you could probably do well simply by taking your pick between the two.

Yes, all of this hoopla has again come down to small slivers of difference between the best options from AMD and Nvidia, amazingly enough. These two companies and their partners have a very good handle on the competitive situation, and they’ve positioned their products accordingly. We’re left to sort things out, and sometimes, the best answer is “take your pick.”

One thing we do know: these new entries at $259-269 are where you want to be. The GeForce GTX 570, Radeon HD 6950 2GB, and Radeon HD 6970 don’t deliver enough additional performance to justify their prices—at least, not at the 1080p display resolution on which we’ve focused today.

Comments closed
    • Bensam123
    • 9 years ago

    Good article. I’m surprised there is no mention of unlocking the extra shaders in the 6950 to make it ridiculously competitive with any of the above solutions. 😡

    Also, a best fit line on the price/performance graph would be very nice for visualization.

    • Xcamas
    • 9 years ago

    What if i buy the 6950 1GB and OC it to 1GHz or something close to it?

    • WillBach
    • 9 years ago

    Scott, do the TR labs have any plans to review or benchmark 3D Vision? I have a 3D TV and I’m building a new box. I’m curious to find out if Empire: Total War or Civilization V look better in 3D. I would be buying the glasses for 3D anyway, because my wife and I are fans of the 3D effects in Up and How to Train Your Dragon.

    Thanks!

      • Damage
      • 9 years ago

      Well, there is this:

      https://techreport.com/articles.x/16313 ..but it's been a while. I am hoping to play with 3D Vision Surround soon, time permitting. Just.. not sure on the timing of that exactly.

        • WillBach
        • 9 years ago

        Thanks! Very helpful. I’ll be looking forward to any other 3D news as well 🙂

    • GokuSS2
    • 9 years ago

    Asus GTX560 Fan Noise..

    The more reviews of that card I read the more I think there are some bad cards going around.

    Some site are reporting the card dead quiet
    http://www.techpowerup.com/reviews/ASUS/GeForce_GTX_560_Ti_Direct_Cu_II/27.html and others (like Techreport) are reporting it loud.. Bad fan control firmware?

      • Damage
      • 9 years ago

      I don’t think it was the firmware. The fans were at like 43% of peak speed under load.

      I also don’t think we had bad fans, in that they didn’t make any of the classic clicking or rough-friction noises you’d normally hear in such cases.

      I was surprised by the noise and asked Asus about it, and I supplied them with the fan speed and temperature info from GPU-Z, but I’ve not heard anything definitive back yet indicating our card has an unusual problem. They did tell me this cooler is a new design, with newly selected fans, though. Could be that they need to iron out some wrinkles still. Still hoping to hear more from Asus.

    • CaptTomato
    • 9 years ago

    Seems like much a do about nothing.

    • nunifigasebefamilia
    • 9 years ago

    1) Do you guys know if Nvidia is planning on adding support for driving more than 2 monitors at the same time?
    2) Does somebody have an experience with running 2 different video cards one from ATI and one from Nvidia? I know that theoretically Win7 supports such configuration, but has somebody really tried it? Any complications or issues?

      • Damage
      • 9 years ago

      Nvidia supports gaming with more than two monitors via SLI. They need to change the display capabilities of their chips (or add an external display driver on the card, perhaps) before they can support three or more monitors with a single GPU. That change is sure to come with the next generation of products but not before.

    • l33t-g4m3r
    • 9 years ago

    The card I really wanted to see this compared against (470) was not included.
    Not that I couldn’t find it in other reviews, and the 470 is faster than the 560, and cost less when I bought it.

    • NeronetFi
    • 9 years ago

    Awesome review : ) the 560Ti has peaked my interest and I plan on upgrading soon 🙂 Hope I win the card in the contest 😛

    • flip-mode
    • 9 years ago

    Thanks for the article Scott. Nice to see you posting comments too.

    Seems like we are in the midst of another graphics card golden age – good choices and good value at all price points. The 560 looks darn sweet. The 6950 1GB looks darn sweet too.

    • ztrand
    • 9 years ago

    Excellent review. Extra points for including older cards for reference, and not testing SLI. SLI results only messes up the charts and makes them harder to read imo.

      • Voldenuit
      • 9 years ago

      SLI (and XF) are also heavily dependent on game and driver optimisations, so immediately after launch is usually not the best time to test them.

      Since the 560Ti is a tweaked 460, driver optimisations probably won’t be as critical as if it were a new architecture, but profile identification issues might still arise. I agree though that multi-GPU testing is usually best left to a separate article after a bake-in period for drivers and profiles to catch up.

        • ztrand
        • 9 years ago

        true. I rember some ATI guy talking about releasing a tool for users to create their own CF profiles. Wonder whetever happened to that.

    • matnath1
    • 9 years ago

    Why didn’t you use a Sandy Bridge CPU for this review after Playing the Sandy Bridge Upgrade Card? Ok so this is a GPU review..Well it would have been cool.

      • derFunkenstein
      • 9 years ago

      Probably because they’d have to do the entire benchmark suite over again for the older cards, something that would be no small task. Something they’ll eventually have to do, yes, but I can’t blame them right now.

      • Krogoth
      • 9 years ago

      Precisely because it is a “GPU review”. Throwing a Sandy Bridge in there would have little or no impact on the performance in most of the benches. Because, most games at 2Megapixels with AA/AF are GPU-bound in the cards in the review.

    • Kamisaki
    • 9 years ago

    Thank you so much for including the older cards in this comparison. I have a 9800 GT currently, and up until now I was able to convince myself that I didn’t really “need” to upgrade. I mean, it’s still more powerful than the consoles that most games are ported from anyway, right? When you follow the tech regularly, it can be easy to brush off the incremental upgrades you see every time a new graphics card is released as insignificant. But man, seeing the vast performance difference between the 8800 GT and any of the current generation cards definitely was enough to sway me.

    • d0g_p00p
    • 9 years ago

    Sweet, a new great card at a great price. Now after reading the review I get to read the comments and read all the cheapskates complain about how expensive it is and how their Radeon X1300 plays every game at the highest rez and anyone who buys this card has too much money and no brains

    • jthh
    • 9 years ago

    What about the value of getting a 6950 and flashing it to a 6970?

    • elnad2000
    • 9 years ago

    Is it me or are the price of the 560ti kind of high? I recently bought a MSI GTX 460 768mb for 133$ CA shipping and tx included. A second one is coming as soon as my new Sandy Bridge 2500k and Asus P8P67 Pro is on the way. I’m pretty sure I will not pay a lot more than 140$ for the second one (pretty sure I will get it for 130$ TOP). So for 270$ CA, I’ll have 2 GTX 460 768mb that I’m pretty sur offer a lot more performance than one 560ti for 250$. Is my logic not good or do people really don’t want SLI?

      • sweatshopking
      • 9 years ago

      It doesn’t scale nearly as well as crossfire, and spending 2x the price for a 0-60% gain isn’t worth it. I had 8800gt’s in SLI, and after switching to a 4850, didn’t really notice a difference. With ATI doing 100% or even above, crossfire is a better way to go.

        • elnad2000
        • 9 years ago

        Wow, you could not have a worst answer to my question. I should not buy 2 GTX 460 because it doesn’t scale as well as Crossfire!!! 2 GTX 460 768mg easily beat a 570 or a 480. But since it don’t scale as well as ATI, I should not save money and have the same performance. WOW, just WOW.

        My original question was: Why don’t a lot more people goes the SLI (or Crossfire) way if the performance are better and the 2 cards are cheaper to buy than a single high-end card? And my point stand that 2 GTX 460 768mb are almost at the same price than a 560 and can easily beats it in any benchmark. So why should a 560 be a better buy?

          • SNM
          • 9 years ago

          SLI is louder. It takes more power. It’s dependent on proper profiles being set up for the games. (A few times a year there’s a big game where SLI/Crossfire just flat out don’t work until the profiles come out several months later.) On cards with 768MB of RAM you begin to run into memory constraints where SLI can’t help you. Etc.

            • elnad2000
            • 9 years ago

            One last question, if I go the SLI way, does the drivers use the RAM of the two cards or is it like years ago and it use only the RAM of the first card? I would then try to buy a GTX 460 1GB and put it on the first slot and the 768mb would go for the GPU power only on the second slot. Would it works?

            Thanks for the info by the way. And for the profiles, I wouldn’t mind since I buy my games on STEAM for 15$ or less 1-2 years after they are out. But the noise is kind of a problem. I know that my MSI GTX 460 Cyclone is very silent and I like it this way. I just hope that with two cards, it will stay kind of the same. It’s great to work on a silent PC.

            • SNM
            • 9 years ago

            SLI has always used the RAM in both cards, but they aren’t pooling memory — each card needs to keep its working set of textures/models/etc in on-board memory.

            Buying a 1GB card and pairing it with a 768MB card wouldn’t be helpful. As far as I know both cards would just use 768MB, and even if that particular limitation has been overcome you’re still going to be limited by the 768MB card.

          • Flying Fox
          • 9 years ago

          Power and heat (and the intangible hassle that you need to deal with those) will eventually reverse your cost argument.

      • travbrad
      • 9 years ago

      I don’t necessarily think the price is too high, but it’s also not low by any means. It doesn’t really change the value equation much/at all. You basically get what you pay for. Really no matter what your budget is, you should end up with a pretty good card.

      I got a 1GB GTX460 (clocked at 763mhz) for $210 (plus a small rebate) a couple months ago. While the 560Ti is certainly faster, I’d still buy buy a similar card today (or maybe the 6870).

      As for SLI/CF, that is typically a better deal money wise than a single card (in almost all cases). It uses a lot more power, generates a lot more heat, and isn’t consistent from game to game though. I’ve noticed most people with driver bugs/problems seem to be running dual-GPUs.

      • coldpower27
      • 9 years ago

      2x 460 768MB in SLI wouldn’t be good at the where the GTX 570 is aimed at and that is 2560×1600 gaming…

      https://techreport.com/articles.x/20088/7 Look at how poorly the GTX 460 SLI 768MB does there.. Alot of the time you run into the 768MB bottleneck.. or you get lower minimum FPS with SLI... You will beat a single GTX 560 TI for sure...but you need a SLI capable Intel motherboard.. which are typically a tad more pricey...then the SLI non capable versions. I rather not have to deal with SLI and just have performance that is just consistent across the board...

    • Krogoth
    • 9 years ago

    Great competition at the mid-range segment. So many choices at great prices. It is insane that you need to crank-up the in-game eye candy, AA and AF in order to stress out the current generation of mid-range cards. That used to be only be applicable to the ultra-high segment. If I were to get a new GPU today, the 6950 and 560 Ti would be my top choices. However, my current 4850 CF setup still works for my gaming needs.

    BTW, Ti brand makes me feel nostalgic. It goes back to time when Nvidia was a monster, 3Dfx died, ATI was struggling to get rid of its horrid history with Rage line and S3 was still making sub-par products.

      • Flying Fox
      • 9 years ago

      From a “more balanced” product in terms of price, performance, noise, and power consumption, I feel that the price-adjusted 6870 may have a slight edge. Especially with the focus of 1920×1080. If Nvidia wants to comprehensively win this round, I would think $239 would be the hot price for the 560 Ti.

      What do you think?

      • Meadows
      • 9 years ago

        “segiment” Pardon me? Also, "the time when S3 was still making sub-par products"? Why, have they stopped making products that are sub-par? I had not noticed.

        • Krogoth
        • 9 years ago

        Context, my friend, context.

        I was refering to the past, not the present. 🙄

          • Meadows
          • 9 years ago

          English, my friend, English. You said “the time when […] S3 was still making sub-par products.”

          That literally means they’re not doing it anymore today. But the sad fact is, THEY ARE. For example, research the Chrome 540 GTX a little.

          I know you will keep claiming that you’re from the US, but without such basic grasp on “your” language, it’s becoming easier to doubt with every passing day.

            • Krogoth
            • 9 years ago

            Mr. Grammar, what would be the proper format?

            Anyway, it seems you are one of those perfectionist, “neat-freaks” would cannot stand minor mistakes and have an uncontrollable urge to point them out. Because, you have nothing else to argue about. 😉

    • xmoox00
    • 9 years ago

    well i just got pwned just bought a gtx 570 2 weeks ago should of waited would of saved me 100 bucks

    • dpaus
    • 9 years ago

    I know it's not relevent to everyone, but I have four monitors on my desk (hey, I work, not play games!). Nvidia's inability to drive them all and thus requiring me to buy a second card dramatically skews the value proposition in AMD's favour.

      • sweatshopking
      • 9 years ago

      then buy a 6950. and send me whatever you’re currently using…

      • Kurotetsu
      • 9 years ago

      I honestly don’t understand Nvidia’s reluctance to embrace DisplayPort like AMD and Intel have. The lack of DisplayPort on Nvidia’s cards is the only reason they can’t support greater than 3 monitors without SLI.

        • SNM
        • 9 years ago

        It’s not just about the physical space constraints. Driving more monitors requires more on-card silicon to supply the outputs.

    • Meadows
    • 9 years ago

    “[i<]XFX's Radeon HD 6870 Black Edition runs at 940MHz with 4.6 GT/s memory, up from the stock speeds of 900MHz and 4.2 GT/s.[/i<]" Truly amazing. How the chip can handle the screaming 4% increase in clock rate is sincerely beyond me.

      • sweatshopking
      • 9 years ago

      I know. I can’t handle running at 4% faster. I’d die.

        • ermo
        • 9 years ago

        Joking aside, how much do you think would be a reasonable overclock on the 40nm process?

        The 4850 on 55 nm ran at 625-650 MHz stock, and I have mine (based on the reference design) set to run at 690 MHz with the highest voltage the 4850 allows and aftermarket cooling. The 4870 ran at what, 750 MHz stock, and could be taken to 800+ safely only if you had rather decent cooling?

        Or in other words, how close is the 6870 really to its maximum potential at present, considering that it’s made on the relatively mature 40nm process? Isn’t it true that for each process tech, there’s usually a certain point beyond which it gets really hard to extract a higher frequency no matter the voltage?

        If anything, I get the feeling that even if NVidia are on the conservative side with their stock clock speed for the 560Ti, they probably did a better job with the GF114 than AMD did with Barts on tweaking the design to allow for higher clocks down the road, given that most of the factory OCed 6870 cards are only up 15-40 MHz from the launch speed. Surely that’s no coincidence?

        *sigh* Only time will tell, I suppose.
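
        For reference, the headroom figures being tossed around here work out to fairly modest percentages. Here's a quick sketch of that arithmetic (TypeScript, with the stock and overclocked numbers taken from the comments above; treat them as illustrative):

            // Overclock headroom, expressed as a percentage over the stock clock.
            function headroomPct(stockMhz: number, ocMhz: number): number {
              return (ocMhz / stockMhz - 1) * 100;
            }

            const cards: Array<[string, number, number]> = [
              ["HD 4850 (55 nm)", 625, 690],  // stock low end vs. the 690 MHz OC above
              ["HD 4870 (55 nm)", 750, 800],  // stock vs. "800+ with decent cooling"
              ["HD 6870 (40 nm)", 900, 940],  // stock vs. the XFX Black Edition
            ];

            for (const [name, stock, oc] of cards) {
              console.log(`${name}: ${headroomPct(stock, oc).toFixed(1)}% over stock`);
            }

        In other words, even the healthy-looking 4850 overclock is only about a 10% bump, and the Black Edition's 940 MHz works out to well under 5%.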

          • Meadows
          • 9 years ago

          If nVidia, with their slower-but-sturdier performance characteristic (for years now, they’ve been making cards with lower MHz than AMD, yet still faster) can create powerful 1 GHz videocards without breaking the power supply, then I don’t think AMD videocards should stop at 940 MHz only to avoid high failure rates.

          Something has got to be wrong there – either their design is bad (or already overstrained), or comically, nVidia has just mastered TSMC’s process better than they have.

          In fact, I believe AMD has been setting clock rates higher than they optimally should be just to compensate for nVidia’s performance, which is, in the long run, a bad rap for the company because overclocking then suffers. And when more and more enthusiasts find that out, they’ll consider switching too, and start putting the green brand in new family/friend/coworker PCs, and word will spread that “Brand A is better than Brand B.” That’s not good for the red brand.

            • sweatshopking
            • 9 years ago

            I think that AMD perhaps needs to increase the size of their dies to roughly the same size as Nvidia’s. At that size, they would be faster and cost roughly the same to make (Nvidia gets a discount). As it is, their dies are so much smaller that they need much higher clocks.

      • Silus
      • 9 years ago

      This is my favorite!

      [url<]http://www.fudzilla.com/graphics/item/21229-asus-rolls-out-its-own-caymans[/url<] 10 MHz over the default clocks! A guy in the comments says it best: "pathetic overclock"

        • sweatshopking
        • 9 years ago

        and you’re insane. so whatever.

        • Krogoth
        • 9 years ago

        Nah, it is just from spoiled people who had too much fun with overengineered products.

        GPU overclocking never made much sense. It is only good for e-penis points. It is smarter to just wait and get a newer GPU design down the road, which will typically have more memory bandwidth, architectural improvements, and GPU resources that easily leapfrog any gain from an overclock.

      • Krogoth
      • 9 years ago

      Silicon is running out of steam.

      The days of large leaps in transistor count and clockspeed are long over. Intel was the first to find that out with Prescott. GPU guys found out with GT2xx/R6xx generations. Nvidia learnt the hard way with GF100.

        • flip-mode
        • 9 years ago

        Vague statement is vague.

          • Krogoth
          • 9 years ago

          How so?

          You have to be ignorant or blind not to notice that the semiconductor industry has been struggling to increase transistor budgets and clock speeds without turning its chips into blast furnaces.

            • Meadows
            • 9 years ago

            You have to be ignorant or blind to not notice that flip-mode said something completely different to you.

            • derFunkenstein
            • 9 years ago

            TSMC is the only one really struggling at this point. Intel doesn’t seem to have a problem ramping up transistor density and still raising speeds, and AMD’s 6-core parts have much higher transistor budgets in the same thermal envelope.

            • Krogoth
            • 9 years ago

            I would beg the differ.

            They’re all struggling with it. TSMC is currently getting most of the noise due to their problems with 40nm. Intel and AMD/GF have been conservative with their transistor budget and clock speed ramps. Their current approach is trying to improve efficiency without cranking up the MEGAHURTZ.

            • travbrad
            • 9 years ago

            If they aren’t struggling now, they certainly will be in the relatively near future. The laws of physics are kind of difficult to get around.

            • Meadows
            • 9 years ago

            Stupid-style capitalisation of “MHz” [b<]every single goddamn time you say it[/b<] won't make you look cool. In fact, the opposite. Also, I already corrected "I would beg the differ" in your comments in past years (I don't remember when) - it would be nice if you remembered anything about any time you were schooled.

            TSMC is cheap, that's what it is. Half of their goddamn wafers are broken and their chips scream if you want to switch them to "1 GHz" within any kind of a good voltage limit. Let's ignore them and discuss the better players.

            As for AMD's conservativeness, in the past 4 years they've raised their "average quadcore processor speed" from something like 2 GHz up above 3 GHz now. I know this doesn't seem like much of a change compared to the last children of the K8 which also had speeds exceeding 3 GHz, but keep in mind, this is 4 cores now, clock-for-clock efficiency is up by over 25% compared to K8, and we're essentially still in the same power envelope - it took AMD years to get where they are today. I'm sure they could release yet another "requires 140 W support" processor like they did with the original K10 black editions, except this time with 3.5 or 3.6 GHz as a full quadcore factory frequency (or heck, bundle hand-picked 4 GHz Phenom II's together with 800 W power supply purchases), but what's the point? They can't push past mid-range pricing, so there's only so far that product segmentation makes sense.

            As for CPU speed development, Intel have pulled off similar feats. To accuse [i<]either[/i<] company of hitting roadblocks and stagnating at large would be dumb at this juncture. And AMD are NOT trying to improve efficiency because of frequency troubles - they're doing it because intel's chip efficiency remains so grandiose that they have to catch up one way or another.

            • Krogoth
            • 9 years ago

            Epic facepalm is epic……..

            • Meadows
            • 9 years ago

            If you’re going to insist on having the last word like that, at least insert something of actual substance.

            • Dashak
            • 9 years ago

            Look everyone! A double standard!

    • phez
    • 9 years ago

    The return of 1920×1080! Hurrah!

    • glynor
    • 9 years ago

    This is not about the article itself, per se. However, I became INCREDIBLY frustrated when trying to read this article this afternoon.

    Guys… I love TR, and have for a long time. Overall, I really like the new site design. However, you really, really, [b<]really[/b<] need to get a CSS up that better formats the site for reading articles on mobile devices.

    I have an iPhone 4 and it is possible to read the articles using it in "portrait mode", but certainly not pleasant. But, perhaps more importantly, I can't even imagine if I had a lower-resolution device or didn't have 20/20 vision. It is a better experience in landscape mode, of course, but I'd still say it is only marginal, and TR is literally the [i<]only[/i<] site I visit regularly that has this sort of issue on my phone.

    It has already started to impact the frequency with which I read the articles here from "cover to cover", and I'm a pretty big fan of the site. I imagine that many other people probably close the "tab" on their phones and never come back.

    I don't think a big, complex, mobile-specific "app-like" experience is needed at all, at least not now. But you do need the text to be legible for the non-eagle-eyed among us out there. Throw another CSS up on the server that loads by default for mobile browsers, which makes the text width 1/2 of what it is for desktop browsers, and everything would be fine for now.

      • Damage
      • 9 years ago

      We are but a few, poor people trying to survive in a world dominated by Tom’s Hardware and AnandTech. Money is tight and resources are scarce. But a mobile template is on our to-do list for 2011. Let us recover from the push for the new site and Metal (and Sandy Bridge and the march of a thousand GPUs), and we’ll get that planned and going.

        • dpaus
        • 9 years ago

        REFUND?!!? (clutches at chest and falls over)

        But, I totally echo the sentiment….

        • glynor
        • 9 years ago

        Great! Glad to hear it is planned. I can, of course, completely understand the need to triage.

        Sorry if my previous missive came off aggressive about it… I was just “hot” with frustration at the time, and wanted to accurately report my issue. It certainly isn’t the end of the world, and rolling it out subsequently is a perfectly reasonable plan.

        I just hadn’t seen any real mention of that as a plan and I wanted [i<]someone[/i<] to know that it was a fairly serious issue to consider with the new design.

        • Voldenuit
        • 9 years ago

        Glad to hear a mobile template is in the works.

        Also props for including the factory-overclocked Radeons and GeForces in this review. They give a useful perspective for what’s available in the market, and also provide targets for the DIY OC crowd to aim for, should they choose.

        It’s good to see AMD and Nvidia trading blows on an equal footing again this round; that’s good news for the consumer. Hopefully, BD will give Intel the same challenge next quarter.

      • Krogoth
      • 9 years ago

      Protip: Smartphones aren’t the best form factor for web surfing.

        • End User
        • 9 years ago

        It is THE form factor when you are reading TR in the line at Tim Hortons in the morning.

          • Kurotetsu
          • 9 years ago

          So…don’t read it in line at Tim Horton’s if it looks so bad? The website isn’t going anywhere, you can read it when you get to a screen that doesn’t make it look like ass.

            • SNM
            • 9 years ago

            Any sentence that starts “Don’t read it” is the wrong answer for a site that depends on ad impressions and clicks to survive. 😉

            • Meadows
            • 9 years ago

            Unless you finish reading past those 3 words.

        • glynor
        • 9 years ago

        Like the best camera is the one you have with you, the best web browsing device is the one I have with me.

      • Rakhmaninov3
      • 9 years ago

      I’m using a Samsung Vibrant with Android 2.1–it has an 800×480 screen, and I was able to get it to resize just fine with 2 taps on the page; easily readable in portrait. Not sure what TR could do to make it any better than it was……

        • glynor
        • 9 years ago

        It is legible. I can, and did, read the entire article on my phone. That’s not the point. It certainly isn’t as nice of an experience as reading the articles on my monitors. Many sites have an equal or superior experience for me on my mobile device.

          • Farting Bob
          • 9 years ago

          I think one reason that reading this article on a 3.5″ display is not as good as reading it on a 21″ display is that you are reading it on a 3.5″ display.
          Anything with a lot of text (most of the world is unable to deal with more than 160 characters at a time these days…) and high-res photos is going to be difficult to format right for such a tiny screen.

            • glynor
            • 9 years ago

            Look, I don’t really want to get into a big discussion on the merits of reading web articles on a mobile device. That’s tiresome and really irrelevant. Like I said above, the best browser you have is the one you have with you. Often when I have time to sit and read an article like this one, it is NOT when I’m sitting in front of my monitor. When I’m in front of my monitor, I have work to do.

            However, I will say this: Reading large articles with lots of text and high-res images on my phone is fine (equal to reading them on my computer monitor or HTPC) for the vast majority of other websites, including places like Anandtech.com, Ars, HardOCP, and pretty much any other tech site you can think of out there. None of those sites except Ars have a fancy “skin” specifically for mobile devices (and Ars works well even if you go to the “non mobile site”).

            The issue is really simple. The new TR site design is justified just a little too wide for reading on a small screen. It looks nice on most modern computer monitors (where many of those other sites look oddly narrow and column-like when viewed on a 1920×1200 monitor). The problem is that because the text is a fixed width, the zoom level you can use is locked to a max of the width of the column of text. The old TR site was borderline but not really much worse than many other sites out there. The new one has a wider minimum page width (or maybe other font characteristics make it feel that way in practice), so the text is that-much smaller. If the CSS detected a mobile browser and kept everything exactly the same except that the text column width on-screen was about 2/3rds the current size, it would be perfect.

            Now, I can certainly compensate by simply holding the device closer to my face. I do, and that works in many situations (especially at night in bed). But when I’m trying to read in the lunchroom at work or in the passenger seat of the car (where the device bumps around a lot as the car moves) it can be very challenging to read TR now and not look weird while you’re doing it.
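
            For what it's worth, the override glynor describes ought to be a small change. Below is a minimal sketch of the idea as a TypeScript snippet, purely for illustration: it injects a max-width media query instead of doing real mobile-browser detection, and the ".article-body" selector and 600px breakpoint are placeholders rather than TR's actual class names. A stylesheet served to mobile browsers would accomplish the same thing.

                // Minimal sketch: narrow the article text column on small screens.
                // ".article-body" is a placeholder selector, not TR's real class name.
                const mobileOverride = `
                  @media (max-width: 600px) {
                    .article-body {
                      max-width: 66%;   /* roughly the 2/3 column width suggested above */
                      margin: 0 auto;
                    }
                  }
                `;

                const styleTag = document.createElement("style");
                styleTag.textContent = mobileOverride;
                document.head.appendChild(styleTag);

            Nothing else about the layout has to change; a narrower column simply lets mobile browsers zoom the text to a readable size.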

    • PixelArmy
    • 9 years ago

    Yay, relatively short PCBs (and PCI-E connectors on top of the non-reference cards)!

      • Kurotetsu
      • 9 years ago

      Yeah, one thing I like about this new generation (meaning the GTX 460, GTX 500 series, and the midrange HD 6800s) is that they are actually getting shorter (or at least keeping a reasonable length), whereas until quite recently it was the opposite with each new release.

    • duke_nukem_3D
    • 9 years ago

    An overclocked 560 basically (nearly) mirrors the 570 in performance and power consumption!!??….this might cannibalize sales of the 570 no!?….I totally should’ve waited before buying my 570 last month….

      • steelcity_ballin
      • 9 years ago

      That’s the way it looks. You could always overclock the 570, no?

        • duke_nukem_3D
        • 9 years ago

        Looks like the 560 (as with the 460) overclocks better than any other Nvidia card at present….still, given the price difference, I think the 560 is much more attractive than a 570 (for the gamer)….

      • Chrispy_
      • 9 years ago

      With the 570, you’re paying for a big, expensive die that spends a large part of its transistor budget on GPGPU compute logic for the workstation market.

      Yes, performance of the 570 is still higher, but in reality the GF104/GF114 are Nvidia’s most powerful gaming-specific designs; by cutting out a lot of that GPGPU logic, they spent the transistor budget on making your games run much better for the same amount of silicon.

        • duke_nukem_3D
        • 9 years ago

        I agree about the engineering differences between the two GPU cores….but I think 90+% of 570 cards end up in the hands of gamers for whom the emphasis is on FPS per dollar….in this regard, the 560 owns the 570 for your average gamer….thus the potential for 90+% of 570 sales being cannibalized by the 560….when the 460 cannibalized sales of the 465 and even the 470 last cycle, it made sense as nvidia was under the microscope to be competitive….but to release the 560 with performance potential that bites into highly acclaimed 570 territory is probably not the best business move for them….but a clear win for consumers!

      • CasbahBoy
      • 9 years ago

      Such is the life of an early adopter! These things will always happen. Don’t think about it much and let yourself enjoy the card.

    • kravo
    • 9 years ago

    It’s time to say goodbye to my good ol’ 8800 GT!
    I wonder if a Q6600 @ 3GHz will be good enough for a 560 Ti…whichever version I can get my hands on.

      • mcforce0208
      • 9 years ago

      Yeah, I think your Q6600 will suffice for the GTX 560. I have one clocked at 3.6GHz and it completely satisfies my 5870 (my card is maxed out and the CPU not quite at 100%). I have two in CrossFire at the mo, and it can’t supply enough data to the two of them, though. So if SLI is on the cards, I would upgrade!!… Wait for Bulldozer, though!!

    • BoBzeBuilder
    • 9 years ago

    First!!!

    You can’t prove I’m not.

      • derFunkenstein
      • 9 years ago

      Yes, I can. Your timestamp is 2:24PM. rxc6 was here more than a full hour before you.

      • sweatshopking
      • 9 years ago

      -12? seriously you guys? what’s with the hate?

        • UberGerbil
        • 9 years ago

        The hate will continue until the pre-pubescent behavior stops.

          • Meadows
          • 9 years ago

          Prepubescent hate against prepubescent behaviour, a sure-fire strategy I believe.

    • I.S.T.
    • 9 years ago

    What I’m curious about is why in the last two tests the 1 gig version of the 6950 was faster than the 2 gig version…

      • HisDivineShadow
      • 9 years ago

      Perhaps the cards with less memory get better memory chips, while the cards with more memory get inferior ones to cut costs?

    • south side sammy
    • 9 years ago

    I’d like to get one, but I learned a while back that 1GB of memory just doesn’t cut it for me anymore. Here’s looking towards the future and other releases. ………

      • kravo
      • 9 years ago

      I don’t want to advertise any manufacturers, but why don’t you check out gainward’s website. Just a tip.

    • passive
    • 9 years ago

    (Weird, I have an account already).

    A couple of notes.
    1) Why is the 2GB 6950 slightly lower in performance on the price/perf analysis? Is it intentional?
    2) The other reviews I’ve read have had the 6870 Black at $229 and the 6950 2GB at $269, which would adjust their relevance in your price/perf analysis. Is this just a case of you being unable to find them at those prices?

    Just generally, I’m finding it interesting that certain sites (this one and [H], so far) have the stock 560 Ti trading blows with the 6870, but others (Anands) have it on even footing with the 6950.

      • Damage
      • 9 years ago

      1) I dunno. The 1GB card AMD sent was a little faster than the 2GB model. Couldn’t tell you why.

      2) The card we tested, the XFX Black Edition, is listed at Newegg for $259 (linked in the review). XFX has another Black Edition at a lower price, I believe, but we didn’t test that one. I dunno what they are thinking, but that is the price of the product.

      As for different sites getting different results, I can’t speak for others, but test methods and results tend to vary. We do our best to document our methods, so folks can know what we did and perhaps why the results came out as they did.

        • I.S.T.
        • 9 years ago

        Do you plan on doing a review eventually with these new cards set to highest texture quality? I think it’d be a worthy thing to investigate how the highest settings impact performance.

          • Damage
          • 9 years ago

          I do plan on writing about texture quality issues (had hoped to include it here), but that is a moving target with AMD making changes to improve quality in the Cat 11.1a drivers. That change means the only difference between Quality and High Quality is the removal of a trilinear optimization. Since NV uses a similar trilinear optimization by default, I didn’t think ramping the Radeons up to HQ was necessary.

          To be clear, in specific cases with very high noise textures, Nvidia has visibly superior filtering quality, but setting the Radeons to HQ doesn’t discernibly improve their filtering in such cases.

            • I.S.T.
            • 9 years ago

            I was more referring to the performance impact. I’m curious to see how much performance I will lose when I turn that up to max (I always have texture stuff turned to max. Always) on Cayman and Fermi. I’m looking to build a system soon, and I’d like to see which one would be better in those cases.

            • Damage
            • 9 years ago

            So… Nvidia filtering quality at the defaults is higher than AMD’s HQ setting. Seems like that would be the key thing you’d want to know, if you wanted the best image quality possible.

        • SNM
        • 9 years ago

        Could we get a little info from AMD or some tests or something on why the 6950 1GB performance was higher?
        My initial thought was that with less RAM the OverDrive numbers worked out better for it. But when you said that the sample you got was pre-production, I became worried that maybe they hadn’t implemented OverDrive at all, or something else was going to change that might impact its performance negatively. Do we have any assurances from AMD that that won’t happen?

          • Damage
          • 9 years ago

          I am checking with AMD on the power differences between the two cards. But the performance difference is like 1-2 FPS between the 6950 1GB and 2GB–when there is one. What sort of assurances exactly do you need to save you from the terrible fate of performing like the 2GB version?

            • SNM
            • 9 years ago

            I was more concerned that AMD might suddenly discover that they’d messed up their power consumption formula for this card and it was going to lose 5% or 10% of performance, which given the tight groupings here would make a difference.
            Thanks for checking!

        • Jigar
        • 9 years ago

        Latency, maybe?

    • glacius555
    • 9 years ago

    I can’t help wondering: why is the table with specs on page 2 posted twice, and so close together on the page?

      • Damage
      • 9 years ago

      Whoops. Fixed it for you.

    • Thrashdog
    • 9 years ago

    Completely unrelated to the topic at hand, but…

    Scott reads BaT? Sweet!

    • steelcity_ballin
    • 9 years ago

    I just bought an EVGA 560 Ti from Newegg. I’m building a complete Sandy Bridge system in the next 2 weeks, but got the card now because A) I can use it, B) I’m selling my old one, and C) I don’t want to risk it selling out!

    • derFunkenstein
    • 9 years ago

    Man, two things:

    1.) I didn’t realize an uncrippled GF104 would be that much better, and they mostly justify the (to me) crazy $250 price.
    2.) I don’t really want to pay that price.

    The lower-priced GTX 460 1GB cards appear to be “where it’s at”, for me at least. (edit: MSI 460 1GB for [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814127551&cm_re=GTX_460_1gb-_-14-127-551-_-Product<]$170[/url<]? That's more like it.) They're considerably faster than my 5770, though at the same time are they FAST ENOUGH faster for me to replace my card, given that I'm at 1920x1080? Maybe not enough to get me to bust out the plastic. I'm kind of gauging these things against the GTX 260 as they seem to run neck and neck, extrapolating across several reviews.

      • HisDivineShadow
      • 9 years ago

      They bumped the clockspeed up a lot on the “uncrippled GF104,” which accounts for part of that performance jump.

      I think if you give it a few weeks, prices will begin to drop. I think nVidia will want to prevent AMD from getting a foothold in their bread-and-butter $200-300 sweet spot. I suspect the margins are great on the 560 and that it’s costing AMD more to put the 6950 in competition with it than they’re willing to admit.

        • derFunkenstein
        • 9 years ago

        Probably true on all counts, though I don’t see this thing hitting that $175 range for quite a while, as long as the 460 is available. No big deal, I’ll wait. 😀

          • Flying Fox
          • 9 years ago

          A non-Ti or “SE” version of the 560 may come out with the slightly defective or lower binned chips. That may be the ticket. Or they may call that the 550…

            • derFunkenstein
            • 9 years ago

            No, I’m pretty sure the 550 will be a 450 with the extra ROP/memory controller enabled, like the 460M.

            Even then, we’re still talking about the 550 being considerably faster than my 5770. Though I have no doubt we’ll see renamed, crippled parts in the 500 series.

      • Krogoth
      • 9 years ago

      GPUs benefit far more from gaining additional texture pipelines, shader units, and ROPs than from simple clockspeed boosts.

        • flip-mode
        • 9 years ago

        How do you even correlate that? 1 shader to 1 MHz? 1 bunch (1 shader, 1 rop, 1 tex pipe) to 1 MHz? Or are you going by percentages? 1% more shaders to 1% more hertz? Surely you didn’t just say something meaningless!

          • Meadows
          • 9 years ago

          You would be surprised.

          • Krogoth
          • 9 years ago

          Please take graphical hardware 101.

          I cannot believe you are trying this hard to make me facepalm.

            • Meadows
            • 9 years ago

            You didn’t answer him.

            • Krogoth
            • 9 years ago

            Because the answer is painfully apparent if you have a grasp on how a GPU works.

            In a nutshell, it has been easier for GPU guys to engineer a design with more resources in order to increase performance than to simply crank up the MEGAHURTZ. It is their primary method of creating different market segments. They usually design a single architectural platform (GT2xx, G8x, R6xx, R7xx, GF1xx, Cypress), then remove portions of the basic design due to yields and economics. This is what gives you mid-range and budget parts.

            You know why mid-range and budget parts simply cannot move up a tier or two, despite being based on the same platform? They have far fewer resources at their disposal. An aggressive GPU overclock rarely makes up the difference. This review makes it painfully obvious with the cherry-picked, aggressively factory-overclocked 560 Ti. It barely trails behind the vanilla 570. The normal 560s run short of the 570.

            • Meadows
            • 9 years ago

            There is no answer, because you said something stupid. You said GPUs benefit more from X than Y, first.

            But NOW, you say GPU [i<]engineers[/i<] benefit more from using X as opposed to Y. You never answered flip-mode about your dumb first statement. You gave no baseline. How do you compare execution units to switching frequency? How do you decide which is "more benefit" to any particular GPU? You didn't even understand his question, and continue to fail to answer.

            Also, do you mean the aggressively clocked 560 Ti that was within 5% of the 570's performance, while only using 2% more power at peak times (due to inefficiency)? I'd take that deal any time, at the price point.

            • Krogoth
            • 9 years ago

            I have no idea what flip-mode is getting at.

            I had merely stated the fact that GPUs typically benefit more from an increase in ROPs, shaders, and texture units than from just ramping up the clock speed, for the aforementioned reasons.

            To make my point clearer, I will use a silly analogy.

            Imagine that you are operating a factory and you want to ramp up productivity. You have two primary means: either you hire more workers (more shaders, etc.) or have each worker in your existing pool work faster (ramping up the clock speed). The first option requires more space and wages (a larger, more expensive GPU), but will yield more productivity. The second option saves on space and wages and increases productivity; however, it has limited headroom (workers can only go so fast). If your goal is to increase productivity regardless of cost, then hiring more workers is the way to go. Making your existing pool work faster only makes sense if you have limited resources and want to increase productivity (common practice with mid-range and budget GPUs).

            How does this translate to GPU performance?

            For a 400 shader part to meet the performance of a 800 shader part. You have to clock the 400 shader part at double the clockspeed. This is much easier said than done. The 400 shader part will most likely never hit an 100% OC. Let alone a 50% OC with heavy overvolting. Depending on the part and process that it is build on. A 5-30% OC is more realistic. It still falls short of achieving the 800 shader part’s performance.
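
            The back-of-the-envelope version of that argument fits in a few lines. Here's a rough sketch (TypeScript), assuming the deliberately idealized model that throughput scales with shader count times clock; real games scale far less cleanly, and the 400- and 800-shader parts and the 700 MHz base clock are only illustrative:

                // Idealized model: throughput ~ shader count x clock speed.
                // Real scaling is messier (bandwidth, ROPs, drivers), so treat this as
                // an upper bound on what a clock bump alone can buy back.
                const throughput = (shaders: number, clockMhz: number) => shaders * clockMhz;

                const STOCK_CLOCK = 700;                       // illustrative, in MHz
                const small = throughput(400, STOCK_CLOCK);    // hypothetical cut-down part
                const big = throughput(800, STOCK_CLOCK);      // hypothetical full-fat part

                // Clock the 400-shader part needs just to tie the 800-shader part:
                const needed = STOCK_CLOCK * (big / small);
                console.log(`needed: ${needed} MHz (${((needed / STOCK_CLOCK - 1) * 100).toFixed(0)}% overclock)`);

                // A more realistic 20% overclock closes only part of the gap:
                const oc = throughput(400, STOCK_CLOCK * 1.2);
                console.log(`20% OC reaches ${((oc / big) * 100).toFixed(0)}% of the bigger part's throughput`);

            Under that model the cut-down part needs a 100% overclock just to pull even, which is why the 5-30% overclocks mentioned above never close the gap.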

            • Meadows
            • 9 years ago

            “[i<]the 400 shader part most likely never will hit an 100% OC, let alone 50% with heavy overvolting[/i<]" Completely wrong word order. Have you ever gone to school?

            • Krogoth
            • 9 years ago

            You are fitting this archetype to the T.

            [url<]http://redwing.hutman.net/~mreed/warriorshtm/grammarian.htm[/url<] 🙄

            • Meadows
            • 9 years ago

            Better than you.
            [url<]http://redwing.hutman.net/~mreed/warriorshtm/ferouscranus.htm[/url<]

            • Krogoth
            • 9 years ago

            Bravo, you have the most insightful retort! Your talents and intellectual prowess are among the greatest assets of the human species! I am in complete awe of your achievements! Hail to the greatest champion of the internet! You win 10^100 Internets!

        • derFunkenstein
        • 9 years ago

        not necessarily true, depending on % additional shaders and % additional clock speed.

    • rxc6
    • 9 years ago

    Been waiting for this!! off to read XD

    Well, for me the 6850 1GB looks like an extremely sweet card. That might be my next card.
