AMD’s Radeon HD 6950 and 6970 graphics processors

2.6 billion. Six. The first figure is the number of transistors in AMD’s new Cayman graphics processor. The second is the number of days we’ve had to spend with it prior to its release. Today’s GPUs are incredibly complex beasts, and the companies that produce them don’t waste any time in shoving ’em out the door once they’re ready. Consequently, our task of getting a handle on these things and relaying our sense of it to you… isn’t easy. We’re gonna have to cut some corners, leave out a few vowels and consonants, and pare back some of the lame jokes in order to get you a review before these graphics cards go on sale.

“What’s all the fuss?” you might be asking. “Isn’t this just another rehashed version of AMD’s existing GPU architecture, like the Radeon HD 6800 series?” Oh, but the answer to your question, so cynically posed, is: “Nope.”

As you may recall, TSMC, the chip fabrication firm that produces GPUs for both of the major players, upset the apple cart last year by unexpectedly canceling its 32-nanometer fabrication process. Both AMD and Nvidia had to scramble to rebuild their plans for next-generation chips, which were intended for 32-nm. At that time, AMD had a choice: to push ahead with an ambitious new graphics architecture, re-targeting the chips for 40 nanometers, or to play it safe and settle for smaller, incremental changes while waiting for TSMC to work out its production issues.

Turns out AMD chose both options. The safer, more incremental improvements were incorporated into the GPU code-named Barts, which became the Radeon HD 6850 and 6870. That chip retained the same core architectural DNA as its predecessor, but it added tailored efficiency improvements and some new display and multimedia features. Barts was also downsized to hit a nice balance of price and performance. At the same time, work quietly continued—at what had to be a breakneck pace—on another, larger chip code-named Cayman.

Many of us in the outside world had heard the name, but AMD did a surprisingly good job (as these things go) of keeping a secret, at least for a while—Cayman ain’t your daddy’s Radeon. Or even your slightly older twin brother’s, perhaps. Unlike Barts, Cayman is based on a fundamentally new GPU architecture, with improvements extending from its graphics front end through its shader core and into its render back-ends. The highlights include higher geometry throughput, more efficient shader execution, and smarter edge antialiasing. In other words, more goodness abounds throughout.

So when we say our task of cramming a review of Cayman into a few short days isn’t easy, that’s because this chip is the most distinctive member of the recent, bumper crop of new GPUs.

Cayman... Ca-aa-ay-man

A logical block diagram of the Cayman GPU architecture. Source: AMD.

Our hardware reviewer’s license stipulates that we must include a block diagram on page one of any review of a new GPU, and so you have it above. This view from high altitude gives us a sense of the architecture’s overall layout, although it has no doubt been retouched by AMD marketing to add whiter teeth and to remove any interesting wrinkles.

Cayman’s basic layout will be familiar to anyone who knows recent Radeon GPUs like Barts and Cypress. The chip has a total of 24 SIMD engines in a dual-core configuration. (Both Cypress and Barts are dual-core, too, with dual dispatch processors as in the diagram above, although AMD didn’t reveal this level of detail when it first rolled out Cypress.) Each SIMD engine has a texture unit associated with it, along with an L1 texture cache. Cayman sticks with the tried-and-true formula of four 64-bit memory interfaces, each with an L2 cache and dual ROP units attached. In short, although it’s a little larger than Cypress, Cayman remains the same basic class of GPU, with no real changes to key differentiators like memory interface width or ROP count.

GPU       ROP pixels/clk  Texels filtered/clk (int/fp16)  Shader ALUs  Rasterized tris/clk  Memory width (bits)  Est. transistors (M)  Approx. die size (mm²)  Process
GF104     32              64/64                           384          2                    256                  1950                  331*                    40 nm
GF110     48              64/64                           512          4                    384                  3000                  529*                    40 nm
RV770     16              40/20                           800          1                    256                  956                   256                     55 nm
Cypress   32              80/40                           1600         1                    256                  2150                  334                     40 nm
Barts     32              56/28                           1120         1                    256                  1700                  255                     40 nm
Cayman    32              96/48                           1536         2                    256                  2640                  389                     40 nm
*Best published estimate; Nvidia doesn’t divulge die sizes

Above is a look at the Cayman chip itself, along with some key comparative specs. Cayman is a bit of a departure from recent AMD GPUs because it’s decidedly larger, but it’s not a reticle buster like some of Nvidia’s bigger creations. In terms of transistor count and die area, Cayman appears to land somewhere between Nvidia’s two closest would-be competitors, the GF104 and GF110.

A new, narrower SPU

The big adjustment in Cayman comes at such a minute level, it isn’t even visible in the big block diagram. Inside of each of the chip’s SIMD shader processing engines is an array of 16 execution units or stream processing units (SPUs). In every AMD GPU architecture dating back to the R600, the fundamental SPU layout has been essentially the same, with four arithmetic logic units (ALUs) of equal capability and a fifth “fat” ALU capable of handling special functions like transcendentals. These execution units play a key part in the larger GPU symphony. Instructions for the ALUs are grouped together into a single, very long instruction word, and then all 16 of the SPUs in a SIMD engine execute the same instructions on different data simultaneously.

Scheduling instructions in VLIW5 groups like that can be a challenge, since the real-time compiler in AMD’s graphics drivers must ensure that one operation’s output isn’t needed as input for another operation grouped into the same instruction word. If such dependencies are present, the compiler may not be able to schedule instructions on all five ALUs at once, and some ALUs may be left idle. The fact that only the one, “fat” ALU can handle transcendentals further complicates matters.

Thus, Cayman introduces a new, slimmer SPU block with four ALUs. Each of those four ALUs has absorbed the capabilities of the old “fat” ALU, so they can all handle special functions. Both the symmetrical nature of the ALUs and the narrower VLIW4 instruction word should simplify compiler scheduling and allow fuller utilization of the ALUs. It should also ease register management and make performance more predictable, especially for non-graphics applications. AMD claims a 10% improvement in performance per square millimeter over the prior VLIW5 design. However, AMD Graphics CTO Eric Demers, who was chief architect on Cayman back when the project started and was also deeply involved in R600, said almost wistfully that AMD would have retained the five-wide ALU if graphics workloads were the only consideration. Obviously, GPU computing performance was a big impetus behind the change.
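To make the scheduling problem concrete, here is a minimal, purely illustrative sketch of greedy VLIW packing. It is not AMD’s real-time compiler, just a toy model with a made-up dependency chain, but it shows why a symmetrical four-slot instruction word is easier to keep full than a five-slot one.

```python
# Toy VLIW packing sketch: greedily bundle independent ops into fixed-width
# instruction words. Purely illustrative -- not AMD's real-time compiler.

def pack_vliw(ops, deps, width):
    """ops: list of op names; deps: dict op -> set of ops it depends on.
    Returns a list of bundles (lists of ops), honoring dependency order."""
    done, bundles, remaining = set(), [], list(ops)
    while remaining:
        bundle = []
        for op in list(remaining):
            if len(bundle) == width:
                break
            # An op can issue only if everything it depends on completed
            # in an *earlier* bundle (not in the current one).
            if deps.get(op, set()) <= done:
                bundle.append(op)
                remaining.remove(op)
        done |= set(bundle)
        bundles.append(bundle)
    return bundles

# A short, made-up shader snippet: an a->b->c chain, a d->e chain, and one free op.
ops = ["a", "b", "c", "d", "e", "f"]
deps = {"b": {"a"}, "c": {"b"}, "e": {"d"}}

for width in (5, 4):
    bundles = pack_vliw(ops, deps, width)
    util = len(ops) / (width * len(bundles))
    print(f"VLIW{width}: {len(bundles)} bundles, {util:.0%} slot utilization")
```

With the same dependency chain, both widths need the same number of bundles, so the narrower word simply wastes fewer issue slots per cycle.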

In fact, some of the enhancements in Cayman apply almost exclusively to GPU computing applications and may affect AMD’s FireStream lineup more directly than its consumer Radeon graphics cards. Among them: the ratios for double-precision floating-point math have improved somewhat, since DP math operations happen at one-quarter the single-precision rate, rather than one-fifth in prior designs. Cayman has taken another step toward the data center by incorporating ECC protection for external memories, much like Nvidia’s Fermi architecture. Unfortunately, unlike Fermi, internal memories and storage aren’t protected. Of course, ECC protection won’t be used in consumer graphics cards, regardless.
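As a quick back-of-the-envelope illustration of what that ratio change means, here is the arithmetic using the peak single-precision rates from the spec tables later in this review.

```python
# Quick arithmetic on Cayman's improved double-precision ratio, using the
# peak single-precision numbers from the spec tables later in this review.
peak_sp_gflops = {"Radeon HD 5870": 2720, "Radeon HD 6970": 2703}
dp_ratio       = {"Radeon HD 5870": 1/5, "Radeon HD 6970": 1/4}

for card, sp in peak_sp_gflops.items():
    print(f"{card}: {sp} GFLOPS SP -> {sp * dp_ratio[card]:.0f} GFLOPS DP")
# 5870: 544 GFLOPS DP at the old one-fifth rate; 6970: ~676 GFLOPS DP at one-quarter.
```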

Cayman’s support for processing multiple compute kernels simultaneously is more robust, as well. According to Demers, Cypress could execute multiple kernels, but with only one pipe into the chip, their entry into the GPU had to be serialized. Cayman now has three entry points, with the possibility for more in future GPUs. Each kernel has its own command queue and virtual address domain, so they should be truly independent from one another.

The laundry list of compute-focused changes goes on from there, encompassing dual, bidirectional DMA engines for faster communication with the host system; the coalescing of shader read operations; and the ability to fetch data directly into the local data share attached to each SIMD. Many of these capabilities may sound familiar because Nvidia added them to its Fermi architecture. Clearly, AMD is on a similar architectural trajectory, toward making its GPU into a very competent general-purpose and data-parallel processor.

More tessellation from the, uh, tessellinator?

One of the flash points in DirectX 11 GPU architecture discussion has been the question of geometry throughput. Tessellation—the ability to take a low-polygon mesh and some additional information and transform it into a much more detailed, high-poly mesh on the GPU—is one of DX11’s highest-profile features. Add the fact that Nvidia has taken a much more sweeping approach to parallelizing geometry processing, and you have the makings of a good argument or three.

The underlying issue here is that polygon throughput rates in GPUs haven’t risen at nearly the rate other forms of graphics power have. There’s more to it, but the fact that setup and rasterization rates didn’t, for ages, eclipse one triangle per clock cycle is a good indicator of the problem. Without parallel geometry processing, the limits were fairly static. GPU makers are finally pushing past those limits, with Nvidia quite clearly in the lead. The GF100 and GF110 GPUs can rasterize up to four triangles per clock cycle, for example.

AMD created some confusion on this front when it introduced Cypress by claiming the chip had dual rasterizers. In reality, Cypress was dual core “from the rasterizers down,” as a knowledgeable source put it to me recently. What Cypress had was dual scan converters—a pixel-throughput optimization for large polygons—but it lacked the setup and primitive interpolation rates to surpass one triangle per clock cycle.

Cayman’s dual graphics/vertex engines. Source: AMD.

By contrast, Cayman has the ability to set up and rasterize two triangles per clock cycle. That may not be entirely evident in the simplified diagram above, but Cayman has two copies of the logic block that does triangle setup, backface culling, and geometry subdivision for tessellation. Load-balancing logic distributes DirectX tiles between these two vertex engines, and the processed tiles are then fed into one of Cayman’s two 12-SIMD shader blocks. Interestingly, neither vertex engine is tied to a single shader block, nor vice versa. Future variants of this architecture could have a single vertex engine and dual shader blocks—or the reverse.

Of course, two triangles per clock is the max theoretical rate, but delivered performance will be a little lower. I’m told AMD has measured Cayman’s throughput at between 1.6 and 1.8 triangles per clock.

That’s a big improvement over prior Radeons, but by comparison, Nvidia’s biggest chip, the GF110, has four raster engines; 16 “PolyMorph engines” for setup, transform, and geometry expansion; and a four-triangle-per-clock theoretical peak.
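To put those per-clock figures into absolute terms, peak triangle throughput is simply triangles per clock times the core clock. A quick sketch, using the 6970’s 880MHz clock from the card table on the next page, the 5870’s 850MHz reference clock (not listed in our tables, but consistent with its 850 Mtris/s peak), and AMD’s measured 1.6-1.8 triangles per clock for Cayman:

```python
# Peak triangle throughput = triangles per clock x core clock (in MHz -> Mtris/s).
core_mhz       = {"Radeon HD 5870": 850, "Radeon HD 6970": 880}
tris_per_clock = {"Radeon HD 5870": 1,   "Radeon HD 6970": 2}

for card in core_mhz:
    print(f"{card}: {tris_per_clock[card] * core_mhz[card]} Mtris/s peak")

# Cayman's measured 1.6-1.8 triangles/clock works out to:
for t in (1.6, 1.8):
    print(f"  {t} tris/clock x 880 MHz = {t * 880:.0f} Mtris/s delivered")
```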

On the edge: better antialiasing

The render back-ends haven’t been overlooked in Cayman’s wide-ranging overhaul. Several new capabilities should raise performance and image quality.

Among those is native support in the ROP units for some additional color formats, including 16-bit integer (snorm/unorm) and 32-bit floating-point. AMD claims antialiasing with these color formats should be 2-4X faster than before, largely because those formats previously had to be handled in software—that is, in the shader core rather than in the ROPs.

The biggest news, though, is the introduction of a new antialiasing capability known as EQAA (which I believe stands for enhanced quality antialiasing). The intriguing thing here is that EQAA is more or less a clone of the coverage-sampled AA (CSAA) feature Nvidia first introduced in the G80, its first-gen DX10 GPU. At that time, AMD was touting its custom-filtered antialiasing modes as an alternative to CSAA. Now, CFAA has all but disappeared, with both the wide and narrow tent filters from prior generations having been excised from the 6800/6900-series drivers. Only the edge-detect filter remains, although it is an interesting option.

Sadly, we don’t have time to explain multisampled antialiasing (or quantum physics, for that matter) in this space, but for those who are familiar, EQAA simply stores fewer color samples than it does coverage samples, thereby increasing accuracy (and thus image quality) with a minimal increase in the memory footprint or performance cost. We’ve found Nvidia’s corresponding feature, CSAA, to deliver visibly superior edge AA quality without slowing frame rates much at all. Cayman’s ROPs can be programmed to store a different number of color and coverage samples, so many things are possible, but AMD has largely replicated Nvidia’s CSAA modes, with one notable addition. Also, AMD’s naming scheme for the different EQAA modes is a little more modest, since it’s based on the number of color samples rather than coverage samples. I’ve mapped the names and sample sizes to clear up any confusion. Included are the traditional multisampled AA modes for reference.

Radeon mode   Texture/shader samples   Color samples   Coverage samples   GeForce mode
2X MSAA       1                        2               2                  2X MSAA
2X EQAA       1                        2               4
4X MSAA       1                        4               4                  4X MSAA
4X EQAA       1                        4               8                  8X CSAA
8X MSAA       1                        8               8                  8xQ CSAA
              1                        4               16                 16X CSAA
8X EQAA       1                        8               16                 16xQ CSAA
              1                        8               32                 32X CSAA

AMD’s new mode, as you can see, is 2X EQAA, which captures only two color samples but four coverage samples. This mode could be a nice choice, especially in situations where performance is marginal—perhaps less likely to be an issue in Cayman than in a smaller derivative, but you get the picture.
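That two-color, four-coverage case is also a handy way to illustrate how a coverage-sample resolve works in concept. The sketch below only illustrates the idea of storing many cheap coverage samples against a few full color samples; it is not AMD’s actual hardware algorithm, and the sample counts and colors are made up.

```python
# Conceptual sketch of a coverage-sample resolve (EQAA/CSAA-style).
# Illustrative only -- not AMD's actual hardware algorithm.

def resolve_pixel(color_samples, coverage_to_color):
    """color_samples: list of (r, g, b) values actually stored for the pixel.
    coverage_to_color: for each coverage sample, the index of the stored color
    that covers it (or None if nothing covers it)."""
    weights = [0] * len(color_samples)
    for idx in coverage_to_color:
        if idx is not None:
            weights[idx] += 1
    covered = sum(weights)
    if covered == 0:
        return (0.0, 0.0, 0.0)
    # Weight each stored color by the fraction of coverage samples it covers.
    return tuple(
        sum(c[ch] * w for c, w in zip(color_samples, weights)) / covered
        for ch in range(3)
    )

# 2X EQAA-style case: two stored colors, four coverage samples.
# A white triangle edge covers three of the four coverage positions.
colors   = [(1.0, 1.0, 1.0), (0.0, 0.0, 0.0)]   # foreground, background
coverage = [0, 0, 0, 1]                          # 3 samples hit color 0, 1 hits color 1
print(resolve_pixel(colors, coverage))           # (0.75, 0.75, 0.75)
```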

Purported EQAA sample patterns. Source: AMD.

The EQAA sample patterns from the AMD presentation above are apparently only for illustrative purposes. We’ve captured the texture/shader (green dots), color (gray dots), and coverage (small red dots) sample patterns from Cayman and the GF110 with some simple tools, and they don’t really correspond with AMD’s presentation.

[Captured sample patterns: Radeon 4X EQAA alongside GeForce 8X CSAA, and Radeon 8X EQAA alongside GeForce 16xQ CSAA]
In reality, AMD’s sample patterns are quite a bit funkier. In 8X EQAA, one color and coverage sample is taken from the very top left corner of the pixel space. In the bottom right corner, you can see that same color sample point intruding from the pixel below.

[Test pattern captures: 2X, 4X, and 8X modes, MSAA vs. EQAA]

EQAA’s effects are very evident in this simple test pattern. You have to like that 2X EQAA mode, which looks nearly as good as 4X multisampling.

I had hoped to include a lot more information on EQAA, including robust image quality comparisons with Nvidia’s CSAA and some performance data, but we’ll have to circle back and do that at a later date. We’re quite pleased to see AMD adding this feature, because it offers the possibility of direct performance comparisons between GeForces and Radeons in high-quality AA modes like 4X EQAA/8X CSAA. In fact, since we tend to prefer the image quality and performance of these AA methods, they may soon become our new de facto standard for testing, supplanting 4X multisampling.

Cayman does retain one other interesting antialiasing option, the morphological AA capability introduced with the Radeon HD 6800 series. MLAA is a post-process filter that lacks sub-pixel accuracy, so it’s a decidedly lower quality option than multisampling or EQAA—especially in motion, where its deficiencies are more evident than in static screen captures—but it has the great virtue of working properly with a wide range of games, including those that use deferred shading methods that don’t play well with MSAA and its derivatives. Again, this feature deserves more attention than we can give it presently, but we have it on our hit list for later.

PowerTune, somehow, isn’t for electric guitars

Speaking of features that deserve more attention than we can give them, Cayman introduces a novel power containment scheme known as PowerTune, whose stated goal is to keep the GPU from exceeding its maximum power rating (or TDP) in “outlier” applications that are much more power-intensive than the typical game. Nvidia added a similar feature in its GeForce GTX 580 and 570 graphics cards just recently, but AMD claims its approach is better on several fronts. For one, Cayman contains an integrated power control processor that monitors power draw constantly. This processor then algorithmically adjusts clock speeds for various logic blocks on the GPU in order to enforce the product’s stated TDP limit.

Any such mechanism that reduces clock speeds has the potential to impact performance. The picture becomes more complicated from there very quickly, though. PowerTune is, in a sense, the inverse of the Turbo Boost capability built into the latest Intel CPUs. Turbo Boost will opportunistically raise clock speeds in order to grab more performance when available, whereas PowerTune limits clock frequencies when the chip draws too much power. AMD tells us PowerTune generally shouldn’t kick in during normal use, especially with antialiasing in the mix. Of course, antialiasing isn’t always in use, and PowerTune will reduce performance in some measurable ways—and not just in FurMark or the like. Even 3DMark Vantage’s Perlin Noise test, which has lots of shader arithmetic, will cause PowerTune to kick in.

AMD is very open about the implications of this feature, even going so far as to point out that default GPU clocks for its products will no longer have to be constrained by “outlier” applications. Taken another way, that’s a straightforward admission that GPU clock frequencies will be set higher and allowed to bump up against the TDP limits. That’s a departure from the usual approach, say in the CPU world, in which buying a certain product generally guarantees the user a certain level of performance and the invocation of throttling generally means a cooling problem has occurred. Intel has struck a very different compromise by offering its users some extra, non-guaranteed performance in the form of Turbo Boost. The question, we suppose, is how far AMD will push on binning and power capping its products over time—and whether users will decide to push back.

AMD tells us its PowerTune algorithm for each video card model will be tuned for the worst-case scenario, to accommodate the leakiest, most power-hungry chips that fall into that particular product bin. As a result, performance should not vary substantially from one, say, Radeon HD 6950 to the next, even if ASIC quality does. AMD claims this steadiness from chip to chip is a contrast to Nvidia’s power-limiting scheme, which is based directly on power draw at the 12V rail. Since Nvidia claims its cards shouldn’t clamp power during normal use, though, we’re unsure whether (or how much) that distinction matters.

The presence of the PowerTune controller opens up some tweaking options, which AMD has decided to expose to the end user. A slider in the Catalyst Control Center will allow users to raise or lower their video cards’ TDP limits by up to 20%. The possibilities here are several. The user could raise the TDP limit alone to get less frequency clamping and higher performance in some cases. He could overclock his GPU but leave the TDP clamp in place, capturing additional performance where possible while ensuring his video card’s power consumption doesn’t exceed its limits. He might choose to raise both clock speeds and power limits to achieve maximum performance. Or he might decide to lower the TDP limit in, say, a home-theater PC to ensure modest noise levels and power draw.
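Here is a conceptual sketch of what a PowerTune-style governor does with that user-adjustable limit. AMD hasn’t published the algorithm its controller uses, so the feedback loop, step sizes, and power readings below are hypothetical placeholders; only the 250W TDP figure and the plus-or-minus 20% slider range come from this article.

```python
# Conceptual sketch of a PowerTune-style TDP governor. Purely illustrative --
# AMD's controller works on internal activity estimates and per-block clock
# adjustments -- but the basic feedback idea looks something like this.

def adjust_clock(est_power_w, clock_mhz, base_tdp_w, slider_pct=0,
                 min_mhz=500, max_mhz=880, step_mhz=10):
    """Return the next engine clock given an estimated power draw.
    slider_pct mimics the Catalyst slider: -20 .. +20 (%)."""
    limit_w = base_tdp_w * (1 + slider_pct / 100.0)
    if est_power_w > limit_w:
        return max(min_mhz, clock_mhz - step_mhz)   # clamp down to stay under the cap
    return min(max_mhz, clock_mhz + step_mhz)       # otherwise drift back toward full speed

# Example: a 250W part (6970-like) running a power-hungry shader workload.
clock = 880
for est_power in (230, 245, 262, 258, 251, 240):    # hypothetical readings
    clock = adjust_clock(est_power, clock, base_tdp_w=250, slider_pct=0)
    print(f"estimated {est_power} W -> engine clock {clock} MHz")

# Raising the slider to +20% lifts the cap to 300 W, so the same readings
# would leave the clock untouched; -20% caps it at 200 W instead.
```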

I suppose one could also overclock the snot out of the thing and plunge the PowerTune slider to negative 20% just to create confusion about how the card will perform in any given situation. Whee!

With that said, we’re about ready to close the book on Cayman’s architectural enhancements and move on to the specifics of the new Radeon cards. Before we do so, though, we should point out that Cayman inherits all of the display and multimedia goodness already familiar from the Radeon HD 6800 series, including DisplayPort 1.2, a considerable array of display outputs compatible with the Eyefinity multi-monitor gaming scheme, and AMD’s UVD3 video processing block.

You’re totally getting carded

Card              GPU clock (MHz)  Shader ALUs  Textures filtered/clock  ROP pixels/clock  Memory transfer rate  Memory width (bits)  Idle/peak power draw  Suggested e-tail price
Radeon HD 6850    775              960          48                       32                4.0 Gbps              256                  19W/127W              $179.99
Radeon HD 6870    900              1120         56                       32                4.2 Gbps              256                  19W/151W              $239.99
Radeon HD 6950    800              1408         88                       32                5.0 Gbps              256                  20W/200W              $299.99
Radeon HD 6970    880              1536         96                       32                5.5 Gbps              256                  20W/250W              $369.99

The table above shows the key clock rates and specifications for the two new Cayman-based graphics cards, alongside their younger cousins in the Radeon HD 6800 series. We have a couple of bombshells in the memory department, one of which is the sheer-panic clock frequencies AMD has achieved for Cayman’s GDDR5 interface and external memories. Nvidia’s GeForce GTX 580 has a wider memory interface, but it tops out at just 4 Gbps. The other surprise on the memory front: both the 6950 and 6970 are packing 2GB of RAM by default. Even the GTX 580, a $500 video card, has only 1536MB. This higher RAM amount should allow the 6950 and 6970 to drive some very high resolutions via Eyefinity and multiple displays without running out of space.
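Those transfer rates map directly onto the bandwidth figures in our spec tables: bandwidth is just the per-pin rate times the interface width, divided by eight bits per byte.

```python
# Memory bandwidth = transfer rate (Gbps per pin) x interface width (bits) / 8.
cards = {
    "Radeon HD 6950":  (5.0, 256),
    "Radeon HD 6970":  (5.5, 256),
    "GeForce GTX 580": (4.0, 384),
}
for card, (gbps, width_bits) in cards.items():
    print(f"{card}: {gbps * width_bits / 8:.0f} GB/s")
# 6950: 160 GB/s, 6970: 176 GB/s, GTX 580: 192 GB/s -- matching the spec tables.
```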

AMD says these two new cards should be available for sale today at online retailers. At $369.99, the 6970 is priced just above the GeForce GTX 570, whose suggested price (and current street price) is $349.99. Meanwhile, the 6950 has very little direct competition at the $300 mark, since Nvidia doesn’t currently have a similar offering. AMD tells us it expects its partners to introduce 1GB variants of the Cayman cards that will sell for less, too. We think a 1GB version of the 6950 could be a very attractive offering at around $279. Here’s hoping it happens.

The dark side of the 6970

From outside, the 6950 and 6970 are difficult to distinguish

The one obvious difference between the two: the 6970 has one eight-pin aux power input

From left to right: Radeon HD 6970, 6950, 6870, 6850

A naked Radeon HD 6950 card

The cooler includes a vapor chamber-based heatsink with a copper base—similar to GTX 500-series GeForces

That final picture above deserves some comment. First, notice that the Cayman cards have dual CrossFireX connectors, unlike the 6800 series. That means three- and four-way CrossFireX configurations should be possible. Second, check out that minuscule switch on the right. The 6900-series cards come with dual video BIOSes, so that the user can switch to a protected, backup BIOS should a bad flash scramble the main one. The switch allows the user to select which video BIOS to use. That’s a nifty little safety provision, and it should pay off for AMD’s partners in the form of lower RMA rates.

Our testing methods

Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.
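For those who like to see the bookkeeping spelled out, reducing repeated runs to a single reported number is as simple as it sounds. The run counts mirror our methods above, but the FPS values in this sketch are made-up placeholders, not real results.

```python
# How repeated benchmark runs collapse to one reported number: the median.
# The FPS values below are hypothetical placeholders for illustration.
from statistics import median

runs_fps = {
    "scripted test (3 runs)":  [61.2, 62.0, 61.7],
    "Fraps sequence (5 runs)": [54.8, 57.1, 55.9, 56.3, 55.2],
}
for name, runs in runs_fps.items():
    print(f"{name}: median {median(runs):.1f} FPS")
```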

Our test systems were configured like so:

Processor            Core i7-965 Extreme 3.2GHz
Motherboard          Gigabyte EX58-UD5
North bridge         X58 IOH
South bridge         ICH10R
Memory size          12GB (6 DIMMs)
Memory type          Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings       8-8-8-24 2T
Chipset drivers      INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio                Integrated ICH10R/ALC889A with Realtek R2.51 drivers
Graphics             Radeon HD 4870 1GB with Catalyst 10.10c drivers
                     Asus Radeon HD 5870 1GB with Catalyst 10.10c drivers
                     Asus Radeon HD 5870 1GB + Radeon HD 5870 1GB with Catalyst 10.10c drivers
                     Asus ROG Matrix Radeon HD 5870 2GB with Catalyst 10.10c drivers
                     Radeon HD 5970 2GB with Catalyst 10.10c drivers
                     Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
                     Dual Asus Radeon HD 6850 1GB with Catalyst 10.10c drivers
                     XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
                     Sapphire Radeon HD 6870 1GB + XFX Radeon HD 6870 1GB with Catalyst 10.10c drivers
                     Radeon HD 6950 2GB with Catalyst 8.79.6-101206a drivers
                     Dual Radeon HD 6950 2GB with Catalyst 8.79.6-101206a drivers
                     Radeon HD 6970 2GB with Catalyst 8.79.6-101206a drivers
                     Dual Radeon HD 6970 2GB with Catalyst 8.79.6-101206a drivers
                     GeForce 8800 GTX 768MB with ForceWare 260.99 drivers
                     XFX GeForce GTX 280 1GB with ForceWare 260.99 drivers
                     Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
                     Dual Asus GeForce GTX 460 768MB with ForceWare 260.99 drivers
                     MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz with ForceWare 260.99 drivers
                     MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz + EVGA GeForce GTX 460 FTW 1GB 850MHz with ForceWare 260.99 drivers
                     Galaxy GeForce GTX 470 1280MB GC with ForceWare 260.99 drivers
                     GeForce GTX 480 1536MB with ForceWare 260.99 drivers
                     GeForce GTX 570 1280MB with ForceWare 263.09 drivers
                     Zotac GeForce GTX 570 1280MB + GeForce GTX 570 1280MB with ForceWare 263.09 drivers
                     GeForce GTX 580 1536MB with ForceWare 262.99 drivers
                     Zotac GeForce GTX 580 1536MB + Asus GeForce GTX 580 1536MB with ForceWare 262.99 drivers
Hard drive           WD RE3 WD1002FBYS 1TB SATA
Power supply         PC Power & Cooling Silencer 750 Watt
OS                   Windows 7 Ultimate x64 Edition with DirectX runtime update June 2010

Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead 2 at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead 2 because we’ve found that the Source engine’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Pixel fill and texturing performance

Card                        Peak pixel fill (Gpixels/s)  Peak bilinear integer filtering (Gtexels/s)  Peak bilinear FP16 filtering (Gtexels/s)  Peak memory bandwidth (GB/s)
GeForce GTX 460 768MB       16.8                         39.2                                         39.2                                      88.3
GeForce GTX 460 1GB 810MHz  25.9                         47.6                                         47.6                                      124.8
GeForce GTX 470 GC          25.0                         35.0                                         17.5                                      133.9
GeForce GTX 480             33.6                         42.0                                         21.0                                      177.4
GeForce GTX 570             29.3                         43.9                                         43.9                                      152.0
GeForce GTX 580             37.1                         49.4                                         49.4                                      192.0
Radeon HD 6850              25.3                         37.9                                         19.0                                      128.0
Radeon HD 6870              28.8                         50.4                                         25.2                                      134.4
Radeon HD 5870              27.2                         68.0                                         34.0                                      153.6
Radeon HD 6950              25.6                         70.4                                         35.2                                      160.0
Radeon HD 6970              28.2                         84.5                                         42.2                                      176.0
Radeon HD 5970              46.4                         116.0                                        58.0                                      256.0

The theoretical peak numbers in the table above will serve as a bit of a guide to what comes next. Different GPU architectures achieve more or less of their peak rates in real-world use, depending on many factors, but these numbers give us a sense of how the various video cards compare.
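If you’re curious where these theoretical peaks come from, they fall straight out of unit counts and clock speeds. A quick sketch for the Radeon HD 6970, using the unit counts from the Cayman spec table on page one and the 880MHz core clock from the card table:

```python
# Deriving the 6970's theoretical peaks from unit counts and core clock.
clock_ghz   = 0.880   # 880 MHz core clock
rops        = 32      # pixels per clock
int_texels  = 96      # integer texels filtered per clock
fp16_texels = 48      # FP16 texels filtered per clock (half rate)

print(f"Pixel fill:        {rops * clock_ghz:.1f} Gpixels/s")        # 28.2
print(f"Integer filtering: {int_texels * clock_ghz:.1f} Gtexels/s")  # 84.5
print(f"FP16 filtering:    {fp16_texels * clock_ghz:.1f} Gtexels/s") # 42.2
```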

Versus its most direct rival, the GeForce GTX 570, the Radeon HD 6970 has comparable rates all around. Although the GTX 570 has a wider 320-bit memory interface, the 6970’s amazing GDDR5 clock speeds more than make up the deficit. The fact that the GTX 570 can filter FP16 textures at its full rate, rather than half, is no obstacle for the 6970, either, since Cayman’s higher unit count and clock frequency allow it to reach similar FP16 filtering rates, at least in theory.

The closest “competitor” to the Radeon HD 6950 is last year’s model, the Radeon HD 5870. The 6950 is only a little faster than the 5870 across the board—and that’s the stock model. We’ve also tested a slightly overclocked version of the 5870 with 2GB of RAM, which should provide us with an interesting and very direct comparison between the Cayman and Cypress architectures in which key rates are nearly equal and efficiency becomes the question.

This color fill rate test tends to be limited primarily by memory bandwidth rather than by ROP rates. True to form, the 6970 and 6950 outperform the GeForce GTX 570 here.

Notice, also, that I’ve tested a trio of older cards for historical interest, including the Radeon HD 4870, the GeForce GTX 280, and the oldest DX10 chip on the planet, the GeForce 8800 GTX. They can only participate in a subset of our tests since they’re not DX11-capable, but they should be fun to watch and compare.

3DMark’s texture fill test doesn’t involve any sort of texture filtering. That’s unfortunate, since texture filtering rates are almost certainly more important than sampling rates in the grand scheme of things. Still, this is a decent test of FP16 texture sampling rates, so we’ll use it to consider that aspect of GPU performance. Texture storage is, after all, essentially the way GPUs access memory, and unfiltered access speeds will matter to routines that store data and retrieve it without filtering.

AMD’s raw sampling rates were already quite a bit faster than Nvidia’s, and Cayman’s higher unit count puts some additional distance between the two.

Cayman’s much higher theoretical texture filtering rates work out to somewhat higher measured throughput in RightMark, but nothing like the 2X advantage the 6970 has over the GTX 570 on paper. Then, in our FP16 filtering test, the 6970 doesn’t deliver on nearly as much of its promise as the GTX 570 does—and the GTX 580 is faster still.

Shader and geometry processing performance

Card                        Peak shader arithmetic (GFLOPS)  Peak rasterization rate (Mtris/s)  Peak memory bandwidth (GB/s)
GeForce GTX 460 768MB       941                              1400                               88.3
GeForce GTX 460 1GB 810MHz  1089                             1620                               124.8
GeForce GTX 470 GC          1120                             2500                               133.9
GeForce GTX 480             1345                             2800                               177.4
GeForce GTX 570             1405                             2928                               152.0
GeForce GTX 580             1581                             3088                               192.0
Radeon HD 6850              1517                             790                                128.0
Radeon HD 6870              2016                             900                                134.4
Radeon HD 5870              2720                             850                                153.6
Radeon HD 6950              2253                             1600                               160.0
Radeon HD 6970              2703                             1760                               176.0
Radeon HD 5970              4640                             1450                               256.0

Theoretical shader performance is an even trickier subject than the graphics rates we covered on the last page, for reasons we discussed when considering Cayman’s VLIW4 SPU design. Scheduling efficiency and utilization will count for a lot, as will other quirks of the individual architectures. In theory, the 6970’s peak FLOPS rates are nearly double the GeForce GTX 570’s, but Nvidia has a very different approach to shader design involving fewer units, doubled clock frequencies (versus the GPU core clock), and very efficient sequential, scalar scheduling. Also, Cayman’s dual vertex engines give it a nice boost in peak rasterization rate over the 5870, but the 6970’s theoretical peak rate is still less than two-thirds of the GTX 570’s.
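The shader arithmetic peaks in the table are similarly mechanical: each ALU can retire a multiply-add, or two flops, per clock, so peak GFLOPS is just ALU count times two times the core clock.

```python
# Peak shader arithmetic = ALUs x 2 flops (multiply-add) x core clock.
# ALU counts are from the spec tables; clocks are the cards' core clocks.
for card, alus, clock_ghz in [("Radeon HD 5870", 1600, 0.850),
                              ("Radeon HD 6970", 1536, 0.880)]:
    print(f"{card}: {alus * 2 * clock_ghz:.0f} GFLOPS")   # 2720 and ~2703
```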

The first tool we can use to measure delivered pixel shader performance is ShaderToyMark, a pixel shader test based on six different effects taken from the nifty ShaderToy utility. The pixel shaders used are fascinating abstract effects created by demoscene participants, all of whom are credited on the ShaderToyMark homepage. Running all six of these pixel shaders simultaneously easily stresses today’s fastest GPUs, even at the benchmark’s relatively low 960×540 default resolution.

Yep, Nvidia’s GPUs are faster here, despite their much lower theoretical peak FLOPS counts. Go past that and focus on the question of Cypress’ VLIW5 shaders versus Cayman’s VLIW4 design for a second, though. In theory, the Radeon HD 5870 can deliver 2.72 TFLOPS to the 6970’s 2.70 TFLOPS. In practice, though, the 6970 is over 10% faster, even in this all-graphics workload. That’s progress, even if it’s not revolutionary.

Up next is a compute shader benchmark built into Civilization V. This test measures the GPU’s ability to decompress textures used for the graphically detailed leader characters depicted in the game. The decompression routine is based on a DirectX 11 compute shader. The benchmark reports individual results for a long list of leaders; we’ve averaged those scores to give you the results you see below.

It’s not awful, but Cayman performs relatively poorly in this test, all things considered. The 6950 falls behind the Barts-based Radeon HD 6870, which has no advantage on paper that would predict this outcome. One possible reason for this result is that AMD’s driver-based real-time compiler for Cayman may still be fairly immature. There’s another possibility, too, which we’ll explore in a sec.

Finally, we have the shader tests from 3DMark Vantage.

Clockwise from top left: Parallax occlusion mapping, Perlin noise, GPU cloth, and GPU particles

The 6900-series cards generally perform as expected in three of these tests, offering minor incremental improvements over the Radeon HD 5870. In a fourth, the Perlin noise test, the 5870 is markedly faster. Why? I’m pretty sure we’re seeing Cayman’s PowerTune power cap taking effect. AMD specifically mentioned 3DMark’s Perlin noise as an application that bumps up against the limits, and the performance would seem to indicate that clock speeds are being lowered.

Even so, notice that the 6970 remains quite a bit faster than the GTX 570 in this benchmark, just as it is in the parallax occlusion mapping test. Both of those are pixel shader-intensive tests, and as we’ve mentioned, Perlin noise is very arithmetic-heavy. The final two 3DMark tests, however, emphasize vertex shader performance, and the Fermi architecture’s distributed geometry processing capabilities give it a clear win. Note that Nvidia’s pre-Fermi G80 and GT200 chips (in the 8800 GTX and GTX 280, respectively) don’t fare nearly as well, relatively speaking, against the Radeon HD 4870.

Geometry processing throughput

We can measure geometry processing speeds pretty straightforwardly with a couple of tools. The first is the Unigine Heaven demo. This demo doesn’t really make good use of additional polygons to increase image quality at its highest tessellation levels, but it does push enough polys to serve as a decent synthetic benchmark.

The Radeon HD 6970 performs as well here as two Cypress chips aboard the Radeon HD 5970, so that’s progress. Still, Cayman is no match for the GF110’s quad rasterizers and 16 vertex engines.

We can push into even higher degrees of tessellation using TessMark’s multiple detail levels.

Hmm. TessMark uses OpenGL rather than Direct3D to access the GPU, and apparently AMD’s OpenGL drivers aren’t yet fully aware of Cayman’s expanded geometry processing capabilities. Frustrating.

HAWX 2

As we transition from synthetic benchmarks that measure geometry processing throughput to real-world gaming tests, we’ll make a stop at the curious case of HAWX 2.
We already commented pretty extensively on the controversy surrounding tessellation and polygon use in HAWX 2, so we won’t go into that again. I’d encourage you to read what we wrote earlier, if you haven’t yet, in order to better understand the issues. Suffice to say that this game pushes through an awful lot of polygons, but it doesn’t necessarily do so in as efficient a way as one would hope. The result is probably something closer to a synthetic test of geometry processing performance than a typical deployment of DX11 tessellation.

The question is: can Cayman’s revamped tessellation capabilities make the Radeons more competitive in this strange case?

Well, again, this is progress, but the Radeon HD 6970 still trails the much cheaper GeForce GTX 460 1GB. Suffice to say that four or so years ago, when AMD and Nvidia architects began envisioning these GPU architectures, they had very different visions about what sort of polygon throughput should be required. Then again, in defense of the Radeons and of HAWX 2‘s developers, the Cayman cards are achieving easily playable frame rates at this four-megapixel resolution, so the point really is academic.

Lost Planet 2

Our next stop is another game with a built-in benchmark that makes extensive use of tessellation, believe it or not. We figured this and HAWX 2 would make a nice bridge from our synthetic tessellation benchmark and the rest of our game tests. This one isn’t quite so controversial, thank goodness.

This benchmark emphasizes the game’s DX11 effects, as the camera spends nearly all of its time locked onto the tessellated giant slug. We tested at two different tessellation levels to see whether it made any notable difference in performance. The difference in image quality between the two is, well, subtle.

This contest is a little closer, but the GTX 570 still has the upper hand on the 6970 here. The 6970 and 6950 are faster than the 5870, but not by a lot.

Civilization V

In addition to the compute shader test we’ve already covered, Civ V has several other built-in benchmarking modes, including two we think are useful for testing video cards. One of them concentrates on the world leaders presented in the game, which is interesting because the game’s developers have spent quite a bit of effort on generating very high quality images in those scenes, complete with some rather convincing material shaders to accent the hair, clothes, and skin of the characters. This benchmark isn’t necessarily representative of Civ V‘s core gameplay, but it does measure performance in one of the most graphically striking parts of the game. As with the earlier compute shader test, we chose to average the results from the individual leaders.

The Radeons dominate in this test of pixel shading prowess, and Cayman even improves on Cypress’ performance somewhat.

Another benchmark in Civ V focuses, rightly, on the most taxing part of the core gameplay, when you’re deep into a map and have hundreds of units and structures populating the space. This is when an underpowered GPU can slow down and cause the game to run poorly. This test outputs a generic score that can be a little hard to interpret, so we’ve converted the results into frames per second to make them more readable.

The tables turn here, as the GTX 570 outduels the 6970. One bright spot for the Radeon camp is multi-GPU performance, where the Nvidia cards seem to struggle.

StarCraft II

Up next is a little game you may have heard of called StarCraft II. We tested SC2 by playing back a match from a recent tournament using the game’s replay feature. This particular match was about 10 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps.

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

The cheaper Radeon HD 6950 essentially ties the GeForce GTX 570, while the 6970 is a few FPS ahead of them both. The dual 6970 CrossFireX config takes top honors overall, with the highest average frame rate and an FPS minimum over our eight-minute test period that’s above the 60Hz refresh rate common to most LCDs. Impressive.

Battlefield: Bad Company 2

BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

The Radeon HD 5870 has long performed relatively well in this game at these settings, and I had hoped to see Cayman improve on that tradition. I’m not sure two FPS qualifies as an improvement, though. Two FPS is also the difference between the 6970 and the GTX 570, our marquee matchup. Again, not much to write home about. I shouldn’t complain, though. With frame rate minimums in the mid-30s, even the 6950 is more than fast enough to handle this one.

Metro 2033

We decided to test Metro 2033 at multiple image quality levels rather than multiple resolutions, because there’s quite a bit of opportunity to burden these GPUs simply using this game’s more complex shader effects. We used three different quality presets built into the game’s benchmark utility, with the performance-destroying advanced depth-of-field shader disabled and tessellation enabled in each case.

Dude, so, yeah. At the lower quality settings, the GeForces’ higher geometry throughput with tessellation like totally puts them on top of the older Radeons. The situation evens out with higher-quality pixel shaders. But, check it, man. The Cayman cards are like handing it to the GeForces even at the lower quality levels. Righteous. You can feel that extra tessellation goodness.

Oh, and even though the older DX10 cards can’t do tessellation at all and it’s, like, totally unfair, they’re still way slower.

Aliens vs. Predator

AvP uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards.

Here’s another case where Cayman is incrementally faster than Cypress, but that proves to be enough to put the 6970 ahead of the GeForce GTX 570.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2’s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

DiRT 2: DX11

This final game test doesn’t do much to decide the contest between the 6970 and GTX 570: at the highest quality settings, only one FPS separates the two.

Power consumption

Now for some power and noise testing. Notice that the cards marked with asterisks in the results below have custom cooling solutions that may perform differently than the GPU maker’s reference solution.

The 6950 and 6970 draw a few watts less at idle than the GTX 570, fitting for a smaller chip. When running Left 4 Dead 2, however, the 6970 actually pulls a little more juice than the GTX 570. (We expect one might find different results with a different sort of graphics workload, but we think L4D2 is a good, representative game with relatively high power draw.) The 6950 is kind of in a class of its own, but its power draw is relatively low under load, only slightly more than the GeForce GTX 460 1GB’s.

Noise levels and GPU temperatures

Nearly all of the single-GPU solutions are pretty quiet at idle, and most are perilously close to the noise floor for the rest of our system’s components. Still, the 6950 and 6970 prove to be exceptional citizens, among the quietest solutions we tested. Dropping in a second 6970 does raise the noise levels at idle a bit, likely due to the obstruction of airflow into the primary card’s blower.

Under load, Nvidia’s stock coolers simply outperform AMD’s. The GTX 570’s GPU temperature exactly matches the 6970’s, and the two cards’ power draw is only 5W apart, yet the GTX 570 is 2.5 dB quieter.

The value proposition

Now that we’ve stuffed you full of benchmark results, we’ll try to help you make some sense of the bigger picture. We’ll start by compiling an overall average performance index, based on the highest quality settings and resolutions tested for each of our games, with the notable exception of the disputed HAWX 2. We’ve excluded directed performance tests from this index, and for Civ V, we included only the “late game view” results.

Holy moly, we have a tie. The GTX 570 and 6970 are evenly matched overall in terms of raw performance. With the results this close, we should acknowledge that the addition or subtraction of a single game could sway the results in either direction.

With this performance index established, we can consider overall performance per dollar by factoring price into the mix. Rather than relying on list prices all around, we grabbed our prices off of Newegg where possible. The exception: out of necessity, we’re trusting AMD that its suggested prices for the 6900 cards will translate into similar street prices.

Generally, for graphics cards with reference clock speeds, we simply picked the lowest priced variant of a particular card available. For instance, that’s what we did for the GTX 580. For the cards with custom speeds, such as the Asus GTX 460 768MB and 6850, we used the price of that exact model as our reference.

AMD card               Price      Nvidia card
                       $169.99    GeForce GTX 460 768MB
Radeon HD 6850         $179.99
                       $214.99    GeForce GTX 460 1GB 810MHz
Radeon HD 6870         $239.99
                       $259.99    GeForce GTX 470
Radeon HD 5870         $289.99
Radeon HD 6950         $299.00
                       $349.99    GeForce GTX 570
Radeon HD 6970         $369.00
                       $429.99    GeForce GTX 480
Radeon HD 5870 2GB     $499.99
Radeon HD 5970         $499.99
                       $509.99    GeForce GTX 580

A simple mash-up of price and performance produces these results:

The lower-priced solutions tend to bubble to the top whenever you look at raw price and performance like that.

We can get a better sense of the overall picture by plotting price and performance on a scatter plot. On this plot, the better values will be closer to the top left corner, where performance is high and price is low. Worse values will gravitate toward the bottom right, where low frame rates meet high prices.

Either way you slice it, the GTX 570 looks to be a better value than the Radeon HD 6970 for a simple reason: equivalent performance and a $20 price gap in the GTX 570’s favor. Happily for AMD, the Radeon HD 6950 looks to be a better value than either of them, albeit at a lower performance level.

Another way we can consider GPU value is in the context of a larger system purchase, which may shed a different light on what it makes sense to buy. The 6900-series Radeons are definitely enthusiast-type parts, so we’ve paired them with a proposed system config that’s similar to the hardware in our testbed system but a little more economical.

CPU Intel Core i7-950 $294.99
Cooler Thermaltake V1 $51.99
Motherboard Gigabyte GA-X58A-UD3R $194.99
Memory 6GB Corsair XMS3 DDR3-1333 $74.99
Storage Western Digital Caviar Black 1TB $89.99
Asus DRW-24B1ST $19.99
Audio Asus Xonar DG $29.99
PSU PC Power & Cooling Silencer Mk II 750W $119.99
Enclosure Corsair Graphite Series 600T $159.99
Total $1,036.91

That system price will be our base. We’ve added the cost of the video cards to the total, factored in performance, and voila:

Factor in the price of a complete system, and guess what? That $20 gap between the 6970 and GTX 570 pretty much melts into irrelevance. In fact, the 6970’s ever-so-teeny performance advantage arguably justifies the additional 20 bucks.
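To spell out the arithmetic, here is the whole exercise in a few lines. The prices and the $1,036.91 system base are the ones used above, but the performance index values are hypothetical placeholders standing in for our measured averages.

```python
# Sketch of the two value calculations above. The performance index numbers
# are hypothetical placeholders; the prices and system base are the real ones.
system_base = 1036.91

cards = {
    # card:             (hypothetical perf index, price)
    "GeForce GTX 570":  (100.0, 349.99),
    "Radeon HD 6970":   (100.0, 369.00),
    "Radeon HD 6950":   ( 90.0, 299.00),
}

for card, (perf, price) in cards.items():
    card_value   = perf / price                    # performance per dollar, card alone
    system_value = perf / (price + system_base)    # performance per dollar, whole system
    print(f"{card}: {card_value:.3f} perf/$ (card), {system_value:.4f} perf/$ (system)")

# Card-only, the $20 gap favors the GTX 570 by roughly 5%; add a ~$1,037 system
# and the difference shrinks to well under 2%.
```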

Remember that these results would look very different with a more or less expensive system, so your mileage may vary.

Conclusions

I’ll let you in on a little secret. Those of you who just skip to the conclusions of these articles truly aren’t seeing our best work. We write the conclusions after, you know, everything else. Right now, I’ve barely moved from this spot in six days, my blood-caffeine level must be five times any sane legal limit, and I can’t feel my legs.

What’s more, I have almost no idea how you choose between our two marquee contestants, the Radeon HD 6970 and the GeForce GTX 570. Overall, their performance is equivalent. They both end in -70. Could be a wash!

Let’s make the case for both and see where we land.

In the GTX 570’s favor are a host of traditional winning attributes for a graphics card. It’s 20 bucks cheaper, draws a little less power under load, and generates less noise. What’s more, although most games aren’t yet taking advantage of it, the GTX 570 has measurably and markedly superior geometry processing throughput. That may be a forward-looking architectural feature, and the question of whether and how much that matters is far from settled. Given the performance parity elsewhere, though, it’s hard to ignore.

Cayman’s main advances in antialiasing capabilities, geometry processing, and shader scheduling efficiency move AMD closer to what Nvidia has offered in its Fermi architecture for the better part of 2010. That doesn’t really grant Cayman a minty fresh scent of newness. Cayman is an incremental change—an improvement, no doubt—that makes these new Radeons more, not less, like the competition.

On the other hand, the Radeon HD 6970 has 2GB of RAM onboard, can support three or more displays from a single card, and will allow you to play games across them via Eyefinity. You’ll need two GTX 570 cards in order to partake of Nvidia’s competing Surround Gaming feature, and even then, GTX 570 cards have 1280MB of memory, which could be limiting at six or more megapixels. In this sense, the Radeon HD 6970 outclasses the GTX 570. We can see paying the extra 20 bucks for that, if you aspire to multi-display gaming—or even if you think you might someday.

With no direct competitors and a nice price of $300, the Radeon HD 6950 gives us no sense of conflict about its merits. It would be an ideal step up from a cheaper offering like the Radeon HD 6870. Indeed, because of Cayman’s many improvements, we’d be very tempted to make the leap if we were deciding between the two. The fact that the 6950 has the same set of display outputs and 2GB of memory makes it an intriguing candidate for an Eyefinity setup, too. There really is nothing else in its class.

Comments closed
    • TheTechReporter
    • 9 years ago

    So, unless I’m missing something, the Radeon HD 6970 is a little _slower_ than the Radeon HD 5970?
    It’s official, I don’t understand video card numbering.

      • BlackStar
      • 9 years ago

      6990 is supposed to be the replacement to the 5970.

    • beck2448
    • 9 years ago

    Totally underwhelmed.

    • LoneWolf15
    • 9 years ago

    I’m thinking that there’s at least a possibility that additional AMD driver progression will be better optimized for the new VLIW4 architecture, and add to the 69xx cards’ performance. It will be interesting to see what we get in a few months with the 10.13 and 10.14 Catalyst drivers.

    • burntham77
    • 9 years ago

    I am a little disappointed that the 4870 was missing in some test, like the power draw test. Otherwise, great article. A 6000 series Radeon will most likely be in my upgrade future later this year.

    • urbain
    • 9 years ago

    The 6970 is fabulous,it matches the GTX580 in metro 2033 and unigin heaven at max setts,which means AMD has finaly made a 40% smaller card with the same geometry power as the GTX580;thats an achievement
    On the other hand it seems kinda strange that the 6970 and 5870 are almost identical in BFBC2,i’m guessing those current drivers ain’t completely utilizing the newly released 6900s in BBC2 and Dirt2!
    overall 40% smaller and 46% cheaper for the same performance as GTX580 in most cases,makes the 6970 a definite buy for any filthy rich gamer.

      • michael_d
      • 9 years ago

      Significant performance increase on 6970 over 5870 could be related to usage of tessellation in certain titles.

        • urbain
        • 9 years ago

        Yeah that might be,but what about HawX2 that game according to reviewers and ubisot has good amount of Geometry,but still the 6970 is no where near GTX580 in that game!
        and GTX580 according to Scott is barely 15-20% faster than GTX480 which was is on par with 5870 in most games.
        I think the reason 6970 is behind GTX580 in Dirt2 and BFBC2 is the AA and some other software related issues.

          • travbrad
          • 9 years ago

          Depends on your definition of “good amount” (many people would say “too much” is more accurate). A lot of the tessellation in HAWX is completely unnecessary and is generally a poor use of tessellation. They basically didn’t even bother making normal meshes/models for parts of the world. They just made a very low poly mesh (like the kind of stuff you saw in Quake 1/2) and tessellated it.

          It’s still a real game so it’s a valid benchmark, but you probably won’t see those kind of major differences except in a select few Nvidia sponsored games or synthetic benchmarks. In any case HAWX2 runs at high framerates on any of these new cards.

            • WaltC
            • 9 years ago

            I think the issue of “this much tessellation” versus “that much tessellation”–I think that particular aspect of things has been way overblown with respect to HAWX. It’s much the same case with the synthetic benchmark TR used in the 6970/50 review that portends a large discrepancy in the “amount” of tessellation an ATi gpu can process versus the “amount” of tessellation a nVidia gpu can process, in a given amount of time.

            I think the root of this problem is that certain developers are going to nVidia and saying, “How should we code our game/benchmark to get the maximum in tessellation performance?”, and nVidia is more than willing to tell them–of course, to follow a code design that arbitrarily advantages nVidia architecture and drivers. I don’t fault nVidia for doing that–because ATi would be doing it, too, if asked.

            I fault both game developers and hardware reviewers for missing it, however, and I think this is a pretty big “it” to miss. Like FSAA and Anisotropic filtering, among many other things, all tessellation is simply *not equal* in terms of the way differing gpu hardware designed by differing teams views the issue of tessellation and implements it in hardware.

            Thus, if you want to derive the best result for benching ATI's 69xx Cayman GPUs, you need to be talking to ATI about just how to do that. Listening to nVidia say "Well, if AMD's tessellation works like it *should* under the DX11 API, then their hardware should run exactly like ours," is a bad mistake, and it is so bad that reviewers, developers and gamers alike will just miss the boat.

            This is a much more complex issue than just saying "The amount of tessellation is unnecessarily high" in a given benchmark or game. But AMD is attempting to simplify it by simply saying the…

    • sweatshopking
    • 9 years ago

    Let’s get it to 222!

      • BoBzeBuilder
      • 9 years ago

      I’m with the troll.
      He who gets 222 buys a round of beer.

        • flip-mode
        • 9 years ago

        Didn’t you guys read #204?

          • BoBzeBuilder
          • 9 years ago

          Here’s your chance flip. Take it.

            • flip-mode
            • 9 years ago

            222 for the win! Gah! Noooo! I so wasted it! Never do #222 as a reply!

            • BoBzeBuilder
            • 9 years ago

            LOL.

    • BoBzeBuilder
    • 9 years ago

    200th! TAKE THAT TROLLS.

      • flip-mode
      • 9 years ago

      first one to 222 wins. I’ve had two 222s. A third would be epic.

        • BoBzeBuilder
        • 9 years ago

        I don’t think this thread is going to make it to 222, unless you want to argue how overrated The Dark Knight is.

          • flip-mode
          • 9 years ago

          You never know. This may be the top article all through the holidays. People could comment just cuz it’s there.

    • indeego
    • 9 years ago

    Got myself a 460 for $150 and Metro 2033 ($10). The game is gorgeous @ DirectX 11 + high quality and plays fine on my system @ 1920. Reminds me of a combination of FEAR + Max Payne + Doom III.

    • ultima_trev
    • 9 years ago

    Taking benchmark data from TR and AnandTech, I've compared the results of SLI vs. CrossFire (including HAWX 2); here AMD seems to put up a much better fight. I should also note that in tests where there were no GTX 570 SLI results, I used GTX 480 SLI results in their place:

    HAWX 2 Average Minimum
    Gtx 580 sli 186 N/A
    Gtx 570 sli 171 N/A
    Hd 6970 cfx 131 N/A
    Hd 6950 cfx 121 N/A
    Hd 6870 cfx 97 N/A

    Lost Planet 2
    Gtx 580 sli 61 N/A
    Gtx 570 sli 54 N/A
    Hd 6970 cfx 44 N/A
    Hd 6950 cfx 40 N/A
    Hd 6870 cfx 40 N/A

    Civ 5
    Gtx 580 sli 44 N/A
    Gtx 570 sli 43 N/A
    Hd 6970 cfx 51 N/A
    Hd 6950 cfx 51 N/A
    Hd 6870 cfx 49 N/A

    StarCraft II
    Gtx 580 sli 87 49
    Gtx 570 sli 83 41
    Hd 6970 cfx 89 69
    Hd 6950 cfx 83 68
    Hd 6870 cfx 68 49

    Bad Company 2
    Gtx 580 sli 88 77
    Gtx 570 sli 87 74
    Hd 6970 cfx 77 63
    Hd 6950 cfx 79 69
    Hd 6870 cfx 70 60

    Metro 2033 @ 1920 VH
    Gtx 580 sli 59 N/A
    Gtx 570 sli 50 N/A
    Hd 6970 cfx 55 N/A
    Hd 6950 cfx 50 N/A
    Hd 6870 cfx 36 N/A

    AVP
    Gtx 580 sli 60 N/A
    Gtx 570 sli 52 N/A
    Hd 6970 cfx 57 N/A
    Hd 6950 cfx 50 N/A
    Hd 6870 cfx 39 N/A

    Dirt 2
    Gtx 580 sli 119 102
    Gtx 570 sli 101 87
    Hd 6970 cfx 102 89
    Hd 6950 cfx 93 81
    Hd 6870 cfx 85 74

    Crysis Warhead @ 1920, Gamer + Enthusiast shaders
    Gtx 580 sli 93 57
    Gtx 570 sli 88 55
    Hd 6970 cfx 89 66
    Hd 6950 cfx 87 65
    Hd 6870 cfx 74 54

    Battle Forge
    Gtx 580 sli 118.7 N/A
    Gtx 570 sli 103 N/A
    Hd 6970 cfx 95.3 N/A
    Hd 6950 cfx 84.6 N/A
    Hd 6870 cfx 77.3 N/A

    average overall / avg fps / min fps
    Gtx 580 sli / 91.57 / 63
    Gtx 570 sli / 83.2 / 56.8
    Hd 6970 cfx / 79.03 / 61.8
    Hd 6950 cfx / 73.86 / 60.6
    Hd 6870 cfx / 63.53 / 51.4

    Check out those minimums! CFX scaling has become so beastly since the HD 68xx… I can't wait for the HD 6990! Considering the price/performance ratio of CrossFired 6950s, there's no reason whatsoever to step up to a more powerful multi-GPU solution!
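
    (For anyone who wants to check the math: a quick Python sketch of the "average overall" row, which is just a plain average of the per-game numbers listed above for each card. The values are copied from the tables in this post; a real script would pull them straight from the source reviews.)

    ```python
    # Average the per-game average fps listed above, per card.
    fps = {
        "GTX 580 SLI": [186, 61, 44, 87, 88, 59, 60, 119, 93, 118.7],
        "GTX 570 SLI": [171, 54, 43, 83, 87, 50, 52, 101, 88, 103],
        "HD 6970 CFX": [131, 44, 51, 89, 77, 55, 57, 102, 89, 95.3],
        "HD 6950 CFX": [121, 40, 51, 83, 79, 50, 50, 93, 87, 84.6],
        "HD 6870 CFX": [97, 40, 49, 68, 70, 36, 39, 85, 74, 77.3],
    }

    for card, results in fps.items():
        print(f"{card}: {sum(results) / len(results):.2f} fps overall")
    ```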

      • michael_d
      • 9 years ago

      Trust me, CrossFire is not worth the extra money due to these factors: noise, power consumption, driver support, heat. I wish they would make a 700 mm² GPU with 4 billion transistors instead of those dual-GPU solutions.

    • killadark
    • 9 years ago

    And here I was all excited that it could beat the GTX 580. Alas, I'll have to make do with this performance. Power consumption seems high, though.

    • cynan
    • 9 years ago

    Someday AMD graphics cards will once again “Kick more ass than a pair of donkeys in a cage match”. Sadly that day isn’t today.

    Bring on the next gen cards!

      • shank15217
      • 9 years ago

      Actually, it does kick more ass than its previous generation, and it did it on the same process node to boot.

    • Bensam123
    • 9 years ago

    Did anyone else get the vibe that PowerTune is actually a new way to further segment their product line and increase yields by utilizing GPUs that wouldn't normally pass QA?

    Curiously, I don’t understand why you would want to throttle your GPU as long as you have a decent power supply to it. If it isn’t putting the rest of the system at risk and the cooling can keep up with it, you should get what you paid for.

    Enhanced power states I can understand, as when you aren’t using something it draws less power and produces less noise. When you want it to give you all it has, then you don’t want it to hold back… unless you’re constrained by the aforementioned reasons.

    The whole thing sounds like a gimmick to me. It would’ve been nice to see the 6970 run without the TDP limit in place or turned down all the way to see how it is currently affecting the benchmarks.

    Thanks for adding slower video cards BTW.

    • glynor
    • 9 years ago

    Scott, this is BY FAR the best review of the new cards out there on the web. Thank you, especially, for including that unbelievable onslaught of comparison cards in the “stats”. That must have been a monumental task. Great job.

    It is things just like this that keep me coming back to TR when I REALLY want to know how a product stacks up.

    • ThorAxe
    • 9 years ago

    Scott I really appreciate you testing the 8800GTX.

    Though I have a pair of 6870s and a 4870X2, I have been curious to see how the old girl performs these days.

    I would also really like to see how an 8800GTX SLI rig performs with the latest drivers. No doubt they are unlikely to be optimized for G80 but it would still be interesting.

    • derFunkenstein
    • 9 years ago

    Man, people that jumped on the 5850 right at release are probably feeling pretty good about themselves and their purchases. Best deal in gaming since the GeForce 8800GT 512MB

      • odizzido
      • 9 years ago

      As a 5850 owner that about sums it up

        • derFunkenstein
        • 9 years ago

        I feel just a twinge of envy.

      • poulpy
      • 9 years ago

      I want to third that, I give mine a hug every so often to celebrate.

      • KeillRandor
      • 9 years ago

      Well – I got mine (unused) off a friend for £150 – (his crappy PSU at the time couldn't handle it – (he's got a much better comp with a 6870 now instead)) – so…

    • Wintermane
    • 9 years ago

    I've been looking at all the new GPUs, and while yes, the nVidia idea of all that tessellation does look nice… right now I just don't care.

    My current bugger can play Fallout: New Vegas and will be able to play Skyrim and a few other cool games coming out. Until that's not true anymore, I'm sitting happy and cheap.

    Currently running on a laptop with an AMD 4200 chip.

    Mind you, I AM legally blind, so 1000x AA comes free ;/

    • michael_d
    • 9 years ago

    Only Metro 2033 demonstrates a significant performance difference between the 5870 and 6970. I wanted to upgrade from my 5870 CrossFire to a single card, but clearly the 6970 is not worth it. Having said that, 6970 CrossFire power consumption is higher, although not by much; still disappointing. It is a good upgrade for 48XX owners in my view.

    It is 28nm wait time for me.

      • sweatshopking
      • 9 years ago

      upgrade a 5870 crossfire….? i JUST got a 4850. man, you people have money.

        • Meadows
        • 9 years ago

        And you don’t. Harrdy har har.

          • sweatshopking
          • 9 years ago

          yes 🙁

    • r00t61
    • 9 years ago

    6950 rocks, at least until it gets some competition at the $300 price point. There’s this gigantic hole in the Nvidia lineup between 460 and 570. Yeah, you could get a 470, but GF100 kinda stinks, so why bother…

    6970 value is not nearly as compelling. Trades blows with 570, but it costs a little more. And thermals/power not as great as when 5870 competed with 480. At that point I guess you’d have to decide what you’d rather have: Eyefinity or PhysX/Cuda.

    I hate multi-GPU setups for various reasons, but it is impressive to see the CF scaling on the 69xx (over at Anand’s).

    I’m surprised that the 6970 only matches GTX480. HD5870 was the top dog when it got released, and when GTX480 finally came around, it was 20% faster, sure, but also 20% louder, 20% hotter, 20% more power hungry, and 20% more expensive. It was also six months late.

    In my (uneducated) opinion, AMD built the 6970 with the 480 as a performance target. They got surprised when the 580 was released and didn't have enough time/resources/whatever to match it. So they went ahead and priced the 6970 accordingly. No doubt in my mind that if there were no 580/570, the 6970 would have debuted at $400+.

    Thanks for another great review, Scott and Geoff. Now leak us some performance data on Sandy Bridge (WE KNOW YOU HAVE THEM) and we’ll keep quiet until after the holidays.

    • mcnabney
    • 9 years ago

    I have a better idea for an actually useful chart since we all live in console-land.

    Keep a current ‘basket’ of games and tabulate based upon which card is the CHEAPEST to run all of the games tested at HIGH quality and at an agreed upon framerate per game. Show this minimum required card for three or four common display resolutions (WXGA, 1080p, QWXGA…)

    A second chart could be made for the same games at maximum quality.

    This might actually make some people interested in upgrading, or not.
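
    (A minimal sketch of how such a chart could be tabulated, assuming you already had per-game fps results at a given resolution plus street prices; the card names, prices, and fps numbers below are placeholders, not measured data.)

    ```python
    # For one resolution: find the cheapest card that clears an agreed-upon
    # framerate in every game of the basket.
    TARGET_FPS = 40

    prices = {"HD 6850": 180, "HD 6950": 300, "GTX 570": 350}   # hypothetical street prices
    fps_at_1080p = {                                            # hypothetical per-game results
        "HD 6850": {"Metro 2033": 28, "BC2": 52, "Dirt 2": 61},
        "HD 6950": {"Metro 2033": 41, "BC2": 67, "Dirt 2": 82},
        "GTX 570": {"Metro 2033": 43, "BC2": 70, "Dirt 2": 88},
    }

    qualifying = [card for card, games in fps_at_1080p.items()
                  if all(f >= TARGET_FPS for f in games.values())]
    cheapest = min(qualifying, key=prices.get) if qualifying else None
    print(f"Cheapest card holding {TARGET_FPS} fps in every game: {cheapest}")
    ```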

      • NarwhaleAu
      • 9 years ago

      Buy yourself a 22″+ monitor and then you'll understand why nobody is interested in lower resolutions. You could max out lesser resolutions with a 6850. Problem solved. Buy that and then start saving for a bigger monitor. That's all you need to know. Once you have that bigger monitor, you can CrossFire if you want awesome performance for a low cost. To be blunt, the performance of the 6950, 6970 and GTX 570 is irrelevant at 1680×1050 or lower resolutions.

        • swaaye
        • 9 years ago

        I have had a 1920×1200 24″ since 2005. But over a year ago I bought a 50″ 720p plasma TV and that baby is way more fun to play games on. And at 1360×768 everything runs awesome on even an 8800GTX.

        I’m starting to want to run SSAA on all of my games though so I may have to upgrade my card….

          • mcnabney
          • 9 years ago

          I am running a 63″ 1080p DLP off of a 4850. I am just barely getting by with the 2.5 year old card. There really is no clear answer of what is the cheapest card to run all of the current games well at that resolution.

            • cynan
            • 9 years ago

            I think it comes down to how much you want to try and future proof.

            If you were just worried about tackling today’s games, then I think something like a 6850 (perhaps an overclocked version for a little extra good measure) should be a good bet at around $200 or just under. This will run anything out there now at 1080p, while the most difficult of games (like Metro 2033) might require you to select minimal AA settings, etc to get average frames above 30 FPS.

            If you wanted to guarantee the card will handle anything that comes out in the next 2 or 3 years at 1080p, well, that's where things get uncertain, and you may be better off going with something like a GTX 570 or one of the Cayman offerings if you're an AMD fan (or, actually, if you are connecting to your TV via a surround AV receiver and want 7.1 bitstreaming over HDMI, then you'd better avoid NVIDIA and go with a Cayman).

        • mcnabney
        • 9 years ago

        I hate to break it to you, but you are an atypical visitor to TR. One of their recent polls showed that over 2/3 of respondents planned on spending enough money on their next video card to get a 69XX or the Nvidia equivalent or better. If you think that we all run at 2560 on 24″ IPS displays you are wrong. I imagine most visitors would like to know how much GPU they need to buy to meet their needs.

        http://www.techreport.com/discussions.x/19645

      • Bensam123
      • 9 years ago

      I actually quite like this idea.

      Most people have a 1920×1200 or a 1920×1080 screen, so they can't or won't run at resolutions higher than that, but they will almost always run at that resolution. 8 megapixels is cool and all, but I don't really see most of the people on here operating at that sort of resolution any time soon.

      I emailed Geoff a few years ago about something similar, suggesting that perhaps a better way to look at which video card is best is which has the least variance. Offering the most consistent playback (as long as it's above a certain threshold) would offer the best experience, since most people notice huge changes. Keep in mind I'm not talking about v-sync, which merely attempts to tone down the video card to match a baseline and will fail at doing so if performance dips below a certain threshold.

      That would work well with a unified resolution. I do, however, really like the current benchmarks as they are; this could be an interesting additional way of looking at things (similar to the price/performance match-ups). Keep in mind this is all hypothetical, of course.
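
      (A rough sketch of that consistency idea, assuming per-second fps samples for each card; the numbers here are made up. Among cards that stay above a playability floor, rank by relative variation, lowest first.)

      ```python
      from statistics import mean, pstdev

      FLOOR_FPS = 30
      samples = {                       # hypothetical fps samples from one run
          "Card A": [58, 61, 45, 62, 59, 57],
          "Card B": [75, 38, 80, 35, 77, 36],
      }

      playable = {card: s for card, s in samples.items() if min(s) >= FLOOR_FPS}
      for card in sorted(playable, key=lambda c: pstdev(samples[c]) / mean(samples[c])):
          print(f"{card}: mean {mean(samples[card]):.1f} fps, "
                f"relative variation {pstdev(samples[card]) / mean(samples[card]):.2%}")
      ```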

      • Bensam123
      • 9 years ago

      *bump for weighting system other than fps/dollar*

    • Jigar
    • 9 years ago

    Although the HD 6950 does a very good job, I think AMD just made a GTX 480 in the form of the HD 6970: bad execution. I just hope these cards are being held back by drivers…

      • flip-mode
      • 9 years ago

      Everyone loves the 570, but the 6970, which matches up to it extremely well, gets no love. Huh?

        • Jigar
        • 9 years ago

        If you look, in some cases it's hardly 10 FPS ahead of the HD 5870. I don't think it should even be called a next-generation card.

          • khands
          • 9 years ago

          I think with the SIMD changes it really needs a lot of driver love.

          • flip-mode
          • 9 years ago

          Ditto for the GTX 570. In some cases the 570 is only 3 fps faster than the 5870. So, if we’re being consistent, the 570 is Nvidia’s… 2nd 480?

          Personally, I think the GTX 570 and the Radeon 6970 are both pretty darn great. And the 6950 too. As is the 5870, really. A year later the 5870 is still pretty beastly.

          But, to call any of these “another 480” is forgetting some of the especially painful aspects of the 480. Nvidia had to disable parts of the 480 (and apparently the 460 and 450 too) to even get the needed yields. Despite the fact that part of the 480 got disabled, it still needs 50 watts more than the 6970. It is still 140mm^2 larger than Cypress. Nvidia set out to design the fastest GPU, but AMD still prioritizes die size. So the remark just seems rather Krogothian. Anyone who expected the 6970 to beat the 580 has been letting their imagination run wild.

          Anyway, I find it pretty difficult to choose between the 6970 and the GTX 570. The 6970 has more memory, but that pretty much doesn’t matter. Average performance is a dead tie. Power consumption is almost dead even. The 6970 puts out a little bit more noise at load. 6970 has Eyefinity, 570 has PhysX. GPGPU doesn’t much come into it for me. The 6970 has a dual bios: who knows how useful that is going to be. Dunno, at the end of the day I’d probably get the GTX 570 and put the $20 saved towards a game, but I wouldn’t feel like I’d gotten a better or worse card either way.

            • PixelArmy
            • 9 years ago

            Maybe not because of performance (like you said, the 6970 is virtually a tie with the 570), but it feels like AMD "aimed low" (or had to because they weren't catching the 580). Kinda like a football team playing not to lose.

            It’s all about perception:
            480, better performance than the 5870, but hot and power hungry and late = negative reception
            570, not even the flagship and it matches the previous one, with better power consumption, thermals, cheaper and a quick refresh = positive
            6970, flagship card but it can only match the competitor’s non-flagship, even though it was released later (consequently, they had more time to work on their next gen than nVidia did) and was supposed to regain AMD dominance = negative

            I suspect the 5xx cards really caught them off guard, so they just lowered the price and went with the price/performance mantra. You know these companies love having the crown, regardless of what they actually say.

            • flip-mode
            • 9 years ago

            If you look at AMD’s own slides, the 6970 is not positioned as the flagship card. Scott didn’t show the slide, or even square things up in a chart, for that matter. Other sites paid more attention to the product positioning, I suppose. But the 5970 continues to be the flagship card for AMD, and I think it still retains the title of fastest single graphics card.

            As for AMD “aiming low” – I’m pretty sure they did exactly that, only as it pertains to die size rather than performance. And, perhaps in the end it was power consumption that limited them too. Regardless, AMD has aimed for certain die sizes ever since the R600 debacle, and after R600 that is quite understandable.

            Anyway, if Cayman had across the board improvements over Cypress rather than hit and miss, the picture would be much rosier. As it stands, it’s hard for me to be too disappointed in what Nvidia has done to correct Fermi, nor with what AMD has done to produce a whole new architecture at a larger process node than originally intended. And, AMD has priced the card very appropriately. I think the expectations were way off the charts. I personally never expected Cayman to even meet the GTX 580, much less beat it, and I’m not sure where everyone else got that expectation from.

    • flip-mode
    • 9 years ago

    I just realized that TechReport has never tested with Lost Planet 2 below 2560×1600. Interesting.

      • potatochobit
      • 9 years ago

      Capcom games are optimized well, so there is no real need

    • Thanato
    • 9 years ago

    Would really like to see Eyefinity benches.

      • Thanato
      • 9 years ago

      I don't mean to sound like I'm complaining; at the very least you guys rock. It's just a major feature of AMD cards, and I'd like to see how the GPU architecture handles it.

    • clone
    • 9 years ago

    Good review, but it requires real effort to navigate, and please, no more "dudes" ever again.

    P.S. Once the line graphs go past 3 colors they just become messy, and lastly, given CrossFire and SLI's intended usage, their results really don't need to be in any chart other than the most demanding ones.

    • Thanato
    • 9 years ago

    I see 5 monitor outputs; does it support 5 monitors? Please forgive my noobdom.

      • Thanato
      • 9 years ago

      got it,
      “The most noteworthy change here is probably support for version 1.2 of the DisplayPort standard. This version has twice the bandwidth of 1.1 and enables some novel capabilities. One of those is transport for high-bitrate audio, including the Dolby TrueHD and DTS Master Audio formats, over DisplayPort. Another is the ability to drive multiple displays off of a single output, either via daisy chaining or the use of a break-out DisplayPort hub with multiple outputs. In one example AMD shared with us, a hub could sport four DVI outputs and drive all of those displays via one DP input. What’s more, Barts-based Radeons can support multiple display timings and resolutions over a single connector, so there’s tremendous flexibility involved. In fact, for this reason, AMD has no need to offer a special six-output Eyefinity edition of the Radeon HD 6870.”

    • Ushio01
    • 9 years ago

    I can't help but be disappointed. Where's the wow factor and the amazing performance increase between generations?

    Below is a link to a page of TR's 8800GTX review, where it blows away the best single-chip cards, dual-chip cards and multi-card setups from the previous generation of both Nvidia and ATI. Where is that today?
    https://techreport.com/articles.x/11211/14

    • kilkennycat
    • 9 years ago

    …

    • Krogoth
    • 9 years ago

    Cayman performs as I had expected once clear details of its theoretical capabilities were leaked. It was unrealistic to expect it to leapfrog Cypress/Barts.

    Cayman is basically a Cypress design tweaked for improved tessellation performance, but it still wasn't enough to catch up to its GF100, GF104 and GF110 counterparts. I suspect the aggressive throttling is the largest reason why it falls short in some of the synthetics despite what the architecture can do on paper. I expect this problem to be a non-issue for normal stuff, which typically never fully utilizes all of the resources of the GPU.

    At least AMD didn't try to pull another HD 2900, GTX 480 or FX 5800U. The loaded power consumption for the 6950/6970 is a little bit less than their Cypress predecessors; not too bad.

    It kinda reminds me of the jump from X1800 to X1900. Both architectures performed about the same in the games of the day, but once next-generation titles started to take advantage of the newer architecture's resources, the gap between the two became apparent. (For the X1900XT it was shader performance; in Cayman's case it will be tessellation performance.)

    The enthusiasts who were "disappointed" at the lack of a performance increase need to realize that the GPU guys are already toward the end of the road of what they can realistically do with silicon. The GTX 480 was a painful reminder. It matters even less now that PC gaming no longer drives the market. Console ports and MMORPGs rule the roost and do not need the prowess of the current high-end GPUs for an enjoyable experience.

    • swaaye
    • 9 years ago

    It was great to see 8800GTX again. I’m still using one! I’m considering upgrading solely because I want to run SSAA well. Otherwise the 8800GTX plays everything. I only game at 1360×768 (TV) and it even ran Metro 2033 well in that case.

    I think that SSAA should get more attention in reviews. The other FSAA modes don’t address shader aliasing and that is getting to be a MSAA deal breaker in a lot of games. The new forced driver-based MLAA isn’t so great either.

    • Meadows
    • 9 years ago

    I honestly don’t remember the last time I was this underwhelmed. This is pathetic.

      • henfactor
      • 9 years ago

      I can. Your mom, last night. Boom. Roasted.

        • anotherengineer
        • 9 years ago

        lolz

        *waits for Meadows witty, yet sarcastic, and slightly cynical reply*

          • Meadows
          • 9 years ago

          Keep waiting.

            • TaBoVilla
            • 9 years ago

            sorry mate, but you have to admit that was funny =)

            • derFunkenstein
            • 9 years ago

            Depends on how often you’ve heard a “your mom” joke. It stops being funny after the 100th. Also, for Meadows, it probably stops being funny when it’s true. *rimshot*

            • sweatshopking
            • 9 years ago

            lol

      • flip-mode
      • 9 years ago

      December 7, 2010 (GTX 570 launch) seems a reasonable guess since they offer equivalent performance on average at a roughly equivalent price.

      • GTVic
      • 9 years ago

      I remember when I used to read these comments to get some intelligent thoughts from knowledgeable users.

    • Scrotos
    • 9 years ago

    I would have liked to have seen the 8800 GTX with power consumption and noise and temperatures.

    Because, actually, I’ve been looking to replace a 8800 GTX so it would have been handy to see! 😀

    That and it’s always neat to see how far we’ve come over the generations with that kind of stuff. Same as when the P4 and Q6600 were included in the CPU reviews.

    • Lans
    • 9 years ago

    …

      • Damage
      • 9 years ago

      FYI, the Radeon HD 5870 2GB handled 3-6 displays and 6-12 megapixels quite well so long as we watched our quality settings. See our Eyefinity review.

        • Lans
        • 9 years ago

        Yeah, that is along the lines of what I meant by educated guess. I could check the performance of HD 69×0 vs. HD 5870 and see if I should expect similar results as the HD 5870. But also wanted to see if PowerTune would be an issue or not and if EQAA would help (it “should” and Sacred 2 with 2X EQAA wouldn’t be as big of a compromise).

    • PRIME1
    • 9 years ago

    …

      • dpaus
      • 9 years ago

      Hey, he’s not dead after all!

        • flip-mode
        • 9 years ago

        Nvidia relocated him to [H] for the last little while.

      • StuG
      • 9 years ago

      Prime1, finally getting over the butthurt that was the GTX480 eh?

      • TaBoVilla
      • 9 years ago

      dude! you are not dead! =) glad you are back!

      • Krogoth
      • 9 years ago

      Epic fail?

      Nah, more like realistic expectations.

      Even nvidiots weren't exactly going nuts over the GTX 580's release. They were like, "this is what the GTX 480 should have been at launch!"

      Anyway, the real meat of the market is the 6850/6870 versus the 460 1GB. The battle here is heavily contested. Both sides have their pluses and minuses. The greatest plus for customers is competitive prices. 2-megapixel gaming, even with some AA/AF thrown in!

        • dpaus
        • 9 years ago

        Did you just say “compunative prices”??

      • anotherengineer
      • 9 years ago

      LOLZ

      HE’S ALIVE

        • dpaus
        • 9 years ago

        Quickly, Igor, the oaken stake!!

      • Duck
      • 9 years ago

      ZOMG!!

      • cavedog
      • 9 years ago

      My god i thought we lost you forever. Thanks for coming back. It’s not the same without you 🙂

      • dpaus
      • 9 years ago

      BTW, have you seen snakeoil?

      • michael_d
      • 9 years ago

      Epic post!

      • swaaye
      • 9 years ago

      Are you saying that NV40 disappointed you?

      edit: Oh you were referring to 6850/6870, I bet.

      • can-a-tuna
      • 9 years ago

      Won’t you just die already?

        • derFunkenstein
        • 9 years ago

        Can someone ban this guy? ZooTech got banned for much less.

    • Kaleid
    • 9 years ago

    Bring on 28nm

      • StashTheVampede
      • 9 years ago

      ^^^Bingo^^^

      Brand new tech on a lower-yielding, lower-selling part. Get the sucker shipped, learn from it, and bring on v1.1 in the spring. When the next set of Radeons ships, it'll be with this tech (refined) at faster/lower speeds.

      It’s the “tick-tock” cycle from AMD, just in GPU form.

    • ultima_trev
    • 9 years ago

    Another great review.

    Overall, I'd say the HD 69xx series is quite disappointing. I know the HD 6970 was never meant to compete with the GTX 580, but I was hoping it would do a little more to outshine the GTX 570 and the previous high-end champ, the GTX 480. Also, if HAWX 2 is a clear indicator of tessellation implementation for future titles, AMD has some work to do there… I guess this one is still in the bag, though.

    Still, with 1536 shaders and 96 TMUs, clocked at 880 MHz no less, I feel this architecture is being bottlenecked by too few ROPs and too small a memory path. Higher-end Radeons have been using a 256-bit memory bus since what, the 9700 Pro in 2002? I'm sure with a 512-bit bus and 64 ROPs, this architecture would have crushed Fermi. I guess on that note, if HD 6950 CrossFireX results are an accurate foretelling of the HD 6990's performance, and given the improvements made to CFX with the HD 68xx, then AMD should have a strong foothold in the niche enthusiast market.

      • bittermann
      • 9 years ago

      I agree… I'm a little disappointed as well… was hoping for more of a challenger to the 580… let's hope drivers can boost the performance in the next few months.

      • flip-mode
      • 9 years ago

      I disagree! I'm thrilled the 6970 targets the 570/480. Competition and price moves at the $500 level do nothing for me. Competition at saner prices, which will possibly drive $350 cards down below $300 sooner rather than later and push the lesser cards' prices down at the same time, is just what the doctor ordered.

      Let the GTX 580 have the high end all to itself and let AMD battle it with its dual-GPU cards. That's perfectly fine by me.

      "if HAWX 2 is a clear indicator of tessellation implementation for future titles, AMD has some work to do there"

      If that's the case, then we can all be disappointed in tessellation itself, both in what it gets used for and the results that are delivered. Have you looked at the images produced? Not worth a damn.

        • bittermann
        • 9 years ago

        Not disagreeing with any points you’ve stated was just hoping for more from ATI…

        • khands
        • 9 years ago

        If it had been $400 and around 2% slower than the GTX 580 it would have dropped the entire lineup.

        • Silus
        • 9 years ago

        Antilles won’t be fighting with the GTX 580, but rather NVIDIA’s own dual GPU card. Given what happened with the release of the GTX 580/570, we’ll have to wait and see which one comes out first.

          • Lans
          • 9 years ago

          That one seems easier to predict… *points at GTX 580 SLI and HD 69×0 CF results*.

          • flip-mode
          • 9 years ago

          Again, I really don’t care what fights happen at $500, and I can’t bring myself to care who wins those fights either.

      • OneArmedScissor
      • 9 years ago

      The 6970's TMU, bandwidth, and texture fillrate increases are all roughly proportional compared to the 5870, about 20% each. The pixel fillrate hardly budged.

      But it's 20% faster or more when pushed. AvP is 20% faster, and Metro 2033, the only game that really hammered both of those cards, is about 50% faster. The improved texture fillrate seems to have translated well, combined with whatever architectural benefits there may be.

      It doesn't look bandwidth- or ROP-constrained so much as finally balanced out compared to the 5870. That shouldn't be a surprise, considering how only minor reorganization produced the smaller and roughly equal 6800s. Something was amiss.

      The progression from 4000 > 5000 > 6000 may not make sense on paper, but it’s the end result of a long process of making the best of what they’ve been given, as they got it.

      Screwy 40nm? Just double the 55nm design and ignore major adjustments that would complicate it (5800s). Cancelled 32nm? Go back, fix the problems (6800s), then take the adjustments made for a much more complex chip, and apply them to a slightly more complex one (6900s).

      You only need to look as far as Nvidia to see which order of operations fared better. If AMD used a design based on more ROPs and a wider bus, they’d just have a similarly expensive and problematic monster chip.

      Regardless of those issues, it’s been a recurring trend to progressively improve the architecture of cards by making them faster overall with fewer ROPs and less bandwidth needed. For example, the original 8800 chips had 24 ROPs and up to 100GB/s of bandwidth, but then the faster revisions that followed all had 16 ROPs and less bandwidth, combined with more shaders, TMUs, and higher clock speeds.

      The GTX 580 would probably have like 100 ROPs and 400GB/s if they’d had to keep that in proportion lol.

        • flip-mode
        • 9 years ago

        Nice post.

    • CaptTomato
    • 9 years ago

    Would a stock e8400 hold back a 6950?

      • Freon
      • 9 years ago

      I would say yes, unless you run at 2560×1600 on a 30″ monitor with 8x AA, but all the CPU limiting options turned way down.

        • CaptTomato
        • 9 years ago

        Bah, I feel like waiting.
        Prices in AUD.

        6970 $540
        6950 $380
        6870 $280
        6850 $220

        I might even try and hold out and buy another “box” in Q3-4 next yr.

          • OneArmedScissor
          • 9 years ago

          You’d finally be able to get a totally different computer at that point than what’s been the status quo for about 2 years now. Might as well wait.

            • CaptTomato
            • 9 years ago

            Might be the way to go, I always find that single cards aren’t fast enough/good value, yet I’m reluctant to XF as I couldn’t do it atm with dualslot 4850+soundcard+HDTV tuner card in the way.

    • Sunburn74
    • 9 years ago

    Well great looking cards again. Lets wait for the upcoming price war.

    • bfar
    • 9 years ago

    Refreshes on the same process node were never going to be particularly radical in terms of raw speed. If the next big releases go down to 28nm, we should be in for some real fun!

    However, I’m pleased to see AMD using 2GB of vram. HiRes/Multi displays are getting popular now.

    One more observation – any resolution higher than 1080p+AA will potentially bottleneck a 1GB card, give or take. As such, benchmarks of 1GB cards at 1600p+AA don’t really give reliable comparative data to estimate how they’ll perform at more common resolutions.

    • Umbragen
    • 9 years ago

    Meh, since we’re all stuck in the XBox doldrums, I can wait and see what next fall brings.

      • Sunburn74
      • 9 years ago

      Yeah… I still have no real reason to ditch my 5850. Not that these new cards don’t trash it thoroughly but rather that my 5850 trashes all new games thoroughly. I’m already getting like 80-112 FPS on starcraft 2 maxed out at 1920×1080. Geez…

        • anotherengineer
        • 9 years ago

        Indeed.

        No reason to ditch my 4850 lol

        Unless I find a site that shows a 6850 destroying it in a Source Engine benchmark.

        • Buzzard44
        • 9 years ago

        Yeah, my 9800GTX runs SC2 at ~45-50 fps with everything maxed at 1920×1080. I won’t be upgrading for quite a while.

    • TaBoVilla
    • 9 years ago

    I dunno what to say..

    First, great review as always, Mr. Damage. From Cayman, honestly, I was expecting more. I mean: one year later, the same but more mature process, a larger die size, a new architecture, dual rasterizers, possibly higher clocks, new power management, etc., only to get marginally better performance in most stuff than the year-old HD 5870, eleven hundred product-numbering units below it?

    What they've done with Barts seems far more impressive now, providing 5850-level performance in 250 mm². What's more intriguing is what made AMD create this almost-400 mm² half-breed rather than using Barts-style 5-way VLIW + optimizations on a Cypress-sized chip.

    One thing is certain: AMD is not selling these at competitive pricing. Why is nVidia losing their butts selling far larger dies on the GTX 570 at $350? These new cards would have been the bomb at $250 for the 6950 and $300 for the 6970; this is just giving nVidia air.

      • Sunburn74
      • 9 years ago

      Agreed. But I figure just wait and let the prices settle over time.

      • Goty
      • 9 years ago

        I'm of the opinion that the VLIW5 shaders didn't have any more life in them, i.e. they had reached their limits when it comes to scalability. If AMD could have just slapped more of the old shaders onto a die with the tweaked setup engine and gotten the same performance as the new architecture, they would have.

        • khands
        • 9 years ago

        They may have pulled a GF104 with Cayman, hopefully the 7000 series cards do better than the GF110.

    • dpaus
    • 9 years ago

    Given the showings for SLI and CrossFireX in these reviews, and especially the way CrossFireX scales, I’m starting to wonder how long it will be before AMD creates a “single GPU” chip with 2 or even 4 Caymans on a single die. Or even a card with 4x 5000-series GPUs in an on-card CrossFireX configuration.

    • obarthelemy
    • 9 years ago

    Is there any impact from doing multiscreen? I.e., I usually play on my main 1920×1200 screen while watching video on my second, 1680, screen. My MB has an IGP.

    1- Will playing video off the video card impact game performance on my main screen? By how much?
    2- Should I use the IGP for video instead?

    • dpaus
    • 9 years ago

    After seeing the direction AMD is going with the new Catalyst Control Centre, I’m thinking that – if they add CPU tuning to it, and some tweaks to optimize your CPU/GPU combination for your specific usage patterns – it could become a significant differentiator in the GPU wars. And one that Nvidia can’t match.

    • flip-mode
    • 9 years ago

    Agree with #15. Scott, bless your caffeine-stimulated heart, does it not seem reasonable to remove HAWX 2 from the benchmark suite? It is a poor video card benchmark because:

    1. Nvidia requested its inclusion
    2. Cards don't struggle with it anyway, so it just ends up…
    3. Scaling on Radeons is fubar. WTF? There's biased and then there's broken. This game could actually be both.

    Crysis, though not DX11, could maybe replace it? There is much to recommend keeping Crysis around. It is still able to make cards sweat, it provides a historical reference point, and it's as good-looking a game as anything else out there.

    Edit: revised the whole post. 45 fps on a 6850 @ 2560×1600 with 4x AA? 44 fps on a 5870? Just out of curiosity, can you run the same test on a 5670 1GB and see if you get… 43 fps? Can AMD be bothered to comment on any of this?

      • Deanjo
      • 9 years ago

      Well then, maybe they should set the drivers for the same image quality and re-bench.

        • flip-mode
        • 9 years ago

        Why bother? It's useless. A 6850 runs the thing at 45 fps with all the settings cranked. It's a pathetic benchmark. Including it has two effects:

        1: It tells me all I need is a 6850 to run the game maxed out at high res, which is great, if it were not for…

        2: Throwing the cumulative averages waaaaaaaaay out of whack, which is really detrimental when the point is calculating value.

          • derFunkenstein
          • 9 years ago

          It’s also a relatively popular game. Shouldn’t popular games be tested?

            • TheEmrys
            • 9 years ago

            Bring back Counterstrike: Source then!

            • derFunkenstein
            • 9 years ago

            Maybe I should say “popular and recent”.

            • flip-mode
            • 9 years ago

            The Sims and WoW are popular too. I don't think benchmarks are chosen because they are particularly popular; rather, they are (hopefully! very hopefully!) chosen because they can show some meaningful differentiation among the available cards. What we're lacking with HAWX 2 is "meaningful", at least in my opinion. I'd say including it does more harm than good given the skew it throws into the cumulative average.

            • derFunkenstein
            • 9 years ago

            I would also hope they’d test games I actually cared about. Yes, they should show some differentiation – and guess what, HAWX2 definitely does. Just because you don’t like it doesn’t mean it’s not noteworthy.

            note: I use an all-AMD setup (Phenom II X4 and a Radeon graphics card) and play HAWX2 anyway. It’s enjoyable as a dogfighter arcade kind of game.

            I wouldn’t mind if they tested The Sims 3 and WoW, assuming they’re tested relatively easily. They are quite popular, especially WoW on this site. And WoW cranked up to max does look nice along with putting a strain on the system.

            • flip-mode
            • 9 years ago

            I feel like you're missing my point. It's not that I "don't like it" – I've never played it, so I don't know – nor is it that it doesn't show differentiation, but what's the value in the differentiation shown when, at the highest settings and resolution, a 6850 – pretty much the bottom of the stack of gaming cards worth buying – churns out 45 fps? What does that tell you? How does that influence your purchase? That's my point (my only point, now that my other point has been retracted). HAWX 2 does nothing to inform the purchasing decision. All HAWX 2 shows is that HAWX 2 isn't a demanding enough game to stress even the lowest card in the set. Yet your reasons for keeping it are…? What useful info does the game tell? How would that game influence a card-purchasing decision?

            • derFunkenstein
            • 9 years ago

            No, I mean because you don't like the result. It's noteworthy regardless, specifically because people do want to play this game. Still playable? Sure. Not quite as pretty? Who knows.

            • flip-mode
            • 9 years ago

            What result don’t I like? I never said I didn’t like the result.

            • derFunkenstein
            • 9 years ago

            Then why are you arguing for its removal?

            • flip-mode
            • 9 years ago

            Wow, that didn't answer my question at all. What results are you saying I don't like? And why are you asking me why I'm arguing for its removal when my argument is already laid out in its entirety in the preceding posts? If you're accusing me of bias against Nvidia, you need to quit being cowardly about it and just say so. Then I can proceed to point you to all the threads where I've argued for the value of the GTX 470, much to the protests of pretty much everyone, and to my comments on the GTX 570, which were very positive, as well as my arguing in favor of keeping factory-overclocked GTX 460s in reviews.

            • sweatshopking
            • 9 years ago

            I don’t think you’re biased. You’re quite subjective. it’s really too bad, as nvidia clearly sucks. ;P

          • PixelArmy
          • 9 years ago

          Read, people, read!!! This was in the last review, which everyone was complaining about as well.

          …

            • flip-mode
            • 9 years ago

            MY BAD! Many apologies! I am appropriately ashamed!

            • StuG
            • 9 years ago

            I have to say, flip-mode, I agree with you. I don't really see a huge point; it's like when Borderlands was still refusing to die from the testing suite. The game itself is not only extremely playable on most hardware out now, but also purposefully made to run better on an Nvidia system. Moral beliefs on this subject aside, whether I agree or disagree, the fact is that the game is totally playable and not giving us any real data. If all cards were struggling with it, and Nvidia was doing better because of their "relationship" with the title, I could understand its inclusion. But when you have to exclude something from the results just to see the relative situation, why bother keeping the game in the suite at all?

            • PixelArmy
            • 9 years ago

            1) DX11
            2) Different style game
            3) Somewhat new
            4) Supposedly popular

            You might not even care/agree (and that's fine), but in a way it does bring something to the table. Crysis, on the other hand, is none of those and, being an FPS, is easily "covered" by something like Metro 2033.

            Personally, I'd much rather they bench…

      • phez
      • 9 years ago

      Environment quality is a required part of any flight game, combat, sim or otherwise. As a fan of these games, I couldn't give two shits about the politics behind the benchmark, if indeed there ever were any aside from whining marketing departments.

      The benchmark is entirely appropriate for the game, and it showcases AMD's weakness in this area, as the Lost Planet 2 results also attest.

    • grantmeaname
    • 9 years ago

    Shouldn't the Metro 2033 graphs be labeled 1920×1200?

    • flip-mode
    • 9 years ago

    Hmm… I’m going to do this by the benchmark:

    Hawx 2: I don’t like it as a benchmark but… the GTX 460 768 is the unambiguous winner of this one. 58 fps at 2560×1600 with all the details cranked.

    LP2: 570 gets the pick at 2560×1600. If $350 is too rich then ???? TechReport has only ever tested this game at 2560×1600 which… is a bummer

    Civ 5 late game: Gotta go with the 6870 or else the 570. The 570, being 33% faster than the 6870, is probably worth the extra money if you spend all your time playing Civ 5 on a 30″ monitor.

    SC2: 6870 (41fps) or the 6950 (48fps). The 6950 is probably worth the extra $60 here, unless you're happy with the 41 fps the 6870 is already giving (which I would be). The 570 (47fps) doesn't justify its price in this game.

    BFBC2: 6870 for 35fps or the 6950 for 40fps. 6950 is probably worth the extra $60 here to get the min fps bump. 570, again, doesn’t show value in this game.

    Metro: 6950, solid, no regrets, no close alternative. It beats the 570 for $50 less.

    AvP: 6950 below 2560×1600 (580 is the only card to break 30 fps at 2560×1600). The 5870 also does quite well below 2560×1600, but you’d have to find one priced right.

    Dirt2 DX11: 6870, solid. Why pay more? 6870 delivers 45 fps at 2560×1600 with 8xAA.

    Power consumption idle: 460s look great, 5870/68xx look real good
    Power consumption load: 6850 is awesome, 6870 good, 5870 nothing to worry about, 6950 looks terrific, really.

    Idle noise: nothing to comment on
    Load noise: 6950 is very quiet, 570 is OK, 5870 loses some luster but is quieter than the 6970. …

      • flip-mode
      • 9 years ago

      And, I’d like to see the overall averages with Hawx 2 removed. That’s something I can do myself when I get time and if I really feel the need, but Hawx 2 ruins the work done to show the averages.
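
      (A tiny worked example of the point here: one easy, lopsided title can drag a cumulative average around. The fps numbers are placeholders, not TR's results.)

      ```python
      # Compare overall averages with and without one outlier game.
      card_a = {"HAWX 2": 95, "Metro 2033": 34, "BC2": 52, "Dirt 2": 60}
      card_b = {"HAWX 2": 160, "Metro 2033": 33, "BC2": 50, "Dirt 2": 58}

      def avg(results, exclude=()):
          kept = [fps for game, fps in results.items() if game not in exclude]
          return sum(kept) / len(kept)

      for name, res in (("Card A", card_a), ("Card B", card_b)):
          print(f"{name}: {avg(res):.1f} fps with HAWX 2, "
                f"{avg(res, exclude=('HAWX 2',)):.1f} without")
      ```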

    • codedivine
    • 9 years ago

    A little disappointed that the 6950 is a long 10.5” card. I hope the card companies come up with custom 6950s in shorter sizes 🙁

      • codedivine
      • 9 years ago

      Oh and I wanted to know, is there any info about cache sizes and register files? And is the LDS size per SIMD the same as Cypress? (Kind of important information for a GPGPU focused guy like me).

    • Firestarter
    • 9 years ago

    Yeah, but that's what the 6990 is for, right? Something about it being more economical to focus development on the mainstream cards and produce the flagship card by putting two of those GPUs on one card.

    edit: meant as reply to #21

    • Firestarter
    • 9 years ago

    I see a few people are disappointed by Cayman's performance. Is that because the hype was built on the expectation of a flagship GPU at a new process node?

    I for one am just happy to see both companies duking it out and being generally competitive. The only thing that leaves a sour taste is the suspicion that Nvidia is more aggressively manipulating the perceived performance of its GPUs. That leaves room for AMD to be the good kid.

    As for being future proof, I guess few companies would be stupid enough to completely buy into the TWIMTBP thing and shoot themselves in the foot with excessive tessellation.

      • potatochobit
      • 9 years ago

      Because basically it's a sidegrade at a cheaper price point. Most people wanted an upgrade and were OK with paying a little more.

      Of course, getting the same performance for $100 less is not bad in itself if you are using an older GPU.

    • BlackStar
    • 9 years ago

    The 6950 is very very interesting, a clear win for Ati at the $300 price point.

    Between the 570 and the 6970, the latter seems to be the more forward-looking solution (2GB memory, multi-display, potential driver optimizations) and it tends to perform better in the games that actually matter. Interestingly, other review sites don’t paint the 570 in such rosy pictures and find it offers lower performance and higher power consumption than the 6970. I guess this depends on the benchmarks used.

    TL;DR Ati dominates for the third generation in a row (4xx0, 5xx0, 6xx0). Nice job!

      • Deanjo
      • 9 years ago

      Dominates 3 Gens in a row? Did you not read the review?

        • BlackStar
        • 9 years ago

        Did *you* read the review? Check the 4870 performance. Check the 5970 performance. Check the 69×0 crossfire performance. Now tell me that Ati doesn’t dominate.

          • Deanjo
          • 9 years ago

          lol, OK, put a couple of nvidia cards in SLI. I should hope two GPUs could beat one. GPU vs. GPU, ATI has lost in overall performance for the last 3 series.

          …

            • anotherengineer
            • 9 years ago

            Meh, it's all about bang for your buck nowadays.

            And Scott, I noticed in the review you ran L4D2 to test power consumption, and yet you didn't put any L4D2 benchmarks in?!?!

            For shame 🙁

            edit: this is the chart I like the best
            http://www.techpowerup.com/reviews/HIS/Radeon_HD_6950/31.html

            • BlackStar
            • 9 years ago

            Yes, obviously. The 69×0 are better than the opposition both standalone and in Crossfire mode.

            • Goty
            • 9 years ago

            You can go ahead and pay more than $100 more to play the same games at the same settings I do, that’s fine. More 6970s for the rest of us!

            • Deanjo
            • 9 years ago

            You would be pretty hard-pressed to beat the $/performance ratio I got with my GTX. I won $2k on a $5 bet and used it for my GTX 580 setup and a 30″ monitor.

            • anotherengineer
            • 9 years ago

            Nice, although I would have dumped that 2k towards my student loan anyway 🙁

            ooooo a source benchie with a dismal list of comparison cards.

            http://www.pcper.com/article.php?aid=1051&type=expert&pid=14

            • indeego
            • 9 years ago

            What was the bet?

            • Deanjo
            • 9 years ago

            Video lottery machine winnings (aka modern slot machine). Dumped a $5 bill into it while waiting for my steak sandwich to arrive at the local pub.

            • indeego
            • 9 years ago

            How much do you estimate you spent up until that point?

            • tfp
            • 9 years ago

            Well, I'm guessing at a pub, say, 3 beers and the sandwich, so 25 dollars give or take, plus tip.

            • Deanjo
            • 9 years ago

            I spent exactly $5. I don't gamble as a rule. Another $20 for the steak sandwich and the Caesar.

            • tfp
            • 9 years ago

            I was pretty close in my estimation then.

            • BlackStar
            • 9 years ago

            Ah, that explains a lot. 😉

      • pogsnet
      • 9 years ago

    • KarateBob
    • 9 years ago

    Typo on page 3: "but for those who are familiar, EQAA simply stores fewer color samples than it does color samples" should say "…fewer color samples than it does coverage samples".

      • sparkman
      • 9 years ago

      (coverage sample typo)
      bump

        • Damage
        • 9 years ago

        Fixed. Thanks.

    • techworm
    • 9 years ago

    You have included two of the most ridiculously Nvidia-only games (Lost Planet 2, HAWX 2) in the test battery and concluded the 6970 and GTX 570 are tied!

      • ClickClick5
      • 9 years ago

      AND Metro 2033.

    • Palek
    • 9 years ago

    Anandtech makes a very good point about 6950/6970 performance. The major architectural overhaul in Cayman quite likely means that AMD driver developers are not even close to maximizing the performance of these products. Both nVidia and AMD/ATi made significant performance improvements over time via driver improvements following major architectural changes. It’s not unreasonable to expect the same from AMD this time around.

    Wait and see is the correct approach on this one.

    • beck2448
    • 9 years ago

    Disappointing after all the hype. I’m going with 580.

      • TheEmrys
      • 9 years ago

      Kind of a waste of money, then.

    • Voldenuit
    • 9 years ago

    Anand’s 570 was 3.3 dB louder than their 6970. I’m guessing there’s enough sample variation in the cards and variance in test rigs and test equipment that the two cards are probably equivalent (or at least very comparable) acoustically.

      • kroker
      • 9 years ago

      Yes, and in the Anandtech review the 570 also consumed more power in load than HD 6970 (20W more in Crysis Warhead, and 85W more in Furmark (!)).

        • Freon
        • 9 years ago

        I sometimes wonder if this doesn’t vary due to interaction with other fans in the test systems.

        • Goty
        • 9 years ago

        A lot of that is going to come down to the Powertune settings used in each review. It’s been largely ignored by most publications, which annoys me to no end.

    • BoBzeBuilder
    • 9 years ago

    Well that was disappointing.

    • TravelMug
    • 9 years ago

    For some strange reason the thing that stuck with me from this review is that I should upgrade my aging 4850 512MB to a shiny new GTX460 1GB 810Mhz.

      • MadManOriginal
      • 9 years ago

      It's not strange… it's perfectly logical if you play at 1920x res for most games and are willing to use less-than-max details or low or no AA when necessary; for 1680×1050 the GTX 460 is way more than sufficient.

    • ShadowTiger
    • 9 years ago

    I am very sorry for skipping to the conclusions page this time, but I swear I was planning to (and still will) read the whole thing.

    Thanks for the awesome article!

    • StuG
    • 9 years ago

    *Pets 5870 Crossfire*

    They perform pretty well compared to today's top end, considering I've enjoyed them for over a year now. I am happy 🙂

    • bdwilcox
    • 9 years ago

    …

      • ssidbroadcast
      • 9 years ago

      Or maybe watching too many Pauly Shore movies.

      • green
      • 9 years ago

      Pretty sure it's in reference to comments in TR's previous review, where a certain group was asking for the Metro 2033 results to be effectively removed, as it gave Nvidia an unfair advantage; they believed PhysX never really gets disabled (hence the 6870/6850 numbers looked much worse), resulting in a boost to any Nvidia card (whereas AMD cards would offload onto the terrible CPU implementation of PhysX).

      I'm guessing for this review they'll be saying AMD would annihilate Nvidia if PhysX could be properly disabled, resulting in a much better value proposition for AMD.

      Either way, the main conclusion I draw from this review is that I'm still waiting on the 6850s to come down.

      • gecko575
      • 9 years ago

      This reminded me of Bill and Ted's Excellent Adventure. Awesomeness like this requires one to read it out loud in a Keanu Reeves voice. Party on, dudes.

        • DrDillyBar
        • 9 years ago

        hahaha.

      • etrigan420
      • 9 years ago

      Fast Times at Ridgemont High…

    • potatochobit
    • 9 years ago

    17 pages >:o

    So it looks like my little 6850 is a good value.
    Now the question is: does CrossFire make sense, or should I sell it off and get a bigger single GPU in the future?
    Those are all stock benches, right? Time to wait and see what OC numbers pop up.

      • NarwhaleAu
      • 9 years ago

      Crossfire all the way. You end up with a huge jump in performance for only a little more than a 6950.

      • cynan
      • 9 years ago

      I just hope that the way CrossFire is handled in the BIOS/drivers has been implemented better than with the 4800 generation. That was a total AMD fail, and not something that gets mentioned much. Does anyone know if this is the case?

      When I first got my dual 4850s (just after launch), CrossFire was great in most games. Then after Cat 10.5, CrossFire no longer worked. Period. Full stop. (Well, except in Metro 2033 for some reason, but the dozen other games I have didn't work, including popular titles such as COD:MW2, Fallout, etc.)

      It turned out that some early BIOS versions broke CrossFire (and Catalyst-implemented overclocking) after Cat 10.5, while many cards released further after launch continued to work. After flashing my cards to a later BIOS from another manufacturer (potentially dangerous and warranty-voiding), CrossFire worked for me again in most games. And AMD is doing nothing about it (though I suppose there are only a handful of end users in this situation who actually care).

      Long story short, when the 4850s came out I thought I couldn't go wrong with the performance per dollar (I got the 4850s for $150 each at launch). But, as explained above, after Catalyst 10.5 I was in for a rude awakening.

      If the 6000 series ends up with something like this down the line, then, while CrossFire looks tempting now, it may end up being a huge PITA. In the end, with CrossFire you introduce more complexities and just increase the likelihood of something like this happening.

    • NarwhaleAu
    • 9 years ago

    Glad to see you approve of the 6950 purchase. I don’t feel so bad now about buying it around 15 minutes after launch.

    • Crayon Shin Chan
    • 9 years ago

    First in! I’ll go for a 6870 then
