XFX’s Radeon HD 6950 1GB takes on Zotac’s GeForce GTX 560 Ti AMP

There are many great rivalries. Celtics and Lakers. Britney and Christina. Camaro and Mustang. GeForce and Radeon. As usual, the automotive analogy is the most applicable to the enthusiast-class PC hardware we have for you today: XFX’s Radeon HD 6950 1GB and Zotac’s GeForce GTX 560 Ti AMP. Each is a mid-range muscle card targeting the performance crowd. The graphics engines in both have been turbo-charged beyond stock settings to offer a little more horsepower when you put your foot down. You’ll pay about the same for each of ’em, too.

While some folks might be inclined to choose between the two based on a near-fanatical religious devotion to AMD or Nvidia, true enthusiasts tend to be more pragmatic. We want to know which card delivers the best in-game frame rates, lowest power consumption, and quietest noise levels. We’re curious about overclocking, and we want to know if XFX or Zotac throws in any extras to sweeten the pot.

To answer these important questions, we’ve arranged an epic throwdown between the XFX Radeon HD 6950 1GB and Zotac GeForce GTX 560 Ti AMP! Edition. Read on as the mid-range rivalry between AMD and Nvidia—and board partners XFX and Zotac—manifests in this latest generation.

Before digging in, we should note that this is not meant to be an exhaustive analysis of the GPU silicon that powers these two cards. If you’re curious about the underlying architecture behind the Cayman GPU in the Radeon HD 6950, hit up our initial review of the 6900 series. Our review of the GeForce GTX 560 Ti has all the gory details on that card’s GF114 graphics chip. Today, we’re going to focus on what these particular XFX and Zotac models bring to the table.

Card                XFX Radeon HD 6950 1GB OC            Zotac GeForce GTX 560 Ti AMP
GPU                 Cayman                               GF114
GPU clock           830MHz                               950MHz
Memory clock        1300MHz                              1100MHz
Memory bus width    256-bit                              256-bit
Memory size         1GB GDDR5                            1GB GDDR5
Outputs             2 DVI, 2 Mini DisplayPort, 1 HDMI    2 DVI, 1 Mini HDMI*
Warranty length     Two years**                          Two years**
Street price        $269.99                              $267.99

*  A Mini HDMI-to-HDMI adapter is included in the box.
** Upgraded to a lifetime warranty if the card is registered within 30 days of purchase.

The key similarity here is price. Both cards cost around $270, making them direct rivals just north of the typical mid-range graphics sweet spot. They’re also at parity on the memory front. Each card has 1GB of GDDR5 RAM tied to a 256-bit memory bus. That’s where the similarities end, though.

Obviously, these two beauties are based on very different GPUs. Nvidia’s GF114 graphics chip was built specifically for mid-range cards like the GeForce GTX 560 Ti, and the lights are on in all of the chip’s functional units. With the Radeon HD 6950, AMD uses the same Cayman GPU that anchors the pricier 6970. Two of the GPU’s SIMD blocks are disabled in the 6950, cutting the chip’s effective shader count from 1536 to 1408. This de-tuned silicon can be found in both 1GB and 2GB flavors of the 6950; the 1GB variant was introduced primarily to compete with the 560 Ti, setting up today’s clash.
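The arithmetic behind that 1408 figure is simple enough. Here’s a quick sketch, assuming Cayman’s full complement of 1536 stream processors is spread across 24 SIMD engines (the SIMD count comes from AMD’s published Cayman specs, not from anything in this review):

```python
# Quick sanity check on the 6950's shader count: Cayman ships with 1536
# stream processors, and the 6950 has two SIMD engines fused off.
FULL_SHADERS = 1536      # full Cayman, as used in the Radeon HD 6970
TOTAL_SIMDS = 24         # SIMD engines in a full Cayman (assumed from AMD's specs)
DISABLED_SIMDS = 2       # fused off in the Radeon HD 6950

shaders_per_simd = FULL_SHADERS // TOTAL_SIMDS                  # 64 per SIMD
hd6950_shaders = FULL_SHADERS - DISABLED_SIMDS * shaders_per_simd

print(f"Stream processors per SIMD: {shaders_per_simd}")        # 64
print(f"Radeon HD 6950 shader count: {hd6950_shaders}")         # 1408
```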

XFX and Zotac make versions of the GeForce GTX 560 Ti and Radeon HD 6950 that stick to the base clock speeds defined by Nvidia and AMD. The particular models we’re looking at today pack a little more oomph under the hood, however. XFX boosts the Radeon’s core and memory clocks to 830 and 1300MHz—a modest bump from stock clocks of 800 and 1250MHz. The GeForce has been pushed even harder: Zotac sets the GPU clock at 950MHz and the memory at 1100MHz, up from default speeds of 822 and 1000MHz.
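For reference, here’s a quick sketch of the percentage bumps those factory clocks work out to. The figures are pure arithmetic on the clock speeds quoted above; nothing here is measured:

```python
# Percentage increase of each card's factory clocks over the reference speeds
# quoted in the text. Pure arithmetic, no benchmarking involved.
cards = {
    "XFX Radeon HD 6950 1GB OC": {"core": (800, 830), "memory": (1250, 1300)},
    "Zotac GTX 560 Ti AMP":      {"core": (822, 950), "memory": (1000, 1100)},
}

for name, clocks in cards.items():
    for domain, (stock, boosted) in clocks.items():
        bump = (boosted - stock) / stock * 100
        print(f"{name}: {domain} {stock} -> {boosted} MHz (+{bump:.2f}%)")
# XFX works out to +3.75% core / +4.00% memory;
# the Zotac to roughly +15.6% core / +10.0% memory.
```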

Comparing clock speeds between cards that use entirely different GPUs isn’t terribly helpful without associated performance data, which we’ll get to in a moment. For now, let’s focus our attention on another point of differentiation: display outputs. The Radeon is loaded, offering dual DVI outputs alongside a full-size HDMI jack and a couple of Mini DisplayPort, er, ports. Zotac offers fewer outputs and shuns DisplayPort entirely, instead coughing up a pair of DVI ports and a Mini HDMI connector.

Graphics card makers can differ quite a bit when it comes to warranty coverage, but XFX and Zotac are closer than one might expect. Both cards come with two-year warranties by default. Register with the respective companies within 30 days of purchase, and your coverage will be upgraded to a lifetime warranty. With XFX, that warranty is a “double lifetime” deal that covers the card through its first resale, adding a little extra value for second-hand buyers. You can’t pass on the GeForce’s lifetime warranty if you sell the card.

XFX warms up the Radeon HD 6950 1GB

It seems like every time we review a new Nvidia graphics card, nitrous-infused variants with substantially higher clock speeds are available from the start. We end up testing those cards because they’re representative of what users can buy, but the practice doesn’t sit well with AMD fanboys who haven’t come to terms with the fact that most Radeons run at stock speeds. Even the ones that are “factory overclocked” typically stay close to home. Case in point: XFX’s Radeon HD 6950 1GB.

This Radeon’s GPU and memory clocks have been raised by 3.75% and 4%, respectively, so we’re not looking at much of an upgrade over stock speeds. That could mean the card has plenty of headroom to exploit—something we’ll explore in our overclocking tests a little later in this review.

Such a modest increase in clock speeds doesn’t earn the Radeon a wicked-cool suffix like Xtreme, OC, or Winning! Indeed, the only mark identifying the card’s minor clock bump is the HD-695X-ZDDC model number. Otherwise, this 6950 looks identical to a stock-clocked XFX model that costs $30 less.

Rather than using AMD’s reference cooler for the 6950, XFX swaps in a custom dual-slot design wrapped in a stealthy black skin. The exterior’s matte surface has a menacing edge, and the drilled-out holes at the rear of the card give it a nice gun-barrel look. That venting should also provide a path for airflow when the card is packed tightly next to another in a CrossFire tag team.

Speaking of packing things tightly, we should note that XFX’s decision to flex its pipes outside the cooling shroud adds 0.4″ to the height of the card. The Radeon measures 4.8″ from the base of its PCIe slot to the top of the tallest pipe, which may create clearance problems in low-profile Mini-ITX and home-theater PC enclosures. The extra height is unlikely to be an issue in typical tower enclosures, and the card’s 9.5″ length should be easy to accommodate.

Next to the metal skin and copper pipes, the Radeon’s smoked plastic fan blades look a little low-rent. There are twice as many fans here as on the GeForce, though. The 80-mm units have 11 blades each, so they should be able to move a lot of air while spinning at relatively low (and quiet) speeds.

Some of the airflow generated by the Radeon’s fans is directed outside the system via a small vent in the card’s back plate. The almost-floating XFX logo is a nice touch.

That very nice array of display outputs grants the ability to power five monitors with a single card. You can set up an Eyefinity array for gaming or enjoy a mass of desktop real estate spread across a multi-screen setup befitting the Batcave. Even if you’re not a multi-monitor madman, the ability to drive five digital displays with a single graphics card is pretty impressive.

Zotac turbo-charges the GeForce GTX 560 Ti

Of the two cards we’re looking at today, the Radeon has the bigger engine. Cayman is a larger GPU than the GF114 behind the GeForce, after all. To stay competitive, Zotac employs much more aggressive tuning with its hot-clocked GeForce GTX 560 Ti. The card’s 950MHz core clock is a whopping 16% faster than stock, and the memory clock has been increased by 100MHz—a 10% jump.

To keep consumers from confusing this card with a modestly juiced Zotac model with an 850MHz core, the faster variant has been tagged the AMP! Edition. Apparently, an all-caps suffix wasn’t enough to convey the extremeness of what’s going on here, so Zotac added an exclamation mark. I suppose we should be thankful it didn’t resort to blink tags.

Incidentally, that 850MHz model will set you back only $240. For an extra $30, Zotac offers a much bigger step up in clock speeds than XFX.

Despite substantially faster clocks, the AMP! Edition has the same single-fan cooler as the 850MHz card. Heck, Zotac even sells a stock-clocked version of the GeForce GTX 560 Ti with an identical-looking cooler. The single fan makes me a little wary, but I quite like the look of the cooler overall. Matte black makes another appearance, and this time the fan matches the shroud. There’s also a healthy splash of color thanks to metal grills painted a rich shade of yellowy orange.

The cooler’s lone fan is the same diameter as what’s used in the XFX card, although it has fewer blades. Each of the Zotac’s blades is larger, so there probably isn’t much of a difference in airflow… apart from the fact that the Radeon has a second fan. Like the XFX card, the Zotac’s underlying heatsink features three copper heatpipes that link the GPU to a finger-grating array of cooling fins.

Because its heatpipes don’t protrude from the shroud, the Zotac card is a little squatter than the XFX at 4.4″ from the bottom of the PCI Express connector to the top of the cooler. The GeForce is only 9″ long, making it half an inch shorter than the XFX in that dimension.

Like the Radeon, the GeForce is equipped with dual 6-pin PCI Express power connectors. There’s also a nice open space above the power jacks to facilitate airflow when cards are packed side by side.

With only three display outputs, the GeForce has enough back-plate real-estate for a generous exhaust vent, as well. There’s more missing than a couple of DisplayPort outs, though. While a single Radeon HD 6950 is capable of powering a multi-screen Eyefinity array, Nvidia’s comparable surround technology requires a second graphics card running in SLI. If you don’t mind wearing dorky glasses and splurging on 120Hz LCDs, that SLI setup will support 3D Vision Surround.

We’ve long taken shots at Apple for using obscure “mini” connector types for standard expansion ports and then charging a fortune for an adapter you’ll invariably need. Zotac gets a pass for using a Mini HDMI on the AMP! Edition because it throws in the necessary adapter for free.

There are more goodies in the box, including PCIe power adapters for older PSUs and a VGA adapter for ancient CRTs and budget LCDs. None of the extra plugs is all that exciting, but it’s nice to see them included, especially since similar adapters aren’t provided with the Radeon.

The AMP! Edition’s accessory bundle has an ace up its sleeve in the form of a download code for Assassin’s Creed Brotherhood. Game bundles were all the rage a while back, but they rarely included recent and critically acclaimed titles. Brotherhood hit the PC at the end of March, so it’s still a new release. The game has also racked up a MetaCritic score of 88 with an impressive 8.5 user rating. If you’re interested in playing Brotherhood, the download code could add a lot of value to the AMP! Edition. The game typically sells for $40-50 online.

Our testing methods

Although we’ve busted out a healthy collection of games to test these two cards, we won’t be straying outside the head-to-head matchup. If you’re curious about how the performance of the Radeon HD 6950 1GB and GeForce GTX 560 Ti compares to a much broader range of competitors, hit up our GeForce GTX 560 Ti review for single-card results and our GeForce GTX 590 review for multi-card configs.

In the graphs on the following pages, we’ve marked the XFX Radeon HD 6950 1GB with an OC to denote its higher-than-stock clock speeds. The Zotac card is referred to as the AMP for simplicity’s sake.

We run most of our tests at least three times and report the median of the results. For our gaming tests, we used the latest Catalyst 11.4 preview drivers from AMD and Nvidia’s freshly released ForceWare 270.61 drivers. Noise, power, and temperature testing was conducted with the older Catalyst 11.3 and ForceWare 266.66 drivers. We configured the Radeon’s Catalyst Control Center with AMD’s optional optimizations for tessellation and texture filtering disabled. The following system was used for testing.

Processor          Intel Core i7-2600K 3.4GHz
Motherboard        Asus P8P67 PRO
BIOS revision      1305
Platform hub       Intel P67 Express
Platform drivers   INF update 9.2.0.1025, RST 10.1.0.1008
Memory size        8GB (2 DIMMs)
Memory type        Corsair Vengeance DDR3 SDRAM at 1333MHz
Memory timings     9-9-9-24-1T
Audio              Realtek ALC892 with 2.58 drivers
Graphics           XFX Radeon HD 6950 1GB OC with Catalyst 11.4 preview drivers
                   Zotac GeForce GTX 560 Ti AMP with ForceWare 270.61 drivers
Hard drive         Corsair Force F120 120GB
Power supply       PC Power & Cooling Silencer 750W
OS                 Windows 7 Ultimate x64

Thanks to XFX and Zotac for providing the graphics cards, Asus for the test system’s motherboard, Intel for the CPU, OCZ for the PSU, and Corsair for the memory and SSD.

We used the following versions of our test applications:

Some further notes on our methods:

  • Many of our performance tests are scripted and repeatable, but for Bulletstorm, Assassin’s Creed Brotherhood, and Shift 2 Unleashed, we used the Fraps utility to record frame rates while playing a 60-second sequence from each game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. To counteract variability, we raised our sample size, testing each Fraps sequence five times per video card. We’ve included second-by-second frame rate results from Fraps for those games; in those graphs, you’re seeing the results from a single, representative pass through the test sequence. (A simplified sketch of how those per-second logs are boiled down appears after this list.)
  • We measured total system power consumption at the wall socket using a Watt’s Up Pro digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Bulletstorm at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering.

  • We measured noise levels on our test system, sitting on an open test bench, using a TES-52 digital sound level meter. The meter was held approximately 8″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world depending on the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing on all cards.
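As a rough illustration of how per-second Fraps logs can be boiled down into the average and low frame rates shown in our charts, here’s a simplified sketch. The file names and CSV layout are hypothetical stand-ins, not the actual scripts used in the Benchmarking Sweatshop:

```python
# Simplified sketch: aggregate five 60-second Fraps runs into average and
# low FPS figures. The CSV layout and file names are hypothetical.
import csv
import statistics

def load_fps(path):
    """Read one Fraps-style log with one frame-rate sample per second."""
    with open(path, newline="") as f:
        return [float(row["fps"]) for row in csv.DictReader(f)]

def summarize(run_paths):
    """Return the mean of per-run averages and the median of per-run lows."""
    averages, lows = [], []
    for path in run_paths:
        samples = load_fps(path)
        averages.append(statistics.mean(samples))
        lows.append(min(samples))
    return statistics.mean(averages), statistics.median(lows)

if __name__ == "__main__":
    runs = [f"bulletstorm_run{i}.csv" for i in range(1, 6)]  # five runs per card
    avg_fps, low_fps = summarize(runs)
    print(f"Average FPS: {avg_fps:.1f}  Low FPS: {low_fps:.1f}")
```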

The test system’s Windows desktop was set to 1920×1080 in 32-bit color at a 60Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

Most of the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Lost Planet 2

Thank you, developers of Lost Planet 2, for including an in-game benchmark that shows off all sorts of DirectX 11 effects, including a heavily tessellated monster. The Benchmarking Sweatshop doesn’t have a fancy 30″ monitor, so testing was conducted at the common 24″ resolution of 1920×1080. To make things difficult for the cards, we maximized in-game detail levels and cranked up the antialiasing.

Score one for the GeForce. Fermi-based GPUs have traditionally fared well in this test, so the AMP’s strong showing is no surprise. The Radeon at least manages over 30 FPS, though. Since this benchmark has a larger tessellation component than typical games, we won’t dwell too much on the results.

Civilization V

Civilization V’s Leader benchmark isn’t representative of actual gameplay, but it includes a number of fancy graphics effects that are very much a part of the game. As with Lost Planet 2, we were able to push in-game detail settings to their highest levels. We even enabled 8X antialiasing.

This one’s a dead heat. Both cards manage very high frame rates even with the highest settings possible.

Of course, the Leader benchmark isn’t as indicative of real-world gameplay as another Civ test that shows a late-game view loaded with structures and units.

In this test, the GeForce comes out ahead by a substantial margin. Frame rates are much lower overall, but both cards should be able to play Civilization V comfortably at this resolution.

Metro 2033

One of the most demanding PC games around, Metro 2033 forced us to back off a little and disable PhysX and depth-of-field effects. We also dialed back the antialiasing to 4X, which still looks fantastic.

The Radeon leads the GeForce in Metro 2033. In addition to an average frame rate six FPS higher than the Zotac card’s, the XFX also posts a higher low frame rate. Given the slow pace of the game, though, I’m not sure you’d need to back off on any of the graphical settings to run it smoothly on the AMP! Edition.

Bulletstorm

The first of three games we tested using Fraps, the Bulletstorm demo’s short Echo level is perfect for benchmarking. It’s a heck of a lot of fun, too. We tested gameplay in 60-second bursts with all the details turned up, including 8X antialiasing.

Again, the XFX 6950 1GB comes out ahead of the Zotac GTX 560 Ti. The GeForce is still perfectly playable using these settings, but the Radeon delivers higher average and low frame rates. The Radeon’s frame rates are more consistent overall, too.

Assassin’s Creed Brotherhood

We couldn’t resist testing the freebie that comes bundled with the Zotac card, so we fired up Assassin’s Creed Brotherhood for a taste of parkour-infused medieval assassination. As with Bulletstorm, we recorded five 60-second gameplay sessions with maximum in-game details and 8X antialiasing.

The Assassin’s Creed franchise has roots in the console world, so it’s not surprising to see a couple of cutting-edge graphics cards pump out such high frame rates. Either card is plenty for this game, but the Radeon scores another victory over the GeForce.

Shift 2 Unleashed

With a unique helmet cam and solid driving dynamics, Shift 2 Unleashed is arguably the best racing sim available on the PC. We used Fraps to see how well our two hot rods handled a few laps of the Suzuka circuit with all the eye candy turned up.

Both deliver smooth and consistent frame rates with the detail settings maxed. Neither card distances itself from the other, though.

Power consumption

At idle, there’s only a one-watt difference between the two cards. Crank things up with a few minutes of Bulletstorm, however, and the AMP! Edition draws almost 40W more than the XFX card. Super-charging the GeForce’s engine is certainly good for extra horsepower, but there’s a definite impact on fuel economy when you’re really using the thing.
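To put that roughly 40W gap in perspective, here’s a back-of-the-envelope estimate of what it adds up to over a year. The gaming hours and electricity rate are illustrative assumptions, not figures from our testing:

```python
# Back-of-the-envelope cost of the GeForce's ~40W higher draw under load.
# The gaming time and electricity rate are illustrative assumptions only.
extra_watts = 40          # approximate load-time gap between the two cards
hours_per_week = 20       # hypothetical gaming time
rate_per_kwh = 0.11       # hypothetical electricity rate, USD per kWh

extra_kwh_per_year = extra_watts / 1000 * hours_per_week * 52
extra_cost_per_year = extra_kwh_per_year * rate_per_kwh
print(f"Extra energy per year: {extra_kwh_per_year:.0f} kWh "
      f"(~${extra_cost_per_year:.2f})")
```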

Noise levels

Despite consuming a lot more power under load, the Zotac card is only marginally louder than the XFX with its dual-slot cooler. The one-decibel difference in idle noise levels is difficult to detect when sitting a couple of feet away. All of our noise-level readings were recorded with the decibel meter just eight inches from the test rig.

GPU temperatures

Zotac manages to keep the AMP! Edition so quiet by letting the GPU hit higher temperatures than the Radeon before ramping up the cooler’s fan speed. The extra power consumed by the GeForce under load has to go somewhere, and I’m actually surprised there isn’t a bigger difference in GPU temperatures.

Overclocking

Just because these two cards have higher-than-stock clock speeds set at the factory doesn’t mean we can’t get our hands dirty with some real overclocking. With the Radeon, our options are limited because XFX doesn’t ship the card with any tweaking software. Fortunately, AMD provides a few clock, fan, and power controls via its Catalyst drivers.

Kudos to AMD for stepping up and offering this functionality. Here again, though, we see AMD’s fairly conservative approach to pushing Radeon clock speeds. The drivers only allow the 6950’s core clock to be increased to 840MHz, a measly 10MHz higher than the XFX model’s existing default. Things don’t get much better on the memory front, where the card’s 1300MHz clock speed can only be jacked an additional 25MHz.

These limitations persist outside of the Catalyst control panel, too. MSI’s Afterburner utility works with a range of different graphics cards and is compatible with the XFX Radeon. Unfortunately, the app’s core and memory sliders for the card top out at 840 and 1325MHz, respectively.

To XFX’s credit, the card runs perfectly at 840/1325MHz. I’m gonna take away half of AMD’s kudo for being so stingy with the clock ceilings, though.

Zotac does offer a tweaking tool with the AMP! Edition; the FireStorm app provides a measure of clock control in addition to a slider that governs the cooling fan. As with all too many utilities cooked up by hardware makers, FireStorm’s user interface could use some work. You have to set the shader clock to double the GPU clock manually or the settings won’t stick, which is rather annoying since Zotac could’ve just linked the two sliders together. At least there’s plenty of range for the GPU and memory clocks.
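Since FireStorm won’t link the sliders for you, it helps to remember that the GTX 560 Ti’s shader domain runs at twice the graphics clock. Here’s a minimal sketch of the bookkeeping; the helper function is only an illustration, not part of Zotac’s software:

```python
# The GTX 560 Ti's shader domain runs at twice the graphics clock, so any
# core-clock target punched into FireStorm needs a matching shader value.
def firestorm_settings(core_mhz, memory_mhz):
    """Return the trio of clocks to enter manually in a tool like FireStorm."""
    return {
        "core": core_mhz,
        "shader": core_mhz * 2,   # must be set by hand; the sliders aren't linked
        "memory": memory_mhz,
    }

print(firestorm_settings(980, 1100))
# {'core': 980, 'shader': 1960, 'memory': 1100}
```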

We used Civilization V to test stability when overclocking. Initially, the AMP! Edition appeared to run the game fine with a 990MHz core and 1225MHz memory. However, the card couldn’t complete a full Civ run at those speeds, forcing us to reset the clocks to 950/1100MHz. We thought MSI’s Afterburner app might be able to provide a voltage boost, but the app’s GPU voltage slider doesn’t work with the Zotac card. After some fiddling, we eventually settled on a 980MHz core.

Even though we squeezed more MHz out of the GeForce, the resulting increase in Civilization V frame rates only amounted to an extra FPS for both cards. There’s certainly more headroom to exploit in the AMP! Edition, but pushing the clocks won’t improve in-game frame rates dramatically.

Conclusions

The XFX Radeon HD 6950 1GB and Zotac GeForce GTX 560 Ti AMP! Edition approach the upper reaches of the mid-range graphics market a little differently. XFX attacks from above with a de-tuned version of AMD’s high-end Cayman GPU, while Zotac takes a shot from below with a hopped-up spin on the mid-range GeForce GTX 560 Ti. The Radeon has superior silicon, but the GeForce GPU has been subjected to much more aggressive clock boosting.

Although the paths they take are different, XFX and Zotac both arrive at roughly the same $270 asking price. Technically, the GeForce is $2 cheaper than the Radeon. The XFX currently offers a $30 mail-in rebate, though. We prefer immediate savings to waiting for a check in the mail, so we’ll call this one a wash, at least on price.

On the performance front, the Radeon has a slight edge. The AMP! Edition is faster in a few scenarios, but the Radeon offers higher frame rates in more games. Interestingly, there’s little difference in noise levels between the two cards. There is, however, quite a discrepancy between their power consumption under load. The nearly 40W gap is surely responsible for the GeForce’s higher GPU temperatures, and it highlights a potential pitfall associated with venturing far beyond default clock speeds.

Zotac is to be commended for making such a power-hungry card run so quietly. The company also deserves praise for offering an attractive game bundle. Even if you’re not interested in Assassin’s Creed Brotherhood, you can flip the download voucher on eBay and make a few bucks.

As a gamer who quite likes Assassin’s Creed, I’m tempted to recommend the AMP! Edition due to the value provided by the overall package. This is a sweet card, and Zotac has combined it with a tasty assortment of extras. However, the PC enthusiast in me can’t get past the fact that XFX’s Radeon HD 6950 1GB is faster overall, a little bit quieter, and much more power-efficient under load. Take away the extraneous extras, and it’s the better graphics card.

Comments closed
    • coolkev99
    • 9 years ago

    Hmm.. I just bought the 560 ti, just before reading this. But it’s still unopened. However my last card, an ATI, left a bad taste in in my mouth – so to speak – as the drivers and installer seemed very flaky. I may as well keep the 560… I don’t think I’ll really notice any performance differences at my 1650×1050 res that I currently play at?

      • sweatshopking
      • 9 years ago

      they’ll both be fast enough at that res. i wouldn’t bother shipping it back.

    • Ngazi
    • 9 years ago

    The Mustang and Camero were pony cars and sometimes real sports cars. Now they are just compacts. The Galaxy and CTS are muscle cars. Big engine light sedan.

    • south side sammy
    • 9 years ago

    I owned a GTX560 2gig card and have a 6950 2gig card. Out of the games listed Metro 2033 was the hardest on either card and the 560 couldn’t play the game at 1920×1200 at highest settings. And struggled at lesser settings. The 6950 can and does play it smoothly at highest possible settings. Kind of insulting that I had to turn back the settings so much on the nvidia product seeing all the hype on the card. And the reason I said “owned” is because the nvidia card suffered from thermal breakdown making smooth gaming impossible after a short time….. with the games it could handle.

      • odizzido
      • 9 years ago

      Nvidia products have been dying left and right for me recently…..I can’t find myself recommending them to anyone anymore.

    • Fighterpilot
    • 9 years ago

    OMG…Hell finally HAS frozen over…..
    A Radeon wins a two card comparison with NVidia at the Tech Report.

    I better print this one out for inclusion in Ripley’s “Believe it or not”

    Nice report there Geoffrey 🙂

    • jamsbong
    • 9 years ago

    In regards to Overclocking in Radeon cards. If you use ATI tray tool, you can easily adjust the frequencies without any limit.

    Personally, I reckon a high power consumption card with a quiet cooling setup equals high GPU temperature. That will significantly affect the durability and reliability of the card. Geforce are well-known to break down over time and I don’t want that.

    • Meadows
    • 9 years ago

    The AC Brotherhood screenshot is ridiculous. What’s up with the shadows?

    • kamikaziechameleon
    • 9 years ago

    doesn’t the amd solution here also have more Overclocking head room so theoretically its a card with allot more for the money where as the nvidia offering is at the outer limits of the cards capabilities.

      • grantmeaname
      • 9 years ago

      You got it backwards.

        • Goty
        • 9 years ago

        Well, he’s got it backwards according to this review, but it appears that it only appears that way because the reviewers didn’t do anything with the powertune limit. The card ran perfectly well at 840MHz, implying that there is headroom left.

          • grantmeaname
          • 9 years ago

          Dissonance said in the comments that he did do anything with the powertune limit, actually.

      • Dissonance
      • 9 years ago

      There might be headroom, but you’re not going to be able to exploit it. Raising the power control in CCC doesn’t change the clock caps.

        • mno
        • 9 years ago

        Actually, contrary to what is stated in the review, MSI Afterburner is in fact able to go past the clock caps. As a quick Google search shows, you just need to enable unofficial overclocking in the config file to do so.

    • shark195
    • 9 years ago

    From all the reviews so far, it appears AMD’s architecture has proven to be more power efficient than the Femi, in all power consumptions done so far, AMD leads by far margin. The conversion of power consumed into the admirable FPS is so well achieved by the Red team whiles in the Green team it is not. So i was wondering if that indeed it the case, where does so much power consumed by the green team’s product translate into, is it wasted as heat, noise or what? In today’s engineering, the hallmark of every engineering product is efficiency (how much of what you put in) simple output/input, amd has a stronger ratio of this than nvidia, and if one team has that, then it clear, it is the best
    So for me the current Radeons rein no matter how you look at, Nvidia has go to improve on power consumption, that is why they could not increase the clock speed of the GTX 590, otherwise it would have consumed twice or 1.5 of the power consumed by 6990, the fastest single GPU on the planet, no doubt, AMD big thumps for your engineering team, you have done a great work!

      • khands
      • 9 years ago

      Fermi appears to be an inherently power hungry architecture (though they’ve worked on it a good deal since the 400 series) and this is, IMO, due to its multipurpose design. It’s not just meant for graphics anymore, while the Radeon’s can’t say that as well.

        • shark195
        • 9 years ago

        If it’s not meant for graphics anymore, what else is it for?

          • Buzzard44
          • 9 years ago

          Khands said “It’s not JUST meant for graphics anymore.” He’s referring to NVidia’s approach with Fermi to have a single architecture to address both graphics AND GPGPU needs.

            • shark195
            • 9 years ago

            Nvidia has CUDA, an SDK and API that allows a programmer to use the C programming language to code algorithms for execution on Geforce series GPUs. AMD offers a similar SDK+API for their ATI-based GPUs that SDK and technology is called FireStream SDK, designed to compete directly with Nvidia’s CUDA. OpenCL from Khronos Group is used paired with OpenGL to unify the C languages extension between different architectures; it supports both Nvidia and AMD/ATI GPUs, and general-purpose CPUs too.
            So where lies Nvidia’s GPGPU (General-purpose computing on graphics processing units)?
            When it comes to multi-monitor capabilities, AMD’s Eyefinity-enabled graphics cards have the upper hand over their Nvidia-powered counterparts, there’s no denying that.
            I CAN’T SIMPLY FIGURE THAT OUT? IT IS SIMPLY NOT THERE!!

            • jensend
            • 9 years ago

            Laughable. If you were actually involved in GPGPU beyond looking at a couple benchmarks you’d know that for the time being nV/CUDA is pretty much the only serious option for many folks.

            The 6xxx series Radeons are supposed to help close the compute performance gap, but nV and CUDA still have a significant lead on most GPGPU stuff. The Radeons have higher theoretical performance but for many algorithms it simply isn’t possible to get anywhere near that performance, while for other algorithms it’s possible but very difficult compared to getting good performance from an nV card.

            While I really hope OpenCL catches on soon it just isn’t there yet; CUDA has a lot more mature support out there and a lot more flexibility. A relevant quote from Accelereyes: “OpenCL is notably more difficult to program and debug than CUDA since OpenCL documentation, tools, and scientific computation libraries are still very limited.”

            Only people with millions of dollars worth of AMD cards at their disposal (Folding@Home, maybe a couple of supercomputing projects) seem to be using CTM/the Firestream SDK.

            • shark195
            • 9 years ago

            Hilarious!!! When comparing Radeons to GeForces, AMD’s cards get the nod for higher overall scores and better results per dollar spent. Frankly, the main reason for the superior results are AMD’s consistent cadence detection, better noise reduction (especially when it comes to compressed video), and a working flesh tone correction feature. [url<]http://www.tomshardware.com/reviews/hqv-2-radeon-geforce,2844-10.html[/url<] Another point is that when an Nvidia card of comparable performance to AMD for example HD 6970 and GTX 570, nvidia cards consume much more power for just FPS, which ATI produces better FPS with less power consumption, so with time, you pay more for the Nvidia card. Let's face it, Femi is just POWER HUNGRIER THAN Cayman, AMD is doing well in this respect. With APU gaining grounds, let see where discrete GPUs will find their place in the not too distant future.

            • sweatshopking
            • 9 years ago

            you didn’t address the main crux of his argument though. CUDA simply is better than stream or openCL at this point.

            • jensend
            • 9 years ago

            Nothing you said has anything to do with GPGPU. Why are you blabbing about video quality and FPS, both of which are totally irrelevant to the subject at hand?

            In general your comments’ strange run-on sentence structure, your odd phrasing, and your tendency to use sentences from other sources rather than doing your own writing** suggest that either (1) you’re a totally daft troll or (2) English is not your native language. If it’s the latter, I’m willing to give you a break for not understanding what GPGPU is about, but in either case I’d strongly suggest you not bother people by spewing quoted material on subjects you know nothing about.

            **besides the paragraph directly quoted from Tom’s Hardware, compare shark195’s previous comment to the following paragraph from Wikipedia’s GPGPU entry:

            “In November 2006 Nvidia launched CUDA, an SDK and API that allows a programmer to use the C programming language to code algorithms for execution on Geforce 8 series GPUs. AMD offers a similar SDK+API for their ATI-based GPUs, that SDK and technology is called FireStream SDK (formerly a thin hardware interface[clarification needed]Close to Metal), designed to compete directly with Nvidia’s CUDA. OpenCL from Khronos Group is used paired with OpenGL to unify the C languages extension between different architectures; it supports both Nvidia and AMD/ATI GPUs, and general-purpose CPUs too.”

            • shark195
            • 9 years ago

            @jensend It’s funny when you have to make a valid point you go about attacking someone with derogative words, for me that is totally immature way of presenting your case, if your case is well presented, i think other readers will certainly see the difference from the presentations of both parties involved and you may be acknowledged (which i doubt 100%) as the inventor of GPGPU, hahaha, which I think it is only documents obviously from somewhere (different sources) that you have read just as you quoted the one above. Hypocrisy at its best!
            The two issues I raised in my first comment were as follows;
            1.Femi is power hungry and Cayman is not.
            2.Cayman as a result is more efficient; that is the ratio [output(FPS)/input power(watts)] is higher in Cayman than in Femi
            You can continue dragging the issue by presenting us with (since you don’t quote anybody, Super Einstein in the making) a review of your own testing all aspects of GPGPU and explain why power consumed by Femi cards do not necessary translate into FPS, that will make more sense than your rather absurb approach of using derogatory words.

            • jensend
            • 9 years ago

            There’s nothing wrong with citing sources to support points you’re making. But you weren’t making any points of your own to support, didn’t cite your source on the Wikipedia quote, and didn’t use quotation marks in either instance, instead pretending that these were your own words. This is called “plagiarism” in English; perhaps your native language has a word for this concept as well.

            I wasn’t “dragging the issue.” Here’s a synopsis of the conversation so far:
            –you made a rambling rant about AMD’s superiority which could have better been expressed by the single true statement “AMD’s cards are more power-efficient than nV’s in the majority of current games.”
            — khands remarked that this is because nV GPUs aren’t quite as specifically directed at current-generation games, with the obvious implication that it’s because their GPUs are more targeted at general purpose calculation than AMD’s.
            — After Buzzard44 pointed out the obvious for you, you tried to refute khands by just spamming the forum with wikipedia content. Was this perhaps the first time you’d heard of GPGPU?
            — I brought up a few reasons why nV has a GPGPU advantage. You somehow thought that bringing a Tom’s Hardware video playback review into the conversation was relevant. Why???

            If you actually look at the software that’s out there, the academic papers being published, etc you can easily see that the vast majority of GPGPU work, from number theory (msieve for prime factorization) to PDEs (opencurrent, a whole host of others) is CUDA-only for the time being. nV hardware, CUDA, and the tools and libraries built on it just have tons of features that other combinations of hardware and software don’t have, all the way from the better thread-local caches on the hardware to the numerical libraries (BLAS, FFT, etc) and C++ support. Built-in easy to use CUDA support in MATLAB and Mathematica is a big plus as well.

            OpenCL will catch up to some extent, and Cayman’s switch from VLIW5 to VLIW4 and a bunch of other architectural changes show that AMD is starting to pay more attention to this as well. But it just isn’t there yet.

            • shark195
            • 9 years ago

            @jensend I was not against the point you raised concerning Nvidia having GPGPU advantage, but I remarked AMD has something to compete it.
            I admit I should have quoted the document from Wikipedia, but you find that the was a link to the one from Tom’shardware.
            Your tone and choice of words were rather inappropriate as an intellectual and I find that very disappointing from a grandeur exhibition of an eminent Einstein and Shakespear combination,
            You failed to provide benchmarks to your claim that Nvidia GPUs are superior to AMD in GPGPU.
            Your assertion that Nvidia GPUs target games that are obviously unavailable is completely unfounded in that, GPUs rarely live more than two years, they get replaced in about six months time.
            So why will one buy an expensive GPU with a functionality that is never used and still pay hill bills since that card consumes so much power.
            Don’t forget, higher power consumption will certainly lead to higher pressure on the components on the PCB and will invariably lead to a rather short life span.
            You claim to a be custodian of “the English Language” insinuating I have it as a second language, what’s that for?
            Let’s present cases with contents worth reading and stop the unnecessary rants, they don’t help in anyway!

    • Bensam123
    • 9 years ago

    “…true enthusiasts tend to be more pragmatic.” Like buying the 6950 and converting it to a 6970? Why don’t you guys make any mention of the ability to flash a 6950 to a 6970? That’s not the same as saying that you endorse it (same as overclocking), but that it is possible and a feasible solution. I’m also curious why you guys didn’t boost the powertune levels to 20%?

    Also, also curious why these two cards are only facing off against each other? I noticed some of the benchmarks look different and you’ve included some new games. Is that to avoid directly comparing them to other cards that are already tested, but weren’t tested under similar conditions?

      • sweatshopking
      • 9 years ago

      whilst they did mention the unlocking of athlon’s and phenom’s, they were equally hesitant about that, as they do comparisons on what you’re guaranteed.

        • Bensam123
        • 9 years ago

        I mentioned that… then how do they go about mentioning over clocking, over volting, and results with turboboost turned on?

      • Dissonance
      • 9 years ago

      As I understand it, the 1GB 6950s can’t be flashed with 6970 BIOSes. Boosting powertune didn’t increase the clock ceilings in CCC, either.

      Since we’d already done 560 and 6950 reviews, this was meant to be a specific comparison of two different cards. I didn’t use the same test system as Scott did in his reviews, so the results aren’t directly comparable (different drivers this time around, too). The lack of other cards is simply a matter of time constraints and the fact that the initial reviews already provide a wealth of comparative performance data. As for the games, I tested with the most recent titles I have in the Benchmarking Sweatshop.

        • Bensam123
        • 9 years ago

        [url<]http://www.techpowerup.com/articles/overclocking/vidcard/159[/url<] "Update 2: Above method is only for 2 GB HD 6950 reference design cards. If you have a custom design or 1 GB card, then use RBE to modify your existing BIOS. Save the BIOS from your card, load it into RBE, enable the shader unlock option on the last tab, then flash that modified BIOS to the card instead of the one downloaded from this page." I do appreciate the rest of the explanation though. Including information in the initial review would be helpful

    • Goty
    • 9 years ago

    I may be mistaken, but don’t the Overdrive limits in CCC increase when you increase the powertune limit? I think that may be the reason you’re limited to 840 MHz on the core.

      • Dissonance
      • 9 years ago

      Changing the Power Control setting in CCC doesn’t help. Cranked it to +20%, and the Overdrive options are still capped at 840/1325.

    • dpaus
    • 9 years ago

    [quote<]"Even if you're not a multi-monitor madman, the ability to drive five digital displays...."[/quote<] Hey, I resemble that remark!

      • Firestarter
      • 9 years ago

      resemble or resent?

        • dpaus
        • 9 years ago

        as if they’re mutually exclusive!

    • darryl
    • 9 years ago

    No matter how these GPU reviews slice it, the evidence is almost always lacking as to which card to buy, for one simple reason (that I see, anyway): what if you don’t play any of the games used in the test? How does one make a decision based on these”irrelevant” games? I just don’t get it. Someone help me. Please. Wouldn’t want to buy either one if it’s performance was poor in games I play (and of course I wouldn’t know that up front).

      • DancinJack
      • 9 years ago

      If any game swayed someone enough to make a reccomendation based on that particular game, it’d be in a review. The fact is that the games chosen for the reviews tend to be current tech stuff that pushes these cards. You can assume that if a card does well in the test suite,it will do well in the games you want to play. This is also assuming your decision isn’t contingent upon 3-4fps in your said game.

      • indeego
      • 9 years ago

      So that is a complaint about every review, everywhere, then? Cars not driven on the roads near your house? Blenders not whipping your particular ingredients? etc? Reviews can only get you so far.

      • OneArmedScissor
      • 9 years ago

      There’s no shortage of $200+ graphics card reviews. I’m sure you can find someone testing with some game you play if you look around.

      At least the standard procedure for graphics cards is actually testing multiple real life scenarios where the card is put to legitimate use, so it ought to tell you something, one way or another. You can’t say the same for CPU reviews on any site.

    • potatochobit
    • 9 years ago

    put down the crazy pills, bro
    of course, we prefer a 30$ rebate over saving 2$ off the MSRP
    it’s ok to be a nvidia fanboy like meadows, but you need to promote Physx more
    like in the upcoming batman title!
    let’s not forget about cuda either, every bittorrent user needs that
    nvidia 3D vision now works in windowed mode too!

      • grantmeaname
      • 9 years ago

      He said “a wash”. That means they didn’t make a call either way.

      Could you leave now?

        • potatochobit
        • 9 years ago

        y u mad, bob?
        you can be on the green team too if you want
        if it makes you feel better I’ll go play some team fortress 2 before dinner time

      • paulWTAMU
      • 9 years ago

      some of us have had mixed luck, at best, with mail ins.

        • indeego
        • 9 years ago

        Just got a $20 Amex rebate for something I mailed in 6 months ago. Never give up hope!

      • odizzido
      • 9 years ago

      I put zero value in physX and mail-ins.

        • clone
        • 9 years ago

        almost universally I ignore MIR’s unless all things are equal then they get the nod, I almost laughed when it was mentioned that the GTX 560 was $2.00 cheaper vs the $30 MIR for the HD 6950.

        I had never thought I’d have to place a value on an MIR before but when it’s actually given notice that a video card is $2.00 less expensive vs a $30 MIR hands down I’d go with the MIR and never look back.

        if the Zotac had been $50 less expensive I’d consider buying it but $2.00….. why was that mentioned?

        as for PhysX and Cuda I place no monetary value on either yet.

    • phez
    • 9 years ago

    Somebody stole the 30″?

      • grantmeaname
      • 9 years ago

      Only Scott has a 30″.

        • DancinJack
        • 9 years ago

        Even so, a 30″ panel (its resolution, more precisely) isn’t necessary to push these cards. I imagine that’s the primary factor in using a 1920×1080 screen.

    • flip-mode
    • 9 years ago

    Neither card is attractive since you can get the 6950 2GB for the same price and turn it into a 6970.

      • BloodSoul
      • 9 years ago

      Agreed with Flip here. Furthermore, how in the world can you recommend the 560 ti over the 6950 1gb immediately after saying that the 6950 has better overall performance and lower power consumption and temps?

        • sweatshopking
        • 9 years ago

        His final point is that he’d recommend the 6950 1gb. really they’re pretty close. I don’t mind MIR, so a 30$ discount is something i’d be after. that puts it at 239$ vs 269$, and the cheaper card is faster, quieter and uses less power.

        • grantmeaname
        • 9 years ago

        did you read the article?

          • BloodSoul
          • 9 years ago

          Yes I did, and it felt like he was pulling teeth trying to admit that Nvidia wasn’t the superior option here.

            • grantmeaname
            • 9 years ago

            So he both recommended the 6950 over the 560 Ti and recommended the 560 Ti over the 6950?

            No, wait, that didn’t happen.

      • FuturePastNow
      • 9 years ago

      I was thinking in the other direction. Newegg has a GTX 460 768MB for $109 after rebate.

      That’s 40% of the price of these cards, for about 70% of the power.

      • Krogoth
      • 9 years ago

      You can’t turn 6950 2GiB into 6970. The Cayman in the 6950 has a missing block. I suppose that doesn’t it really matter as 6950 2GiB is almost as 6970 and a modest OC on 6950 can bridge the difference.

        • MrJP
        • 9 years ago

        You need to do more checking before making definite statements, or you might end up looking foolish:

        [url<]http://www.techpowerup.com/articles/overclocking/159[/url<]

          • Krogoth
          • 9 years ago

          Those cores were disabled for a reason.

          It is at your risk, you might be able to luck out and get a 6970 core was artifactually crippled to 6950 levels. 😉

            • Waco
            • 9 years ago

            Yeah, they were disabled for a reason. That reason being to fill a price gap.

            I don’t think many people have difficulty unlocking them. Hell, I don’t think I’ve seen a single post in the past few weeks where someone attempted it and had problems.
