AMD’s Radeon HD 5970 graphics card


We used to be friends, you know, back in the day. Me and Fatal1ty, I mean. We worked at the same ISP, him taking customer calls on the support front lines and me in the back room, keeping the network running. He came to me when he couldn’t fix a problem. We even played Quake together, before he became the biggest gamer in the world over the successive 18 months or so and I became, well, a cable modem admin.

Sometimes I look back and think, had circumstances been different—had nature, for instance, given me a vastly superior set of skills, abilities, looks, social graces, age, and intelligence—that I could have had the same sort of success that he has. But then I wonder: could I have handled it? Not the pressure of competition or any of that, but the sheer extremeness of being so extreme. Drinking the energy drinks, wearing the bright colors, the hair gel. And I realize, I probably could not have.

Had I somehow managed it, though, without spontaneously combusting, I expect that this new video card from AMD, the Radeon HD 5970, would surely have become my weapon of choice.

The Radeon HD 5970

AMD offers its competition a sip of Hemlock

In spite of its singular name, the Radeon HD 5970 is in fact a dual-GPU graphics card in the tradition of the Radeon HD 4870 X2. Codenamed “Hemlock” during its development—or at least during its pre-launch marketing stage—the 5970 sports two of the Cypress GPUs that power the Radeon HD 5870. Each GPU has its own 1GB bank of GDDR5 memory, and the card itself is longer than a fourth-grade piano recital, measuring out to 12.16″ or 309 mm. Here, have a look at how the 5970 measures up against some common varieties of graphics cards.

From left to right: GeForce GTX 295, Radeon HD 5970, Radeon HD 5870, Radeon HD 4890

At 9.5″, the Radeon HD 4890 looks positively puny by comparison. Even the considerable 10.5″ span occupied by the GeForce GTX 295 looks way less extreme. The crazy thing is that the 5970’s Batmobile-inspired plastic shroud extends roughly half an inch past the end of the board itself—and its primary purpose is to look cool.

AMD points out that the 5970’s length is compliant with the ATX spec, but we haven’t seen a video card this long for many, many moons. I fear many PC cases these days aren’t really built to accommodate such a beast, especially the mid-towers. The bottom line here is that you’ll want to measure for clearance in your own case before hitting the “Buy” button at your favorite online retailer. Then again, if you’ve ponied up for an Obsidian, you surely need not worry.

Dual dual-link DVI ports flank a mini-DisplayPort, er, port

Due to the need to free up an entire slot backplate for the cooler’s exhaust, the 5970 has a different port config than the rest of the Radeon HD 5000 series. The HDMI port has been deleted, while the DisplayPort output has been hit with a shrink ray and reduced to Mini size. Despite the changes, AMD says the 5970 can still drive up to three displays simultaneously, and 5970 cards will ship with a pair of adapters: one that converts Mini DisplayPort to regular DisplayPort, and another of the DVI-to-HDMI variety.

A combo of six- and eight-pin connectors provides power

As the combination of six- and eight-pin power plugs portends, the 5970’s max power draw is rated at 294W, just under the 300W limit imposed by the PCIe spec. The good news on the power front comes at idle, where the 5970 inherits the very nice reductions in power draw achieved by the Cypress GPU and its memory controller. To that, this card adds another wrinkle: when it’s not needed, the second GPU enters a low-power sleep state (AMD likens it to the ACPI S1 state, if you must know), which should blow Al Gore’s skirt up. As a result, the 5970’s idle power draw is rated at 42W.
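
For reference, here's a quick back-of-the-envelope sketch of that power math, assuming the usual PCIe budget of 75W from the x16 slot, 75W from a six-pin connector, and 150W from an eight-pin connector:

```python
# Where the PCIe spec's 300W ceiling comes from, as a quick sanity check.
# Assumed per-source limits: 75W slot, 75W six-pin plug, 150W eight-pin plug.
slot_w, six_pin_w, eight_pin_w = 75, 75, 150

ceiling = slot_w + six_pin_w + eight_pin_w
print(ceiling)        # 300
print(ceiling - 294)  # 6W of headroom at the 5970's rated maximum draw
```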

Two GPUs, but just a single external CrossFire connector

The Cypress GPUs on either side of the PLX PCIe bridge chip

The 5970’s dimensions and peak power draw are so ample because the card has to accommodate two copies of what is currently the fastest GPU on the market. This really is “CrossFire on a stick,” as we like to say, and the performance potential from such a beast is naturally quite considerable. Situated between the two GPUs in the picture above, under a metal cap, is a PCI Express switch chip from PLX, the same model used in the Radeon HD 4870 X2. This chip can support a trio of PCIe x16 links: one to each of the GPUs and a third to the PCIe x16 slot in the host system.

Like any two GPUs in a CrossFire pair, the 5970’s Cypress chips communicate with each other by means of those PCIe links and via a dedicated CrossFire interconnect, as well. Gone is the GPU-to-GPU “sideport” connection that was present on the 4870 X2. AMD says improvements to its drivers and to the performance of its CrossFire interconnect have rendered the sideport link unnecessary, even though the CrossFire interconnect’s physical bit rate, at 7.92 Gbps, remains similar.

Although the magic is all taking place on a single card, the 5970 is subject to the same limitations as any multi-GPU solution. That means you won’t always be able to take advantage of both GPUs in a new game until you’ve installed a driver update with a CrossFire profile for it. Also, you can expect to see something less than twice the performance of a single GPU, because multi-GPU performance rarely scales up perfectly.
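
As a toy illustration of imperfect scaling, with made-up numbers rather than anything from our test results:

```python
# Toy multi-GPU scaling example with hypothetical numbers -- not review data.
single_gpu_fps = 60.0
scaling_efficiency = 0.8   # assumed; real scaling varies by game and driver profile

dual_gpu_fps = single_gpu_fps * (1 + scaling_efficiency)
print(dual_gpu_fps)        # 108.0 fps rather than a perfect 120
```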

Still, we must admit that AMD has made great strides in its multi-GPU support since committing to these dual-GPU cards. The latest bit of evidence on that front is the fact that the 5970’s release drivers will support AMD’s Eyefinity multi-monitor gaming capability. This is a first, and the support currently only extends to 22 games. We haven’t yet had time to try it out for ourselves, either. Still, it is a positive sign.

Speaking of signs, the one in the store next to the 5970 will read “$599.99” or thereabouts. That’s not cheap, but it’s somewhere between the price for two Radeon HD 5850s and two Radeon HD 5870s, so it has its own cruel logic. This is not a bargain-bin item by any stretch. The larger question is whether you’ll be able to buy one at any price, given the shortages we’ve seen on Cypress-based 5850 and 5870 cards, coupled with reports of yield issues at TSMC on 40-nm chips. When we quizzed AMD about 5970 availability, they could only say they expect the 5970 to be selling today at multiple online retailers and that supply “should be steady” through the holidays. Whether it’ll be a steady drip or a steady torrent remains to be seen, but I’m betting on the drip at this point.

Bending the envelope

When it came time to choose the 5970’s clock speeds, AMD encountered a no doubt vexing tradeoff. In order to stay within the 300W power envelope dictated by the PCIe specification, they had to dial back clock speeds considerably from Radeon HD 5870 levels. (That’s probably why this beast isn’t called the Radeon HD 5870 X2.) In fact, the clock speeds they chose are right at Radeon HD 5850 levels: 725MHz for the GPU, and 1GHz (or 4GT/s) for the GDDR5 memory. Here’s a look at how the 5970 stacks up, in peak theoretical performance.

                       Peak pixel    Peak bilinear      Peak bilinear       Peak memory    Peak shader arithmetic
                       fill rate     texel filtering    FP16 filtering      bandwidth      (GFLOPS)
                       (Gpixels/s)   rate (Gtexels/s)   rate (Gtexels/s)    (GB/s)         Single-issue   Dual-issue

GeForce GTX 285        21.4          53.6               26.8                166.4          744            1116
GeForce GTX 295        32.3          92.2               46.1                223.9          1192           1788
GeForce GTX 285 SLI    42.9          107.2              53.6                332.8          1488           2232
Radeon HD 4870 X2      24.0          60.0               30.0                230.4          2400
Radeon HD 5850         23.2          52.2               26.1                128.0          2088
Radeon HD 5870         27.2          68.0               34.0                153.6          2720
Radeon HD 5970         46.4          116.0              58.0                256.0          4640

(The GeForces list both single- and dual-issue shader peaks; the Radeons have a single peak shader figure.)
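
If you want to check the table's math, the peaks fall out of simple arithmetic. Here's a rough sketch for the 5970, assuming the usual Cypress per-GPU figures of 1600 ALUs, 80 texture units, 32 ROPs, and a 256-bit GDDR5 interface:

```python
# Rough derivation of the 5970's peak theoretical numbers in the table above.
# Assumed per-GPU Cypress resources: 1600 ALUs, 80 texture units, 32 ROPs, 256-bit bus.
gpus      = 2
core_ghz  = 0.725   # 725MHz core clock
mem_gtps  = 4.0     # 4 GT/s effective GDDR5 data rate
alus, tmus, rops, bus_bits = 1600, 80, 32, 256

gflops    = gpus * alus * 2 * core_ghz        # multiply-add counts as 2 FLOPs per clock
gtexels   = gpus * tmus * core_ghz            # bilinear INT8 texels per clock
gpixels   = gpus * rops * core_ghz
bandwidth = gpus * (bus_bits / 8) * mem_gtps  # GB/s

print(round(gflops), round(gtexels), round(gpixels, 1), round(bandwidth))  # 4640 116 46.4 256
```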

These days, I feel like we should attach as many caveats to that table as one might to a federal budget resolution. These are simply theoretical peak values, and they are in some cases quite academic. You’re rarely going to hit the peak pixel fill rate on one of these cards, for instance, since memory bandwidth will likely limit you first. The GeForces probably won’t ever reach their peak dual-issue shader FLOPS numbers, and I doubt the Radeons will achieve more than 80% of their peak compute capacity when executing pixel shaders. On top of all that, we’ve just thrown in the numbers for the multi-GPU solutions, even though they’ll rarely scale linearly.

With that said, you can see above what AMD’s compromise on clock speeds has wrought. The 5970 is still undoubtedly the most capable single-card solution around, but it’s closer to two 5850s than it is to two 5870s. (The GPUs on the 5970 don’t have any functional units disabled, so the 5970 should be slightly faster than a pair of 5850s in terms of shader and texturing capacity.)

The more dramatic comparison, though, is the 5970 versus two GeForce GTX 285 cards in SLI. The 5970 has more than double the peak shader capacity, but it’s pretty similar in terms of texture fill rates—and the GTX 285 SLI config has substantially higher memory bandwidth. Thanks to its 256-bit memory interface, Cypress was already balanced pretty heavily in favor of shader power rather than memory bandwidth. Doubling up on Cypress chips amplifies that fact, especially at these clock speeds, where the 5970 trails some competing solutions. The implications of these numbers should be fairly straightforward: in cases where performance is bound primarily by memory bandwidth or texture filtering, the 5970 may not look as impressive as one might have hoped.
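
A quick ratio makes the imbalance concrete, using the peak figures from the table above:

```python
# FLOPS per byte of memory bandwidth, using the peak figures from the table above.
cards = {
    "Radeon HD 5970":      (4640, 256.0),   # peak GFLOPS, GB/s
    "GeForce GTX 285 SLI": (2232, 332.8),   # dual-issue peak GFLOPS, GB/s
}
for name, (gflops, gbps) in cards.items():
    print(f"{name}: {gflops / gbps:.1f} FLOPS per byte of bandwidth")
# Radeon HD 5970: 18.1 FLOPS per byte; GeForce GTX 285 SLI: 6.7 FLOPS per byte.
```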

The thing is, one could argue that the 300W PCIe spec for video cards is more of a speed limit than a credit limit. If you go over it, bad things might happen, but the system won’t necessarily stop working immediately. In fact, you might just end up having more fun. Honest, officer. And we know the Cypress GPU and its GDDR5 memory are both good for higher clock speeds, as the Radeon HD 5870 attests.

AMD didn’t want to tread formally beyond 300W, so it has decided to do so informally, instead, by enabling users to overclock the 5970 deep into 400W territory. The firm claims it has taken on considerable expense in the design and production of 5970 cards in order to give them ample headroom. The GPUs, it says, are screened for high speeds and low leakage. The DRAMs are rated for 5Gbps operation, just like on the 5870, even though they only operate at 4Gbps by default on this card. The electronics, including the capacitors and voltage regulators, are built to higher standards. And the cooler’s “massive” vapor chamber can dissipate up to 400W.

So, you know, it’s a Ferrari 599 GTB, and the local speed limit is 55 MPH. Wink, wink.

The 5970 laid bare

These Hynix GDDR5 DRAMs are rated for 5 Gbps operation

To enable its customers’ potential delinquency, AMD has raised the caps on its Overdrive overclocking control panel for the 5970, pushing the limits to 1GHz for the GPU and 1.5GHz (or 6 GT/s) for the memory. The company also expects its board partners to offer voltage tweaking tools with their 5970 products, and it supplied us with a rudimentary example of such a tool.

Those sliders promise granularity, but it’s a mirage. You get two settings here: default and peak. Fortunately, that’s pretty much all you’ll need to make good on what AMD strongly hints is the 5970’s potential: the same clock speeds as the Radeon HD 5870, or 850MHz/1.2GHz GPU and DRAM, respectively.
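
Plugging those 5870-level clocks into the same arithmetic as before gives a sense of what's at stake, again assuming the standard Cypress per-GPU resources:

```python
# Theoretical peaks for a 5970 pushed to 5870 clocks: 850MHz core, 1.2GHz (4.8 GT/s) memory.
# Same assumed Cypress per-GPU resources as before: 1600 ALUs, 256-bit bus per GPU.
gpus, alus, bus_bits = 2, 1600, 256

print(gpus * alus * 2 * 850 / 1000)           # 5440.0 GFLOPS, up from 4640
print(round(gpus * (bus_bits / 8) * 4.8, 1))  # 307.2 GB/s, up from 256
```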

We didn’t get very far at all in our overclocking attempts without the voltage tweak, but at the higher voltages, we hit 5870 levels with very little drama. In fact, we’ve tested at those speeds and included full results in this article.

Do remember a couple of things, though. First, you’ve got to overclock both GPUs individually in the control panel in order to see any real performance gain. Second, although the clock speed settings in the Overdrive tool will persist after a reboot, the higher voltage settings did not, in our experience. This combination made for some interesting times, let me tell you.

In fact, we had so much fun sorting out that issue that we decided against pushing the 5970’s clocks beyond 5870 levels. We’ll leave that fun up to you.

Regardless of how easy AMD has decided to make it, this is still real overclocking, with no guarantees about likely clock speeds or what bad things may happen when you exceed stock settings. You’ll have to engage in real overclocking to get a 5970 that runs at these speeds, too, because AMD plans to limit the fake “overclocking” conducted by board vendors, the sort where they set higher default speeds and back up the cards with full warranties. AMD even says it has a mechanism in place to cap default clock speeds, and it will prevent board vendors from pushing too far past the 300W limit. That may change eventually; plans are afoot to enable dual 8-pin power connectors on future 5970 cards, but don’t expect to see such an animal until next year.

Test notes

We’ve tested the 5970 against a handful of other high-end solutions in some of the very latest games. I should note that we wanted to include a few things, such as Dragon Age: Origins and the Heaven DX11 benchmark, that don’t yet have CrossFire profiles. In the interest of exploring the 5970’s true potential, we’ve skipped those applications for now, but we may come back to them in the future.

The card you see above is Asus’ version of the Radeon HD 5870, a true retail product that’s replaced the AMD reference cards on our test bench. Asus ships this card with a Steam coupon for DiRT 2 and a three-year warranty that, blessedly, does not require registration of the product within a certain number of days.

The thing that makes this Asus card interesting is that it comes with a voltage-tweaking tool, much like what AMD expects its partners to offer for the 5970.

Asus’ Smart Doctor tool offers voltage control

I’ve not yet pushed this 5870 to its limits, but I should note that the voltage slider tops out at a somewhat scary 1.5V.

Also, one must look at the screenshot above and ask: Really, AMD? You want to entrust the designer of this interface with the 5970’s crucial overvolting feature? Really?

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor                Core i7-965 Extreme 3.2GHz
System bus               QPI 6.4 GT/s (3.2GHz)
Motherboard              Gigabyte EX58-UD5
BIOS revision            F7
North bridge             X58 IOH
South bridge             ICH10R
Chipset drivers          INF update 9.1.1.1015
                         Matrix Storage Manager 8.9.0.1023
Memory size              6GB (3 DIMMs)
Memory type              Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL)         8
RAS to CAS delay (tRCD)  8
RAS precharge (tRP)      8
Cycle time (tRAS)        24
Command rate             2T
Audio                    Integrated ICH10R/ALC889A with Realtek 6.0.1.5919 drivers
Graphics                 Radeon HD 4870 X2 2GB PCIe with Catalyst 8.663.1-091105a-091227E drivers
                         Asus Radeon HD 5870 1GB PCIe with Catalyst 8.663.1-091105a-091227E drivers
                         Radeon HD 5970 2GB PCIe with Catalyst 8.663.1-091105a-091227E drivers
                         Asus GeForce GTX 285 1GB PCIe with ForceWare 195.39 drivers
                         Dual Asus GeForce GTX 285 1GB PCIe with ForceWare 195.39 drivers
                         GeForce GTX 295 2GB PCIe with ForceWare 195.39 drivers
Hard drive               WD Caviar SE16 320GB SATA
Power supply             PC Power & Cooling Silencer 750W
OS                       Windows 7 Ultimate x64 Edition RTM
OS updates               DirectX August 2009 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Call of Duty: Modern Warfare 2

To test this game, we played through the first 60 seconds of the “Wolverines!” level while recording frame rates with FRAPS. We tried to play pretty much the same way each time, but doing things manually like this will naturally involve some variance, so we conducted five test sessions per GPU config. We then reported the median of the average and minimum frame rate values from all five runs. The frame-by-frame results come from a single, representative test session.
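
For clarity, that median-of-five-runs reporting boils down to something like this, with hypothetical FRAPS numbers rather than our actual data:

```python
# Minimal sketch of the median-of-five-runs reporting described above.
# The frame-rate values here are hypothetical placeholders, not review data.
from statistics import median

avg_fps = [88.2, 91.4, 89.7, 90.1, 87.5]   # average FPS from each of five runs
min_fps = [61, 64, 63, 60, 62]             # minimum FPS from each of five runs

print(median(avg_fps), median(min_fps))    # 89.7 62
```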

We had all of MW2‘s image quality settings maxed out, with 4X antialiasing enabled, as well.

Any of these graphics cards will run this game quite well, even at this four-megapixel display resolution. In terms of average frame rates and bragging rights, the 5970 is clearly the fastest dual-GPU graphics card, ahead of the GeForce GTX 295. However, only three FPS separates the two cards’ lowest frame rates, and since we’re near the 70 FPS mark, playability obviously isn’t an issue.

Overclocking the 5970 to 850MHz/1.2GHz takes its average FPS just beyond the dual GeForce GTX 285s, but its minimum frame rates are a little lower.

Borderlands

This is my favorite game in a long, long time. I played through Borderlands on a variety of graphics cards, mostly from the Radeon HD 5000 series, and it generally ran quite well. However, I noticed certain areas where frame rates tended to dip on the Radeons, and playing with a GeForce seemed to feel smoother. I figured this would be a good test for the 5970, so I picked one of those spots in the game to use in testing for this review.

This is me finding a sore spot and poking it, of course, so it’s not entirely fair on some levels. Still, games these days tend to run exceptionally well on just about anything, and I wanted to explore a case where the 5970’s additional GPU power had a chance to make a difference.

I tested by playing through the “Krom’s Canyon” level of the game and recording frame rates in 60-second chunks with FRAPS. Since I was playing through the whole level and not just repeating the same thing over and over, I took more samples, recording eight sessions per GPU config. We tested at 2560×1600 resolution with all of the in-game quality options at their max. We couldn’t enable antialiasing, though, since the game’s UT3 engine doesn’t support it.

Obviously, the GTX 295 handles this level of Borderlands better, especially when you’re looking at avoiding low frame rates. That’s a result that comes from a lot of playing in varied parts of the level, though. If we look at the very first one of our testing sessions, which occurs in an area that gives the Radeons particular trouble, the problem is easy to see:

Even the 5970’s frame rates drop to about 30 FPS in certain areas. I’m not sure what’s happening here, but my best guess is that it may have something to do with the way the game and the graphics driver determine which objects are visible. Krom’s Canyon is a long and narrow level. Frame rates seem to drop when you look straight down the canyon, even if your view is largely obstructed by the terrain. Yes, frame rates are still borderline acceptable, but you can definitely feel it when the game slows down.

Whatever’s happening here, the 5970’s prodigious graphics power isn’t sufficient to overcome it. Meanwhile, Nvidia seems to have avoided this problem.

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

The 5970 handles this older, more familiar game more gracefully, easily outpacing the GTX 295 even without the help of overclocking. At its default clock speeds, though, it's slower than two GTX 285s in SLI at 2560×1600.

Resident Evil 5

I’m shocked to say that this may in fact be the best-looking PC game to date, and it has a very nice built-in benchmarking tool. We used the “variable” benchmark that takes in-game AI and the like into account, and we ran the test five times on each config to account for that variability. Naturally, we had all of the in-game quality options cranked.

The cards here finish in the same order they did in Far Cry 2 and Modern Warfare 2 at 2560×1600. Even the relative performance levels are similar.

Left 4 Dead 2

We tested the demo for Left 4 Dead 2 by recording and playing back a custom timedemo comprised of several minutes of gameplay.

This one upsets the apple cart a bit, as the Radeons take a pronounced and consistent lead over the GeForces. The overclocked 5970 is nearly twice as fast as two GeForce GTX 285s in SLI. Looks like Nvidia has some SLI performance scaling issues here. Two GTX 285s are definitely faster than one, but not by as much as one would expect. Obviously, AMD’s multi-GPU solutions scale much better.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution. We have a broader set of results here because we’ve included those from our Radeon HD 5700 series review. Although we used older drivers for most of the cards in that review, we don’t expect that to affect power consumption, noise, or GPU temperatures substantially.

The 5970 does indeed achieve relatively modest power consumption at idle, much less than the GeForce GTX 295 or the 4870 X2. Not bad.

At its default speed and voltage, the 5970’s power draw under load is surprisingly modest—lower than the GTX 295’s and considerably less than the 4870 X2’s. One wonders whether AMD could have pushed a little harder on clock speeds, looking at these numbers. Then again, the overvolted and overclocked 5970 draws an awful lot of power.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The big boost in power consumption that comes from overclocking the 5970 translates pretty directly into higher noise levels. The 5970’s fan controller responds to the additional load by cranking up the blower speed, and the difference is noticeable.

Just for kicks, I tried manually setting the 5970’s blower at max speed and taking a reading. The result: a painful 70.5 dB. Thank goodness it rarely needs to go there.

Fortunately, the 5970 is nice and quiet at idle, unlike the rather annoying hiss of the GeForce GTX 295.

GPU temperatures

For most of the cards, we used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, we recorded temperatures on the primary card. However, GPU-Z didn't yet know what to do with the 5700- and 5800-series cards, so we had to resort to running a 3D program in a window while reading the temperature from the Overdrive section of AMD's Catalyst control panel.

At its stock speeds, the 5970 sticks with AMD’s recent tradition of keeping GPU temperatures fairly low, as these things go. When overclocked, though, the 5970 gets very hot. The overclocked config was fairly stable for us in testing, but it’s definitely gonna run warmer.

Conclusions

What to make of the Radeon HD 5970? On the one hand, it’s clearly the fastest graphics card you can plug into your PC, easily outpacing the previous champ, the GeForce GTX 295. This one seemed like an obvious winner, given that it has two of AMD’s excellent Cypress GPUs onboard. That means all sorts of goodness we haven’t really discussed yet in the context of this review, including a DirectX 11-capable feature set (which no Nvidia GPU has yet delivered) and the highest texture filtering quality on the market. At its default speeds, the card’s power draw, thermals, and acoustics are all quite good, too. There’s lots to like here.

AMD’s new product stack

On the other hand, the 5970 is, well, extreme in various ways. The 12″-plus board length is the most obvious example, followed closely by the need to overvolt and overclock the product—and void your warranty—in order to achieve its true potential. Once you’ve done so, the 5970 will draw more power than it probably should through its aux power connectors, and it will be one hot and loud card, too.

Then there’s the inescapable fact that even today’s best-looking games don’t require anything more than a single Cypress GPU to achieve smooth-as-glass playback at a four-megapixel display resolution. To take full advantage of the 5970’s sheer power, you’ll need to do something extreme, like using an Eyefinity multi-monitor setup or turning on AMD’s supersampled antialiasing. Or you’ll have to justify the purchase in the name of future-proofing, aiming for the fabled Games That Don’t Exist Yet as your main target. I’m in no way against having this measure of graphics power on tap, but the age of console ports has endured another Christmas season, as our new-title-fortified benchmark suite has demonstrated. PC graphics have just outstripped the requirements of today’s games, and game developers have responded by releasing sequels with similar graphical requirements several years in a row now. I’m hopeful for the upcoming DX11 games, but skeptical they will make good use of dual Cypress chips.

Then there’s the $599 price tag and the murky availability picture.

Add it all up, and the 5970 isn’t a bad product by any stretch, but it is very much a niche product in a way its predecessor, the Radeon HD 4870 X2, was not. One wonders how many people will buy these things.

Then again, I’ve never been any good at figuring out the calculus of extremeness, as I’ve said. There’s no doubt the 5970 is the most extreme graphics card you can buy, and for some folks, I suppose, that will be all that matters.

Comments closed
    • Ratchet
    • 10 years ago

    Have I missed something or are there no tests comparing two 5870s in Crossfire (and 5850s as well)? Seems like that would have been an obvious comparison point here.

      • Damage
      • 10 years ago

      ’twas an error in my specs table. Sorry about that. I’ve corrected it.

      • bfellow
      • 10 years ago

      Looks like I’ll just have to run my PC without hard drives since I would have to remove my hard drive bay.

    • Ikeoth
    • 10 years ago

    Hey I noticed that the system bus speed for your Test Bed system’s Core i7-965 Extreme 3.2GHz CPU is rated at QPI 4.8 GT/s (2.4GHz). Yet Intel specifies it should run at 6.4GT/s. (http://processorfinder.intel.com/Details.aspx?sSpec=SLBCJ) Are you guys changing this on purpose or is this a mistake?

      • Damage
      • 10 years ago

      ’twas an error in my specs table. Sorry about that. I’ve corrected it.

    • indeego
    • 10 years ago

    Can it render this in real-time?
    http://www.youtube.com/watch?v=YG5qDeWHNmk I already know it can't. :)

      • Krogoth
      • 10 years ago

      I don’t think most systems are capable of rendering 5000+ actors on the screen with a playable framerate.

    • sammorris
    • 10 years ago

    “there’s the inescapable fact that even today’s best-looking games don’t require anything more than a single Cypress GPU to achieve smooth-as-glass playback at a four-megapixel display resolution”

    I STRONGLY disagree with this statement. Even with two HD4870X2s (I daresay stronger than a single Cypress GPU) there is much room for improvement.

      • Krogoth
      • 10 years ago

      For the most part, the reviewer’s statement is true.

      There aren’t really that many games that can make the 5870 beg for mercy. 5970 handles anything you can throw at it without a hitch.

        • sammorris
        • 10 years ago

        At any resolution up to 1920×1200 inclusive I’d agree, a 5870 is sufficient, but at 2560×1600 that just isn’t true. Add 4xAA and you get seriously low frame rates.

          • shank15217
          • 10 years ago

          Why would you add 4xAA at that resolution anyways?

            • Voldenuit
            • 10 years ago

            Because jaggies will show up even at high resolutions.

            And if someone has a 30″ monitor and a 5970 (or even a 5850/5870), you can bet they’ll want the best image quality they can get.

            • MadManOriginal
            • 10 years ago

            Visible jaggies have less to do with resolution than dot pitch for LCDs.

            • Meadows
            • 10 years ago

            X4 Antialias is always welcome though. If I can see jaggies on a 21″ screen at 2048×1536 (low dot pitch), and my screen is less sharp than an LCD even, then I can imagine that it’s an issue on pretty much any LCD.

            • MadManOriginal
            • 10 years ago

            Well yeah if you can do AA it’s always welcomed but your statement doesn’t make sense to me, either you were just being silly or I’m misunderstanding. If your screen is ‘less sharp’ (which I read as larger dot pitch – maybe you mean your CRT is ‘fuzzier’ or less focused?) than another screen, then jaggies will be i[

            • Meadows
            • 10 years ago

            By “less sharp” I meant “less sharp”. Nothing to do with the dot pitch. Even the best CRT screens don’t give a screen as sharply defined as LCDs do, other advantages notwithstanding.

            I get what you’re saying, but I’m saying that if jaggies /[

            • MadManOriginal
            • 10 years ago

            Yeah whatever dude, you don’t even name you computers so what you say is meaninglessssss!!!!!!!11

            • Meadows
            • 10 years ago

            I don’t need to, it’s “username”-PC by default so what do I care!

          • Krogoth
          • 10 years ago

          Have you even bother to read the review?

          5870 manages to obtain a playable framerate at 2560×1600 with 4xAA in almost all of the tested titles. The only title that managed to make it struggle was Crysis Warfare. The game is pretty brutal on all GPUs even the mighty 5970.

          • Krogoth
          • 10 years ago

          Damm double post.

    • adampk17
    • 10 years ago

    So I have a question, perhaps a dumb one. I’m sure someone will let me know if it is 😛

    Since more people play World of Warcraft than any other game why not include it in your benchmarks for new video cards? I understand the engine is dated but the new cards still add value to the game. 25 man raids can still bring good cards to down at times.

      • Meadows
      • 10 years ago

      WoW is sensitive to single-threaded CPU performance, and the more character models (or terrain distance) you have on screen, the harder it gets. The only way to stress the graphics in a consistent manner would be creating a TR guild, positioning 40 people somewhere outdoors with a good view, *[

        • adampk17
        • 10 years ago

        I have a e8400 at stock speeds and a GX260 216 running at 1680×1050 2x AA and with WoW’s internal graphics settings put at Ultra my frame rates are no where near maxed out. That’s because of my CPU?

        Seems like you could make a reliable benchmark by setting WoW to Ultra graphics settings and max view disatance and taking a 10+ minute flightpath in Northrend.

          • Meadows
          • 10 years ago

          g{

            • adampk17
            • 10 years ago

            Well, because I have V-Sync enabled I figured 60 FPS would be the most I could get.

            • Meadows
            • 10 years ago

            How about disabling it?
            It limits your hardware, even if you’re not hitting 60 fps.

            You /[

            • adampk17
            • 10 years ago

            I really hate the tearing I see with V-Sync off. What else can you do to combat tearing besides turning V-Sync on?

            • Meadows
            • 10 years ago

            Nothing.
            Frankly, I’m not bothered. I come from an FPS breed where a split frame’s up-to-date information is more valuable than bothering to sync frames, not to mention input lag.

            Just keep in mind that your graphics card is not limiting your game (even at maximum detail) due to its power and your low resolution, so if you want more oomph, then kick your CPU up a notch.

            • BoBzeBuilder
            • 10 years ago

            Why would you turn vsync off when your PC can handle it? Leave it on. 60fps is perfect for any game and I don’t see why you would need more. Hell, turn up the AA to 8x and you should still be getting a consistent 60fps.

            • Meadows
            • 10 years ago

            I believe he said he was _[

        • Bauxite
        • 10 years ago

        Or just run through dalaran on a high pop around 8pm server time while on a mount, pretty much the same effect.

        Really shows the power of a SSD and truly fast connection 😉 but not too much gpu load.

      • Farting Bob
      • 10 years ago

      Its bloody hard to get consistant, repeatable and comparable benchmarks from an online multiplayer. In fact, its damn near impossible. So many more variables to consider.

    • Rakhmaninov3
    • 10 years ago

    These graphics reach 9000

      • Meadows
      • 10 years ago

      Not yet, but it’s over 5900.

    • ProzacMan
    • 10 years ago

    Yawn…wake me when there is a need for this card. Or for just about any of the cards in this review. Back to my 8800 Ultra still chugging along!

      • OneArmedScissor
      • 10 years ago

      lol you bought an 8800 Ultra…

        • TurtlePerson2
        • 10 years ago

        Yea, it’s kind of ironic that he’s complaining about how silly it is to buy a high end card that isn’t needed and he’s running a high end card that wasn’t needed when it came out.

          • Meadows
          • 10 years ago

          You *could* find a use for an 8800 Ultra back in the day.

          But graphics technology has come a very long way since, while graphics loads have not.

            • khands
            • 10 years ago

            Yay Crysis?

            • ProzacMan
            • 10 years ago

            No actually what is really funny is I got the card for free! Nice of you to assume that I bought it though. I did however buy a 8800 GTS 640 that is in another computer and same deal haven’t had a game yet that would make me consider paying for any of the new cards NVIDIA or ATI.

            Edit: Damn this was meant as a reply to OneArmedScissor

            • ProzacMan
            • 10 years ago

            No as I didn’t have the card then, I actually got it for free! I did buy an 8800GTS 640 and have found that to be the same way, no real reason to upgrade from that either. Works just fine for everything I through at it. Could I get some more frames per second sure, but why?
            At some point it is an investment of diminishing returns. Especially when the technology/games that would make these cards relevant isn’t even here yet and there most likely be a newer faster card out by that time. Gotta love the endless upgrade cycle AMD and NVIDIA want everyone to be on.

            • swaaye
            • 10 years ago

            I know where you’re coming from. It feels like we’re partly stuck in a perpetual 2006 level of graphics technology thanks to MMOs and the consoles and rising development costs.

            • clone
            • 10 years ago

            I was happy with my 3870 I paid $64 for it…… but I found a buyer who paid $50 for it so I pony’d up to a 5770……. I was talking to a friend about it and laughing, “it’s comical that my 5770 is faster and more feature rich while being a puny mid range card.”

            paid $159.00 for it a few weeks ago.

            • swaaye
            • 10 years ago

            Yeah you can get some monster GPU power at the $100-150 level these days. More than enough for any PC game out there…

            I bought a 3850 when they first came out. They were ~$180 at the time (3870 ~$240) and people were raving at the incredible value because the 8600GTS was ~$250 and 3850 just blew that thing away while using less power at the same time lol. 3870 was faster than 2900 XT most of the time while using 100W less power heh. 8800GT was selling at around $300 at the time.

            Now that we have cards like the 5770 at $150 and 4850s for $100 you can’t go wrong down at the “low end”.

    • Thresher
    • 10 years ago

    Long card is LOOOOOOOOOOOOOOOONG.

    • Convert
    • 10 years ago

    It would be nice if they could just disable the second GPU and memory while sitting at the desktop to conserve power.

      • wira020
      • 10 years ago

      In idle, it does disable the second gpu.. it’s not explained here, but other sites mention it..

        • Meadows
        • 10 years ago

        g{

    • burntham77
    • 10 years ago

    l[

    • Freon
    • 10 years ago

    About what I would expect from a 5870 X2. Scaling could be better, noise is slightly disappointing, but it’s the fastest single physical card video card available. If you’ve got $599 burning a hole in your pocket, here ya go.

    • asdsa
    • 10 years ago

    Where is Crysis? GRID could have also been added. I’m a bit disappointed of the game selections.

      • flip-mode
      • 10 years ago

      Crysis would have been nice to compare past results. Grid – I don’t care. Sooner or later some fresh games have to be put in. It’s really nice not to see HL2E2 in there anymore.

      The next big benchmark game will be Stalker: Call of Pripyat

        • yogibbear
        • 10 years ago

        The ONLY reason why HL2EP2 is not there is because L4D2 is out.

        Same engine. Just updated a bit. You fail at being on point.

        Though i agree with the Crysis and Stalker:CoP points…

        😀

          • BoBzeBuilder
          • 10 years ago

          No one plays Stalker. We need Crysis, Battleforge, maybe Shattered Horizon and Dirt 2.

    • bLaNG
    • 10 years ago

    Yet another review of another site that misses to test Eyefinity. Even Anand did not include any benches, although he had an 3-monitor rig. Sorry guys, but I think this is a key feature of all new AMD/ATI cards, and especially an enthusiast piece as a 600$ graphics card is quite likely to run with such a config. Please, with sugar on top, write a sweet request to a monitor maker of choice and ask for some display port monitors to test. I mean, who really cares if Hawk is running 115 or 100 fps in 2560×1600. Bring this puppy to the limit!

    Another thing – why not bring a GPU-accelerated software into the benchmark course? Some h.264 encoding or similar? I guess a lot of people are interested how cards perform in such tasks.

      • stdRaichu
      • 10 years ago

      I’ve not yet seen any GPU H.264 encoders that could hold a candle to even the crappiest CPU-based encoders yet. There’s a few video apps that’ll do things like resizing/resampling on a GPU, but that’s so basic a task compared to modern 3D that they’re all much of a muchness really.

        • MadManOriginal
        • 10 years ago

        Yeah I’ve been disappointed with the advance of GPGPU programs I might actually use. The quality of video encoding just isn’t there, it’s good for mobile conversion or reformatting and then only because those things don’t need the ‘best’ quality but that’s about it .

    • yogibbear
    • 10 years ago

    Great review. Couple of questions…

    1. Why select COD:MW2 for a GPU test? (This is a console port)
    2. Why not select Crysis or Crysis: Warhead?
    3. Hopefully when Dirt 2 comes out in joins your standard benchmark list
    4. What about Shattered Horizon?… (Yeah i know… it’s futurefail)

    In fact… benchmark games for me:

    L4D2
    Stalker:CS
    Crysis
    Dirt 2 (eventually, otherwise GRID was good as a benchmark previously)
    AC2 (eventually)
    plus any UE3 game… so Borderlands is OK.

      • Freon
      • 10 years ago

      #1, 2 – Popularity and age, relevance. MW2 is relavent because so many people play it. Console port or not. Crysis and Warhead are aging. What’s wrong with Farcry 2? It is relatively popular (probably more so than Crysis) and a fairly recent release using similar technology.

        • yogibbear
        • 10 years ago

        Mmmm… well I disagree with your points, but I understand where you’re coming from.

      • Skrying
      • 10 years ago

      Why benchmark games no one plays? There’s no historical indication that Crysis/Warhead are good indicators for the future of gaming performance. If anything Crysis might be one of the least useful games used constantly in benchmarks.

        • swaaye
        • 10 years ago

        Well it’s all subjective. I don’t play any of the games reviewed here, and that’s also why I’m still on a 8800GTX. The reviews of these new cards tell me to save my money simply because of which “new awesome” games they showcase.

        TechPowerUp has a long list of games. I do play a few of those. But, am I gonna upgrade to play older games better? No. I’ve gone down that road before plenty of times (for much cheaper than a modern high end card too) and it’s not worth the money in the end.

        I think FarCry 2 is perhaps the best example of how corrupt/retarded game reviewers are today because the game is shit. Crysis/Warhead may be old but they still look (subjectively) better than the “new” shooters out there. What’s tragic about seeing them in new tech reviews is that they are 2-3 years old now and yet they are the most visually advanced games to test with!

    • alphaGulp
    • 10 years ago

    It really seems like it’s time for some physics-based games to come out. What else are people to do with all that extra processing power?

    • Meadows
    • 10 years ago

    The intro was well done.
    Except I really doubt Fatality is any sort of a cool person.

      • Fighterpilot
      • 10 years ago

      Jealousy at its finest…that would be in comparison to who…you?

        • Meadows
        • 10 years ago

        Do tell me one reason for being jealous.

        • ssidbroadcast
        • 10 years ago

        Eh, I really think Scott got the better bargain out of life.

          • Meadows
          • 10 years ago

          You haven’t seen anything yet.

            • indeego
            • 10 years ago

            He’s seen flames literally jumping out of his case. Not many of us can say the same, and live to tell about it.

      • TravelMug
      • 10 years ago

      I met him a few times and he was actually pretty cool. As in no douche attitude or anything. Seems friendly. Might be of course that he’s constantly exhausted from all that gaming, that was a few years back when his hardware branding business was really kicking off so he was on the road to promote the stuff.

        • Meadows
        • 10 years ago

        I sure as hell wouldn’t have any willpower left to stay “friendly” after constant energy drinks and gay clothes with hair gel (if what Scott says is accurate), but then again, it sounds better than a modem administrator.

        Lol.

      • flip-mode
      • 10 years ago

      The intro was well done, indeed.

      • Joshvar
      • 10 years ago

      I go way back with Mr. Fats as well – and I don’t know where all the attacks on the guy come from. Maybe he’s just more comfortable around folks he’s known for a while, but I’ve never seen him go into this supposed “douche” mode that so many ppl talk about 😛

    • thebeastie
    • 10 years ago

    I think the long size was deliberate. It was to ensure that people that buy a proper sized case that will help insure proper cooling and give better odds the video card not be sent back due to malfunction over heat.

    Most gamers are ignoring cases that have full dust filtering, adjustable fan speeds etc for in favor of simply a clear plastic window to see all their crap inside, these are the people you don’t want to the sell a 5970 to as it might just over heat and they will send it back.

    Cases like the Storm Sniper black edition are perfect for a card like this with the large fans and even the side intake fan being fully dust filtered.
    Pushing air into the case more then having it sucked out will give positive air flow and ensure good cooling operation.

    Are gamers falling over them selfs for this case? no way they just think it is pointless compared to something with a clear plastic window.

    Also good call on showing benchmarks on L4D2 that was interesting. Also looks like you are on to something with the non scaling of Nvidia GPUs, I guess there is going to be a driver fix for that sooner or later.

    For people saying these cards are stupid I think you are just plain wrong, they don’t use more power then a setup in CF and they save a PCIE slot which some people find ultra important, far more important then just about anything else they will be doing with their computer.

    Heaps of people go out and get a extreme PCIe wireless card, a good sound card and a TV card few other things and boom no more PCIe slots and they often have to take something back to the PC shop with their sorry excuse of being out of slots.

    No one is going to build a new PC and buy old crappy PCI stuff they want the PCIe stuff if it just for performance or the ability to use it on the next PC or sell it.

    Dual GPU cards are very useful at the end of the day and again they NEVER use more power then a CF or SLI setup. You guys are just undermining your selves, because you are clearly wrong just look at the charts ffs!

    • Voldenuit
    • 10 years ago

    Can we add some 5850 CF figures in the benches? Anand’s numbers show the 5850 CF matching or exceeding the 5970, and a 5850 (or pair of them) is much more likely to fit in existing cases than the 5970.

    It would also be interesting to see why the 5850 CF config is scaling better. Anand tested on an X58, but speculated that it is the extra PCIE bandwidth to the 5850s that gave them the legs – something that would be missing in a P55 test bench…

    PS Can anyone point out an Eyefnity compatibility list? Having peripheral vision would be a godsend in a game like L4D(2).

    • d0g_p00p
    • 10 years ago

    Queue up the comments on more money than brains.

    edit: already done on second post, well done!

    • Fighterpilot
    • 10 years ago

    This Overkill turns GTX295 into Roadkill 😉
    Nvidia must be thrilled to see its $600 price tag however as they will feel right at home there.
    I give it 7.5/10

    • indeego
    • 10 years ago

    Why does Anand’s reviewer overclock his test bench *[

      • Meadows
      • 10 years ago

      Because a faster CPU means less instruction latency and a chance to extract about as much from a peripheral as possible.

        • indeego
        • 10 years ago

        Yeah but it skews temps/watt readings, and reduces the ability of normal sane people to extrapolate meaningful scores based on their own i7’s. Just seems weird, I'd think you'd want more control/consistency in benching.

          • Meadows
          • 10 years ago

          You’d probably want to bark up Anand’s tree then, and request a not-overclocked test suite, or one run of each case.

          • WaltC
          • 10 years ago

          Agreed. I’d also like to see less attention paid to the highest resolutions requiring the largest monitors. I love my 28″-er, which is the largest monitor I’ve ever owned, but maxes out at 1920×1200–which I’m completely happy with.

          Some games–like Dragon Age: Origins–won’t really let me run at even 1920×1200, because Bioware goofed up royally in not building in font scaling for its GUI, so I find that 1600×1200 is the highest resolution I can use while comfortably reading the text in the game–which is sort of *required* to play the game with any degree of enjoyment…;)

          Frankly, I don’t think a 3d-card review is much of a review if it doesn’t feature at least every resolution from 1280×1024 and up in terms of testing. I mean, there *is* such a thing as FSAA which is most useful the lower the resolution, imo.

          Besides that, there are people with smaller monitors who prefer wide screen and play at weird, seldom review-tested resolutions of ~1344×1200 (or something like that, anyway), and sometimes weird things happen at those resolutions that don’t happen at the more “popular” (re:highest physically possible on Earth) resolutions. Reviews could be a lot better and more informative with just a bit of extra work, it seems to me.

            • indeego
            • 10 years ago

            You have a Hans-G? Man 1920×1200 at 28″, I can’t imagine what that looks like. That is typically a 24″ res. That 4″ difference likely is why your fonts look so whack.

            • Lazier_Said
            • 10 years ago

            Unscaled fonts that are too small with .31mm pixels on the big Hanns-G will be even worse on a 24″ display at 0.27mm.

            • indeego
            • 10 years ago

            The smaller pixel pitch the better the picture, font, graphics quality, in general. I’m going to go for a lower pixel pitch on a 24″ LCD versus a larger display 28″ every time, resolution being the same. This is why the Hanns-G is so cheap for its size, the pixel pitch is off the scale of what monitors this size normally are. (Well it's a compromise, I'm not saying it's good or bad. certainly value could be seen in getting a larger display for less money.)

      • Krogoth
      • 10 years ago

      Because, the reviewer had an i7-920. He had overclocked it to match the speed of the i7-975.

        • Voldenuit
        • 10 years ago

        Yeah, I don’t think the sort of people who buy a 5970 are going to be running stock speeds on the OEM cooler, right? ^_^

    • anotherengineer
    • 10 years ago

    massive e-peen right there lol

    wouldnt fit in my sonata though, unless I pulled out my disk grinder with a zip disk and cut out part of the hard drive bay 🙂

    • crazybus
    • 10 years ago

    It’s interesting to see the difference in power consumption between Scott’s and Anand’s tests. They’re drastically different. I believe Anand used OCCT for his test, which may be pushing the cards closer to their theoretical limits. Scott’s use of L4D is probably more representative of real-world power draw.

      • Scrotos
      • 10 years ago

      It’s measured at the wall socket, so wouldn’t the mobo and all the other components come into play?

      Or are you referring to the idle/load difference in power draw rather than the total?

        • crazybus
        • 10 years ago

        I’m referring to the relative difference between the Radeons and the Geforce cards. In the Anandtech test, the Radeons use much more power. Obviously the absolute numbers between the systems are different and not comparable.

          • crazybus
          • 10 years ago

          Here are the load power numbers I was referring to:

          TR:
          5870 – 290w
          GTX285 – 330w
          5970 – 382w
          GTX295 – 407w

          Anandtech:
          5870 – 401w
          GTX285 – 384w
          5970 – 529w
          GTX295 – 460w

            • DrDillyBar
            • 10 years ago

            529w IS a bit outside the spread.

            • Voldenuit
            • 10 years ago

            TR uses L4D as the ‘stress test’ whereas AT uses OCCT. The former is a pretty light load on modern GPUs, and the latter is an unrealistically high load for real world scenarios. The most likely real world power draw will lie somewhere in between the two.

            • Damage
            • 10 years ago

            Not quite right. We tested power draw with a selection of different games, and L4D had higher power draw than any of them, easily. That’s why we chose to use it for testing–it was the highest power draw we saw in real-world use. L4D is not a “light” load in the thermal/power sense. Don’t confuse shader complexity with power draw. Oftentimes, the simplest shader can light up the most transistors at once.

            • Voldenuit
            • 10 years ago

            Cool. I was wondering why you guys chose such a low-requirement game – good to know. I’m still pretty sure that most real world peak draws would be lower than using OCCT, however :p

      • FuturePastNow
      • 10 years ago

      Well, TR’s test system has an i7 965 at stock speed while Anand is using an i7 920 overclocked to 3.33GHz. And they’re using different motherboards. Power supply efficiency could also be drastically different between the two systems.

      So it’s not an exactly even comparison.

        • crazybus
        • 10 years ago

        If you’d bother to read what I said in #106, you’ll see I’m not talking about absolute power use, which as you said is obviously not comparable. I’m talking about the relative difference between the Radeon and Geforce cards in the tests.

    • ssidbroadcast
    • 10 years ago

    WOW. Thanks for adding the RE5 Benchmark, Scott. That’s way cool. I’ll have to tweak my system to see if I can get any numbers close to any of those.

    • tomc100
    • 10 years ago

    Why not do a test of dx 11 unigine heaven demo? Even though it’s not a game it’s a useful tool to see how well dx 11 hardware perform with dx 11 software.

      • Meadows
      • 10 years ago

      That’s what I’d like to see.

    • yokem55
    • 10 years ago

    Me thinks that video cards are finally catching up to the Bitchin Fast 3d from 10+ years ago. See here for reference: http://www.russdraper.com/images/fullsize/bitchin_fast_3d.jpg

      • OneArmedScissor
      • 10 years ago

      Boy howdy, that gave me quite a chuckle!

      • XaiaX
      • 10 years ago

      Wow, I remember when that was new.

      That makes me think of a couple things.

      1. Heatsinks and Fans used to be obscure tech for video cards, used only by the most ridiculous overclockers trying to bump up their cards by a few MHz.
      2. “256 MB of RAM” was intended to be a preposterous amount of RAM that didn’t make any sense at all for a video card.

      My my, how things have changed.

      • jstern
      • 10 years ago

      I recognize that guy in the pic. That’s Don Francisco, I think. I thought it was a joke, but now I see it was a real product.

      • DrDillyBar
      • 10 years ago

      Gold. LMNOPRAM.

      • FireGryphon
      • 10 years ago

      425 BungholioMarks don’t lie! 😆

      • burntham77
      • 10 years ago

      That is funny in spanish!

        • indeego
        • 10 years ago

        Que es divertido!

        I’m here–as is Google Translate–all week.

    • balzi
    • 10 years ago

    One thing I’d like Damage to look at is putting ambient temps as a note on the temperature graphs and details.
    I saw the 5970 sitting at 90°C and I thought: hmm, I wonder what the room temp was. It’s late autumn over there... I wonder what the weather was like.
    The point really is: in summer, if my house is 28°C, will this card get closer to 100°C and die?

    Correct me if I’m wrong, but the laws of thermodynamics mean that the only numbers that are useful are those that are relative to the ambient.

    But then, maybe the intent is only to show how the cards differ amongst themselves, and the absolute figures are not important. Surely, though, there is an absolute limit that these things will run at, and if it’s 100°C, then someone whose house is sometimes at 40°C will not want anything that exceeds 55°C above ambient.

    What do you think, Scotty boy?

      • Skrying
      • 10 years ago

      I’m sorry for this comment if it seems… stupid… as I have no honest knowledge of how the other half of the world (meaning anything outside of the US, Canada, and most places in Europe) lives (besides some ignorant ideas, I’m sure), but… what type of person purchases this graphics card and the corresponding computer and does not have air conditioning?

        • Prion
        • 10 years ago

        So… I have to run the aircon (it’s a split unit, not central air but not your classic Motel 6 window unit either) every time I want to play a game from May through October? I’m not the target market for the 5970, to be sure, but having that kind of information is interesting and relevant to me.

          • balzi
          • 10 years ago

          Having ambient temps would provide a clearer picture of what to do in the following circumstances.

          Let’s say the reviewer was in a cooler climate, down in his basement with a heater on his feet to keep warm, and the ambient was 15°C. If his chip is running at 90°C under load, then that’s very interesting to the guy who lives in a house where his room is generally up at 25-30°C in summer even with the air conditioning on.

          It’s only a very slightly exaggerated example. My computer, for example, has case temperatures ranging from 18°C in winter up to the mid 30s in summer.
          Maybe after it’s been running for a while the 18 will come up to the mid 20s.

          But it’s still interesting. Does that seem like a more valid request now, poster #27?

      • JustAnEngineer
      • 10 years ago

      Two comments:
      1) Damage probably keeps the underground benchmarking sweatshop in a fairly narrow range of temperatures year-round.

      2) The fans on these components are thermally controlled. When they get hotter, the fan spins faster to maintain a set temperature.

        • balzi
        • 10 years ago

        Well then it’s even more interesting: what is that narrow range? 20-22°C?

        And if the fans spin up more, then the noise is going to be higher.

        So the review’s stated noise of 56 dBA is conditional on an ambient of ~20°C. So now I know that if the ambient is 30°C, my fan will be even louder.

        I still say it’s worth mentioning!!
        (Perhaps he mentioned it when he reviewed the GeForce2 MX and said he’d let us know if he ever upgraded the air-con or heater 😉)
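
For what it’s worth, here is a minimal sketch of the rise-over-ambient arithmetic this sub-thread is describing. Every number in it is a hypothetical placeholder rather than a figure from the review, and the naive projection deliberately ignores the fan-speed compensation JustAnEngineer points out.

```python
# Minimal sketch of the "rise over ambient" reasoning from the thread above.
# All numbers are hypothetical placeholders, not measurements from the review.

REVIEW_AMBIENT_C = 21.0   # assumed lab ambient during testing
REVIEW_LOAD_C = 90.0      # GPU temperature reported under load

# The quantity the thread argues actually matters: the rise over ambient.
delta_over_ambient = REVIEW_LOAD_C - REVIEW_AMBIENT_C   # 69 C in this example


def naive_projection(ambient_c: float) -> float:
    """Project load temperature at a different room temperature, assuming
    the same rise over ambient (i.e., an unchanged fan speed)."""
    return ambient_c + delta_over_ambient


for room_c in (21.0, 28.0, 40.0):
    print(f"Ambient {room_c:4.1f} C -> projected load temp "
          f"{naive_projection(room_c):5.1f} C")

# In practice the card's thermally controlled fan spins faster as ambient
# rises, holding the GPU closer to its target temperature at the cost of
# extra noise, which is the fan-control point made above.
```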

    • AmishRakeFight
    • 10 years ago

    Cheer up, Scott, his fashion sense is a bit suspect.

    • Tamale
    • 10 years ago

    I’m just laughing to myself that even this monster of a card will be a worthless PCB in ~10 years’ time. 😀

    As always, excellent review.

      • Kaleid
      • 10 years ago

      I doubt it takes even 10 years.

      • khands
      • 10 years ago

      try 3 to 5.

    • OneArmedScissor
    • 10 years ago

    Two oh so simple, yet oh so important, reasons this review is awesome, even though this card means absolutely nothing to me:

    1) Includes overclocked test runs.

    2) No Crysis.

    THANK YOU! Please, please, please let this be the standard from now on.

    • xtremevarun
    • 10 years ago

    This card is much bigger than mobos!

    • DrDillyBar
    • 10 years ago

    That’s a lot of Graphics Card!

    • lethal
    • 10 years ago

    Why wasn’t Crysis tested? L4D 2 is fun and all that, but it’s barely more than a screen saver when it comes to pushing these video cards ;).

      • OneArmedScissor
      • 10 years ago

      Because Crysis is a turd that’s been inappropriately skewing benchmarks and people’s perceptions for the last two years and counting.

      • FubbHead
      • 10 years ago

      Yeah, I was thinking the same. It’s still one of the best-looking games out, and the question always needs an answer: does it run Crysis?

      • tomc100
      • 10 years ago

      Because it’s poorly optimized and unfairly optimized for Nvidia cards. Furthermore, DX10 is broken, which is why most games that use it have unacceptable drops in frame rate when AA is enabled. Blame Nvidia for not supporting DX10.1, which corrected those problems and made things much more efficient, which was the purpose of DX10.

        • shaq_mobile
        • 10 years ago

        or because… it couldn’t run it?

          • tomc100
          • 10 years ago

          Dude, my 4890 can run Crysis at 1080p with 2x AA. You might as well ask why they didn’t test Saints Row 2 or GTA IV. If you want to see a Crysis benchmark, go to Tom’s Hardware or YouTube. It’s unnecessary because it’s poorly optimized, period.

    • SomeOtherGeek
    • 10 years ago

    To the people of TR:

    I have a concern here. On the first page, you show 4 video cards. My point is this: WHY WERE THESE NOT PART OF THE PRIZES? I bet the card second from the right was mine!

    Very disappointed TR reader.

    BTW, I was just kidding! As always, great review. Now you need to get more personal and tell us how you really feel about this card. Good weapon? Great only for bragging rights? Or a must-have-or-you-die card?

    • donkeycrock
    • 10 years ago

    I love TR, but the Anand review was better this go-around. Let’s try to get some more pep into your reviews next time. And it was late. Just my honest opinion.

      • SomeOtherGeek
      • 10 years ago

      Well, you are full of dockeycrock! 😉 I didn’t notice, and I think it was good timing cuz it came out the same day as availability.

    • Mystic-G
    • 10 years ago

    If it shot out flames I’d be more impressed. It seems more like ATI pushed this out just to be able to say they currently have the best graphics card.

    It’s extreme in all categories. A little too extreme for most.

      • shaq_mobile
      • 10 years ago

      Apparently you’ve been sleeping every time either team has done this each generation. Someone always manages to put out a card for the sake of pleasing Prime1. 😉

        • Mystic-G
        • 10 years ago

        Well, I’m aware of that; it just seems like a waste of development to be able to say such an irrelevant line. But it means something to some people, I guess.

          • shaq_mobile
          • 10 years ago

          lol your mom is poorly optimized

            • LovermanOwens
            • 10 years ago

            I Lol’ed at work

          • ew
          • 10 years ago

          I doubt these dual-chip-on-a-stick cards are that hard to develop. From a software point of view, and from a chip-design point of view, it’s the same as a dual-card configuration. The difficulty is really only in the PCB and cooling design.

    • Buzzard44
    • 10 years ago

    Wow, talk about performance.

    Sure, I’m not willing to spend $600 on a video card, but dang, this is wicked fast. Maybe 2 of them in CF could play Crysis.

      • Jigar
      • 10 years ago

      LOL, you still want to try that?

      • Mystic-G
      • 10 years ago

      O_O it can run crysis?

        • SomeOtherGeek
        • 10 years ago

        Good point. TR didn’t test it, so I guess it couldn’t run it… Um.

          • kc77
          • 10 years ago

          There are a few sites that did. Try Far Cry 2 at 98 FPS with 8X AA at 2560×1600. O.O

            • SomeOtherGeek
            • 10 years ago

            Not FC2, Crysis…

          • khands
          • 10 years ago

          I doubt they had two to Xfire.

      • WillBach
      • 10 years ago

      techPowerUP has a benchmark of 5970s in a CrossFire configuration running Crysis: http://www.techpowerup.com/reviews/HIS/Radeon_HD_5970_CrossFire/7.html Runs pretty well.

    • flip-mode
    • 10 years ago

    Eyefinity is disappointing (see anand)

    Unnecessary levels of performance.

    Too long.

    Availability problems.

    High price.

    Time should solve most if not all problems. Right now, not very attractive.

      • moriz
      • 10 years ago

        It is very attractive for use as a blunt weapon. I bet this sucker is very heavy to boot.

        Anyways, it’s bound to be attractive to some people; namely those for whom money is no object.

        • flip-mode
        • 10 years ago

        Yes, for those with cash and a full-ATX-length case, I recommend this card; no reason not to if you’ve got the coin.

          • shank15217
          • 10 years ago

          Actually, by the time DX11 becomes mainstream, it seems AMD might release their next-generation chip. Rumors abound that they expect to come out with another chip by the end of 2010, which really doesn’t bode well for a $600 card’s life.

      • indeego
      • 10 years ago

      I get these comments when I hit the corner all the time <.<

        • kvndoom
        • 10 years ago

        It is the John Holmes of video cards. “There’s no WAY my case is gonna take all that!!”

          It’s got length…

          • Scrotos
          • 10 years ago

          If only you had used “box” instead of “case”, that would have been FLAWLESS.

      • StashTheVampede
      • 10 years ago

      “Eyefinity is disappointing (see anand)”
      Drivers will mature.

      “Unnecessary levels of performance.”
      You pay a high price for the absolute highest levels of performance. This card clearly smokes the rest.

      “Too long.”
      Clearly an issue for many users. Time will yield a shrink of this.

      “Availability problems.”
      Not everyone is demanding a $599 card. I’m sure this will be sold en masse.

      “High price.”
      You get what you pay for. Highest-performing single card shipping today.

      • Vasilyfav
      • 10 years ago

      If I make a hole in the middle of the card, I can use it as a Batmobile for my Batman action figures. Instant buy!

      • SomeOtherGeek
      • 10 years ago

      Hey, this card has other purposes! It is huge, so it is perfect as wallpaper and will probably keep the room warm.

      Ordering 100 of these!

      • ssidbroadcast
      • 10 years ago

      point-mode is on flip.

        • BoBzeBuilder
        • 10 years ago

        Flip-point is on mode?

          • SomeOtherGeek
          • 10 years ago

          Flipping off at mode’s point?

      • willyolio
      • 10 years ago

      “Unnecessary levels of performance.”
        Ahahahaha, I can’t believe such a statement was made on a tech site.

        • flip-mode
        • 10 years ago

        It is an opinion.

          • OneArmedScissor
          • 10 years ago

          And a valid one, if you ask me.

          Basically all other components reached “unnecessary levels of performance” for desktop use, long ago.

          It’s a given that graphics cards will eventually hit that point, and considering how much more powerful they now become with each new generation, it’s already reaching a point where it’s a question of how much, not how fast.

            • MadManOriginal
            • 10 years ago

            This will undoubtedly get some crazy replies but I don’t care…

            If game graphics advancement for PCs weren’t being held back by consoles, there would very much be a ‘need’ for such continued speed increases.

            • flip-mode
            • 10 years ago

            No, that’s not a crazy statement at all – it rings true to me. Obviously it’s far into the realm of “if”. If we had _insert thing we don’t have here_ then we’d really need _insert thing we don’t currently have much need for here_!

            • MadManOriginal
            • 10 years ago

            Crazy replies, not a crazy statement 🙂

            • BoBzeBuilder
            • 10 years ago

            What a stupid statement. In a few years, devs will release games that’ll make the 5970 cry. How is more performance unnecessary when, in time, applications become more demanding?
            But as the MadMan said, consoles are holding things back.

        • SomeOtherGeek
        • 10 years ago

        You need to think about it a little. Think how much use one of these has right now. Does the GTX 295 have any purpose other than bragging rights? Does the world have 30″ 2560x1200 screens and play games all the time?

        I’ll be honest here, though: I would love to have one, just to say I had one and to do all the testing and folding.

          • OneArmedScissor
          • 10 years ago

          In reality, there are probably only enough of these to go around for the people with 2560×1600 monitors, who still have enough lying around after buying such a thing to afford one.

          But this will be a single card in probably a year, nonetheless lol…

            • flip-mode
            • 10 years ago

            Fermi could surprise us and be an effective competitor to a dual Cypress.

            • Farting Bob
            • 10 years ago

            This IS a single card, right now. No need to wait a year.

      • swaaye
      • 10 years ago

      Well, yeah, the dualie cards and the SLI/CF setups are rather bleh, if only from the software-annoyance angle. But hey, if they float someone’s boat, I don’t want to ruin the fun. I wonder if those people go from one generation to the next and keep buying new, hugely expensive multi-GPU setups, or if they are all noobs, or if they figure out that it’s not the best value and go single-GPU… 😀

      • asdsa
      • 10 years ago

      Aren’t we in a pessimistic mood.

        • flip-mode
        • 10 years ago

        “Aren’t we in a pessimistic mood.” Always. I can’t be your superman.

          • MadManOriginal
          • 10 years ago

          Can you be my hero at least?
