AMD’s Radeon HD 7970 GHz Edition

In the great game of one-upsmanship played by the two major graphics chip makers, one of the most prized goals is being first to market with a new generation of technology. AMD captured that waypoint late last year when it introduced the first 28-nm GPU, the Radeon HD 7970.

However, there are advantages to being later to market, because the competition has already played its hand. Nvidia smartly took advantage of that dynamic when it unveiled the GeForce GTX 680 several months ago. The new GeForce managed—just barely—to outperform the 7970, while consuming less power and bearing a price tag $50 lower than the Radeon’s. Nvidia couldn’t have pulled off that trifecta if not for the efficiency of its Kepler architecture, of course, but knowing the target surely helped in selecting clock speeds and pricing for the final product. The first reviews of the GTX 680 were uniformly positive, and the narrative was set: Kepler was a winner. Despite being second to market—or, heck, because of it—Nvidia had captured the mojo.

Then an interesting thing happened. Finding a GeForce GTX 680 card in stock at an online retailer became difficult—and the situation still hasn’t eased. Meanwhile, Radeon HD 7900-series cards appear to be plentiful. AMD’s spin on this situation is simply to point out that its cards are more readily available for purchase, which is undeniably true. Nvidia’s take is that it’s selling through GTX 680s as fast as it can get them—and that the problem is raging demand for its products, not just iffy supply. Since both companies rely on the same foundry (TSMC) for their chips, we suspect there’s some truth in Nvidia’s assertions. These things are hard to know for sure, but quite likely, the GTX 680 is outselling the 7970—perhaps by quite a bit.

If so, that’s just a tad insane, given how closely matched the two cards have been in our assessments. Evidently, capturing the mojo is very important indeed.

AMD’s answer to this dilemma is a new variant of the Radeon HD 7970 intended to reclaim the single-GPU performance crown, the awkwardly named Radeon HD 7970 GHz Edition. Compared to the original 7970, the GHz Edition has higher core (1GHz vs. 925MHz) and memory (1500MHz vs. 1375MHz) clock speeds, and it has a new “boost” feature similar to Kepler’s GPU Boost.

To understand the “boost” issue, we have to take a quick detour into dynamic voltage and frequency scaling (DVFS) schemes, such as the Turbo Boost feature in Intel’s desktop processors. AMD was the first GPU maker to introduce a DVFS scheme for graphics cards, known as PowerTune. PowerTune allows AMD to set higher stock GPU clock frequencies than would otherwise be possible within a given thermal envelope. The GPU then scales back clock speeds occasionally for workloads with unusually high demands, to enforce its power limits. Unlike the various Turbo and Boost schemes on other chips, though, PowerTune doesn’t raise clock speeds opportunistically in order to take advantage of any extra thermal headroom—at least, it hasn’t until now.

Like the Turbo Core feature in AMD’s FX processors, PowerTune works by monitoring digital activity counters distributed around the chip and using those inputs to estimate power consumption. These power estimates are based on profiles developed through extensive qualification testing of multiple chips. Somewhat uniquely, AMD claims the behavior of its DVFS schemes is deterministic—that is, each and every chip of the same model should perform the same. Intel and Nvidia don’t make such guarantees. If you get a sweetheart of a Core i5, it may outperform your neighbor’s; better cooling and lower ambient temperatures can affect performance, as well.

For the 7970 GHz Edition, AMD has refined its PowerTune algorithm to improve its accuracy. By eliminating some cases of overestimation, AMD claims, this revamped algorithm both increases the GPU’s clock speed headroom and allows the GPU to spend more time resident at its peak frequency. Furthermore, the 7970 GHz Edition adds an extra P-state that takes the GPU clock beyond its stock speed, to 1050MHz, when the thermal envelope permits. It ain’t much in the grand scheme, but this ability to reach for an additional 50MHz is the 7970 GHz Edition’s “boost” feature—and it is fairly comparable to the GPU Boost capability built into Nvidia’s Kepler.
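To make that arbitration idea concrete, here is a minimal Python sketch of how an estimate-driven DVFS scheme of this general shape could pick a clock. The P-state table, wattages, and activity-to-power model are invented for illustration; AMD does not publish PowerTune's internals, so treat this as a conceptual picture rather than the actual algorithm.

```python
# Conceptual sketch only: AMD does not publish PowerTune's internals, so the
# P-state table, wattages, and activity-to-power model here are invented.

# Hypothetical P-states: (core clock in MHz, profiled draw at 100% activity in watts).
# The 1050MHz entry stands in for the GHz Edition's "boost" state.
P_STATES = [(500, 130.0), (925, 225.0), (1000, 240.0), (1050, 260.0)]
POWER_LIMIT_W = 250.0  # board power cap enforced by the DVFS logic

def pick_clock(activity):
    """Choose the highest clock whose estimated power fits under the cap.

    `activity` (0.0 to 1.0) stands in for the digital activity counters the GPU
    samples; the 0.3 + 0.7 * activity blend is a made-up stand-in for the
    per-chip power profiles developed during qualification."""
    best_mhz = P_STATES[0][0]
    for mhz, full_activity_watts in P_STATES:
        estimated_watts = full_activity_watts * (0.3 + 0.7 * activity)
        if estimated_watts <= POWER_LIMIT_W:
            best_mhz = mhz
    return best_mhz

print(pick_clock(0.60))  # typical game load: 1050, the boost state
print(pick_clock(1.00))  # power-virus workload: 1000, held under the cap
```

The point to notice is that the decision rests on a power estimate computed from activity counters rather than on temperature readings, which is what lets AMD promise the same behavior from every chip of a given model.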

The higher default clock speeds and the PowerTune wizardry are the sum total of the changes to the GHz Edition compared to the original Radeon HD 7970. GHz Edition cards should still have the same ~250W max power rating, with six- and eight-pin aux power connectors. Above is a picture of our 7970 GHz Edition review unit, which came to us directly from AMD. However, there is a bit of a catch. The card above is based on AMD’s reference design, but we understand retail cards from AMD’s various partners will have custom coolers and possibly custom PCB designs. You won’t likely see a 7970 GHz Edition that looks like that picture.

We’d like to show you a retail card, but those aren’t here yet. AMD tells us the first products should begin showing up at online retailers next week, with “wide availability” to follow the week after that.

|  | Base clock (MHz) | Boost clock (MHz) | Peak ROP rate (Gpix/s) | Texture filtering int8/fp16 (Gtex/s) | Peak shader tflops | Memory transfer rate | Memory bandwidth (GB/s) | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| XFX HD 7950 Black | 900 | - | 29 | 101/50 | 3.2 | 5.5 GT/s | 264 | $409 |
| Radeon HD 7970 | 925 | - | 30 | 118/59 | 3.8 | 5.5 GT/s | 264 | $449 |
| Radeon HD 7970 GHz | 1000 | 1050 | 34 | 134/62 | 4.3 | 6.0 GT/s | 288 | $499 |

Here’s a look at how the 7970 GHz Edition compares to a couple of Radeon HD 7900 cards already on the market. As you can see, the GHz Edition’s higher core and memory clock speeds separate it pretty clearly from the stock 7970 in key rates like pixel fill, texture filtering, shader flops, and memory bandwidth.

In fact, although it’s not listed in the table above, the 7970 GHz Edition is the first GPU to reach the 1-teraflop milestone for theoretical peak double-precision floating-point math throughput. Double-precision throughput is irrelevant for real-time graphics and probably mostly useless for consumer GPU-computing applications, as well. Still, this card hits a target recently cited by both Nvidia and Intel as a goal for data-parallel computing products coming later this year.
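As a sanity check on those peak rates, they fall straight out of the unit counts and the boost clock. The short Python sketch below assumes Tahiti's commonly published configuration of 2048 stream processors, 128 texture units, 32 ROPs, and a 384-bit memory bus, with double-precision running at one quarter of the single-precision rate.

```python
# Back-of-the-envelope check of the 7970 GHz Edition's peak rates, assuming
# Tahiti's published configuration: 2048 stream processors, 128 texture units,
# 32 ROPs, a 384-bit memory bus, and double-precision at 1/4 the SP rate.
boost_ghz = 1.05       # 1050MHz boost state
shaders, tmus, rops = 2048, 128, 32
bus_bits, mem_gtps = 384, 6.0

sp_tflops = shaders * 2 * boost_ghz / 1000   # one fused multiply-add = 2 flops
dp_tflops = sp_tflops / 4                    # Tahiti runs DP at a quarter rate
texel_rate = tmus * boost_ghz                # int8 Gtexels/s
pixel_rate = rops * boost_ghz                # Gpixels/s
bandwidth = mem_gtps * bus_bits / 8          # GB/s

print(f"{sp_tflops:.1f} SP tflops, {dp_tflops:.2f} DP tflops")                    # 4.3 SP, 1.08 DP
print(f"{texel_rate:.0f} Gtex/s, {pixel_rate:.0f} Gpix/s, {bandwidth:.0f} GB/s")  # 134, 34, 288
```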

AMD says the GHz Edition will list for $499.99, placing it directly opposite the GeForce GTX 680. We’ve taken the prices for the other two Radeons above from Newegg. Street prices for the Radeon HD 7970 have recently dropped to $449.99, 100 bucks below its introductory price, perhaps in part to make room for the GHz Edition.

We’ve included all three of these cards in this review because they illustrate the current state of the high-end Radeon lineup. Video card makers have more leeway than ever to offer higher-clocked variants of their products, and that means alert enthusiasts can snag some deals by ignoring branding and focusing on specs instead. For example, XFX’s “Black Edition” version of the Radeon HD 7950 is so aggressively clocked that it essentially matches the stock 7970 in pixel throughput rate and memory bandwidth. The XFX 7950 does give up a bit of texel fill rate and shader processing oomph to the stock 7970, but we probably wouldn’t pay the extra 40 bucks for the 7970, given everything.


XFX’s Radeon HD 7950 Black Edition challenges the stock 7970

The competition

The GeForce GTX 600-series lineup hasn’t been sitting still since its introduction, either. Nvidia has long given its partners wide latitude in setting clock speeds, and the resulting cards in this generation are much more attractive than the stock-clocked versions. We’ve lined up several of them to face off against the 7970 GHz Edition and friends, including a pair of ringers from Zotac.


Zotac’s GTX 680 AMP!

If the Radeon HD 7970 GHz Edition wants to own the title of the fastest single-GPU graphics card, it’ll have to go through Zotac’s GeForce GTX 680 AMP! Edition. At $549.99, the GTX 680 AMP! costs a bit more than the newest Radeon, but what’s 50 bucks in this rarefied air? You will also have to accept the potential clearance issues created by the heatpipes protruding from the top of Zotac’s custom cooler and the fact that this thing eats up three expansion slots in your PC. In return, the GTX 680 AMP! is a pretty substantial upgrade over the stock GTX 680.


Yep, this is a different card: Zotac’s GTX 670 AMP!

Believe it or not, Zotac’s GeForce GTX 670 AMP! is also an upgrade over the stock GeForce GTX 680. Yes, the GK104 graphics processor in the GTX 670 has had a mini-lobotomy—one of its eight SMX units disabled—but Zotac more than makes up for it with aggressive core and memory clocks. Have a look at the numbers.

|  | Base clock (MHz) | Boost clock (MHz) | Peak ROP rate (Gpix/s) | Texture filtering int8/fp16 (Gtex/s) | Peak shader tflops | Memory transfer rate | Memory bandwidth (GB/s) | Price |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Zotac GTX 670 AMP! | 1098 | 1176 | 38 | 132/132 | 3.2 | 6.6 GT/s | 211 | $449 |
| GeForce GTX 680 | 1006 | 1058 | 34 | 135/135 | 3.3 | 6 GT/s | 192 | $499 |
| Zotac GTX 680 AMP! | 1111 | 1176 | 38 | 151/151 | 3.6 | 6.6 GT/s | 211 | $549 |

Assuming the GPU typically operates at its Boost clock speed—and that seems to be a solid assumption to make with GK104 cards—then Zotac’s GTX 670 AMP! nearly matches the stock GTX 680 in texture filtering and shader flops. Since the GTX 670 silicon isn’t hobbled at all in terms of memory interface width or ROP count, the 670 AMP! matches its bigger brother, the GTX 680 AMP!, in pixel fill rate (which corresponds to multisampled antialiasing power) and memory throughput, and it surpasses the stock GTX 680 on both counts. On paper, at least, I’d expect the GTX 670 AMP! to outperform a stock GTX 680, since memory bandwidth may be the GK104’s most notable performance constraint.
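The same back-of-the-envelope math shows why the hot-clocked GTX 670 keeps pace. The sketch below assumes GK104's published layout of 192 shaders and 16 texture units per SMX, with the 32 ROPs and 256-bit memory interface untouched by the disabled SMX; the numbers land very close to the table above.

```python
# Why Zotac's GTX 670 AMP! keeps up with a stock GTX 680, assuming GK104's
# published layout: 192 shaders and 16 texture units per SMX, with 32 ROPs
# and a 256-bit memory bus that are unaffected by the disabled SMX.
def kepler_peak_rates(active_smx, boost_mhz, mem_gtps):
    ghz = boost_mhz / 1000.0
    shader_tflops = active_smx * 192 * 2 * ghz / 1000.0
    texel_rate = active_smx * 16 * ghz          # int8 Gtexels/s
    pixel_rate = 32 * ghz                       # Gpixels/s
    bandwidth = mem_gtps * 256 / 8              # GB/s
    return shader_tflops, texel_rate, pixel_rate, bandwidth

print(kepler_peak_rates(8, 1058, 6.0))   # stock GTX 680: ~3.3 tflops, 135 Gtex/s, 34 Gpix/s, 192 GB/s
print(kepler_peak_rates(7, 1176, 6.6))   # GTX 670 AMP!:  ~3.2 tflops, 132 Gtex/s, 38 Gpix/s, 211 GB/s
```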

Along those lines, notice that the fastest cards above have “only” 211 GB/s of memory throughput, while the 7970 GHz Edition is rated for 288 GB/s. That’s a consequence of the fact that Nvidia’s GK104 is punching above its weight class. This middleweight Kepler only has a 256-bit memory interface. The Tahiti chip driving the Radeon HD 7900-series cards is larger and sports a 384-bit memory interface. By all rights, AMD ought to be able to win this contest outright. The fact that folks are buying up GTX 680 cards for 500 bucks or more is vaguely amazing, given the class of hardware involved. But, as we’ll see, the performance is there to justify the prices.

Test notes

Before we dive into the test results, I should mention a couple of things. You will notice on the following pages that we tested games at very high resolutions and quality levels in order to stress these graphics cards appropriately for the sake of our performance evaluation. We think that’s appropriate given the task at hand, but we should remind you that a good PC gaming experience doesn’t require a $450+ video card. We’ve hand-picked games that are especially graphically intensive for our testing. Not every game is like that.


Diablo III doesn’t need extreme GPU horsepower

For instance, we wanted to include Diablo III in our test suite, but we found that, on this class of graphics card, it runs at a constant 100 FPS with its very highest image quality settings at our monitor’s peak 2560×1600 resolution. Diablo III is a pretty good looking game, too, but it’s just no challenge.

Along the same lines, we have tested practically everything at a resolution of 2560×1600. We realize that 1080p displays are the current standard for most folks and that they’re much more widely used than four-megapixel monsters. Here’s the thing, though: if you’re going to fork over the cash for a $500 video card, you’ll want a high-res display to pair with it. Maybe one of those amazingly priced Korean 27″ monitors. Otherwise, the video card will probably be overkill. In fact, as we were selecting the settings to use for game testing, the question we asked more often was whether we shouldn’t be considering a six-megapixel array of three monitors in order to properly stress these cards. Also, in the odd case where we did think 1920×1080 might be appropriate, we found that the beta Nvidia drivers we were using didn’t expose that resolution as an option in most games.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor: Core i7-3820
Motherboard: Gigabyte X79-UD3
Chipset: Intel X79 Express
Memory size: 16GB (4 DIMMs)
Memory type: Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz
Memory timings: 9-9-11-24 1T
Chipset drivers: INF update 9.3.0.1019, Rapid Storage Technology Enterprise 3.0.0.3020
Audio: Integrated X79/ALC898 with Realtek 6.0.1.6526 drivers
Hard drive: Corsair F240 240GB SATA
Power supply: Corsair AX850
OS: Windows 7 Ultimate x64 Edition, Service Pack 1, DirectX 11 June 2010 Update

|  | Driver revision | GPU base core clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- |
| Zotac GeForce GTX 570 | GeForce 304.48 beta | 732 | 950 | 1280 |
| Zotac GTX 670 AMP! | GeForce 304.48 beta | 1098 | 1652 | 2048 |
| GeForce GTX 680 | GeForce 304.48 beta | 1006 | 1502 | 2048 |
| Zotac GTX 680 AMP! | GeForce 304.48 beta | 1111 | 1652 | 2048 |
| Radeon HD 6970 | Catalyst 12.7 beta | 890 | 1375 | 2048 |
| XFX Radeon HD 7950 Black | Catalyst 12.7 beta | 900 | 1375 | 3072 |
| Radeon HD 7970 | Catalyst 12.7 beta | 925 | 1375 | 3072 |
| Radeon HD 7970 GHz Edition | Catalyst 12.7 beta | 1000 | 1500 | 3072 |

Thanks to Intel, Corsair, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications: Battlefield 3, Max Payne 3, DiRT Showdown, The Elder Scrolls V: Skyrim, Batman: Arkham City, and Crysis 2, plus Fraps for frame-time capture and GPU-Z for temperature logging.

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing either a 60- or 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Arkham City with DirectX 11 at 2560×1600 with FXAA enabled.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Battlefield 3

We tested Battlefield 3 with all of its DX11 goodness cranked up, including the “Ultra” quality settings with both 4X MSAA and the high-quality version of the post-process FXAA. Our test was conducted in the “Kaffarov” level, for 60 seconds starting at the first checkpoint.

Let me apologize in advance for what follows, because it is a bit of a data dump. We’re about to do some unusual things with our test results, and we think it’s best to show our work first. The plots below come from one of the five test runs we conducted for each card.

| Frame time (ms) | FPS rate |
| --- | --- |
| 8.3 | 120 |
| 16.7 | 60 |
| 20 | 50 |
| 25 | 40 |
| 33.3 | 30 |
| 50 | 20 |

Yep, those plots show the time required to produce each and every frame of the test run. Because we’re reporting frame times in milliseconds, lower numbers are better. If you’re unfamiliar with our strange new testing methods, let me refer you to my article Inside the second: A new look at game benchmarking for an introduction to what we’re doing.

The short version is that we’ve decided traditional FPS averages aren’t a very good indicator of the fluidity of animation in real-time graphics. The problem isn’t with reporting things as a rate, really. The problem is that nearly every utility averages frame production rates over the course of a full second—and a second is a very long time. For example, a single frame that takes nearly half a second to render could be surrounded by frames that took approximately 16 milliseconds to render—and the average reported over that full second would be 35 FPS, which sounds reasonably good. However, that half-second wait would be very disruptive to the person attempting to play the game.

In order to better understand how well a real-time graphics system works, we need to look closer, to use a higher-resolution timer, if you will. Also, rather than worrying about simple averages, we can consider the more consequential question of frame latencies. What we want is consistent production of frames at low latencies, and there are better ways to quantify that sort of thing. Since we’re going to be talking about frame times in milliseconds, I’ve included a handy table on the right that offers conversions from milliseconds to FPS.
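To make the distinction concrete, here is a small Python illustration built on an invented one-second trace: a single long frame barely dents the FPS average, but it completely dominates the 99th percentile frame time.

```python
import math

# Invented one-second trace: 34 smooth ~16 ms frames plus one ~456 ms stall.
frame_times_ms = [16.0] * 34 + [456.0]

# Traditional FPS average over the second: frames rendered / seconds elapsed.
fps_average = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile_frame_time(times_ms, pct=99):
    """Threshold below which pct% of the frames were rendered."""
    ordered = sorted(times_ms)
    index = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[index]

print(round(fps_average))                      # 35 FPS -- sounds playable
print(percentile_frame_time(frame_times_ms))   # 456 ms -- the stall is obvious
```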

We’ll start by reporting the traditional FPS average. As you can see, the Radeon HD 7970 GHz Edition just outperforms the stock GeForce GTX 680 in this metric, although the difference is too small to worry about, really.

If we’re thinking in terms of frame latencies, another way to summarize performance is to look at the 99th percentile frame time. That simply means we’re reporting the threshold below which 99% of all frames were rendered by each card. We’ve ruled out that last 1% of outliers, and the resulting number should be a decent indicator of overall frame latency. As you can see, the differences between the top few cards are even smaller by this measure.

A funny thing happens, though, to our two legacy cards, the GeForce GTX 570 and the Radeon HD 6970. Although the GTX 570 has a slightly higher FPS average, its 99th percentile frame time is much higher than the 6970’s. Why? Well, the 99th percentile is just one point on a curve, so we shouldn’t make too much of it without putting it into context. We can plot the tail end of the latency curve for each card to get a broader picture.

The GeForce GTX 570 is faster than the Radeon HD 6970 most of the time, until we get to the last 3% or so of the frames being produced. Then the GTX 570 stumbles, as frame times shoot toward the 100-millisecond mark. Scroll up to the frame time plots above, and you can see the problem. The GTX 570’s plot is spiky, with a number of long-latency frames interspersed throughout the test run. This is a familiar problem with older Nvidia GPUs in BF3, though it appears to have been resolved in the GK104-based cards.

In fact, all of the newer cards are nearly ideal performers, with nice, straight lines in the high 20- and low 30-millisecond range. They only curve up modestly when we reach the last one or two percentage points.

You’ll recall that our 99th percentile frame time measurement ruled out the last 1% of long-latency frames. That’s useful to do, but since this is a real-time application, we don’t want to ignore those long-latency frames entirely. In fact, we want to get a sense of how bad it really is for each card. To do so, we’ve concocted another measurement that looks at the amount of time spent working on frames for which we’ve already waited 50 milliseconds. We’ve chosen 50 ms as a threshold because it corresponds to 20 FPS, and somewhere around that mark, the illusion of motion seems to be threatened for most people. Also, 50 ms corresponds to three full vertical refresh intervals on a 60Hz display. If you’re gaming with vsync enabled, any time spent beyond 50 ms is time spent at 15 FPS or less, given vsync’s quantization effect.
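Here is a minimal Python sketch of that accounting, using invented frame times. Only the portion of each frame beyond the threshold is counted, and the threshold is a parameter, so the same function can be ratcheted down to 33.3 ms or 16.7 ms when 50 ms proves too forgiving.

```python
# Invented frame times (ms) for a single test run.
trace_ms = [30, 31, 95, 29, 140, 33, 28, 62]

def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total time spent working on frames after the threshold had already passed."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

print(time_spent_beyond(trace_ms, 50.0))   # 147 ms past the 50 ms (20 FPS) mark
print(time_spent_beyond(trace_ms, 33.3))   # ~197 ms when judged against 30 FPS
```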

Predictably, only the two legacy cards spend any real time beyond 50 ms, and only the GeForce GTX 570 really has a problem. The GTX 570 has a real but not devastating issue here; it spends 1.4 seconds out of our 60-second test session working on long-latency frames. Our play-testing sessions on this card felt sluggish and clumsy. Remember, though: when we started, the GTX 570 had a higher FPS average than the Radeon HD 6970. That FPS number just turns out to be pretty meaningless.

Max Payne 3

Max Payne 3 is a new addition to our test suite, and we should note a couple of things about it. As you’ll notice in the settings image above, we tested with FXAA enabled and multisampling disabled. That’s not the most intensive possible setting for this game, and as you’ll soon see, Max Payne 3 runs quite quickly on all of the cards we’ve tested. We wanted to test with MSAA, but it turns out multisampling simply doesn’t work well in this game. Quite a few edges are left jagged. Even the trick of combining MSAA with FXAA doesn’t seem to work here. Enabling both disables FXAA, somehow. We couldn’t see the point of stressing the GPUs arbitrarily while lowering image quality, so we simply tested with the highest quality setting, which in this case was FXAA.

Also, please note that this test session wasn’t as precisely repeatable as most of our others. We had to shoot and dodge differently each time through, so there was some natural variation from one run to the next, although we kept to the same basic area and path.

All of these plots look really good. Remember, they only come from a single test run with some inherent variation, so those spikes on a few cards aren’t necessarily a major problem.

Can you feel the parity? The 7970 GHz just edges out the stock GeForce GTX 680 in the FPS sweeps, only to tie it in the 99th percentile frame time results. Let’s see if the larger latency picture tells us anything new.

Nope. This time, there’s very little drama involved. The reality is that, on all of the newer cards, the vast majority of the frames are produced in under 16.7 milliseconds—or over 60 FPS, if you will. That means, on a 60Hz monitor, all of the newer cards have a frame ready at almost every single display refresh interval.

Our typical measure of “badness” breaks down here, since none of the cards spend any time working on frames over 50 milliseconds. In fact, ratcheting things down to 33.3 milliseconds (the equivalent of 30 FPS) produces a big goose egg, too. Only when we take the next step down to 16.7 ms do we have any separation between the cards, and then it’s only the older ones that show us anything.

DiRT Showdown

We’ve added the latest entry in the DiRT series to our test suite at the suggestion of AMD, who has been working with Codemasters for years on optimizations for Eyefinity and DirectX 11. Although Showdown is based on the same game engine as its predecessors, it adds an advanced lighting path that uses DirectCompute to allow fully dynamic lighting. In addition, the game has an optional global illumination feature that approximates the diffusion of light off of surfaces in the scene. We enabled both the new lighting path and global illumination in our tests.

This is a fantastic game, by the way. My pulse was pounding at the end of each 90-second test run.

Well, I suppose this is what happens sometimes when a GPU maker works closely with a game developer to implement some new features. Showdown simply runs better on Radeons than on GeForces, and it’s not even close. We’ve seen lots of similar scenarios in the past where Nvidia took the initiative and reaped the benefits. Perhaps this is karmic payback.

The GeForces are just overmatched here. You’d want to dial back the image quality settings or lower the resolution to play Showdown on any of the GeForces, including the Zotac GTX 680 AMP! card. The GTX 570 is nearly unplayable, although I did my best to muddle through the testing.

The Elder Scrolls V: Skyrim

Our test run for Skyrim was a lap around the town of Whiterun, starting up high at the castle entrance, descending the stairs into the main part of town, and then doing a figure-eight around the main drag.

We set the game to its “Ultra” presets with 4X multisampled antialiasing. We then layered on FXAA post-process anti-aliasing, as well, for the best possible image quality without editing an .ini file. We also had the high-res texture pack installed, of course.

Even at the game’s highest image quality settings, our Skyrim scenario doesn’t challenge any of the newer cards much. The parity at the 99th percentile frame time is pretty remarkable and suggests a CPU limitation may be coming into play in the toughest last few percent of frames.

The only card that struggles at all here is the GeForce GTX 570, and we suspect that it’s bumping up against some VRAM size limitations; it has the smallest video memory capacity of the bunch.

Batman: Arkham City

We did a little Batman-style free running through the rooftops of Gotham for this one.

We’re used to seeing those latency spikes in our Arkham City test sequence. We’re moving through a very large and detailed cityscape, and the game engine has to stream in new areas as we glide into them.

Remember what I said about karmic payback? Here’s an excellent game that happens to run better on GeForce cards—and Nvidia worked with developer Rocksteady Studios on it.

The Radeons’ 99th percentile frame times are relatively high given their FPS averages; even the 7970 GHz Edition falls behind the GeForce GTX 570. Why?

The Radeons’ latency curves shoot upward for the last 5-7% of frames. You can spot the problem in the plots up above. Although the plots for the GeForces show quite a few latency spikes, the spikes are more frequent on the Radeons. That extra helping of long frame times puts the Radeons at a disadvantage.

Our measure of “badness” captures the scope of the problem. The 7900-series Radeons spend two to three times as long working on especially high-latency frames as the GeForces do. Interestingly, though, the 7970 GHz Edition avoids these slowdowns much more effectively than the stock 7970 and XFX 7950 do. Perhaps the new, more aggressive PowerTune algorithm is paying off here.

Crysis 2

Our cavalcade of punishing but pretty DirectX 11 games continues with Crysis 2, which we patched with both the DX11 and high-res texture updates.

Notice that we left object image quality at “extreme” rather than “ultra,” in order to avoid the insane over-tessellation of flat surfaces that somehow found its way into the DX11 patch. We tested 60 seconds of gameplay in the level pictured above, where we gunned down several bad guys, making our way across a skywalk to another rooftop.

The 7970 GHz Edition takes first place in the FPS average results, but it’s in third when it comes to the more latency-focused 99th percentile frame time. You spotted the reason in the plots, right? There are periodic spikes on all of the Radeons, spikes that are missing from the GeForce plots.

A broader look at the latency picture reveals that it’s just a sliver of the overall frames, the last 1% or so, that cause trouble for the Radeons. The GeForces are exemplary, by contrast.

There’s not much reason to fret about the occasional high frame times on the Radeons, though. Even the slowest card of the bunch spends less than a fifth of a second on long-latency frames throughout the 60-second test run. That nicely backs up our subjective impression that most of the cards handled this test scenario reasonably well.

Power consumption

We like to test power draw under load by running a real game rather than a synthetic worst-case, power-hog application. This time around, we chose Arkham City to generate that load. Turns out that game induces higher power draw than Skyrim, which we’ve used in the past, or Max Payne 3.

Forgive me for leaving out the two older cards here. Time limits prevented us from testing them for power and noise.

For the unfamiliar, the Radeon HD 7900 series has the ability to drop into a special low-power state, called ZeroCore Power, when the display goes into power-save mode. In this state, the GPU’s power draw drops to just a few watts, and the video card’s cooling fans stop spinning. That’s why the Radeons are so much more efficient in the first set of results above. Otherwise, at idle, the GPUs are more or less at parity.

Interesting. These results are pretty different from what we saw when we used Skyrim to generate the load. We really didn’t expect to see the stock 7970 drawing less power than the GeForce GTX 680. We may have to use multiple games next time around, if time permits.

Regardless, the 7970 GHz Edition draws quite a bit more power than the stock 7970.

Noise levels and GPU temperatures

ZeroCore Power confers a slight advantage on the Radeons in the noise department when the display is off. Otherwise, the stock coolers from AMD and Nvidia are pretty evenly matched, and the custom coolers from XFX and Zotac are a bit louder at idle.

The big winners here are the two Zotac cards, whose triple-slot cooler manages to maintain by far the lowest temperatures and the lowest noise levels of the bunch. Our 7970 GHz Edition review unit is pretty loud. The saving grace is that, as we’ve noted, you’re not likely to see this exact card on store shelves. The third-party coolers from AMD’s various partners will hopefully be quieter, although they will have quite a bit of heat to dissipate, given the 7970 GHz Edition’s additional power draw.

Conclusions

So, has AMD gotten the mojo back? Let’s boil things down to one of our famous value scatter plots to see. As always, we’ve sourced prices from Newegg and Amazon, and the performance results are averaged across all of the games we tested. We’re relying on our 99th percentile frame time metric for our performance summation, but we’ve converted the result to FPS to keep our scatter plot readable. As always, the better values will be positioned closer to the top left corner of the plot.

The Radeon HD 7970 GHz Edition has indeed recaptured the single-GPU performance title for AMD; it’s even faster than Zotac’s GTX 680 AMP! Edition. And at $499.99, the 7970 GHz Edition is unambiguously a better value than the stock-clocked GeForce GTX 680. Everything seems to be going AMD’s way—even our power consumption results turned out to be closer than expected. I’d say that qualifies for mojo reclamation status, but I suppose the market will decide about that one. We’re curious to see whether GTX 680 cards will continue to be scarce once the 7970 GHz Edition lands on store shelves.

For those of us who are willing to accept something a little less than the top-of-the-line offering, there are some other nice values on our scatter plot, including the notable duo of the Zotac GTX 670 AMP! and the original Radeon HD 7970. Those two cards cost the same and perform very similarly, so they overlap almost completely on our scatter plot. Either of those cards will cost you 50 bucks less than the 7970 GHz Edition, with only a slight drop in overall performance. Given how well all of the newer cards handled our test scenarios, we’d say sensible folks who are shopping in this price range might want to save a few bucks and snag one of those.

With that said, we suspect the story of the 7970 GHz Edition hasn’t been completely told just yet. AMD’s partners haven’t delivered their customized cards, and as we’ve noted, our review unit is more of a reference design than an actual product. We expect to see higher boost clocks and potentially superior custom coolers once the actual cards arrive. Meanwhile, AMD has supplied us with a slew of potentially interesting new material for testing alongside the 7970 GHz Edition, including some GPU computing-focused applications, a couple more games with fancy new effects, and (at long last!) a version of ArcSoft Media Converter capable of using the Tahiti chip’s built-in H.264 video encoding hardware. Comically, we had only four working days to prepare this review, and the new material arrived in our inbox this past Monday evening. I suppose a follow-up article may be in order.

For now, we at least have a fresh reminder of how close the battle for GPU supremacy is in this generation of chips. You can’t go wrong with either team this time around, although the mojo is liable to change hands again at any moment.

I’m not nearly as wordy on Twitter.

Comments closed
    • Bensam123
    • 7 years ago

    “in fact, as we were selecting the settings to use for game testing, the question we asked more often was whether we shouldn’t be considering a six-megapixel array of three monitors in order to properly stress these cards.”

    Sort of an interesting prospect that I took a bit to think on. TR hasn't done anything with Eyefinity or Nvidias tech for close to two years. I think it would be more likely for gamers to own a 3×1 setup than one massive $1,000 IPS panel. At least at this level, for these cards. It'd be interesting to take another look at what has been done on that front over the last few years.

      • Cyril
      • 7 years ago

      "TR hasn't done anything with Eyefinity or Nvidias tech for close to two years."

      You sure?

      https://techreport.com/articles.x/22890/4
      https://techreport.com/articles.x/22922/

        • Bensam123
        • 7 years ago

        Interesting… Thanks for the links. They were used in the testing, but the prospect of Eyefinity as a whole hasn’t been revisited. Have they worked out the bugs? Have they made things easier? Does it still have similar issues with warping and such? Have the prospects and recommendations of the product changed? Basically a piece on the technology itself like the one from 2010 and how it’s improved (or not improved).

        I don’t think TR ever did a formal take on the Nvidia equivalent either.

    • flip-mode
    • 7 years ago

    FYI Scott and Co. – here’s a program that uses the GPU to do architectural rendering and animation:
    http://www.twinmotion.com/twinmotion2/requirements.html

    It even supports SLI, XF. It's supposed to be a phenomenal rendering program. I don't know if they'd freely offer you a license to use for benchmarking or not, or if you're even interested.

    • sschaem
    • 7 years ago

    I checked Newegg and, shock!
    The cheapest 7970 3GB version sells for $419, while the cheapest GTX 680 2GB sells for $519.

    So the 7970 is now $100 cheaper than the GTX 680 and is in the ‘same’ price range as the GTX 670.
    And looking at the benchmarks, the 7970 seems a league above the GTX 670.

    So you can pay an extra $100 for the GTX 680, but you still end up with some games running slower (like Crysis 2).
    But the focus should be on DiRT Showdown… it’s possible we’re seeing the weak compute side of Nvidia’s new gaming card, as the game includes some compute-heavy lighting. So this is possibly a sign of what’s to come for future titles.

    I would be concerned because, sure, the GTX 680 is 8% faster than the $100-cheaper 7970 in BF3,
    but future games might show a complete reversal the more compute-heavy functions they use…

    • CaptTomato
    • 7 years ago

    In Australia, name brand 680’s are over $200 more than a Gigabyte triple fan OC 7970

      • Fighterpilot
      • 7 years ago

      That’s correct.
      Most HD7970s are priced from $450-$520 in Australia.

      GTX680s cost $680 or more.

      Radeon easily wins at those prices.

    • PainIs4ThaWeak1
    • 7 years ago

    Yea, but does it fold? [/i kid]

    But really, I’m curious to see how the 7970’s are doing on GPU3 units – I’ve been out of the loop for a while in regards to folding… Anyone have knowledge of a decent/thorough PPD database broken down by hardware?

    • pedro
    • 7 years ago

    Well done to Damage for having this review posted over at Ars. Seems to be going down a treat too!

    • sparkman
    • 7 years ago

    The famous value scatter plots are great, of course.

    But wouldn’t it also be helpful to attach a simple list of the cards ordered by “distance from the upper-left corner on the value scatter plot”… a.k.a. The Best Values.

    • LaChupacabra
    • 7 years ago

    I’m not sure if you guys are looking to add another graph to the mix, but it might be interesting to see the total absolute value of the time difference between each frame. It would be a quantifiable way to describe the “jitter” of a card even if it is below the 50 ms target frame rate.

      • derFunkenstein
      • 7 years ago

      Isn’t that what the big plot line shows? The time difference between each frame?

        • LaChupacabra
        • 7 years ago

        Yea it does. And as far as the experience goes you can get a good idea how games will play based on that. The idea was to get a number that would differentiate cards that don’t go beyond the 50 ms target. A lot of the cards in this review didn’t go above that, rendering the chart showing that kind of useless. A number totaling the time difference between each frame would be a good way of comparing two cards that spend all of their time rendering frames in under 50 ms.

        Also it was really boring to read a review where the data basically said “it’s all the same, buy what’s cheapest, move along.”

          • derFunkenstein
          • 7 years ago

          Well once you hit 4MP and everything plays reasonably, what else is there to say?

            • LaChupacabra
            • 7 years ago

            If that were the case then the methodology of Hardocp should be good enough. Their reviews reflect the opinion of the writer on what the appropriate balance of graphical features vs. resolution vs. jitter a given card can handle. When TR started going inside the second it took that concept, but instead of evangelizing how fantastic their methodology is TR was able to quantify playability in a scientific and objective way.

            TR does, without a doubt, the best graphics card reviews on the net. It’s the reason why I don’t visit many other sites anymore, because their information is almost laughably basic in comparison. Asking for more information isn’t a bad thing. By taking the absolute value of the difference in render time for each frame it would be another metric to scientifically and objectively measure jitter, and potentially would help differentiate 2 cards that in the review looked the same.

            If the nVidia card has a total time jitter of 3000 ms over the course of the minute and the AMD card had a total time jitter of 200 ms over the course of a minute it would make it a lot easier to recommend one card over the other. The information is already there, it would be cool to see what the actual number is.

    • ptsant
    • 7 years ago

    They really need to either send a non-reference part (ie asus, sapphire) or use a much better cooler. It’s a pity that everyone will say the 7970 is “loud” (as in anandtech’s article) because of the horrible reference cooler. Look what NVidia did with the 690, helps get better initial impressions…

    • CaptTomato
    • 7 years ago

    People vote me down because they can’t handle my truths or maybe didn’t understand them, but IMO, it’s a bad time to buy GPU’s in general, and an especially bad time to buy lowly 7770 class cards as they’re “pissweak”.

    I think the 7950/7970 should be at 7850/7870 prices, and I also like the 3 gig of ram, but even MaxP3 can use over 6gig of ram 1600p maxed….so what I’d really like is something 30-40% faster than a 7970+4gig of ram, so next gen GPU’s for me.

    With hindsight, 5870’s and 6850/70’s were good deals, but things have slowed to a crawl lately, and buggered if I’m going to use XF/SLI.

    • beck2448
    • 7 years ago

    AMD has had to push the 7970GE harder than ever to catch up to the GTX 680, and as a result the 7970GE’s power consumption and noise levels are significantly higher than the GTX 680’s. From Anandtech.

      • ish718
      • 7 years ago

      But the GTX 680 was pushed hard from the jump, see clock speeds.

        • erwendigo
        • 7 years ago

        I see the clock speeds, but… so what?

        Different architectures, different clocks. With Kepler, almost all the cards in the wild can overclock to >1200MHz at the reference voltage (1.175V max), and many of these cards can make 1300MHz with turbo.

        GCN and the 7970 GHz Edition run at 1.2V, and that gets you 1.05GHz, or about 1.15GHz as an upper limit with overclocking. If you want more, you must raise the voltage.

          • cynan
          • 7 years ago

          I don’t know where you’re getting your voltage vs OC information for these cards. Both Radeons and Geforces vary quite a bit in terms of voltage depending on “leakiness” or ASIC quality of the transistors in the chips and the ability of the power supply and cooling provided by OEMs.

          My “original” HD 7970 came with clock speeds of 1000MHz and voltage of 1.05V. It was stable at pretty close to 1.2GHz at 1.15V…

          The point is that there is a lot of variability in the voltage applied to these cards, especially with these new dynamic clocking/voltage features. I don’t think anyone really has a good sense on average voltage per clocks these chips require, save for maybe the manufacturers.

          • ish718
          • 7 years ago

          You are right, but Nvidia’s new Kepler architecture is more similar to AMD’s stuff when compared to Nvidia’s older architectures

      • xeridea
      • 7 years ago

      The power consumption scales decently with the big leap in clock speed, and the 7970 was better than the 680 in 99th percentile frame times before anyway. Also the 7970 is GPGPU capable, while the 680 is severely neutered.

      • BestJinjo
      • 7 years ago

      “Already the Radeon HD 7970 is (in the reference design) during a 3-D sequence without doubt one of the loudest and most disturbing graphic cards we’ve ever had in the lab. And since GHz Edition uses the same cooling system, we suspected before the test series to no good. Unfortunately, our fears were confirmed, as the Radeon HD 7970 Edition GHz under load with 60.5 decibels rich yet only 2.5 decibels louder than the regular version!”

      http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7970-ghz-edition/8/

      Reference HD7970 GE @ load noise levels = worthless for real-world gaming unless you enjoy the sound of a vacuum cleaner/hair dryer. However, without a doubt aftermarket HD7970 GE cards should address this. But ya, in reference form this card is a joke (even worse than the GTX480!!), only "defeated" by the FX5800U and HD6990, the only 2 louder cards ever made.

    • Prion
    • 7 years ago

    So going by the overclocking results from the original review, this is basically the card that the hardcore users have been playing with all along, now in retail box form! Have the benchmark results for either new architecture (SI, Kepler) changed much as the drivers have matured over the past few months?

    • CaptTomato
    • 7 years ago

    bah, what games do you guys play with 7770?
    IMO, the slowest card one should buy is a 7950…..but IMO, it’s a bad time to buy GPU’s atm, but if you do, don’t expect much grunt from a measly 7770.

    • flip-mode
    • 7 years ago

    Steam Hardware Survey does indeed verify the 680 is more popular than 7970, even after being available for just half as long. That means Nvidia has got to be selling them at a blistering pace. And why shouldn’t it be so – 680 launched cheaper, ran cooler and quieter, and consumed less power, and performed a little faster. That’s a decisive victory.

    Also, [H] has had the opportunity to slam Radeons for poor game support every time a new game launches – AMD just plain takes longer than Nvidia, on average, to get driver support and XF support for games. You can blame it on TWIMTBP or whatever you like, but at the end of the day it’s real.

    Furthermore, AMD may have done itself some damage with launch prices for these cards. Even now the 7800 series cards aren’t priced low enough. Nvidia grabbed some major mind (and wallet) share, IMO, with GTX 680 pricing. Nvidia has, no pun intended, played its cards very, very well.

    Edit: AMAZINGLY, neither the 7870 or 7850 show up AT ALL on the SHS.
    http://store.steampowered.com/hwsurvey/videocard/

    Edit 2: Thanks to Dave Baumann - sorting the data by % change reveals the 7800 cards, and they lead the DX11 charts by that measure. Also interesting is that, according to % change, the GTX 680 is selling almost 2 for every 7970.

      • Krogoth
      • 7 years ago

      SHS actually reveals one key thing.

      The vast majority of the steam userbase go for the best bang for the buck at the time. It is no surprise that 6xxx and 460/560 are in top. The 560 is inflated by a little bit as SHS doesn’t separate 560 family into its several different variants (from severely-crippled 288 CUDA units 560 to the 488 CUDA units 560 that is almost a 570). The venerable G92b and RV770 families still make a sizable portion of the chart.

      You can easily see as the high-end user base is towards the bottom of the list.

        • Firestarter
        • 7 years ago

        I find the SHS to be pretty useless for the high-end. I mean, where are the HD7950’s and GTX670’s? You’d think them being the value alternative to the proper high end would result in a blip on the steam radar. That said, I guess the numbers for the GTX680 should finally put that availability argument to rest.

          • SomeOtherGeek
          • 7 years ago

          Yea, I’m just curious how often the list gets updated? Does it collect info every time the user logs in? Or is it a one shot thing from way back when?

            • Bensam123
            • 7 years ago

            It used to be opt in, but I think it just collects your system specs in the background every now and then now.

          • erwendigo
          • 7 years ago

          The HD7950 is there, it's just difficult to see because it has a very small share, and the GTX 670 isn't in the survey yet, because the data shown is from May, and that's the launch month of the card (it will appear in the June survey, for sure).

          As for the high end, you can very EASILY see the high-end cards that are selling well; the cards with a pathetic share (high-end or low-end) don't appear in the public survey at all (<0.50% share in all sections of the survey).

          As an example, the GTX 580 is the 21st card of all cards (and the 11th DX11 card). It's easy to find.

          The GTX 680 has 0.56% of the DX11 users, and DX11 graphics cards are around 45% of the total installed base. You can do the math (40 million users):

          0.45*0.0056*40,000,000 = >100,000 GTX 680 cards in the Steam user base.

          ALL the GCN cards combined in the Steam user base represent around 300,000 cards.

            • Firestarter
            • 7 years ago

            "The HD7950 is there"

            Where? I can't find it anywhere.

            • BobbinThreadbare
            • 7 years ago

            Not all 40 million respond to the survey.

        • xeridea
        • 7 years ago

        Yeah, it's super annoying how Nvidia has multiple versions of every card, and they aren't even remotely similar. There are 3 versions of the 640, some are Fermi, some are Kepler… interesting.

        • rrr
        • 7 years ago

        488? Did you mean 560Ti Core 448 perchance? Or am I missing something?

      • DaveBaumann
      • 7 years ago

      "Edit: AMAZINGLY, neither the 7870 or 7850 show up AT ALL on the SHS"

      Sort by % Change this month, look at the DX11 section - they are listed, combined, as the highest-moving DX11 product(s).

        • flip-mode
        • 7 years ago

        Ah ha. Thanks. I just did a ctrl-f and searched for them and turned up nothing.

      • Bensam123
      • 7 years ago

      I’m not sure what ‘poor game support’ entails, but I’ve been a Radeon user since the R9700s. I haven’t encountered a game that was unplayable in its launch state due to the graphics card alone. Perhaps some graphic glitches, but not completely unplayable on Radeons. If they had some issues, a simple driver update fixed them.

      Not sure that is worthy of being ‘slammed’ for it. Nvidia graphics cards have had similar issues.

      The launch price of 680s was lower than 7970s’, but that doesn’t mean it stayed there when they sold out immediately. The price of 7970s was readjusted right away too. This post really is what I’m pointing out in mine; there is such a huge bias present in your post that it makes Radeons seem like a steaming PoS.

      Yet, we all JUST READ that Scott said that Radeons are very capable. It’s a complete win for their cost at their price point. Who cares what was supposed to happen at launch? It’s the perpetual stank that follows Radeons around after everything changes.

      I don’t sway towards either side, but I definitely know when things are unfounded.

        • xeridea
        • 7 years ago

        I have also had only ATI/AMD cards for a while, since my 9800 Pro which was total domination. Had 9800 Pro, x800 Pro, x1950XT, HD 4850, HD 6850, have couple extras for mining. I have never had issues with games not working, or having glitches.

        • flip-mode
        • 7 years ago

        Just because you don’t like it doesn’t mean it’s biased. If anything I’m biased toward Radeons. I’ve gone from a 4850 to a 5770 to a 5870. I’m regularly scathingly critical of many of Nvidia’s actions.

        You know when things are unfounded? Lets do a fact check:
        Fact 1: 680 is more popular than 7970 on steam – TRUE
        Fact 2: 680 launched months after 7970 – TRUE
        Fact 3: HardOCP has documented cases where Crossfire doesn’t work for 7970 on new games – TRUE
        Facts 5, 6, 7, 8: 680 is faster, cooler, quieter, and cheaper at launch than 7970 – TRUE

        If you have an issue with fact 3, take it up with Kyle, not me. I’ve never personally had a problem with any of my Radeons, but I’ve never run Crossfire either.

        Please refrain from accusations of bias just because someone says something you don’t agree with. If you want to debunk something I said then please do so, but you can’t debunk a statement with you opinions, anecdotes, and accusations. That’s really uncouth.

          • Bensam123
          • 7 years ago

          I don’t like when people use hyperbole to make something seem better than it really is, then add a thin coat of FUD over the top of that to scare away people. Like I said, this is part of what is perpetuating the AMD stigma.

          Fact 1-2 relate to popularity… I’m not entirely sure how that is pertinent, but alright…

          Nvidia SLI setups have problems dealing with new games as well… That includes both cards not working and having ridiculously low frame rates. Both of these have been known for a while, and it’s part of why people are told to stay away from dual-GPU configurations. You can also fix these problems by simply disabling the second graphics card.

          You’re referring to the launch, which doesn’t matter, just like popularity. Go read the review again, your heavily biased opinion has nothing to do with how well Radeons perform in the overall picture. You cherry pick certain aspects and time frames in order to make Nvidia seem ultimately superior in every way, shape, and form. They aren’t. That’s part of what my entire post was about

          Like I said, you’re part of the problem. Blatant flaming fanboism (as well as being a spaz) doesn’t help.

            • flip-mode
            • 7 years ago

            Please, can’t we have an objective discussion rather than character attacks? I’m not going to trade personal jabs with you. The overarching point of the original post was to suggest that Nvidia did an excellent job on the 680 itself and did an excellent job placing it in the market. I’ve been extremely critical of the lack of 680’s in stock, but it really does seem that the reason is not to do with production – Nvidia is producing bunches of these – but it’s to do with demand and the fact that Nvidia is selling them as fast as they can be made.

            I think AMD overshot the pricing with the 7000 series. That’s my opinion, and it doesn’t help Radeons sell when Kyle is running feature articles that focus on the problems with Crossfire. Kyle’s doing that – not me (!). But I believe that 10s of thousands of people read [H] and that their purchasing decisions are very much influenced by what they read there. So I’m not hyperbolizing anything – just stating that [H]’s articles are possibly hindering the sales of Radeons.

            • Bensam123
            • 7 years ago

            You aren’t having an objective discussion! That was my whole point. Hyperbole and sensationalism have nothing to do with being a ‘personal jab’; it’s your writing style and how people perceive you. You make it seem like there is only one possible answer and AMD is completely terrible. You can’t be objective without recognizing the pros and cons of BOTH things you’re comparing. Not just the pros of one and the cons of another.

            Have you looked at their current pricing? Did you read the article? It doesn’t matter if Nvidia’s 680 was $50 less than the 7970 at launch. It didn’t stay there; it immediately became more expensive than the 7970, and AMD readjusted their pricing as well.

            • flip-mode
            • 7 years ago

            Your interpretation of my comments is dumbfounding. Time for me to eject. Later.

            • sweatshopking
            • 7 years ago

            don’t bother with bensam. he’s a difficult cookie.

            • Bensam123
            • 7 years ago

            Don’t perpetuate hate SSK.

            • sweatshopking
            • 7 years ago

            stop posting annoying crazy things. nobody hates you, but your posts are often contrary and you try to be difficult.

            • Bensam123
            • 7 years ago

            It’s hard to make people look at things from another direction when they’re set on their own… let alone not having a direction, such as being impartial and unbiased. You have to be a bit resilient or they take you for all you’re worth.

            I believe every response I made to Flip was fair. Simply looking up the definitions of the words I used would make you see they’re applicable. I explained out why I thought what I did in a very logical fashion. That isn’t the same for what he did or what you’re doing, nor the bandwagon you’re trying to start.

            • Bensam123
            • 7 years ago

            You do understand what hyperbole and sensationalism is?

            You also understand why I said in order to have an objective and unbiased opinion you need to weigh the pros and cons of both items you’re comparing. You gave the pros of Nvidia cards and only the cons of AMD cards. Which means it’s biased. The way you described it made it hyperbole, and sensationalism also describes your style.

            This isn’t helpful to any sort of constructive conversation and perpetuates stereotypes which aren’t true.

            I haven’t marked down a single one of your comments.

            • flip-mode
            • 7 years ago

            You’ve missed your chance to have a reasonable discussion with me. I can’t have a discussion with someone who can’t recognize the context of the discussion and who can’t recognize that facts are not hyperbole. You’re just not a reasonable person when it comes to the topic of AMD vs Nvidia.

            • Bensam123
            • 7 years ago

            I am recognizing the context… That’s why I called it hyperbole and sensationalism.

            Pros and Cons of BOTH AMD and Nvidia = Impartial, Unbiased, Objective.
            Pros of Nvidia and Cons of AMD = Biased and Opinionated.

            Please sir, tell me what I’m missing out on since I missed my chance at a reasonable discussion with you?

        • BestJinjo
        • 7 years ago

        “I’ve been a Radeon user since the R9700s. I don’t sway towards either side, but I definitely know when things are unfounded.”

        Those two statements are contradictory. If you were impartial, how in the world did you manage not to buy NV during the HD2900/3800 period, unless you skipped those generations entirely, in which case you’d have been gaming on an outdated X1900 series until June of 2008?

        Also, I owned the 9700 series and the drivers for that series were very poor.

          • Krogoth
          • 7 years ago

          What are you smoking?

          Radeon 9700PRO drivers were pretty solid throughout its lifespan. The only major problem with the 9700PRO was at the beginning, where there were some issues with certain VIA chipsets trying to handle AGP 8x mode (this problem also affected the GF4 Ti4800 [FYI, it was a GF4 Ti4400 with 8X AGP]). The next driver releases from VIA and ATI resolved this problem.

          The Catalyst driver suite was the turning point for ATI’s infamous sub-par driver quality.

            • BestJinjo
            • 7 years ago

            No they weren’t. For starters, ATI didn’t improve performance in OpenGL game engines until the HD4000 series. It’s well documented. All those GeForce 3, 4, 5, 6, and 7 series cards had far superior performance in all OpenGL titles. Also, in ATI’s early days, their drivers were piss poor: the Radeon 8500, 9700/9800 series and even the X800 series. BSODs, driver crashes, trilinear filtering problems in textures/mip-map transitions. Their anisotropic filtering was a joke for 5-6 generations too, all the way to the HD4000 series as well. I’ve owned all of the Radeon series mentioned above and remember discussing these issues with other ATI owners. Don’t tell me I am smoking something just because you want to bury the past. ATI didn’t get a bad name for drivers for no reason. Their drivers are very good today, but before 2006 they weren’t even close to NV’s.

            • Krogoth
            • 7 years ago

            Sounds like a case of looking at the past with “green-tinted” glasses to me.

            The 9700PRO was faster than the GF4 Ti4600 when it came out (yes, even in OpenGL titles), and that remained the case until the debut of the GeForce FX 5800U. The 5800U was a bit faster in OGL titles (it had more fill rate and memory bandwidth than the R9700), but in anything involving shaders, it fell completely short (especially stuff using SM 2.0) because of its architectural shortcomings. This was pretty much the case for the entire FX family, until Nvidia resolved this shortcoming with its GeForce 6xxx family. From that point, both companies were trading blows until the 8xxx/HD 2xxx generation, where Nvidia clearly had the advantage. ATI managed to get back in the game with the HD 3xxx/4xxx series. They have been trading blows since then.

            Nvidia had its own set of issues with its AA and AF schemes that required several hardware and software revisions to resolve as well. The most outstanding issues that I can remember off-hand were the Vista + GeForce 8xxx series BSODs until a release driver or two later, and the 196.75 drivers messing up fan-controller profiles, which ended up killing perfectly working hardware. I’m not even going to touch “Bumpgate”, but that’s more of a manufacturing defect that Nvidia tried to sweep under the rug.

            Both companies were caught red-handed with optimization tricks that inflated scores/FPS by sacrificing IQ. They also had driver issues that broke certain games and caused some CTDs/BSODs over the years, but the issues were typically resolved within the next driver release or two. IIRC, they usually occurred with bleeding-edge hardware.

            • torquer
            • 7 years ago

            As far as OpenGL performance, back at the end of the NV 5800 era Doom 3 was kind of “the” OpenGL game at the time. Benchmarks below:

            [url<]http://www.anandtech.com/show/1416/3[/url<]

            Of course you could say id Software heavily favors Nvidia hardware, but if you’ve got comparative benchmarks of other OpenGL games from this era, I’d be interested.

            The bottom line is no one’s going to get their minds changed by this or any other comment thread. I personally favor Nvidia because it is what I’m used to and I do prefer their drivers. However, I’ve suggested (and purchased) ATI cards for my friends when they were the clear winner at a certain price point (case in point - 6850s).

            • Washer
            • 7 years ago

            Either you sent the wrong link or you missed the part where there are no GeForce 5800 cards in those graphs.

            • torquer
            • 7 years ago

            [url<]https://techreport.com/articles.x/7200/5[/url<]

            From our own beloved TR. You can see the NV cards outperforming their Radeon counterparts.

            • Washer
            • 7 years ago

            I’d point out that those charts are missing both the 9800 Pro and 9800 XT, both cards available at the time. I’d be willing to bet both would also be above the 5900 in the results.

            • torquer
            • 7 years ago

            It’s possible, but I’m way too lazy to look any deeper. We are, after all, talking about cards that are something like 8 or more years old now. It is amazing, though, to see how technology has progressed. Here you had Doom 3 still being tested at 800×600 on mid-range hardware, and today’s stuff doesn’t even break a sweat at 1920×1200. Crazy.

            Either way, I’ll never stop being amused by the fanboyism on both sides. ATI has had superior cards sometimes, and Nvidia has had superior cards sometimes. I stick with Nvidia for my own PCs because of personal preference. I have no desire to convince anyone to change their choice based on my own preferences or perceived superiority.

          • Bensam123
          • 7 years ago

          You can learn about products without buying a different brand. Just because I have a preference doesn’t make it any sort of extreme bias. I can still maintain an objective opinion of something without having to buy whatever is logically superior. For instance, I chose to support ATI through the years because I like being different and none of my real-life friends have ATI cards. It has nothing to do with how I logically perceive either Nvidia or AMD cards. My friends all have Nvidia cards and I read about them from other news sources.

          The HD2XXX series didn’t exist. That was an OEM thing. I went from an R9700 to an X800 to an X1800 to an X3800. I bought a 4870, later upgraded to a 4890, skipped the 5xxx series, and upgraded to a 6950 flashed to a 6970. I also owned an R8500 at one time too, I forgot about that.

          Yuh, they switched to the Catalyst initiative right around the transition from the R8500 to the R9700. That was about 10 years ago.

            • BestJinjo
            • 7 years ago

            The HD2xxx series didn’t exist? The HD2900Pro and HD2900XT sold on the desktop for months before being replaced by the HD3800 series. They were available for purchase on Newegg….

            Also, the fact that you only buy ATI/AMD cards, which included the awful HD3850/3870 series, shows that you are biased. Someone who is objective and impartial would never have purchased that series over the 8800GT or 8800GTS. The fact that you have a preference for one brand over the other that isn’t related to price, performance, or price/performance shows that you are biased. You are willing to buy AMD products no matter what, it seems. How are you objective exactly? How you managed to buy the short-lived X1800 series is also odd; most enthusiasts skipped that series for the X1900/1950XT series. Some odd buying decisions there, esp. going from an HD4870 to an HD4890 (in reference form the HD4890 was louder than a GTX470), a pretty awful card overall in terms of its reference cooler.

            • cwj717
            • 7 years ago

            The 3850/3870 were not awful, they were competitive at their price points. The 8800GT/GTS was faster but more expensive and used more power at load and idle. The 3850 was better than the 8600GTS, it was faster, used less power and cost about the same at launch.

            • Bensam123
            • 7 years ago

            Yeah… that seemingly doesn’t matter to some people. If you want to talk about biased opinions, that’s a very good place to start.

            • Bensam123
            • 7 years ago

            The 2XXX series was available to consumers, but those cards were almost exclusively limited to OEMs during their very short time on the market. A slightly revised version of the same chip was used for the 3XXX series, which was fully introduced to the market and had a standard run.

            Why am I biased because I bought an inferior card? I think you’re confusing my preferences with my impartial opinion of a product. I don’t need to buy something that is superior just because it is so. I can go out and buy a Bulldozer even though I know it’s worse than most Intel equivalents. Heck, I can do it just to spite you. Does it have anything to do with my view of the actual processor and its capabilities? No.

            • Washer
            • 7 years ago

            Seriously?

            You bought an ATi/AMD card because you thought it made you different? WTF… even I thought more highly of you than that.

            • Bensam123
            • 7 years ago

            Sure? First hand experience is great for getting to know products you wouldn’t otherwise come in contact with. In the scope of everything that goes on, a difference of 5-10 fps has little to no actual meaning to me.

            As far as crushing your hopes and dreams, I’m sorry. I’ll try to be more of a douchebag like you.

      • xeridea
      • 7 years ago

      You need to get your facts straight. First off, the power consumption difference isn’t that big, and is even minute in some cases (read this review). Performance is extremely close, but the 7970 tends to do better at 99th percentile frame times and high resolutions (things that actually matter). Part of running quieter is that Nvidia clearly favors running hotter over making some noise. Look at the performance-per-dollar chart: the 7970 has higher performance (where it matters) at a lower cost.

      About the Steam thing, it is expected that cards will sell slower 6 months after launch. Your comparison is misleading and worthless. In the end, they have similar share. There are other things to consider: Steam share isn’t a sure way of determining sales and market share, not everyone plays on Steam, and those who do may tend to prefer some cards over others, not necessarily in line with the market as a whole. For instance, the 680 sucks at GPGPU tasks (part of how they got power down), so many more will get the 7970 for that, though it won’t show up on Steam.

      By the way, taking the Steam numbers, there was the same first-month demand, but there were no 7970 shortages. Nvidia’s high-demand BS is false. They have crappy yield issues, on a smaller chip, 6 months later. It’s OK, you are not the first one to try this Steam argument. I agree about the 7800 series, they aren’t worth it per dollar right now; they didn’t get the price cuts, probably waiting for the 660 to be released… sometime.

        • BestJinjo
        • 7 years ago

        GTX680 reference design runs cooler and far quieter. The reference HD7970 Ghz is basically worthless for air cooling (even Computerbase.de admitted it’s basically unusable without headphones), being even louder than the GTX480:

        [url<]http://www.anandtech.com/show/6025/radeon-hd-7970-ghz-edition-review-catching-up-to-gtx-680/16[/url<] and [url<]http://www.techpowerup.com/reviews/AMD/HD_7970_GHz_Edition/27.html[/url<]

        I look forward to aftermarket HD7970 GHz editions from Asus, Sapphire and MSI, because the reference cooler cannot hope to cope with 1100MHz+ overclocks, just as it was a hair dryer for the original HD7970. You also didn’t address the elephant in the room: $400 GTX670 variants such as the Gigabyte Windforce 3X have no competition from AMD, and $450-470 aftermarket versions of the vanilla HD7970, such as the Sapphire Dual-X and Asus DirectCU II, are far better buys than the reference HD7970 GHz edition.

        Performance-wise, I’d give the edge to the 7970 once overclocked to 1200MHz+ over an overclocked 680. At those rates, it comes very close to the GTX680 in BF3, but wins big in Crysis/Warhead and Metro 2033, which really need the extra power. It also has 3GB of VRAM, which will help in GTA V and helps in Skyrim. However, AMD still has not addressed the 670 directly. $350-400 HD7950s simply make no sense, and the $429 reference HD7970 is a loud hair dryer that no one wants.

        • flip-mode
        • 7 years ago

        The Steam numbers tell us this much: while being on the market for a short period of time, GTX 680 still outnumbers HD 7970 according to the SHS.

        That’s all we’ve got. We don’t (I don’t, at least) know the percentage of 680 and 7970 buyers that use Steam, nor the percentage of those that take part in the survey. So there is much that is not known. But the fact still remains that the survey results show that among those that take the survey, the 680 is more popular than the 7970 and has achieved a higher market saturation in about half the time of availability. That’s still meaningful to ponder.

          • xeridea
          • 7 years ago

          You say the SHS tells us a little bit, but then you unintentionally discredit it by pointing out a lot of unknowns, any of which could completely skew that statement. Steam is just one use for these cards; there are many others that could sway the total numbers sold.

          The only thing you could take for sure from the survey is there is a good chance there are more 680s, but nothing is certain.

      • HisDivineOrder
      • 7 years ago

      The interesting thing is that AMD basically set the “value price” by putting the 7970 at its obscene $550 price point in the beginning and by the time the Geforce 680 came out months later, everyone was so against the 7970’s pricing they were paying $500+ for a 680 that was only a smidgen faster.

      But because it WAS cheaper (not to mention cooler, quieter, and slightly faster) its mojo was greatly increased. It might have had something to do with how shocked people were that the 460/560-like part ended up destroying AMD’s high end so hard even with a 1gb gap in video memory and lower bandwidth.

      If AMD had just undercut the 580 a smidgen (apparently by $50) at launch, it appears they would have all these users for themselves.

      The other rationale is everyone waited for nVidia because they just love nVidia.

        • clone
        • 7 years ago

        AMD did undercut Nvidia’s 580 at launch by $50, but AMD’s brand reputation isn’t strong in the high end. When the HD 7970 launched, instead of being seen as a victory for AMD, it was seen as a sign of 28nm’s potential. AMD’s low-to-mid reputation is solid, but the upper-mid and high-end single GPU has traditionally been Nvidia’s sandbox until dual GPUs came into play.

        people assumed Nvidia would beat the HD 7970 and force its price down….. I’m still disappointed by the GTX 680; the only reason it had its brief lead is because AMD beat Nvidia to market and gave them a bar to jump over.

        now here we are with AMD owning the single GPU performance crown…. if they can do it one more time, Nvidia’s high-end reputation will be gone; if they can’t, this round will be seen as a fluke.

        I guess I’ll be buying AMD this year to replace the GTX 460….. meh.

          • Airmantharp
          • 7 years ago

          It will be a fluke. Nvidia is beating the piss out of AMD’s top-end GPU with their mid-range silicon. Nvidia still has a larger, faster GPU that they can throw into the ring!

            • xeridea
            • 7 years ago

            The 7970 (stock and OC/GHz) is faster than the 680 at 99th percentile frame times, and totally destroys it at GPGPU. Big Kepler is aimed at workstation cards, and will have higher power draw per FPS due to its focus on GPGPU.

            • clone
            • 7 years ago

            first off, you just told an outright lie. AMD currently owns the single GPU crown; that’s not me saying it, that’s TechReport. So in reality, if you want to make mountains out of meager differences, AMD is the one beating the piss out of Nvidia’s best desktop GPUs at the moment.

            to your 2nd point, the strawman argument that Nvidia has GK 110 waiting in the wings: first off, they can’t produce the GTX 680 in quantity, and more to the point, the magical GK 110 is vapor so far. The problem will be threefold: heat, power and yields….. a trifecta of failure facing GK 110 that currently is so bad that Nvidia can’t offer it on the desktop and can only just get enough out for enterprise…… if that, to be honest.

            even more problematic is the implication that AMD won’t have a response for GK 110 by the end of the year, if and when it finally arrives, which is at this point totally in doubt.

            • Airmantharp
            • 7 years ago

            Nvidia has a bigger GPU in the works, slated for production. AMD does not.

            Nvidia has a dual-GPU card NOW. AMD will not for months.

            Single-GPU performance is neck and neck, but Nvidia wins every other category at the top end. Remember that these are just overclocked versions of the same cards we’ve already seen.

            • clone
            • 7 years ago

            sigh, claiming AMD has nothing in the works….. c’mon already, hinting at it makes you look foolish, let alone typing it. and as for Nvidia’s “slated for production”… Nvidia held up painted wood for all to see and called it Fermi, nuff said.

            Nvidia has a dual-GPU video card, but then AMD has the HD 7750, HD 7770, HD 7850, HD 7870, HD 7950 and HD 7970 available in quantity; note the GTX 670 is a fantastic card, but so lonely given the lack of GTX 680s even now.

            single GPU performance isn’t neck and neck, btw; AMD has officially got the lead, and if you ever mentioned Nvidia’s compute performance in the past, you have to give the nod to AMD this round…. please stop telling lies.

            when a company officially raises its frequencies, is that overclocking, or are the cards just higher clocked?

            • Airmantharp
            • 7 years ago

            I like how you’re still vehemently an AMD fanboy :).

            Thing is, at the top end, it’s a draw on performance- AMD may be marginally faster until Nvidia answers back. AMD loses at every other aspect for gaming at the same time.

            GPGPU is great and all, but AMD still doesn’t have CUDA, and never will. Nvidia got to this game first and is still ahead; even if half-Kepler isn’t GPGPU-oriented, GCN can’t run CUDA AT ALL.

            In the mid-range, AMD has the advantage, with newer, more efficient cards, for sure- but guess what? THAT’S NOT IN THIS ARTICLE. They still don’t have a dual-GPU card, and what they do come up with won’t match the GTX690 across the board, even if it does end up being slightly faster.

            When a company officially raises its frequencies marginally, at levels lower than people were getting by overclocking the original cards, just so they can meet the competition in performance while losing more in every other category, I call that unimpressive, if not disappointing.

            • Firestarter
            • 7 years ago

            [quote<]Nvidia got to this game first, and is still ahead, even if half-Kepler isn't GPGPU oriented, GCN can't run CUDA AT ALL.[/quote<]

            By that logic, AMD had better stop producing graphics cards altogether instead of even trying to compete.

            • Airmantharp
            • 7 years ago

            The logic wasn’t intended to be all-inclusive; it’s just a snapshot of how things are right now; just like this article is a snapshot of the fastest cards available, right now.

            AMD is competing quite well in GPGPU, but lacking CUDA support right now, they are behind the curve, which means that their effort won’t be immediately, or totally, realized today.

            • flip-mode
            • 7 years ago

            CUDA needs to die. Nvidia’s efforts at proprietary lock-in will all hopefully fail.

            • Airmantharp
            • 7 years ago

            3Dfx made a proprietary API, and kicked off the 3D gaming revolution. Then they died, along with their API.

            I don’t expect Nvidia to die, but I agree that CUDA shouldn’t go on forever as the standard for GPGPU. I also cannot ignore Nvidia’s contribution to the advancement of GPGPU computing through CUDA, nor how the lack of CUDA support hampers AMD’s GPGPU efforts.

            • flip-mode
            • 7 years ago

            Dunno, can’t say I agree with that. I don’t think it’s CUDA that has kept AMD from gaining in that area so much as it has been that AMD hasn’t had a good piece of GPGPU hardware until now. Nvidia has had the far better hardware for years now. It’ll be interesting to see if GCN produces GPGPU gains for AMD.

            • Firestarter
            • 7 years ago

            I’m certain it will. What I wonder is how Nvidia’s gamble of segregating the gaming and GPGPU markets will work out. I mean, part of the argument for using GPUs was that it’s consumer hardware, with pricing to match. If AMD needs to cut the pricing on the GCN GPUs to keep pace with Nvidia’s gaming GPUs, they could end up being a far better bargain for non-gaming applications.

            • Airmantharp
            • 7 years ago

            @Flip- it’s not so much that CUDA hindered AMD as that it was first, and ran on effective Nvidia hardware, which allowed for a commanding gain in market share. That AMD hasn’t had a real GPGPU part until GCN has crippled their inroads into that market. My point here is only that AMD is fighting both a hardware battle and an API battle at the same time, because Nvidia captured the market first.

            @FS- I expect their strategy to continue to work well, just as I expect AMD’s strategy of propagating GCN uncut through their product line, all the way to Fusion I hope, to aid in their attempt to break into the market.

            Nvidia has the clear advantage of market share, branding with Quadro and Tesla, and product development with big and little versions of both Fermi and Kepler.

            • clone
            • 7 years ago

            I love that you place more weight on non-gaming categories when considering the superiority of a gaming graphics card.

            this thread is built on an article that conclusively states that AMD is #1 at the top end. Your admission that AMD is marginally faster is confirmation, and hyping up non-gaming categories is a comment a fanboy would make…. an Nvidia fanboy.

            CUDA is vapor, public relations hype from Nvidia for Nvidia fanboys to push when all else fails…… just like PhysX was until it totally died. And you are correct, I suspect AMD will never bother to offer CUDA, because CUDA still doesn’t matter, especially when AMD’s compute performance is so much higher and can be tailored to the task…. holding up CUDA, an Nvidia proprietary feature, as an ace in the hole is an admission of failure.

            agreed, AMD owns the mid range with newer, more efficient cards for sure. Dual GPU is vapor, although it is ironic that previously, when AMD owned dual GPU (just last generation, btw), it was the Nvidia loyal who cried foul and claimed single GPU was king while Nvidia’s dual GPU cards were exploding in tests.

            I love that you are left embracing dual GPU as the last bastion. Given the power consumption of today’s GPUs, a dual HD 7970 shouldn’t be a problem; I’ve never recommended dual GPU to anyone and still won’t, but it will be funny if AMD takes the last category Nvidia has to hide in for this generation.

            when a company only has to raise its frequencies marginally in order to surpass the competition, I call it a testament to the design. In truth, Nvidia can’t play that game because their card can’t scale, which is a failing of its design.

            p.s. I like how you’re still vehemently an Nvidia fanboy 🙂

            • Airmantharp
            • 7 years ago

            You know, I’ve switched back and forth so many times, based on performance per dollar- and I don’t own stock in either company (though it’d be Nvidia stock if I did, due to AMD’s miserable CPU performance for years on end).

            You, on the other hand, own AMD stock- it’s in your best interest to promote their strengths and downplay their weaknesses at every turn. That makes you uniquely qualified to be an AMD fanboy :).

            While AMD and Nvidia trade performance blows at every price point, at the top end AMD loses in power draw, heat output, and noise. These things are important, and they’re the reason I went with Nvidia this generation, from AMD last generation (where the situation was the reverse). Oh, and Crossfire sucked balls, introducing massive micro-stutter into BF3. My single GTX670 feels twice as fast as my HD6950s did, and they were technically faster.

            • flip-mode
            • 7 years ago

            I can vouch for you. No way an Nvidia fanboy puts two 6950’s in his computer. The assertion is ridiculous.

            • clone
            • 7 years ago

            after 8 3dfx cards, a couple of Kyros, approximately 40 ATI/AMD and 50 Nvidia graphics cards, and a GTX 460 in my current rig, I’m not sure “AMD fanboy” applies.

            so anyway, no AMD stock and no particular interest in promoting AMD aside from on the basis of merit.

            power draw, heat output and… noise?…. these are your metrics?

            personally I’d go 1st performance, 2nd price, 3rd overclocking potential, 4th noise, 5th warranty, 6th….power draw… maybe?…. unless of course said power draw is egregious and affects multiple metrics.

            I agree that SLI support sucks balls right along with Crossfire support.

            power draw and heat output differences in the age of 28nm are negligible, with all of the cards mentioned consuming less power overall than a 4-year-old GTX 260….. noise is your last “metric”, but given that each manufacturer uses its own heatsink/fan combo, noise is more often than not a draw.

            of course, none of these have anything to do with performance or price, which apparently don’t matter to you?…… in regards to high-end gaming graphics cards.

            • Airmantharp
            • 7 years ago

            Aside from your inability to properly use the English language (after four edits!), I’ll have to say that price/performance is my foremost concern, but:

            Within an acceptable margin (say ~+/- 5%), other factors come into play. After price/performance, noise is the biggest, and is also directly related to power and heat.

            My point is, where AMD bests Nvidia slightly in performance and price/performance (and was behind Nvidia previously), they’ve been behind all along in noise, heat, and power draw, all of which bring other considerations into any system build for which the chosen card would be used.

            This is of course only looking at half-Kepler vs. big-GCN, not considering little-GCN and not taking into account GPGPU, which would lead someone to buy a different card altogether (probably Fermi, for strong GPGPU and CUDA support at the same time).

            • CaptTomato
            • 7 years ago

            Noise and heat are controlled by the AIB partners and have little impact on cost over the standard design.

            • Airmantharp
            • 7 years ago

            That depends on the card you wish to buy-

            For those with spaceship-themed cases, cards with two or three traditional fans that exhaust 1/4th or 1/6th of their airflow out of the case work great. People with those cases are usually 14.

            For the rest of us that want an unassuming enclosure, be it mITX or EATX, a cooler that exhausts all of its airflow out of the case is in order, and those are usually made by the OEMs themselves.

            Both work, but I think the reviews don’t tell the whole truth. On a bench, an open-air cooler is going to perform better than a blower, ostensibly being quieter and cooler, as it has an entire room to exchange air with. But in an enclosure, especially one not riddled with holes and gigantic fans, the more effective, and usually quieter, solution is the OEM blower, simply because it doesn’t choke on its own heat. The example gets even more extreme when you add a second or third GPU.

            • CaptTomato
            • 7 years ago

            I think people need to be smart about what they buy, and I don’t buy or advocate small enclosures… I have a HAF, and I’m free to put the biggest and baddest GPU in there as it’s so accommodating, though it’s currently supporting a Sapphire 6850 which never breaks 75°C under load, not bad given my 27°C+ room temps.

            • Airmantharp
            • 7 years ago

            The HAF, and its entire related line, is a dust magnet :).

            Put enough airflow through it to account for a pair of decent cards (for which the case appears to be designed), and it will be both noisy and dusty… and it’ll still look like a spaceship.

            Not dogging your case intentionally; the above reasons are why I personally didn’t buy one, or anything from Cooler Master, for my last upgrade.

            • CaptTomato
            • 7 years ago

            Maybe, but nothing a clean every 6 months or so won’t fix, plus older cases in the past collected some dust too and were a PITA to work on… the HAF/X use a clever design that makes life easy when going inside the case.
            Btw, thanks for not dogging on my case, hate to see it if you were, LOL.

            • Airmantharp
            • 7 years ago

            Dust isn’t just something that needs to be cleaned out for aesthetics- it’s also detrimental to any exposed fan.

            Keeping the dust out means not having fans get noisy and fail over time; you just clean the filters instead.

            • CaptTomato
            • 7 years ago

            My HAF is 2.5 yrs old, and I’ve yet to have a single component failure other than my 4850… which was still working but developed a slight problem cleanly processing some videos; anyway, it still lasted 3 yrs.
            The 78xx suck on power/price, so they’ll never see the confines of any case I own.

            • clone
            • 7 years ago

            doesn’t the HD 7850 use notably less power than the old 4850 you had? and an HD 7850 costs $235 at Newegg…. how is that a horrible price?

            • Airmantharp
            • 7 years ago

            I don’t think it is. I mean, people still buy them.

            • clone
            • 7 years ago

            I found his comment that power and price for the HD 78xx are so bad he’d never put one in his case odd, given that the HD 78xx offers a lot more performance and features while using less power and costing about the same (despite inflation) as the HD 4850 he used for the 3 years prior.

            • Airmantharp
            • 7 years ago

            Oh I agree with you; it’s not a bad proposition, and you’re right on all points.

            I just think that he’s saying the HD7850 (and I’m expounding here) is overpriced for its performance bracket relative to other cards, and he may very well be right.

            We’ll need to see Nvidia’s mid-range stuff before AMD willingly drops prices on those cards.

            • clone
            • 7 years ago

            true enough.

            • CaptTomato
            • 7 years ago

            Yo Mr AMD shareholder.

            I think I got approximately a 100% performance boost with a 6850 over a 4850.
            My 4850 cost me $270 (HIS cooler), my 6850 cost me $165 with a Sapphire cooler.
            Now, if I wanted a serious boost in performance, do you think I’d buy a $260-$280 7850 that’s maybe 50% faster…

            • Airmantharp
            • 7 years ago

            You made the prudent decision, I salute you.

            A while back, in between GPUs, I put an HD4870 1GB in my system and proceeded to play BF:BC2 at 2560×1600.

            After dropping the settings, the game was more than playable, and I was surprised. I could have continued to get along with that card for some time.

            The only game that I’ve needed to truly upgrade for has been BF3. Its amazing graphics and tremendous amount of interactivity have driven my upgrade decisions, and guaranteed that I’ll be able to play pretty much anything that comes out acceptably over the next few years, at least.

            • clone
            • 7 years ago

            no AMD stock but I’ll take the compliment.

            if the performance of the card didn’t matter and all I cared about was getting a big improvement, I guess you’d have a point, but minimum performance is the first criterion in any video card purchase I make, followed by price.

            if the card can’t do what I ask of it, it’s worthless at any price.

            using your logic going from integrated Intel gfx to an HD 6570 for $50 is by far the best deal ever because it’s 400% faster while still being too slow for 1080p gaming.

            • flip-mode
            • 7 years ago

            I’ll repeat my question: Why do you expect 6800 -> 7800 (one gen) to be as good as 4800 -> 6800 (two gen)?

            Furthermore, at $270 you bought your 4850 at about $70 more than anyone else, and that’s right at launch. At $165 you bought your 6850 many months after launch. (I got a 4850 for $180 about a month after launch)

            If you want your comparison to be fair, you need to wait for the 8850 to launch and give it 8 months to ramp and settle.

            • CaptTomato
            • 7 years ago

            Don’t ass-ume….. the reality is, I rejected the stock 48xx’s as infernos and waited for a proper HIS ICE-cooled model, and had to pay for the privilege of getting a satisfactory product.

            • CaptTomato
            • 7 years ago

            For some it would be okay, but for 6850/70 owners there’s no meaningful upgrade short of a 7970 IMO.

            • Airmantharp
            • 7 years ago

            You’re right- at 1080p, there really isn’t even a need to. About the only thing you’d gain is more MSAA.

            Hell, I thought about adding a second GTX670, but then I realized that I’d be paying $400 for MSAA (at 2560×1600), for one game. I figure I can live without MSAA :).

            • CaptTomato
            • 7 years ago

            7850 is a horrible upgrade path for a 6850 though.

            • Airmantharp
            • 7 years ago

            Yup.

            • clone
            • 7 years ago

            the HD 6850 wasn’t mentioned; going from an HD 4850 to an HD 7850 was. On a side note, if I started with an HD 4850 and was choosing between an HD 6850 or an HD 7850, I’m not sure the HD 68xx would win; while it is cheaper, I don’t know if I’d be willing to go that low given the price bracket.

            you’re debating a $50 difference between 2 cards, 1 that can do 1080p gaming with some details maxed out vs the other that can manage better than 1080p gaming with all details maxed.

            • CaptTomato
            • 7 years ago

            Need to learn to read and/or pay attention there, champ; as I said, the upgrade path for 6850/70 owners sux compared to the 48xx-to-68xx one.
            And FYI, the 7850 can’t max any proper PC game at 1080p 60FPS; heck, even the gigahertz 7970 can’t, and that’s why I’m meh over this whole gen of hardware, including shitty CPUs and “taking forever to become sizeable and affordable” SSDs.

            I want double the grunt of a 7970 without the extravagant cost and possible microstutter, and I’ll be keeping my adequate PC till that becomes a reality.

            • clone
            • 7 years ago

            nothing missed ace, you’ve marginalized yourself into a hole so deep I can’t be bothered.

            lol, so yes, you may as well sit around for 3 more generations telling everyone how happy you are with, by your own admission, inadequate hardware…. thankfully us “serious/casual” gamers will buy better, flip the old and buy better again when the newer tech arrives, and keep doing so, while you’ll be sitting back and waiting while applying double standards.

            • CaptTomato
            • 7 years ago

            Happy??….are you semi-retarded?
            I’m complaining about the state of HW, so how does that translate into happiness?
            Face it “ace”, you don’t bother reading what I say or struggle to comprehend it, and I’ve been concise and direct with my views.

            • flip-mode
            • 7 years ago

            Why do you expect 6800 -> 7800 (one gen) to be as good as 4800 -> 6800 (two gen)?

            • CaptTomato
            • 7 years ago

            It feels like 2 gens to me… the 78xx are a measly improvement over the 6950/70 and not a worthwhile upgrade, especially at the 7870 prices.

            • clone
            • 7 years ago

            thanks for giving me the home run.

            given the cheapest HD 7970 is $419 after a $20 MIR vs $509 for a GTX 680, leaving a 19% before-tax price difference favoring AMD, you just announced/endorsed the HD 7970 as the best card by your own “metrics” of price and performance being primary, with a +/- 5% tolerance before other criteria gain relevance…. AMD wins on price by 20%+ after tax.

            the rest of your “points” are trash given your stated methodology.

            p.s. the edits are slash and burn; there is a 5th to slash out more of the original post and there will be no fixes. nothing exposes the unimaginative like those that whine about grammar.
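
            (For anyone who wants to check the arithmetic, here is a minimal Python sketch of the price-gap math in the comment above. The $419 and $509 figures are the ones quoted there; the two percentages are simply two common ways of expressing the same $90 gap, measured against the midpoint of the two prices and against the Radeon's price, so treat it as an illustration rather than anything from the article.)

                # Price-gap arithmetic for the quoted street prices (illustrative only).
                radeon, geforce = 419.0, 509.0

                gap = geforce - radeon                          # $90
                vs_midpoint = gap / ((radeon + geforce) / 2.0)  # ~19.4%: gap measured against the midpoint
                vs_radeon = gap / radeon                        # ~21.5%: how much more the GeForce costs

                print(f"${gap:.0f} gap: {vs_midpoint:.1%} vs. midpoint, {vs_radeon:.1%} vs. the Radeon's price")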

            • Airmantharp
            • 7 years ago

            The HD7970 (base model) competes mostly with the GTX670; that’s why AMD made the GHZ edition in the first place.

            Your price comparison is therefore moot; and the rest of my points are always in play. I won’t buy the best price/performance card if I can’t also make it sufficiently quiet.

            And using a language properly is simply professionalism. I’m making light of your refusal to make an effort because, really, it makes me not want to read your posts. This isn’t a ‘text-message’ board.

            • flip-mode
            • 7 years ago

            You want him to become a professional fanboy?

            • clone
            • 7 years ago

            the original HD 7970 beats the GTX 680 on price while only being a little slower, the original HD 7970 beats the GTX 670 in performance while being notably close in price, and the GHz edition HD 7970 beats the GTX 680 on price and performance.

            those are your stated metrics being used, and Nvidia loses.

            trying to argue that it’s not fair to compare a card offering similar performance to the GTX 680 at the price of a GTX 670 is a special kind of ridiculous.

            professionalism…… you’ve lied, exaggerated and tried to shift the discussion to non-gaming criteria; upon complete failure you go after grammar……. and now it’s about professionalism….. no offense, but lol.

            • Airmantharp
            • 7 years ago

            Look at the difference in performance. It’s like you’re claiming that even +.1% faster means that the other product is suddenly irrelevant. Regardless of what the performance difference actually is, and it depends highly on the testing situation, cards with the same basic performance should be compared on more metrics than just price.

            And I haven’t lied, exaggerated, or tried to shift the discussion. My point all along has been that with cards of similar performance and price, other metrics start mattering a lot more.

            And whether or not noise and heat are things you worry about, they definitely are things that many informed gamers worry about, and are quite pertinent to a discussion involving gaming cards.

            • clone
            • 7 years ago

            Airmantharp, you claimed Nvidia is beating the piss out of AMD; that is indeed an outright lie.

            you then exaggerated non-gaming criteria (power consumption, heat, noise) regarding cards that use less power and generate less heat than a 4-year-old GTX 260, and on noise it’s not a weakness of the architecture but a choice made by the vendor, and a tie between AMD and Nvidia.

            I’ve stated quite properly that price and performance are king, followed by the others. You accepted this and claimed that is true until the differences fall within +/- 5%……. given that AMD beats Nvidia on price by around 20% while matching and beating them in performance, using your own metrics it’s a solid win for AMD against both the GTX 670 and GTX 680.

            that is your reality, those are your criteria.

            you keep trying to talk up noise, noise, but the noise…… it’s a vendor issue; there are quiet AMD cards and noisy ones, there are quiet Nvidia cards and noisy ones, depending on what the vendors choose to equip their cards with….. neither design consumes egregious amounts of power, and they are nothing like the GTX 580 and GTX 480 power whores of the last 2 generations.

            • Airmantharp
            • 7 years ago

            They’re beating the piss out of AMD in sales, bro.

        • xeridea
        • 7 years ago

        The stock 7970 is faster than the stock 680 in 99th percentile frame times. The 680 is also really bad for GPGPU, though its power draw is a bit less. The 7970 has ZeroCore Power though, so total average power consumption is probably similar in the long run.
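
        (Since “99th percentile frame time” keeps coming up in this thread, here is a minimal Python sketch of how that metric differs from an FPS average. The frame times below are made-up placeholders, not review data; the point is that one long frame barely moves the FPS average but dominates the 99th-percentile figure.)

            # Illustrative only: compare an FPS average with a 99th-percentile frame time.
            import math

            frame_times_ms = sorted([16.7, 15.9, 17.2, 16.1, 18.4, 16.5, 33.1, 16.0, 16.8, 17.0])

            # Nearest-rank 99th percentile: the frame time that 99% of frames come in under.
            rank = max(1, math.ceil(0.99 * len(frame_times_ms)))
            pct99_ms = frame_times_ms[rank - 1]

            # Average FPS: total frames divided by total time.
            avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

            print(f"99th percentile: {pct99_ms:.1f} ms, average: {avg_fps:.1f} FPS")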

        • Airmantharp
        • 7 years ago

        I actually want to come back and respond to this (also with a +1):

        The last line about ‘loving Nvidia’ is probably the wrong way to put it- but it is going in the right direction.

        When the HD7900’s came out, I saw them in the tint of my HD6950’s. One card wasn’t faster, was damn expensive, and was still loud and hot. And it was slower than I, and many others (including Nvidia!) had expected.

        So I waited. When Nvidia released their GTX680, being faster, cooler, quieter, drawing less power, and forcing AMD to lower their prices, I thought, ‘cool, well, we’ll see what happens.’

        The very day the GTX670 was released, I bought one. Release day! Never done that before. But to me, one GTX670 cost as much as my two HD6950 2GB cards were worth altogether, so it was an even trade. And while the GTX670 was technically slower, I had just been through AMD’s driver issues with BF3 and then Skyrim, and I still wasn’t satisfied with the ‘feel’ of BF3.

        Go figure that the GTX670 felt faster than my CFX setup. If I sound like an Nvidia fanboy right now, it’s because I am- they’ve earned it!

          • CaptTomato
          • 7 years ago

          bah this gen of everything sux, we still have microstutter and IMO slow single GPU’s that’re also expensive relative to the value I got out of my old 4850 from 4yrs ago.

            • Airmantharp
            • 7 years ago

            While I agree the ‘value’ argument has changed, you do realize that AMD came to market with their first competitive GPU in two generations, and chose to under-price the HD4000’s in order to regain face and market share?

            Also, micro-stutter is a problem, but mostly on multi-GPU configs.

            • CaptTomato
            • 7 years ago

            Then their 78xx’s are total junk…..not to worry, I feel like waiting till 2014 before doing a huge upgrade including SSD and proper/better CPU.

            • Airmantharp
            • 7 years ago

            Replying despite your sarcasm:

            The 78xx’s are totally not the right card to stress the thermal capabilities of a HAF case; as it stands, it’s a mid-range card. Two (or more) high-end cards would be more appropriate.

      • Alchemist07
      • 7 years ago

      “Also interesting is that according to % change the GTX 680 is selling almost 2 for every 7970.”

      I don’t think you understand maths…

        • flip-mode
        • 7 years ago

        I’m listening…. according to Steam Survey as of end of May:

        7970 = 0.05% in 4.5 months = 0.111% gain per month
        680 = 0.56% in 2 months = 0.28% gain per month

        What am I doing wrong? My “guesstimate” was pretty loose. It’s more like 1.4:1 instead of 2:1.

      • Kaleid
      • 7 years ago

      Even if AMD has a better product Nvidia will still sell better, just like Intel will sell better than AMD.

    • dpaus
    • 7 years ago

    You know, Scott, for all that I applaud your ‘Inside the Second’ approach, it’s becoming increasingly obvious that it was a brilliant idea that arrived just a bit too late. These cards are so close to each other in raw performance that you need a completely new paradigm to measure the differences that [i<][b<]really[/i<][/b<] count. Fortunately, I have a modest proposal 🙂

    What you need now are a completely new set of graphs. To capture the metrics that really seem to be making a difference in game performance, why don't you graph the number of lunches bought for the game developers per week by both AMD and Nvidia. Maybe graph two series: lunches bought for the actual developers vs. for the development managers. Then a second graph showing the number of cases of beer and/or Red Bull + vodka that are mysteriously mis-delivered to each studio. Hmmm, for the x-axis on that, use the number of weeks before launch date (you may have to make that a logarithmic axis).

    And if you really want to set a new industry standard for comparison, and your testing methods are up to it, how about a graph of the studio executives' bank balances right around the time they make the key platform support decisions. Add a second series showing the total retail value of 'vacations' they take to visit a mysteriously long-lost friend who just happens to live in Bali now. Or Vegas, or Paris, wherever... Then take the differentials from [i<]that[/i<] graph and plot them against the final game performance on two otherwise-identical cards. Trust me, they'll be [i<]really[/i<] interesting graphs.

    ps: shine enough light on these cockroaches and their practices, and you'll have a whole new kind of 'scatter plot'

      • PrincipalSkinner
      • 7 years ago

      If you’re trying to be funny, it didn’t work on me. Maybe it’s me.

        • Kurotetsu
        • 7 years ago

        “Yo, Tech Report! I’m really happy for you, Imma let you finish, but your graphs suck! But there’s an easy fix, replace all your graphs!”

        Yeah, I think it was a joke.

        • dpaus
        • 7 years ago

        I’m guessing you’ve never had the pleasure of seeing well-funded ‘corporate relations’ teams at work. Like, say, having a conference manager tell you that he can’t let one of your customers give a presentation on how they solved a common problem, because they would likely mention your company, and how presentations that mention a company’s name are highly unethical, and strictly against their principles. Except that when you show up at the conference, your competitor is giving a 1-hour product presentation right after the keynote address. When you ask how [i<]that[/i<] doesn't violate their principles, they kindly explain that it's because the competitor made a $50,000 'sponsorship donation'.

        Not that that recently happened to anyone we know, just, you know, if it [i<]did[/i<]...

        Or seeing two graphics cards that are neck-and-neck performance-wise, but each of them trounces the other in a particular game, because the vendor 'worked with' the developer.

        The real accomplishment here is seeing stuff like that happen and still keeping a sense of humour about it at all 🙂

        EDIT: Oh, it only just occurred to me that you don't find it funny because you're [i<]on[/i<] one of those corporate relations teams. NVM, just ignore this thread....

          • Firestarter
          • 7 years ago

          [quote<]NVM, just ignore this thread....[/quote<]

          Was going to anyway!

      • cynan
      • 7 years ago

      While this would no doubt make for highly entertaining journalism and rampant name-calling and fostering of animosity by opposing fanbois (hence the +1), at the end of the day, isn’t the in-game performance of games that people actually want to play what matters? I suppose these sorts of shenanigans would go too far if, for example, AMD paid a developer to go out of their way to make their game run worse on Nvidia architecture, but there are already laws in place to restrict those sorts of practices, are there not?

      Are you of the mind that game development should follow some sort of socialist paradigm where every game developer is given the same common-denominator advantage when it comes to optimizing their games to run on their customers’ hardware? How does this best serve the customer – the ultimate goal of any business creating a consumer product – in the end?

    • Lans
    • 7 years ago

    I’m not understanding the Max Payne 3 numbers… The frame time vs. frame number graph for the GTX 680 AMP! and HD 7970 GHz and the average FPS numbers are not making sense to me… maybe the colors are swapped in the graph? At least with the graph the way it is, I would think the HD 7970 GHz should have the higher average FPS.

    EDIT: I guess same thing with GTX 680 and HD 7970?

      • xeridea
      • 7 years ago

      Interesting. Looks that way to me too: the 7970 produced more frames, but has a lower average FPS. Well, the 680 has slightly better frame times, which is what matters, but the 2 cards are so close it doesn’t really matter in this game.

        • Damage
        • 7 years ago

        The results come from a single test run in which the 7970 happened to be faster. It turns out not to be very representative of the larger picture from five runs. Remember, as I noted, Max 3 was more variable from run to run than most test scenarios. I will get you guys some numbers later to illustrate what’s happening.

          • CaptTomato
          • 7 years ago

          MP3 can also eat BIG RAM, so if you can enable AA, the 2gig cards will be exposed.

          • Damage
          • 7 years ago

          Ok, so here are the FPS averages for the individual test runs.

          7970: 79.0, 81.4, 85.0, 82.2, 84.3 — Median: 82.2

          GTX 680: 82.3, 82.7, 80.9, 82.1, 82.1 — Median: 82.1

          We grabbed the data for the plots from the third test run for each card. Hence the disparity.
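
          (A quick way to double-check those medians: the Python sketch below just feeds the five per-run averages quoted above into the standard library’s median/mean. The “run 3” column reflects the explanation that the plotted data came from the third run, which is why a single plot can favor the 7970 while the five-run medians are a wash.)

              # Re-deriving the medians from the per-run FPS averages quoted above.
              from statistics import mean, median

              runs = {
                  "HD 7970": [79.0, 81.4, 85.0, 82.2, 84.3],
                  "GTX 680": [82.3, 82.7, 80.9, 82.1, 82.1],
              }

              for name, fps in runs.items():
                  print(f"{name}: median {median(fps):.1f}, mean {mean(fps):.1f}, run 3 = {fps[2]:.1f}")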

    • rrr
    • 7 years ago

    Not impressed. Sure, it caught up to 680 and maybe even edges it out at times, but this is offset by increase in power consumption/noise/temperatures. Very brute-force approach vs more elegant (at least for gaming) Kepler uarch.

      • sweatshopking
      • 7 years ago

      it’s a fully featured rather than neutered card. it DOES draw more power, but it also has more capabilities. if you’re looking at gaming ONLY, then you’re right. Kepler is a more elegant approach.

      • Airmantharp
      • 7 years ago

      Basically my opinion. GPGPU isn’t used by gamers, and thus is a non-argument- if you need GPGPU, you were buying something else anyway.

      AMD uses a bigger chip, drawing more power and emitting more heat and noise to achieve the same basic performance. It’s a win for Nvidia even if the AMD card is slightly faster.

    • Bensam123
    • 7 years ago

    And yet it doesn’t seem to matter for some reason. Nvidia almost completely dominates the GPU market in terms of sales. All you have to do is look at Steam’s breakdown.

    Time and time again, even in this day and age, I still hear a lot of unfounded hate about Radeons though… The drivers are poor… AMD is terrible (usually referring to Bulldozer, but Radeons now catch flak for that as they’re ‘AMD’ too)… I’d never buy an AMD. I kinda feel sorry for them, especially after they deliver a very capable and competitive product.

      • Krogoth
      • 7 years ago

      I don’t get it either.

      The entire “AMD drivers suck and Nvidia drivers are flawless” meme continues to persist even though neither has been true for years.

      They each have their own set of minor issues with the occasional “serious” issue that gets resolved within a few days to a week. The vast majority of the issues typically occur on bleeding edge platforms.

        • Bensam123
        • 7 years ago

        Yup, and they both have them. Neither is supremely better than the other, and even when one is, AMD usually matches it on price/performance.

        • rechicero
        • 7 years ago

        Nvidia has always spent more (and better) marketing dollars. I wouldn’t be surprised if they had a viral marketing team reviving that meme again and again…

        • willyolio
        • 7 years ago

        in my personal experience, it’s always been the reverse…

      • esterhasz
      • 7 years ago

      I have also followed the Steam HW survey and Nvidia is indeed doing quite well there. The numbers should be taken with a grain of salt though: there is a certain geographical bias (AMD has recently done better in Europe, where Steam penetration is lower, than in the US) and probably also a bias in terms of user profile. Valve also detected a significant data collection error recently – more reason to be a bit skeptical.

      Numbers from market research indicate a relatively stable 60/40 in favor of Nvidia and it looks like the ASP of this round will be higher than the last one. So things don’t look that bleak for AMD.

      They have recently made a real step forward in terms of getting developers to use GPGPU and there is the real possibility that AMD gets a (small) slice of the Tesla cake. Stock is at $5.72. I am going back in.

        • sweatshopking
        • 7 years ago

        even though they make almost no money on GPU’s and the cpu side is bleeding? i wouldn’t buy that stock without a clearly improving outlook. do what you like bro, but damn, AMD isn’t something i’d buy.

          • esterhasz
          • 7 years ago

          That’s precisely the reason I am thinking about going back in. If there was a clearly improving outlook, the stock price would already be much higher. General bleakness with only a hint of a silver lining is when the stock market smells of roses. I’m just not sure whether to wait just a bit more for it to get darker still. But the world is flush with scared cash, waiting to bubble once again and tech companies resonate stronger than others with the hive mind of global capitalism. Between late 2008 and late 2009 AMD stock quintupled in value. That’s not going to happen again, but I’m itching…

          • clone
          • 7 years ago

          AMD is walking away from its traditional markets. If AMD’s traditional market performance were all I was looking at, I’d agree, but AMD isn’t looking at those markets, and it’s where they are looking and headed that has me interested in them as a stock purchase.

          on a side note, I believe AMD is making goodly amounts of coin on GPUs; this generation isn’t cheap, and AMD HD 7xxx sales are stronger across the board, especially in the $200 realm.

            • Airmantharp
            • 7 years ago

            Now I know why you’ve been such an AMD fanboy lately…

            I remember a retiree defending his purchase of RAMBUS stock in 2000. He was religious about it.

            • clone
            • 7 years ago

            the problem being Rambus did very well through litigation until 2009 so the retiree was making the right choice.

            • Airmantharp
            • 7 years ago

            They did, but their gravy train stopped quick.

            From my perspective at the time, which hasn’t changed, Rambus was pushing a crappy solution backed by Intel: their ‘800MHz’ memory for the Pentium 3 at the time imposed a considerable latency penalty, while the much lower-latency PC100/PC133 provided more bandwidth than those CPUs could use. It was slower, and cost more, on the scale of eight to ten times as much.

            Meanwhile, Rambus was trying to litigate the DRAM makers to death using patents on technology they developed while participating in the JEDEC standards body that defined the standards for the DRAM that those manufacturers were producing. Big WTF there.

            So while that retiree was standing on his stump for Rambus, I was looking at him like he was a raging lunatic for it, since I was applying more logic to the situation than just financial statements and litigation strategies.

            And I was right in the end, and good prevailed :).

            • WaltC
            • 7 years ago

            Yes, in the end the truth came out, but IIRC, it took years of litigation and appeals to prevail. Some companies, rather than join the legal fight, caved and paid RAMBUS’ “licensing fees” as they deemed those cheaper than fighting RAMBUS in court. You no doubt have heard the saying: “Only a lawyer could screw up a wet dream”….;) The damage that lawyers have done to technology advancement and price-competition in the marketplace is incalculable. Lots of consumers don’t understand that legal fees are ultimately paid by the customer in terms of higher prices for goods and services–of course, the lawyers don’t care where it comes from, by hook or crook, so long as it comes.

            Then you get the occasional company like RAMBUS, propelled into the spotlight by an Intel anxious to corner the bus market through its agreements with RAMBUS–and all hell breaks loose. RAMBUS makes no products–it has never made them–and is staffed almost wholly by lawyers who constructed a technological facade to mask their chief ambition: becoming a legal troll. The JEDEC story is proof positive of that–as if the definition of RAMBUS as a legal troll needed substantiation.

            Then the copycats emerged–SCO suing everybody, what’s left of Novell trying to blame the failure of Word Perfect on Microsoft–when everyone knows why WP failed–Microsoft Word, just like IE in comparison with Netscape’s browsers, simply became a better product with time and development. And that was “time and development” that wasn’t put into Word Perfect. Etc. ad infinitum. Apologies if my timeline here may be a bit off…;)

            But, in hindsight, it's a good thing that Intel bet on the wrong horse at the time. Otherwise, it might've been Intel owning "DDR" bus standards instead of the market at large. But there's simply no way that Intel would have ponied up a $1B RAMBUS buy-in without the anticipation of a much larger payback further on down the line. Meanwhile, unfortunately, RAMBUS still exists–and is [url=http://investor.rambus.com/litigation.cfm<]still suing[/url<] people at the moment. The RAMBUS link that tickled me the most was the "Careers at RAMBUS" link...;) I can well imagine what it says in abbreviated form: "Tried and tried again to pass the bar but just can't seem to make it? Or do you have a law degree on the wall in your "me space" at home, but can't seem to figure out what to do with it? Well, bring yourself and your moldy law degree, if you have one, to RAMBUS, because we've got a job for you!...."

        • erwendigo
        • 7 years ago

        The most important bias is the user profile (gamers), not Steam's penetration in Europe. But it's a useful bias: the Steam user base doesn't represent the whole market, but it's a very good subset of it, because it concentrates much of the gamer market.

        The bug in the data collection method has been fixed; the fix was applied to the March survey data, and all the following surveys are free of it.

        • Bensam123
        • 7 years ago

        I would trust Steam's hardware survey over 'market research'. Unless they're getting their numbers directly from the manufacturers, it doesn't account for squat… Not sure why you're pointing to the Steam survey as bad and market surveys as good when the Steam survey shows roughly 60/40 as well. Things have been like that for years, though.

        Intel recently showed up in the Steam survey, but that's because all their new processors have an IGP. I'm surprised Steam doesn't add an annotation for them, plus a check box to show or hide the Intel IGPs when a dedicated graphics processor is present.

          • BestJinjo
          • 7 years ago

          I think both are good data points. Taken together, we can get a very accurate picture.

          [url<]http://jonpeddie.com/press-releases/details/graphics-add-in-board-shipments-seasonally-down-just-2-from-last-quarter/[/url<]

          NV = 61.8%, AMD = 37.2%. It's around 60/40 for mobile, though, where AMD is leading.

          • esterhasz
          • 7 years ago

          Market research does indeed get their numbers from manufacturers (and big distribution channels).

          The 60/40 on Steam is not market share but installed base. Market share refers to current sales over a set time frame, e.g. "the last three months".

          Steam does allow you to guesstimate sales data by looking at how the share moves from one month to the next. Because of the aforementioned biases, though, going from installed base to sales numbers is simply not that straightforward.
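
          (A toy sketch of that guesstimate in Python, with invented installed-base numbers rather than real Steam survey data:)

          # Hypothetical illustration of guesstimating sales from installed-base movement.
          # Every number below is invented purely for illustration.
          prev_total = 40_000_000                      # surveyed users last month (assumed)
          curr_total = 41_000_000                      # surveyed users this month (assumed)
          prev_share = {"nvidia": 0.60, "amd": 0.40}   # installed base last month
          curr_share = {"nvidia": 0.61, "amd": 0.39}   # installed base this month

          # Net change in cards present in the survey, per vendor.
          net_new = {v: curr_share[v] * curr_total - prev_share[v] * prev_total
                     for v in curr_share}
          print(net_new)   # roughly {'nvidia': 1_010_000, 'amd': -10_000}

          A negative net change obviously doesn't mean negative sales; retired and replaced cards mask the actual purchases, which is exactly why installed base doesn't map straight onto sales.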

      • rythex
      • 7 years ago

      Yeah, I don't understand all the AMD hate; the fan-kiddos live in some sort of Apple-esque dimension.

      I'm pretty sure that if it weren't for AMD/ATI, Nvidia would gladly be selling GeForce3s for $600 right now.

        • Firestarter
        • 7 years ago

        And vice-versa!

      • Ushio01
      • 7 years ago

      The last time I bought an ATI card was the 4870; I went through 3 cards from 3 different partners in 5 months, then was offered a GTX 275 as a replacement, which I used problem-free until the GTX 670 was released. So that's my reason.

      • Ifalna
      • 7 years ago

      Agreed, very happy with my 7870.

      • Kaleid
      • 7 years ago

      The drivers are not poor… well, perhaps with the exception of CrossFire, but who really cares about that anyway?

      I've been able to replicate BSODs for well over a year now with nTune, and that is without any overclocking.

      Nvidia sells more than ATI, Intel sells more than AMD, Coca-Cola sells more than Pepsi, even if the product isn't better.

        • Airmantharp
        • 7 years ago

        I was using Crossfire to drive BF3 and Skyrim on a 30″ LCD, up until I picked up a GTX670. In BF3, it feels twice as fast, likely due to the lack of micro-stuttering, even though it’s technically slower than HD6950 CFX.

          • Kaleid
          • 7 years ago

          That could be engine related too. Optimized for Nvidia.

      • rrr
      • 7 years ago

      Radeon’s drivers are indeed poor. I called one for a ride when I was drunk – it turned out he was just as drunk as I was and nearly crashed us into a tree. He was literally poor because of all that booze he bought.

    • Krogoth
    • 7 years ago

    7970 1GHz Edition = a counter-attack against the vanilla GTX 680; it trades blows with it at somewhat higher load power consumption.

    It is amazing that either of these cards manages to deliver a playable framerate (30-40 FPS) at 4 megapixels with healthy doses of AA/AF on top. They can most certainly handle 2 megapixels effortlessly with the same level of AA/AF.

    It wasn’t that long ago that this used to be only possible with SLI/CF setups with at least two high-end cards.

    7970 and 680 are a steal for what they can do.

      • dpaus
      • 7 years ago

      You’re….. impressed?!?

        • DeadOfKnight
        • 7 years ago

        Don’t be fooled, it’s not like games are demanding a lot more these days.

          • indeego
          • 7 years ago

          I like how we plaster games on more screens and marvel at the performance and features but conveniently ignore the HUGE FREAKING BEZELS THAT KINDA BLOCK THE IMMERSION FACTOR.

            • BobbinThreadbare
            • 7 years ago

            There are genres where this is acceptable, mostly driving things (flight and racing sims).

            • indeego
            • 7 years ago

            Acceptable, not ideal. All things considered, I'd rather have smaller/fewer displays with a higher pixel count than many screens with bezels. My eyes are faster than my neck craning.

            • CaptTomato
            • 7 years ago

            Won’t see me ever migrating to triples unless the bezels are wafer thin, and that assumes I like triples, which I don’t.

        • DeadOfKnight
        • 7 years ago

        I bet he’s still using a GeForce 6800

        • Arclight
        • 7 years ago

        I smell a sell-out. Someone sponsored him to act abnormally to attract attention, and when you least expect it, BAM, he drops an ad slogan.

    • Firestarter
    • 7 years ago

    Any word on what the dynamic voltage is when it clocks itself to 1050MHz? It could give some insight into yields, or whether some cherry-picking is going on. As many regular 7970s can do that 1050MHz (and more) at stock voltage, I'm guessing that AMD picked the best of the litter here and lowered the peak voltage so that power consumption stays within limits at the higher speeds.

    Please disregard this post if it’s in the article and I missed it, I’m just skimming it at work 😉

    edit: The Sensors tab in GPU-Z includes voltage/speed/temperature monitoring and logging. IIRC it records both the requested and the actually measured voltage, if the power wizardry on the card permits.
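
    (A minimal Python sketch of that kind of check, assuming GPU-Z's comma-separated sensor log; the column names below are guesses, so match them to whatever your log's header actually says:)

    # Skim a GPU-Z sensor log for the voltage the card reports while at its boost clock.
    # Assumptions: a comma-separated log with a header row, and column names roughly
    # like the ones below -- adjust them to your own log file.
    import csv

    CLOCK_COL = "GPU Core Clock [MHz]"   # assumed header name
    VOLT_COL = "VDDC [V]"                # assumed header name
    BOOST_MHZ = 1050                     # 7970 GHz Edition boost clock

    with open("GPU-Z Sensor Log.txt", newline="", encoding="utf-8") as f:
        rows = [{k.strip(): v.strip() for k, v in row.items() if k and v}
                for row in csv.DictReader(f)]

    boost_volts = [float(r[VOLT_COL]) for r in rows
                   if r.get(CLOCK_COL) and r.get(VOLT_COL)
                   and float(r[CLOCK_COL]) >= BOOST_MHZ]

    if boost_volts:
        print(f"{len(boost_volts)} samples at boost, "
              f"VDDC min/max: {min(boost_volts):.3f}/{max(boost_volts):.3f} V")
    else:
        print("no samples at or above the boost clock in this log")

    That would at least show whether these cards hold 1050MHz at a lower VDDC than a typical vanilla 7970 needs.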

    • can-a-tuna
    • 7 years ago

    Has TechReport actually become a more Nvidia-biased site than Tom's? A GTX 680 AMP! and a GTX 670 AMP!? Really?

      • Krogoth
      • 7 years ago

      The 7970 1GHz Edition is technically a factory-overclocked part like the 680 AMP! and 670 AMP!

      So it is fair game.

        • Hattig
        • 7 years ago

        Except it's an official manufacturer SKU rather than a reseller-tweaked SKU. As the article says, there will be factory-overclocked SKUs of this GHz Edition product, so it is surely fair to compare the AMP! products against those when they're available.

        Similarly, there are factory-overclocked standard 7970s, but they weren't included. Of course, it is interesting that these AMP!s arrived, eh?

        OTOH, they can only include what they've got, and in the end the 7970 GHz beat out the AMP! for $50 less. And with volume production.

        • shank15217
        • 7 years ago

        No it's not; it's a new, higher-binned part. Any time Intel or AMD releases a faster CPU model, do you say it's factory overclocked? Stop calling these factory overclocked, because they aren't.

      • Cyril
      • 7 years ago

      Yes, we’re so biased toward Nvidia that we concluded the 7970 GHz Edition is “even faster than Zotac’s GTX 680 AMP! Edition” and “unambiguously a better value than the stock-clocked GeForce GTX 680.”

      😉

        • sweatshopking
        • 7 years ago

        you guy you.

        • HisDivineOrder
        • 7 years ago

        Awesome that you’re so honest about it. Haha, j/k.

        • Jigar
        • 7 years ago

        You biased people, i hate you for saying AMD still wins… NOooooooo…. 😉

      • erwendigo
      • 7 years ago

      Mmm… ok.

      TechReport is a biased site because it made a review with:

      1.- A "new card", the 7970 GHz revision, with the same PCB as the original 7970, the same heatsink, the same GPU and the same memory, but with a new, bumped version of the BIOS and higher clocks.

      2.- A "new card", the GTX 680 "AMP! Edition", with the same PCB as the original GTX 680 but a different heatsink, the same GPU and memory, and likewise a bumped version of the BIOS with higher clocks.

      Come on!! It's the same thing; the difference is that for the GTX 680 AMP! Edition, Zotac is the company behind the "new card", while for the 7970 GHz revision it's AMD.

      But both cards are slightly modified/tuned versions of the reference design.

      • HisDivineOrder
      • 7 years ago

      Hard not to be a smidgen biased toward the guys popping out the cheaper, better, quieter, cooler cards that keep driving down the costs of ALL the cards in the same performance level.

      I mean, hey, I’d love anyone who came in and knocked $75-100 off the price of the high end. Anyone.

      AMD used to be that guy, but they got greedy. Now nVidia's that guy, and sure, they're not doing it nearly as hard as AMD did… but they ARE still knocking those prices down.

      First, the 680 brought the 7970 down. Then the 670 came in and knocked the 7950's teeth in, driving the 7970 even lower. Imagine what the 660'll do to the 7950 and 78xx series. You know AMD's hoping that card doesn't come out soon…

      I’m biased toward the company making video cards cheaper and actually bringing them down into true mainstream pricing. No, $400 is not mainstream pricing. Nor is $300+. Imo, mainstream pricing is and has always been $100-$200, maybe $250 if the card is incredible and/or able to be turned into a high end card through a bios flash or overclocking.

      • Arclight
      • 7 years ago

      I should agree with you, but I don't, since they had the vanilla GTX 680 and 7970 for reference as well… plus their conclusion that the 7970 GHz Edition is overall better than both the reference GTX 680 and the AMP! version…

      But regardless of whether the reference cards are included or not, the bias a site has can be noticed in the text of an article, and especially on the conclusion page, and this article did not strike that tone, imo.

      • kc77
      • 7 years ago

      I think some people (Krogoth, Hattig, and shank excluded) are employing logical fallacies (ignoratio elenchi) here. The OP didn't say anything about who won being a part of his argument. He is objecting to the fact that AMP! cards made an appearance in a review where he/she thought they shouldn't be. They aren't in the following reviews:

      7970/7950
      7870/7850
      GTX 690
      GTX 680/670

      His objection isn't the conclusion of the article; it's the parameters by which the cards were included. Now, is that fair or not? That's something one can debate.

      • Deanjo
      • 7 years ago

      TR is absolutely biased, but not only towards Nvidia; also towards AMD/ATI. Seriously, when was the last time you read an S3 card review? 😛

        • FireGryphon
        • 7 years ago

        About six years ago: [url<]https://techreport.com/articles.x/9898/1[/url<]

    • pedro
    • 7 years ago

    Over at AT, they tested DiRT 3 @ 5760×1200. The new Radeon did an astonishing 71.6 FPS!

    I think there’s a real case to be made for testing on >4 megapixel displays.

      • Chrispy_
      • 7 years ago

      especially when even $150 cards can handle a single 1080p display so well, and easy Eyefinity means you can get three 1080p displays for less than the cost of one of these cards!

        • BestJinjo
        • 7 years ago

        You can't run the demanding games at 1080p at 50-60 fps with 4xMSAA on a $150 video card: Crysis 2 (with Crysis 3 to follow), Metro 2033 (which means Metro 2034 too), Dirt Showdown, Shogun 2, Anno 2070, etc. Anyone using Eyefinity is buying two high-end cards or is going to have to turn down settings a lot. And next year, when sequels of these games come out, a GTX 680/HD 7970 GE on its own will be useless for multi-monitor gaming. Crysis 3 is going to murder those cards at 2560×1600 since it'll be even more demanding than Crysis 2. The sweet spot for these cards is 1920×1080/1200p at 60 fps+.

          • Arclight
          • 7 years ago

          I agree with you that in a year the GTX 680 and 7970 GE might be useless at multi-monitor resolutions once new games launch, but idk if I agree with:

          [quote<]Crysis 3 is going to murder those cards at 2560x1600 since it'll be even more demanding than Crysis 2[/quote<]

          Afaik Crysis 3 is built on the same engine as Crysis 2… and knowing Crytek's commitment to consoles, I don't think they will spend that much time on the PC version making high-quality textures and whatnot. Yes, yes, I know they promised that this time the game will launch on the PC with the hi-Q textures, but haven't we heard that before? Plus the "Press Start" thingy makes me even more skeptical.

        • sschaem
        • 7 years ago

        $450 for 3 cards (do you live in Alaska to want this?) or $439 for a 3GB 7970 (getting close to $400 with rebates):

        [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814131468[/url<]

        I would take the cheaper 7970 in a heartbeat… so in 2 years I can add a second one for cheap and my PC can keep up with the new console titles (on my new retina 30" monitor 🙂

      • BestJinjo
      • 7 years ago

      That benchmark is old. It will be replaced by Dirt Showdown, which hammers videocards like no tomorrow:

      [url<]http://gamegpu.ru/images/stories/Test_GPU/Simulator/DiRT%20Showdown/ds%202560.png[/url<]

        • CaptTomato
        • 7 years ago

        Yes, but on what basis?
        Dirt Showdown doesn't look any better than Dirt 2.

          • BestJinjo
          • 7 years ago

          Did you read the review?

          “Although Showdown is based on the same game engine as its predecessors, it adds an advanced lighting path that uses DirectCompute to allow fully dynamic lighting. In addition, the game has an optional global illumination feature that approximates the diffusion of light off of surfaces in the scene.”

          Dirt Showdown
          2560×1600 4xMSAA
          GTX680 = 26 fps (unplayable)
          HD7970 GE = 43 fps (better but still not 60 fps that many racers want)

          Some screenshots of these graphical changes here:

          [url<]http://videocardz.com/33633/club3d-releases-radeon-hd-7970-ghz-edition[/url<]

          Unfortunately, it's a crappier, more arcadey, watered-down version of Dirt 3. Also, games like BF3 running at 41 fps at 2560×1600, as this review has shown, are not playable for most FPS gamers. Thus, the reality is that neither the HD 7970 nor the GTX 680 is fast enough for the most demanding games at higher resolutions without turning down settings. Then there is Witcher 2 with Uber Sampling, etc. It's not accurate to imply that these cards are good for anything above 1920×1080/1200, since some people don't want to play with 30-40 fps averages! This also means that in 12 months, when other, more demanding games come out, a single HD 7970/680 will be choking. Also, a lot of people play on large LCD/LED/plasma screens spanning 37-50 inches; there is no way they'd downgrade to some tiny 27-inch 2560×1440 monitor for PC gaming.

            • CaptTomato
            • 7 years ago

            Thing is, I have pCARS, and that looks epic, and I get solid performance with my 6850, yet the Dirt Showdown demo was a stutter fest and looked worse.

            • BestJinjo
            • 7 years ago

            Agreed. Pcars is amazing. Dirt Showdown doesn’t look any better than Dirt 3 and runs 3x worse.

            • CaptTomato
            • 7 years ago

            GFX yes, physics/FFB no.

            • BestJinjo
            • 7 years ago

            Dirt Showdown is an arcade racer, mate. What physics? It's not Forza Motorsport…

            • CaptTomato
            • 7 years ago

            I don't consider GT5/FM to be sims any more than I do Dirt Showdown, but I think I was referring to pCARS…

            • xeridea
            • 7 years ago

            Dirt Showdown runs poorly on Nvidia cards, but it is playable on AMD cards. As long as you keep a solid 30 FPS+ it is playable, perhaps a bit higher for twitch games, and at 2560×1600 you can turn down the MSAA a bit, as you won't notice it much, especially in a fast-moving game.

            60 FPS is somewhat of an arbitrary number. The more important thing is consistent frame times. If you had a consistent 40 FPS vs. an inconsistent 60 FPS, the 40 FPS would win. In Dirt Showdown, the AMD cards are fairly consistent, while Nvidia is all over the place, which compounds the effect of the low average FPS.
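
            (A toy numerical sketch of that point in Python, using made-up frame-time traces rather than real benchmark data:)

            # Compare a steady 40 FPS trace against a spiky ~60 FPS-average trace.
            # All frame times below are invented purely for illustration.
            steady_40fps = [25.0] * 100                # 25 ms every frame
            spiky_60fps = [12.0] * 90 + [60.0] * 10    # mostly fast, with 60 ms hitches

            def summarize(frame_times_ms):
                avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
                p99 = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]
                return avg_fps, p99

            for name, trace in (("steady 40 FPS", steady_40fps),
                                ("spiky ~60 FPS", spiky_60fps)):
                fps, p99 = summarize(trace)
                print(f"{name}: {fps:.0f} FPS average, "
                      f"99th-percentile frame time {p99:.0f} ms")

            The spiky trace averages almost 60 FPS but still spends its worst frames at 60 ms apiece, which is what you actually feel as stutter.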

            • BestJinjo
            • 7 years ago

            30 fps is not that great in a racing game. Even Forza Motorsport on the Xbox runs at 60 fps. No serious racer games at 30 fps on the PC. Furthermore, this game has garbage graphics for its performance compared to Dirt 3. And finally, it's actually a far worse game than Dirt 2 and Dirt 3 were.

            Also, if you turn off global illumination, the performance skyrockets with no impact on visuals. Have you considered that there might be a bug in the game that affects Kepler cards? Let's see if they fix it in the next driver update.

            • clone
            • 7 years ago

            there is no "serious" racer game; they are games.

            30 frames is fine while 60 frames is very fine, nothing serious about it.

            • CaptTomato
            • 7 years ago

            hahahahaahahaha, what about proper PC racing sims? Go to any sim forum and try to pass off 30 FPS as fine.

            • clone
            • 7 years ago

            is that the serious racers talking?

            30 is fine, 60 is very fine, ppl that cry about 30 not being fine are blaming their hardware because they lost.

            • CaptTomato
            • 7 years ago

            Hilarious… you obviously don't race PC sims, or you don't care about smooth gameplay and eliminating lag where possible.
            And as if casual gamers would be demanding 60 FPS… well, except on console, where NO SIM exists.

            • clone
            • 7 years ago

            not really, gamers like smooth gameplay; nothing serious or casual about it. 30's fine, 60's very fine. I'm talking minimum frames, and I'm not sure any game runs steady at one framerate.

            • CaptTomato
            • 7 years ago

            oh minimum, well most people use 60FPS average as the ideal.

            • Airmantharp
            • 7 years ago

            Actually, I consider a 'sustained' 60 FPS the ideal, because I don't own a 120Hz monitor (I'll keep my 4MP, thanks).

            Sustained does mean that the game stays at 60, but dips into the 30s and 40s are acceptable if they don't adversely affect the gameplay. Hell, loading screens/points can drop frames to the teens or lower, and that's not a problem either.

            I just want the damn game to be smooth. 60 FPS with consistent frame times rules on a 60Hz display.

            • CaptTomato
            • 7 years ago

            That’s what 60FPS normally means….as such, under heavy load, one hopes the dips don’t break the 24-30FPS barrier.

    • shank15217
    • 7 years ago

    I still don't believe that 680 cards are that unavailable because of demand. I think it's pure Nvidia FUD and fanboy trolling. Nvidia can't get these cards on the shelves, therefore people can't buy them. The demand for high-end cards isn't that high to begin with. Here is the proof: the 670 is in stock almost all the time at Newegg, it's not much slower than the 680, and it's about $100 cheaper. By Nvidia's logic the 670 should also be scarce. I think the yield on the Kepler GPU is pretty bad for the volumes required for consumer parts, and most of the good parts are being stockpiled for GPGPU cards, which go for a much higher ASP. Of the remainder, only a very few are good enough to be 680s, and the rest are binned as 670s.

      • novv
      • 7 years ago

      Don't forget that the Kepler GK104 is not a real GPGPU card like the HD 7970 or GTX 580!!!

        • xeridea
        • 7 years ago

        Yeah, the 7970 has slightly higher average power consumption, but it is faster in the useful tests (99th percentile) and in any GPGPU application. The 680 sucks at GPGPU. Being $50 cheaper, and actually in stock pretty much from day 1, to top it off. It's sad that 3 months later the 680 is not in stock; it's clearly crappy yields… on a smaller chip with less memory.

          • Phishy714
          • 7 years ago

          7 different 680’s in stock @ newegg right now..
          5 different 680’s in stock @ Amazon.
          7 different 680’s in stock @ Tigerdirect.
          4 different 680’s in stock @ buy.com.

          Tell me again how the 680 is not in stock?

            • shank15217
            • 7 years ago

            Every one of them is marked up above $500, the lowest at $520, the highest at $600… priced to sell…

      • kc77
      • 7 years ago

      Yeah, I'm not buying it either. For Nvidia to argue that the lack of 680s is purely demand-related, you would also have to accept the notion that top-end SKUs outsell lower-end ones.

      I’m pretty sure that’s not the case. Do we believe that GTX 690 outsells the 680 and 670?

      • erwendigo
      • 7 years ago

      As for trolling: calling alternative thinking about a problem/issue/question trolling is just that, trolling.

      You talk about proof, but you don't show any of it. Do you want proof?

      Take this:

      [url<]http://store.steampowered.com/hwsurvey/videocard/[/url<]

      Go to the DX11 card section and you'll see all the proof you need. The GTX 680 is the top 28-nm card in Steam's user base (40+ million users). The second 28-nm card is the 7970, and the low-end cards sell very poorly compared to the high-end ones. Oh yeah, the world looks different now to your eyes.

    • Johnny5
    • 7 years ago

    We want a 7830! (Or just a cheaper 7850!)

      • chµck
      • 7 years ago

      You want a 7830; I want mid-range mobile GPUs.

      • xeridea
      • 7 years ago

      The 7850 badly needs a price cut. It's only ~35-40% faster than the 7770 but costs nearly twice as much. I nabbed the 7770 for $100 w/MIR. The 7800 series didn't get price cuts like the others; perhaps they are waiting on the 660, which will come… eventually.
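
      (A quick back-of-the-envelope in Python; the street prices are assumed placeholders, and the relative-performance figure is just the rough ~35-40% from above:)

      # Rough performance-per-dollar comparison; prices and the perf ratio are placeholders.
      cards = {
          "HD 7770": {"price": 110, "relative_perf": 1.00},
          "HD 7850": {"price": 240, "relative_perf": 1.38},   # assumed ~38% faster
      }
      for name, c in cards.items():
          print(f"{name}: {c['relative_perf'] / c['price'] * 100:.2f} perf per $100")

      On numbers like those, the 7770 delivers noticeably more performance per dollar, which is the gap a 7800-series price cut would close.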

        • Corrado
        • 7 years ago

        This is why I have two 7770s in CrossFire. I got them both for a total of $205, and they're retail Gigabyte cards with the non-standard, smaller PCB and cooler.

        • BobbinThreadbare
        • 7 years ago

        It’s not any faster than a 6870 that sells for like $100 less.

          • cegras
          • 7 years ago

          The 7850 is ~15% faster, and that was with launch day reviews.

            • derFunkenstein
            • 7 years ago

            a premium that is not in any way justified by the performance, then.

            • rrr
            • 7 years ago

            I think that’s up to an individual buyer to decide.

          • TDIdriver
          • 7 years ago

          and the 6870 has more memory bandwidth than the 7770

            • rrr
            • 7 years ago

            Yet the difference in actual performance isn't as big as the difference in memory bandwidth.

            [url<]http://www.hwcompare.com/11924/radeon-hd-6870-vs-radeon-hd-7770/[/url<]
            [url<]http://www.anandtech.com/bench/Product/536?vs=540[/url<]

          • rrr
          • 7 years ago

          7770? Sure.
          7850? It matches and even slightly beats 6950.

      • flip-mode
      • 7 years ago

      It would be equivalent to a 6870. I'd rather have a 7930.

        • rrr
        • 7 years ago

        Who cares? An OC'd 7870 matches the 7950, and it's not like there's an enormous price gap between the two.

        OTOH, don't get me started on the price AND performance gap between the 7770 and the 7850.

          • flip-mode
          • 7 years ago

          I could be wrong, but my impression is that the 6870 IS the option between the 7770 and the 7850. But I agree with you, it would be nice to have a GCN option in there. It would also be fantastic if Nvidia came to that party, too.

            • rrr
            • 7 years ago

            Yep, that's what I meant. After all, 6xxx-series supplies WILL dry out eventually.

            And I do agree about nVidia needing to show something compelling in that segment.

            • clone
            • 7 years ago

            6xxx supplies won't dry out unless AMD wants them to; they are still manufacturing 6xxx parts even now. The older process is cheaper and mature, offsetting the cost of the additional sand, and the 6xxx cards are filling in for 7xxx shortfalls.

      • alienstorexxx
      • 7 years ago

      or a "GTX 660 non-Ti" in that performance range

      • HisDivineOrder
      • 7 years ago

      I think the gap between the 7770 and the 7850 is pretty bad. I think the gap between the 7870 and the 7950 in price is kinda sad.

      Seems like the two problems could be solved by price drops across the 78xx series with accompanying pricing tweaks to the areas above and below.

      I think the 7950 does very poorly against the 670 and should be considerably less in cost, probably down to $350. If they want it to move and become the next 5870 or 4870 or Ti 200, they would drop it to $325 and put the 7870 at $250 with the 7850 at $200. This would have the side benefit of drawing everyone currently muttering about buying old gen cards to seriously consider AMD’s latest and greatest in the mid-range, generating positive word of mouth and gains of marketshare in the enthusiast market. This would add to what they were trying to do by reclaiming the performance crown with the 7970 GHZ by giving them the price/performance win in a time when nVidia doesn’t really seem to have enough parts to do the same.

      I also happen to think consumers buying the 7970 GHz should pay less ($475) than they would for the nearly-equal-performing, far quieter and cooler 680, to put up with such a loud stock cooler, and let cards with non-reference coolers occupy that $500 space, hoping they do a better job on the noise-to-cooling ratio than AMD has. They can use OC'd cards to differentiate further into the high end above $500.

      But that’s just me and I happen to think that 2560×1600 should be EASILY doable by $200-$225ish cards nowadays. I also think the positive word of mouth from this would be akin to when AMD has had its greatest successes in the past, claiming the top end while claiming the value end, too. With nVidia’s supply problems constraining them, it’s really AMD’s marketshare to lose, but if they wait too much longer, nVidia will show up and do it first.
