Does the Radeon HD 7950 stumble in Windows 8?

You may be interested to see the side-by-side high-speed videos we captured of our Skyrim test scenario, which illustrate the differences in smoothness between the 7950 and GTX 660 Ti.

Let’s pick up where we left off. Last week, we published a rematch of the classic battle between the Radeon HD 7950 and the GeForce GTX 660 Ti. We tested with a host of the latest games, including Borderlands 2, Hitman: Absolution, and Guild Wars 2.

Going into the contest, we thought the Radeon HD 7950 was the heavy favorite for a number of reasons. Among them is the fact that AMD has released new graphics drivers in recent weeks that promise a general performance increase. Also, the firm has worked closely with the developers of many of the holiday season’s top games, even bundling several of them in the box with the Radeon HD 7950.

To our surprise, though, the Radeon didn’t fare particularly well in our tests. Although it cranked out FPS averages that were competitive with the GeForce, the 7950 produced those frames at an uneven pace—our frame time plots for the Radeon were riddled with latency spikes. As a result, the Radeon’s scores were rather poor in our distinctive, latency-oriented performance metrics. Not only that, but our seat-of-the-pants impressions backed that up: play-testing on the GeForce felt noticeably smoother in many cases.

We noted that we’d taken various steps to ensure that the Radeon’s disappointing results weren’t caused by a hardware or software misconfiguration. Confident that our results were correct, we concluded that the most likely cause of the 7950’s poor showing was our transition to all-new games and testing scenarios. Our tentative conclusion: playing the latest games on the Radeon HD 7950 just isn’t as good an experience as playing on the GeForce GTX 660 Ti.

However, we did note the possibility that having upgraded our test rigs to Windows 8 might have played a role. I suppose we should have known that many of our readers would call for more testing in order to confirm our results. Quite a few of you asked us to run the same tests in Windows 7, to see if the Radeon’s performance problems are the product of the transition to a new operating system. Others asked us to re-run a familiar test scenario from one of our older articles to see whether the Radeon with the latest drivers would experience latency spikes in places where it previously hadn’t.

Those seemed like reasonable requests, and we were curious, too, about the cause of the 7950’s unexpected troubles. So we fired up our test rig, installed Windows 7 with the same drivers we’d used with Win8, and got to testing.

The results, naturally, are enlightening. Read on to see what we found.

Wait, er, latency what? What about FPS?

If you’re confounded by our talk of latency-focused performance metrics, you’re probably not alone. Gaming performance has been measured in frames per second since the dawn of time, or at least since the 1990s, when today’s PC enthusiast scene was first starting to form. However, FPS averages as they’re typically used have some very big, potentially fatal flaws. We first explored this problem in the article Inside the second: A new look at game benchmarking. I recommend you read it if you want to understand the issues well.

For those too lazy to do the homework, though, let me extract a quick section from that article that explains why FPS averages don’t tell the whole story of gaming performance.

Of course, there are always debates over benchmarking methods, and the usual average FPS score has come under fire repeatedly over the years for being too broad a measure. We’ve been persuaded by those arguments, so for quite a while now, we have provided average and low FPS rates from our benchmarking runs and, when possible, graphs of frame rates over time. We think that information gives folks a better sense of gaming performance than just an average FPS number.

Still, even that approach has some obvious weaknesses. We’ve noticed them at times when results from our FRAPS-based testing didn’t seem to square with our seat-of-the-pants experience. The fundamental problem is that, in terms of both computer time and human visual perception, one second is a very long time. Averaging results over a single second can obscure some big and important performance differences between systems.

To illustrate, let’s look at an example. It’s contrived, but it’s based on some real experiences we’ve had in game testing over the years. The charts below show the times required, in milliseconds, to produce a series of frames over a span of one second on two different video cards.

GPU 1 is obviously the faster solution in most respects. Generally, its frame times are in the teens, and that would usually add up to an average of about 60 FPS. GPU 2 is slower, with frame times consistently around 30 milliseconds.

However, GPU 1 has a problem running this game. Let’s say it’s a texture upload problem caused by poor memory management in the video drivers, although it could be just about anything, including a hardware issue. The result of the problem is that GPU 1 gets stuck when attempting to render one of the frames—really stuck, to the tune of a nearly half-second delay. If you were playing a game on this card and ran into this issue, it would be a huge show-stopper. If it happened often, the game would be essentially unplayable.

The end result is that GPU 2 does a much better job of providing a consistent illusion of motion during the period of time in question. Yet look at how these two cards fare when we report these results in FPS:

Whoops. In traditional FPS terms, the performance of these two solutions during our span of time is nearly identical. The numbers tell us there’s virtually no difference between them. Averaging our results over the span of a second has caused us to absorb and obscure a pretty major flaw in GPU 1’s performance.
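To make the trap concrete, here is a minimal Python sketch of the arithmetic above. The frame times are hypothetical, chosen only to mirror the contrived example: one card is quick but stalls once, the other plods along at a steady 30 ms.

```python
# Hypothetical frame times (ms) over roughly one second on two GPUs.
gpu1 = [16] * 30 + [470] + [16] * 2   # mostly quick frames, one ~half-second stall
gpu2 = [30] * 33                      # consistently ~30 ms per frame

for name, times in (("GPU 1", gpu1), ("GPU 2", gpu2)):
    fps_average = len(times) / (sum(times) / 1000.0)
    print(f"{name}: {fps_average:.0f} FPS average, worst frame {max(times)} ms")
```

Both cards land within a frame or two of the same FPS average, yet only one of them stalls long enough to wreck the illusion of motion. That distinction is exactly what the per-second average throws away.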

The bottom line is that producing the highest FPS average doesn’t prove very much about the fluidity of in-game animation. In fact, even fancy-looking graphs that show second-by-second FPS averages over time don’t tell you very much, because tools like Fraps average the results over one-second intervals. Those plots simply make the same mistake over and over again in sequence.

Once we realized the nature of the problem, we decided to test performance more like game developers do: by focusing on the time in milliseconds required to produce each frame of the animation—on frame latencies, in other words. That way, if a momentary slowdown happens, we’ll know about it. This sort of analysis requires new tools and methods, which we’ve developed and refined over the past year. This article and other recent ones here at TR show those methods in action. We think they provide better insights into gameplay fluidity and real-time graphics performance than a traditional FPS-based approach.

The question of the hour is: Why does the Radeon HD 7950 struggle on this front in the current crop of games? Can switching back to Windows 7 alleviate the problem? Let’s have a look.

Borderlands 2

First up is my favorite game of the year so far, Borderlands 2. The shoot-n-loot formula of this FPS-RPG mash-up is ridiculously addictive, and the second installment in the series has some of the best writing and voice acting around.

Our game benchmarking methods are different from what you’ll find elsewhere in part because they’re based on chunks of gameplay, not just scripted sequences. Below is a look at our 90-second path through the “Opportunity” level in Borderlands 2.

As you’ll note, this session involves lots of fighting, so it’s not exactly repeatable from one test run to the next. However, we took the same path and fought the same basic contingent of foes each time through. The results were pretty consistent from one run to the next, and the final numbers we’ve reported are the medians from five test runs.
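If you’re curious, that roll-up is nothing exotic; here’s a quick sketch with made-up per-run numbers:

```python
# Reporting the median of five test runs; the per-run FPS figures are made up.
import statistics

runs_fps = [61.8, 63.2, 60.9, 62.5, 62.1]
print(statistics.median(runs_fps))   # the middle value, resistant to one odd run
```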

We used the game’s highest image quality settings at the 27″ Korean monitor resolution of 2560×1440.


You can click the buttons above to switch between the results for the Radeon and the GeForce. Please note that we have only one set of results for the GeForce, and those are from Windows 8. The plots above show frame rendering times in milliseconds, so lower times are better.

As you can see, the plots for the 7950 in Win7 and Win8 look to be very similar, with little spikes to 40 ms or so throughout our test session. By contrast, the plot for the GTX 660 Ti is smoother; the frame latencies are generally lower, without so many high-latency frames interspersed throughout.

The GeForce is a little faster in the traditional FPS average, although the Radeons still reach over 60 FPS, which is usually considered sufficient.

The performance difference between the two cards becomes clearer in the latency-focused 99th percentile frame time. This number is simply the threshold below which 99% of all frames were rendered. The GTX 660 Ti produces all but the final 1% of frames in under 19.6 milliseconds—equivalent to about 50 FPS—while the Radeon’s threshold is over 30 milliseconds—or about 30 FPS.
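If you’d like to reproduce this metric from your own FRAPS frame-time logs, a rough sketch follows. Tools differ slightly in how they pick the cut-off rank; this version simply sorts the frame times and reads off the value 99% of the way through the list, and the sample numbers are hypothetical.

```python
# 99th-percentile frame time: the threshold under which 99% of frames were rendered.
def percentile_frame_time(frame_times_ms, pct=99.0):
    ordered = sorted(frame_times_ms)
    rank = max(int(len(ordered) * pct / 100.0) - 1, 0)
    return ordered[rank]

frame_times = [14.2, 15.0, 16.1, 15.5, 41.7, 15.9, 14.8, 16.4, 33.0, 15.2]
threshold = percentile_frame_time(frame_times)
print(threshold, 1000.0 / threshold)   # the frame time in ms and its FPS equivalent
```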

The 99th percentile frame time is just one point on a curve, and here’s the curve. As you can see, the Radeon’s performance tracks with the GeForce’s until we reach the last 7% or so of the frames rendered. Those higher-latency frames are where the Radeon trips up, and from the plots, we know they’re distributed throughout the test session. Interestingly enough, the Radeon appears to perform marginally better in Windows 8, although the differences are pretty minor.

Our final latency-sensitive metric singles out those frames that take an especially long time to render. The goal of this measurement is to get a sense of “badness,” of the severity of any slowdowns encountered during the test session. What we do is add up any time spent rendering beyond a threshold of 50 milliseconds. (Frame times of 50 ms are equivalent to a frame rate of 20 FPS, which is awfully slow.) For instance, if a frame takes 70 milliseconds to render, it will contribute 20 milliseconds to our “badness” index. The higher this index goes, the more time we’ve spent waiting on especially high-latency frames, and the less fluid the game animation has been.
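In code form, the index is just a sum of the overages. A minimal sketch, with hypothetical frame times:

```python
# Time spent beyond a threshold: each slow frame contributes only the portion
# of its render time above the cut-off. The sample frame times are hypothetical.
def time_beyond(frame_times_ms, threshold_ms=50.0):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

print(time_beyond([16.0, 70.0, 18.0, 55.0]))               # (70-50) + (55-50) = 25.0 ms
print(time_beyond([16.0, 35.0, 18.0], threshold_ms=33.3))  # a lower bar, as we use for Hitman below
```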

Again here, the 7950 falls behind the GeForce GTX 660 Ti, and going back to Windows 7 isn’t any help. That said, the Radeon doesn’t perform terribly; it only spends a small amount of time working on frames beyond our 50-ms threshold, so in-game animations should be relatively fluid. In fact, that’s what we experienced during our testing sessions. The game was playable enough on the Radeon, under both versions of Windows. The thing is, playing on the GeForce was simply a little bit smoother, both subjectively and by the numbers.

Why is that? For more insight into exactly what’s going on, we can zoom in on a small portion of our test run and look at the individual frame times. This is just a tiny bit of our overall data set, but it does help illustrate things. Click through the buttons to see each config’s results.


The 7950 suffers from these momentary hiccups. They’re not terribly dramatic, but still, the GeForce avoids them. Often, the long frame times are succeeded by several shorter frame times, as if the system is waiting on some long operation to complete the first frame and is ready with portions of the others once it finishes.

The presence of little hiccups of this size isn’t a big deal on a small scale, but it becomes more of an issue once we know that those longer frames comprise roughly 5-7% of the total frames produced, distributed throughout the test session. Given the size of the latency spikes here, they’re still not really a big problem, but they do bring the Radeon down—and make the GeForce look (and feel) like the superior solution.

Guild Wars 2


Once again, the move to Windows 7 doesn’t affect the Radeon HD 7950’s performance much at all. In fact, as with our Borderlands 2 test session, the 7950 looks to perform a little better in Win8, if anything.

Since the Radeon’s frame time plots show lots of latency spikes in this test scenario, here’s a small chunk of the test run, so we can see the problem up close.


Again, one or two bumps in the road like this wouldn’t be anything to worry about, but when those blips are everywhere, they begin to matter. Subjectively, the Radeon’s performance doesn’t feel horrible, but if you were to play back to back against the GeForce, you’d end up picking the GTX 660 Ti as the better option.

Sleeping Dogs

Our Sleeping Dogs test scenario consisted of me driving around the game’s amazingly detailed replica of Hong Kong at night, exhibiting my terrifying thumbstick driving skills.


This game’s graphics are intensive enough that we were easily able to stress these GPUs at 1080p.


Dropping back to Windows 7 doesn’t benefit the 7950 in this game, either. As in Win8, the 7950 posts a higher FPS average than the GeForce, but a small number of high-latency frames detracts from the 7950’s performance.

Assassin’s Creed III

Since the AC3 menu doesn’t lend itself to screenshots, I’ll just tell you that we tested at 1920×1080 with environment and shadow quality set to “very high” and texture and anti-aliasing quality set to “high.” I understand that the “high” AA setting uses FXAA HQ with no multisampling. This game also supports Nvidia’s TXAA, but Nvidia has gated off access to that mode from owners of Radeons and pre-Kepler GeForces, so we couldn’t use it for comparative testing.


Well, look at that. We finally have a difference of note between Win8 and Win7. For whatever reason, the 7950 struggles more in AC3 in Windows 8. Then again, the 7950 doesn’t perform especially well in this game in either version of Windows—not compared to the GTX 660 Ti.

Hitman: Absolution

In this game, Max Payne has sobered up and gotten a job with a shadowy government agency, yet somehow things still went totally sideways. He’s decided to stop talking about it so much, which is a relief.

I wasn’t sure how to test this game, since the object appears to be avoiding detection rather than fighting people. Do I test by standing around, observing guards’ patrolling patterns? Also, it seems that some areas of this game are much more performance-challenged than others, for reasons that aren’t entirely clear. Ultimately, I decided to test by taking a walk through Chinatown, which is teeming with people and seems to be reasonably intensive. I can’t say that good performance in this scenario would ensure solid performance in other areas of this game, though.

And we’ve finally found a good use for DX11 tessellation: bald guys’ noggins.


Although the Radeon’s FPS average is much higher than the GeForce’s, repeated latency spikes once again weigh it down, and rolling back to Windows 7 doesn’t offer any relief.

Both of these cards perform quite acceptably here, we should note. We’ve had to lower our “badness” threshold to 33.3 milliseconds since neither card surpasses the 50-millisecond mark for even a single frame. The thing is, the Radeon’s higher FPS average suggests it’s easily the faster solution for this workload, but some of its potential is wasted by the stubborn presence of higher-latency frames throughout the test run.

Medal of Honor: Warfighter

Warfighter uses the same Frostbite 2 engine as Battlefield 3, with advanced lighting and DX11 support. That’s fortunate, because I struggled to find any other redeeming quality in this stinker of a game. Even play-testing it is infuriating. I actually liked the last Medal of Honor game, but this abomination doesn’t belong in the same series. If you enjoy on-rails shooters where it constantly feels like you’re in a tunnel, bad guys pop up randomly, and your gun doesn’t work well, then this is the game for you.



Yawn.

The Elder Scrolls V: Skyrim

No, Skyrim isn’t the newest game at this point, but it’s still one of the better looking PC games and remains very popular. It’s also been a particular point of focus in driver optimizations, so we figured it would be a good fit to include here.

We did, however, decide to mix things up by moving to a new test area. Instead of running around in a town, we took to the open field, taking a walk across the countryside. This change of venue provides a more taxing workload than our older tests in Whiterun.

Note: do not aggro the giants.


This is one of the larger differences we’ve seen between Windows versions, and the 7950 definitely performs better in Win8. Notice, though, how much smoother the GTX 660 Ti’s frame time plot looks compared to either Radeon plot. Let’s zoom in for a closer look at a small slice of time.


For whatever reason, the 7950 simply doesn’t handle this new cross-country test scenario very well.

Skyrim overtime: return to Whiterun

Some of you lamented the fact that our latest round of tests changed all of the games and test scenarios wholesale, so one couldn’t compare to familiar tests from past reviews to see whether the Radeon’s latency problems were introduced by recent driver updates or some other change. With that in mind, we’ve returned to our familiar test scenario where we make a loop around Whiterun. We’re using the same image quality settings that we did in our initial GTX 660 Ti review, although the OS and graphics driver revisions have changed.


Interesting. There isn’t much change from our older review. Let’s line up the numbers for a quick comparison:

                                   Zotac GTX 660 Ti AMP!   Zotac GTX 660 Ti AMP!   Radeon HD 7950 Boost   Sapphire HD 7950 Vapor-X   Sapphire HD 7950 Vapor-X
OS                                 Win7                    Win8                    Win7                   Win7                       Win8
Driver                             GeForce 305.37 beta     GeForce 310.54 beta     Catalyst 12.7 beta     Catalyst 12.11 beta 8      Catalyst 12.11 beta 8
Average FPS                        87                      88                      91                     86                         86
99th percentile frame time (ms)    17.8                    16.5                    17.7                   18.3                       18.0

The Radeon HD 7950 appears to have regressed a bit with the move to newer drivers, with a slight drop in FPS averages and a corresponding increase in 99th percentile frame times. That’s true even though we’ve switched to a Sapphire 7950 card with a 25MHz higher Boost clock in our recent tests. Meanwhile, the Zotac GTX 660 Ti has improved somewhat with newer software and the move to Windows 8. The biggest change may be in its frame time plot, which looks tighter, with less variance than in our prior review.

However, the differences overall are very minor, and they appear to affect FPS averages and 99th percentile latency to similar degrees. As a bridge to the past, I think this outcome tells us nothing too major has changed, other than the way we’re testing Skyrim. Our basic hardware configs are working as expected, and the OS and driver changes haven’t introduced any new frame latency problems in this older test scenario.

The larger takeaway is that the results from both test scenarios are very likely valid. They’re just different. The Radeon HD 7950 handles the graphics workload in our Whiterun loop quite competitively, essentially matching the GeForce GTX 660 Ti, with nice, low frame latencies and relatively minor variance from frame to frame. However, the 7950 doesn’t process the more difficult workload in our cross-country test nearly as gracefully as its GeForce rival does.

Summary metrics and deep voodoo

Let’s see how the move to Windows 7 affects the 7950’s placement on our famous value scatter plots. As ever, we’ve converted our 99th percentile frame time results into FPS, so both of the plots below can be read the same, with the best values being closer to the top left corner of the plot area.


Predictably, there’s not much movement, because the 7950’s performance doesn’t change much from Win8 to Win7. Also, since Windows 7 doesn’t alleviate the latency spikes that plague the 7950 in many of these newer games, the Radeon continues to drop into a much less desirable position on the 99th-percentile plot than the GTX 660 Ti.

We should note that the overall performance number above is a geometric mean of the results from our seven game test sequences. (We’ve excluded Whiterun since we didn’t use it last time around.) We started using the geomean back in August in order to reduce the impact of outliers on our overall performance scores. In the past, games with vendor-friendly optimizations like HAWX 2 and DiRT Showdown tended to push the average a long way in one direction, so we excluded them from our calculations, prompting controversy. We were hopeful the switch to the geometric mean would curb outliers without manual intervention.
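For reference, here’s a minimal sketch of that geometric-mean roll-up; the per-game scores below are placeholders rather than our measured results:

```python
# Geometric mean of per-game scores: a single outlier pulls on it far less than
# on the arithmetic mean. The seven scores below are placeholders, not real data.
import math

def geomean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

per_game_fps = [62.0, 55.0, 48.0, 71.0, 90.0, 66.0, 58.0]
print(f"geomean {geomean(per_game_fps):.1f} vs arithmetic mean {sum(per_game_fps) / len(per_game_fps):.1f}")
```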

That said, some folks still objected to our inclusion of Assassin’s Creed III in our overall index, since it performs so much better on the GeForce than on the Radeon. I figured we might as well oblige them by taking a look at the overall scores with AC3 excluded, to see whether it moves the needle.


Well, things don’t change drastically, but excluding AC3 allows the 7950 to pull ahead of the GTX 660 Ti in our overall FPS average. In fact, that change produces virtually the same sort of outcome in the overall FPS numbers as we saw back in August, when the Radeon HD 7950 reference card with Boost edged out this same Zotac GTX 660 Ti AMP! card by a margin of several frames per second.

Trouble is, that doesn’t really matter. A moral victory in the borderline-meaningless FPS sweeps doesn’t overcome the fact that the Radeon HD 7950 has a persistent problem with high-latency frames across a range of test scenarios based on the latest games. The 99th-percentile frame times reflect that reality. Our latest round of tests shows that Windows 8 is not the problem. On the contrary, Windows 8 generally improves the latency picture somewhat.

When we first published our rematch between the 7950 and the GTX 660 Ti, we pinged AMD to ask if they could explain the Radeon’s struggles in recent games. AMD spokesman Antal Tungler told us that our article had “raised some alarms” internally at the company, and he said they hoped to have some answers for us “before the holiday.” He also noted that AMD is continually working to improve its drivers and that the 7950 does perform well in FPS-based benchmarks.

We’re hopeful that we may have a more detailed answer from AMD before too long, but in the interim, we have an even firmer grasp of the reality that caused us to recommend the GeForce GTX 660 Ti over the Radeon HD 7950 in our last article. Again, the outcomes of our testing may run counter to the expectations of many folks; they certainly weren’t what we expected when we set out to stage this rematch.

The tragedy here is one of wasted potential. The Radeon HD 7950 is, on paper, clearly a more powerful GPU than the GTX 660 Ti, with double the memory interface width and a theoretical edge in peak ROP rate and shader flops. AMD is giving you more hardware for your money when you buy a 7950. For whatever reason—and we suspect the main culprit is graphics driver software—the 7950 can’t convert that advantage into consistently smoother in-game animation. As one of my fellow TR editors pointed out to me the other day, this wouldn’t be the first time Radeon owners were let down by driver issues during the holiday rush. AMD was plagued by a painful series of driver issues last year, too.

To those who would claim that other “professional” review sites haven’t seen results like ours in their comparisons of the 7950 and GTX 660 Ti, I would simply respond: of course not. Virtually nobody else tests like we do. We’re working on persuading folks to focus on latency, to use a timer with finer granularity, but such changes are hard. They take time and effort. Heck, we may be nearly alone in using this approach for a long time yet.

The question you have to ask is what matters to you. Do you want the graphics card that scores best in the FPS beauty pageant, or do you want the one that gives you the smoothest gaming experience when you fire up a freshly downloaded game this Christmas? If you just want bragging rights, by all means, choose the Radeon HD 7950. If you’re looking for the friction-free fluidity that only comes from consistently quick frame delivery, though, our recommendation remains the GeForce GTX 660 Ti.

You may be interested to see the side-by-side high-speed videos we captured of our Skyrim test scenario, which illustrate the differences in smoothness between the 7950 and GTX 660 Ti.

I sometimes provide low-latency responses on Twitter.

Comments closed
    • Rigel84
    • 7 years ago

    I have a small request for future articles. On the “Summary metrics and deep voodoo” page, could you make it possible to manually define the graphic card price? That would give a much more accurate picture.

    • marvelous
    • 7 years ago

    https://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/7
    https://techreport.com/review/24022/does-the-radeon-hd-7950-stumble-in-windows-8/8

    It's funny how the 7950 in the exact same game shows 0 ms of time spent beyond 50 ms in one review and 28 ms in another. It seems to me somebody is getting $wayed by certain people in the industry.

    • aspect
    • 7 years ago

    It would be great if you can also test out the 7970 in Windows 7 and 8 to see if it also has such problems.

    • mtcn77
    • 7 years ago

    Liability to accept previous experiences cause what is called “learned impairment”. Such that, you cannot be creative using the tools at your disposal, you have to set an ideal and try to bind the tools to your liking. Just as reverse engineering does not have the same creative element of the actual cascade that generates a design.
    This review is false because you are basing your statement on a finite number of trials using the FRAPS tool, it is just a tool. What if it computed some other data than what you expected it to? What if the test were conducted wrong? What if there are some variables you did not rule out, or you could not stay true to the outcome by commenting either good, or bad.
    I’m saying that you should accept that you have your own opinion of this subject and let it not interfere with the actual representation. You should accept that errors do occur and that you can only be certain for a limited confidence interval unless you formulate the contingency with 100 success rate.

    • Kaleid
    • 7 years ago

    AMD needs to take a hint here. Smoothness is far more important than higher average framerates.

      • zimpdagreene
      • 7 years ago

      Yeah, more than a hint. Their software people need to step it up! Make their software put out performance instead of limiting it!

    • erwendigo
    • 7 years ago

    SCOTT WASSON:

    “That said, some folks still objected to our inclusion of Assassin’s Creed III in our overall index, since it performs so much better on the GeForce than on the Radeon. I figured we might as well oblige them by taking a look at the overall scores with AC3 excluded, to see whether it moves the needle.”

    Oh my Lord. Are they threatening you? Shame on them!

    About the sentence: “since it performs so much better on… blablabla”.

    Well, AC3 runs better on the GeForce, this is true, BUT other games, like Sleeping Dogs, run better on the Radeon. And the same with Hitman, and with Medal of Honor. All of them are “gaming evolved” games, and they are bundled or discounted with many Radeons, so, if you are going to please everybody, “please,” Scott, for me:

    Make the graphs without all the suspicious games, not only AC3. And yes, then all we will see in these graphs is a performance/price ratio of zero for all the cards. No games, no performance.

    But if you are going to please all the demands, this is the way.

    PS: Don’t give in to demands like these if they aren’t reasonable. You know that the collection of titles in this test has games that are partial to one side or the other. If you give in to demands like these about AC3, you aren’t making your review, THEY are REmaking your/their review.

    AC3 is pro-Nvidia. Well, HELL, other games are pro-AMD and I’m not going to demand that you remove them from the performance/price ratio!!

      • Pantsu
      • 7 years ago

      Games that are played are the ones that should be tested, as long as those games warrant performance analysis to begin with. Just because Nvidia/AMD has sponsored a game doesn’t mean it should be removed from the performance tests. These days most of the AAA games are either AMD or Nvidia sponsored. Of course they should try to include a balanced number of AMD/Nvidia titles so it doesn’t skew the conclusions if they are based on averages.

      In the end it’s up to the reader to make their own conclusions based on the information given. There shouldn’t be any reason to spoon feed TR readers. None of the performance reviews are completely exhaustive or give the right conclusion for all use cases.

    • liquidsquid
    • 7 years ago

    How do we know that nVidia drivers don’t bail before finishing a frame to keep the latency spikes down? Any way to test this?

      • superjawes
      • 7 years ago

      Ask Nvidia how they work?

      Looking at the high-speed video, that might be the case…I’m okay with that, though, and something simple like that could probably knock out those spikes and launch the 7950 ahead of the 660 Ti.

        • liquidsquid
        • 7 years ago

        I don’t know about you guys, but I think this is sort of “cheating” in an effort to keep latency low and frame rates high. Sure, it will look great in benchmarks, but the card isn’t really doing what it is advertised to do… only drawing 3/4 of a frame before beginning the next one. Meanwhile the competitor is finishing each and every frame properly.

        In reality, it probably is an effective way to prevent the appearance of stuttering due to high latency and a legitimate way to provide fluid-looking graphics.

          • superjawes
          • 7 years ago

          Since TR seems to be the only place that checks frame latencies, I don’t think partial rendering is cheating at all. On the other hand, rendering most of your frames in a second very quickly (like faster than 60), then stopping completely for some reason pads the FPS number, despite that average FPS not being stable within the second.

          Actually, this is something I like about Borderlands rendering style. I don’t have any metrics, but I never feel like things are getting gummed up after I load into a map, partially because I see things being drawn as they can be. That is definitely more preferable to a game like Half-Life 2 (much older, I know), which required load times in the middle of a map.

          Sure, that might be apples and oranges, but I feel like a balanced approach ends up feeling a lot better than breaking up fluidity in favor of a marginally improved resolution and bursts of high FPS.

      • MadManOriginal
      • 7 years ago

      Could you explain what you mean better? If it’s based on something you see in the video I think that’s just tearing due to disabled vsync. If it’s just a question, I’ll answer assuming you mean nvidia only renders parts of certain frames…

      If the frame were not fully rendered and displayed, things would look really funny. There are a few possibilities I can think of: 1) If a portion wasn’t rendered at all and was sent to the frame buffer that way, it would be completely obvious – the unrendered portion would just be blank (or totally screwed up in some other obvious way, maybe a wireframe without textures). 2) If they wanted to cheat without being so obvious they could use a portion of the previously rendered frame and merge it with a portion of the new frame. I think this would be pretty obvious too because you’d see big chunks of the screen, whether a big section or a bunch of polygons randomly spread over the screen, seem to ‘stop in time’ because they would be displayed exactly the same for more than one frame.

      There may be other ways to do this but I can’t think of anything that would allow re-use of a previous frame (when it shouldn’t be reused) that would not be very visible.

        • superjawes
        • 7 years ago

        Personally, I felt like–in the high speed videos–the 7950 images were sharper than the 660 Ti’s. Could just be card abilities, but I think visuals are rendered starting with basic geometry and ending with polished textures (this I could be completely wrong on).

        But if this is the case, if the card is having particular difficulty with the more advanced stuff like lighting and extra texture resolution, they can just skip after “enough” of the detail is rendered so it can start work on the next frame, resulting in something that could be better, but maintaining the motion and consistency between frames.

    • Rza79
    • 7 years ago

    It would have been smart to include one more AMD card just to be sure this particular sample isn’t defective. In the June review of the 7970 GHz, none of the AMD cards had issues with latency.

    • tviceman
    • 7 years ago

    AMD is sacrificing consistency in favor of throughput through their drivers. These frame time issues did not exist (to this extent) before the 12.11 drivers. So like Scott asks, which is better – absolute fps or frame time? I think there needs to be a right balance between both. A more important question is, did AMD knowingly improve throughput at the cost of consistency and smoothness? And if so, did they do so in hopes of looking better in more traditional video card reviews?

    • Aquineas
    • 7 years ago

    I have bought AMD GPUs for more than 10 years now, and I must say, EXCELLENT work. I remember wondering when running certain game benchmarks along with Fraps why the benchmarks would be cruising along at an impressive clip only to see the framerate fall inexplicably at certain points, almost as if the card or drivers hit some kind of glitch. My solution was to always buy the fastest possible card I could, stop worrying about benchmarks and just enjoy the games, and be done with it. It was an acceptable compromise to buying from Nvidia, whom I’ve personally always had a problem with (for reasons I won’t get into now as they’ll just start a flame war). Now I know that the performance drop wasn’t just my imagination.

    Having the issue publicly raised hopefully puts pressure on AMD to solve the problem. I feel bad for the driver team. The poor bastards are probably already working 18 hour days trying to get drivers done for the 8000 series (on a reduced staff, no less), having missed what would have been a valuable shopping season. I say “driver team” because, whether or not it’s a hardware or a driver issue, the driver team is likely going to have to be the folks who initially investigate this, even if it’s only to say the hardware guys, “Hey guys, this is YOUR problem! Your chip is taking 40ms to finish rendering on this frame!”

    In any case, if the rumors of the next generation consoles all being based on AMD are true, then this has implications for all gamers, not just AMD ones, because we all know that game designers typically design for the console and then do just enough to get it working on the PC. AMD would do themselves a huge favor by fixing this, and gain some much needed good publicity. Sadly, I (along with the board of directors, apparently), fear the best thing that could benefit AMD right now would be to be purchased by a company with deep enough pockets and a talented enough engineering support ecosystem to:

    1. Improve the competitiveness on the CPU side of things
    2. Keep AMD technology alive until they do.

    Samsung, anyone?

    • moog
    • 7 years ago

    TR, that was superb!

    I have a couplet for you:

    Ignoring driver issues, optimization in 8 reduce stutter.
    Can you believe it? 8 is oh so smooth like butter.

    [size=60]Disclaimer: The views and opinions expressed in this comment are those of moog and do not necessarily reflect the views and opinions held by MSFT.[/size]

    WTF – that didn’t work

    • jdaven
    • 7 years ago

    Reading through the comments it seems that the next step is to verify game to game variance from older titles. However, I would argue that this is not necessary. Not just given the busy TR review schedule but also the Frame Time versus Frame Number graphs. There seems to be a systemic problem with the Radeon cards. Look at the fluctuations compared to Nvidia over 1000s of frames. The fluctuations are in every single game almost as if the clock of the Radeon card is changing madly from frame to frame.

    I just don’t see this effect disappearing if you take a set of games from a year before.

    • tbone8ty
    • 7 years ago

    Far cry 3 benchmark????!!!!!

    • Damage
    • 7 years ago

    Guys, I’ve posted some side-by-side high speed videos comparing the 7950 to the GTX 660 Ti in our Skyrim test scenario here:

    https://techreport.com/review/24051/geforce-versus-radeon-captured-on-high-speed-video

    • rechicero
    • 7 years ago

    Great job, but the results are far from being the same.

    Almost x3 time beyond 50 ms in Win7 vs Win8 in Borderlands 2
    27% more time beyond 50 ms in Win 8 vs Win7 in Guild Wars 2
    Almost x2 time beyond 50 ms in Win 7 vs Win 8 in Sleeping dogs
    Nearly x20 time beyond 33.2 ms in Win 8 vs Win 7 in AC3

    And it’s the same history in every single game, except for MoH (that nobody plays anyway).

    It’s clear the results are not consistent at all between Win 8 and Win 7. Sometimes they are better, sometimes much worse. But it always looks like a driver mess.

      • superjawes
      • 7 years ago

      Don’t get too far ahead of yourself. That time-spent-beyond-50-ms metric is measured in milliseconds, so it’s not necessarily as bad as it seems.

      Take, for example, the GTX 660 review (https://techreport.com/review/23527/review-nvidia-geforce-gtx-660-graphics-card/5). There, the 2xx GeForce cards spend several seconds beyond 50 ms. So even though the 7950 appears to do 3x better on this metric in Win8, the Win7 score is still excellent (referring to Borderlands 2).

        • rechicero
        • 7 years ago

        I was talking about the lack of consistency. It’s really odd, at least to me.

          • superjawes
          • 7 years ago

          And what I’m saying is that the times are so low that you’re probably brushing against tolerances, so the “inconsistency” between Win 7 and Win 8 can probably be explained by random noise and natural variation between tests.

    • jonjonjon
    • 7 years ago

    this is why TR is the best. i would love to see a 7970 and 7870 follow up to see if its a specific problem with the 7950 or an overall amd issue.

      • HisDivineOrder
      • 7 years ago

      Agree with this. We need to see how this issue affects all echelons of the Radeon series. It affects the value proposition of every card since it’s such a widespread problem.

    • jessterman21
    • 7 years ago

    Scott, LOVE this article. I was not expecting it, but I was expecting the results you garnered. Great work – this is why people come to TR.

    • wierdo
    • 7 years ago

    Great read Scott, I’m glad you guys took a closer look at these things, was curious about these issues after reading the first piece, it’s nice to see more details covered like this.

    Quite fascinating to read about this issue, now I’m wondering what actions AMD will take in response to these findings, I hope they figure out a way to tackle those spikes.

    • Cyco-Dude
    • 7 years ago

    thanks for checking this out. hopefully amd can come back with a solution to this issue.

    • raghu78
    • 7 years ago

    I have a simple question to the so called frame latency experts

    Take 2 cards for instance. one having frametimes alternating between 10 ms and 40ms for every frame. the second card having 33.3 ms constant frametime per frame. The first card takes 50 ms for every 2 frames (20 such pairs of frames or 40 frames per sec) while the second card takes 66.66 ms per 2 frames (15 such pairs of frames or 30 frames per sec) . the first card gets 40 fps. the second card gets 30 fps. According to TR’s method the first card would have 134ms spent in long frames per second ( 20 x (40 – 33.33) = 134 ms) ie frames which have frametime above the 33.3 ms threshold. The second card will have 0 ms above the 33.3ms threshold.

    Now the issue is the first card runs at 40 fps . the second card runs at 30 fps. so which will the user perceive as smoother. Is it the first card or the second card. Can the user perceive the first card to be less smoother when the first card renders 2 frames in every 50 ms and 4 frames every 100ms while the second card has only rendered 1 frame in the first 50ms and renders 2 frames in the second half of every every 100ms interval ?

    Does the user perceive the 6.667 ms time per frame spent on long frametimes or does he perceive the extra 10 frames per second.

      • jessterman21
      • 7 years ago

      I probably should’ve written this out like a math problem, but I’m not quite that invested. The card rendering the consistent 33.3ms frametimes would feel much smoother. The first card you described is experiencing constant microstuttering, and would feel as jerky as 20fps despite the 40 average.

        • raghu78
        • 7 years ago

        and how is that. Even with a frametime of 40ms the first card would have 25 fps. but it also has low frametimes every alternate sec. So in effect it has an avg frametime of 25ms or 40 fps. The normal human here in this case is being presented with atleast 1 frame every 25ms to process in the case of the first card. Are we fast enough to perceive things at a faster rate than every 25ms. Can we perceive the gap between 2 consecutive short frametime frames when the human brain is busy processing the frames presented at the rate of 1 frame every 25ms.

        also remember that the second card only has a 6.667 ms advantage over the first card when considering the long frametimes. can a normal human perceive that.

          • sparkman
          • 7 years ago

          Frame rate (fps) is not everything.

          In the example you’ve described, the 30 fps card will feel smoother while the 40 fps card will feel choppier.

          Exactly how much you care probably depends on your own sensitivity and can vary greatly from person to person. Ex: I can see flicker on a 60 hz CRT monitor, and need 72 hz to be comfortable on a CRT, whereas most people seem happy with a 60 hz CRT.

      • homerdog
      • 7 years ago

      2nd card would look much smoother.

      • superjawes
      • 7 years ago

      Okay, frame times and FPS aren’t the only thing you need to think about (it’s just the easiest way to make data work for refreshing monitors).

      Let’s try to think of it like this: a 60 Hz monitor (which is where the 60 FPS “gold standard” comes from) will ask for a frame every 16.7 ms, whether a new frame has been rendered or not. So any time frames are being rendered over this threshold (consistently or otherwise), your monitor is displaying the same frame more than once.

      Consistent frames above this threshold, like the (stable) 30 FPS equivalent, mean that you are only showing each frame twice (depending on exact timings). If you are alternating between two latencies, the shorter frames end up getting stuck on the screen longer while the monitor waits for the slow frame. But it gets worse: since the fast frame immediately follows the slow one, your monitor could quickly alternate to the next fast frame within a refresh cycle (this is a stutter), get stuck on the long frame again, and repeat.

      This is part of the reason that the “time spent beyond [x] ms” metric is so important, because your brain will register that it’s seeing the same thing over multiple refresh cycles, AND it will see the quick transition to the next “stuck” frame.

      Check out spiritwalker2222’s comment. He points out that some games have to run at ~150 FPS before things get smooth (while others seem smooth at 60 FPS), and that’s because even when a card gets “stuck” on a relatively long frame, there are so many frames being rendered that it beats the 60 Hz refresh and maintains perceived fluidity.

        • superjawes
        • 7 years ago

        It’s sampling theory (Digital Signal Processing).

        You can study DSP your entire life and still have more to learn, but typically you want your number of samples (frames) to be significantly higher than your target frequency (60 Hz). Otherwise you end up with…a weird looking signal that’s nothing like what you actually wanted.

        In the case of the human eye, steady, even rates appear much smoother because you can still register the quick changes and long waits associated with uneven rendering.

          • willmore
          • 7 years ago

          You’re looking for the term “sample jitter”, I think.

      • MrJP
      • 7 years ago

      Constant 30fps will definitely appear smoother than stuttering between 100fps and 25fps despite the lower average framerate. There’s a good video that demonstrates this in this news post (https://techreport.com/news/21625/multi-gpu-micro-stuttering-captured-on-video), although in fairness this is a more extreme example using multi-GPUs.

    • swaaye
    • 7 years ago

    I was running Win8 for about a month but I went back to 7 because there are some game compatibility issues. One that comes to mind is Max Payne 3 wouldn’t run at all (maybe it’s been fixed now). Besides, I don’t really like Win8 much, and I have a Radeon 6950 and perhaps the drivers do need work for 8. I also imagine that Win8 will receive a lot of patches like 7 has.

    • sschaem
    • 7 years ago

    Looking at Newegg, both cards are priced under $300, so how come the whole conclusion on performance per dollar, and the graphs, are using the $330 version of the 7950?

    Why is AC3 included, knowing it’s a very biased title, but AMD-optimized games are not included?

    Now, those spikes seem like a big driver problem for AMD. But if they manage to fix this, then the tables are turned, as the review shows the 7950 to be more powerful.

    So buying a 660 ti with 50% less memory for the same price is in the hope AMD will never fix this ‘bug’? It’s reasonable …

      • erwendigo
      • 7 years ago

      You can´t be serious, in THIS REVIEW:

      Hitman Absolution.
      Sleeping Dogs.
      Medal of Honor: WarFighter.

      ALL of these games are gaming evolved titles. You know, the “TWIMTBP” of AMD.

      In these games the same problem persists, so it isn’t a specific problem or the result of a new game with immature drivers (because THESE games are Gaming Evolved titles; AMD tracks and helps develop them, and AMD tests these games with its cards). It’s a more fundamental problem.

      “So buying a 660 ti with 50% less memory for the same price is in the hope AMD will never fix this ‘bug’? It’s reasonable …”

      The memory that you don’t use is useless; at this level of performance, 2GB is more than enough.

      BUT, if you want memory, you can buy some aberration of a card with minimal performance but many gigs of DDR2 VRAM. Sure, it’s your product, it’s designed for you!!

      Who needs good performance and good frame times if you have many GIGS??? NO ONE!!

      And sorry, this sentence isn’t true either:

      “if they manage to fix this, then the tables are turned, as the review shows the 7950 to be more powerful.”

      RTFArticle!!!:

      https://techreport.com/review/24022/does-the-radeon-hd-7950-stumble-in-windows-8/10

      In the performance/price plots, you can see that the GTX 660 Ti wins both metrics, the frame-time ratio AND the FPS ratio.

    • Scrotos
    • 7 years ago

    Wait…

    Hitman: Absolution
    In this game, Max Payne has sobered up…

    Um. Bad cut and paste, perhaps?

    [edit: oh, it was a joke? my bad!]

    • albundy
    • 7 years ago

    “The fundamental problem is that, in terms of both computer time and human visual perception, one second is a very long time. Averaging results over a single second can obscure some big and important performance differences between systems.”

    You can’t be serious! Your theories are inconclusively based on metrics and graphs, and not on real-world visuals.

    OMG!!! Loss of one thousandth of many unsuspecting seconds are making me lose my mind! lol

      • maxxcool
      • 7 years ago

      “” OMG!!! Loss of one thousandth of many unsuspecting seconds are making me lose my mind! lol “”

      Which is why TR is telling you about them! 🙂 You’re welcome …

    • jdaven
    • 7 years ago

    This is the first time I’ve seen COMPELLING evidence of graphic card driver differences between Nvidia and AMD with the former being better. Good job TR. I hope AMD pays close attention.

      • beck2448
      • 7 years ago

      100%! I have used both and my experience was Nvidia cards “felt” smoother. Looking at graphs might be fun, but real world play is the bottom line.

      • MrJP
      • 7 years ago

      Previous reviews have shown framerate consistency problems jumping backwards and forwards between AMD and Nvidia from game to game, and also varying with graphics settings when the same cards have been used in different reviews with different settings. The one-sided result seen here has been the exception rather than the rule, which is why it’s caused the unusual level of controversy in the comments.

      I know this will sound like I’m yet another AMD apologist, but it seems a touch hasty to jump to the conclusion that Nvidia drivers are always better based on the sample of games in this one review. You can’t rule out the possibility (however unlikely) that this was just an unusually bad selection of benchmarks for AMD.

      I do think it’s probably fairer to suggest, based on this and previous experiences, that Nvidia drivers do tend to have a significant edge with the most recently released games, but AMD tend to make up ground over time. This is probably a result of Nvidia’s greater efforts to engage with developers, which AMD are hopefully now starting to match.

        • l33t-g4m3r
        • 7 years ago

        This. AMD’s drivers are pretty decent on older games. The only way you can get these numbers is by cherry picking recent titles where Nvidia has been working closely with developers. Imagine if this article was written right when Rage was released and Rage was AC3. That’s exactly what this is: A Witch Hunt.

        However, AMD has been slipping since their VLIW5 architecture. Kinda does appear to be a budget problem. Never used to have such a large selection of games have so many issues running. That said, if you think this won’t be fixed by the next driver release or so, you’re smoking something powerful. It’s really a non-issue, because the hardware is fully capable of running AC3. They just need to fix what’s causing the problem, like they did with BF3.

        This whole thing needs to be taken with some perspective, instead of knee-jerk fanboy rage. It’s damn immature, especially for a professional review site, and a few driver problems on newer titles that I’m not going to buy until a steam sale isn’t going to stop me from making a purchase.

          • Airmantharp
          • 7 years ago

          +1 because you’re right;

          But understand that problems are problems, and cannot be discounted because ‘they tend to go away over time’, even if it’s true.

          Anecdotally, and in full hindsight, the problems with BF3 and Skyrim while running Crossfire were pretty bad. Going to a single Nvidia card made a world of difference; they have my support and business for now.

          • MadManOriginal
          • 7 years ago

          Skyrim is just over a year old. How old does a game have to be to count as an ‘old game’ to you?

    • spiritwalker2222
    • 7 years ago

    I haven’t looked at another card review from a competing website in a while. But is it true that everyone else is still using FPS as their yardstick? It boggles my mind that they are.

    I’ve known for a long time that FPS was a poor measure of card performance, but I never knew why. Why did some games look good at 60 FPS, while others needed ~150 FPS to remove the glitches? That was until TR clearly explained why. I just assumed TR’s competitors would see this and follow suit.

      • Farting Bob
      • 7 years ago

      I can’t remember any other site using frame latency as the main performance metric. Even sites like Anand, which are incredibly detailed and thorough, still use just FPS in their testing. It’s not a completely useless metric, especially when you actually read the review, but if you just look at a single graph, FPS can be misleading in situations like this one, where card A has better FPS but occasional spikes that get masked by the average.

      • superjawes
      • 7 years ago

      Well, the frame time approach definitely takes more work and statistical expertise, which is why you’ve seen the “Inside the Second” reviews evolve since they first debuted. Frame times were good, 99th percentile was better, but the time spent beyond X ms really quantifies how bad the interruption can be. So while it’s better than an FPS average, it requires a lot more effort by the reviewer to make the results worthwhile.

      And the other part might just be to maintain the “horse race.” Even if the Nvidia cards have tangible advantages over the Radeons, showing that the two have similar performance means fanboys can fight to the death, and generally makes for a more interesting article.

      • danny e.
      • 7 years ago

      this

    • anotherengineer
    • 7 years ago

    It is what it is. As an engineer, though, I find this intriguing: why is this happening?

    -game optimizations?
    -video card drivers?
    -video card BIOS?
    -GPU hardware (memory controller need optimizing??)
    -issues with running AMD card on Intel board for whatever reason?
    -other?
    -all of the above?

    Makes for an interesting dissection.

      • Krogoth
      • 7 years ago

      If I were to hazard a guess:

      I suspect the problem lies with DWM 2.0 and some kind of buffering issue. Windows 7/8 share the same kernel; the only difference is DWM. Windows 7 is on version 1.1 while Windows 8 rides on version 2.0. DWM 2.0 is completely reworked. AMD’s driver teams either didn’t have the time or the resources to tune their drivers for it, unlike Nvidia’s. I suspect this “current issue” will go away within a few months anyway.

      People are also overlooking the fact that both cards are running at 4 megapixels in most of the tests, with AA/AF on top of that. The 660 Ti and 7950 aren’t exactly ideal choices for these settings. You probably want to shoot for a 680 Ti or 7970 for these settings, or some kind of multi-card setup.

      The 7950 and 660 Ti can both effortlessly handle 2 megapixels with AA/AF in the overwhelming majority of the titles out there.

        • anotherengineer
        • 7 years ago

        That’s another possibility. However, some Windows 7 tests were worse, and the Win7 drivers should already be well developed, which should have made all the Win7 tests better than the Win8 tests?

        Could the DX API possibly be an issue? Well, either way it would be within the software.

      • GTVic
      • 7 years ago

      In the past graphics card manufacturers have come up with some inventive and deceptive ways to increase frame rates. Why does each new game require so much driver optimization???

      I have a feeling this is due to some underhanded optimization that doesn’t work all the time.

    • rrr
    • 7 years ago

    More paid shilling for nVidia. Good job 😉

      • maxxcool
      • 7 years ago

      ^ More paid shilling for AMD. Good job 😉

        • rrr
        • 7 years ago

        Except no one paid me.

        If other sites don’t get nearly the results you get, there must be sth wrong you did. Sth to consider. If you care at all, $$$ have quite a power of persuasion after all 😉

          • superjawes
          • 7 years ago

          Alternatively…

          “To those who would claim that other ‘professional’ review sites haven’t seen results like ours in their comparisons of the 7950 and GTX 660 Ti, I would simply respond: of course not. Virtually nobody else tests like we do.”

          Perhaps other sites are getting similar results, but said results are being masked by the methodology, which is the entire point of using latency-based metrics.

            • rrr
            • 7 years ago

            Which means the reviewer may have complicated the methodology for no good reason, if the results are that wonky.

            Other sites known to have good GPU testing methodology ([H] and PCPer, for example) didn't show anything like it.

            • superjawes
            • 7 years ago

            It's not that the results are wonky. It's that if you [i<]only[/i<] use an FPS average over a relatively long period of time (1 second is long when you want to cram at least 60 frames into that time), the results might look consistent, but if you use your own eyes, you might see something amiss, despite the metric saying otherwise. The methodology is meant to capture what you might be seeing. This was very interesting in the first Inside the Second article when you moved on to SLI/CrossFire results (microstuttering). That effect is real to the viewer, but is subtle enough to be hidden by "traditional" metrics.

            EDIT: Those "traditional" metrics still have validity, don't get me wrong. FPS is, after all, derived from frame times and latency. More FPS is usually better. It's the details that really distinguish a "clean" 60 FPS from a "choppy" 60 FPS. Because DSP and sampling theory.

            • rrr
            • 7 years ago

            The thing is, the sites mentioned do not solely use average FPS; they also report min and max. In fact, there are entire plots of FPS over time to catch any anomalies like that.

            • derFunkenstein
            • 7 years ago

            I know for sure that the graphs at [H] are not so fine-grained as to show what TR shows. They sample FPS once per second, which still masks what TR saw.

            • superjawes
            • 7 years ago

            Which are not necessarily granular enough to capture very short hiccups. Measured FPS is typically an average, which means you can still lump "bad" frame times in with really good ones. Measuring how long each frame actually took to render is basically the ultimate granularity you can achieve.

            • MrJP
            • 7 years ago

            If you look in detail at the output from FRAPS, you’ll realise that the FPS vs time and min and max FPS commonly reported elsewhere are based on averaging the FPS over one second intervals (FPS is only reported once per second). Consequently the variation in the time taken to render individual frames is smoothed out by averaging many frames together each second. The benefit of TR’s approach is to look at the time taken to render each individual frame (reported as frametimes from FRAPS) and then present various measures of the consistency of the framerate. Re-read the explanation on page 1 of this article for more information.
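            To make that concrete, here's a tiny sketch (with made-up frame times, not real FRAPS output) of how a single long frame disappears into a once-per-second average:

[code<]
# Sketch of why once-per-second FPS hides spikes: one second of rendering in
# which a single frame takes 120 ms while the rest take ~15 ms. The numbers
# are invented for illustration, not taken from a real FRAPS log.
frame_times_ms = [15.0] * 58 + [120.0]   # 59 frames, ~1 second of wall time

elapsed_s = sum(frame_times_ms) / 1000.0
per_second_fps = len(frame_times_ms) / elapsed_s   # what a 1s-averaged plot shows
worst_frame_ms = max(frame_times_ms)               # what a frame-time plot shows

print(f"1-second average: {per_second_fps:.0f} FPS")   # ~60 FPS, looks fine
print(f"Worst frame:      {worst_frame_ms:.0f} ms")    # a visible hitch
[/code<]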

          • maxxcool
          • 7 years ago

          HAHAHA your “script” BROKE hahahahahaha…

          hmmm look at the bizarre sentence structure then look at the odd ‘Sth’ words

          ‘Sth’ == insert variable/randomized comment here

          “””Except no one paid me.

          If other sites don’t get nearly the results you get, there must be sth wrong you did. Sth to consider. If you care at all, $$$ have quite a power of persuasion after all 😉 “””

          hahahahahahahhahahaha stupid marketer…

            • Scrotos
            • 7 years ago

            I’m guessing “sth” is shorthand for “something” in this context.

            • lilbuddhaman
            • 7 years ago

            are you so lazy that you abbreviate “something” ? I have never seen this done, ever.

            • maxxcool
            • 7 years ago

            I’ll be damned… you are right… it’s abbreviated… weird…

    • mtcn77
    • 7 years ago

    I have previously enjoyed your reviews, but this one has lost all sense of validity on three strict codes of ethics a hardware enthusiast wouldn't stray from.
    First, and most obviously, you pit the most heavily overclocked version of one graphics card against a bone-stock 7950, albeit an overclock you call the "fastest there is."
    Second, the frame latency of the 7950 under debate is twice as long as previous records. It also happens to be made by the one brand of Radeons that has broken all records for component failure rates in consumer experience: [url<]http://www.behardware.com/articles/881-8/conclusion.html[/url<]

    "11.88% for the Sapphire Radeon HD 6770 1 GB (11189-10-20G)
    7.62% for the Sapphire Radeon HD 7970 OC Edition 3 GB (11197-01-40G)
    6.75% for the Sapphire Radeon HD 7870 OC Edition 2 GB (11199-03-20G)
    6.06% for the ASUS ENGT520 SL/DI/1GD3/V2(LP)
    5.07% for the Sapphire Radeon HD 7970 3 GB (21197-00-40G)"

    And third, could you not have at least repeated the tests with the previous roster that had worked up to par, on the principle "if it ain't broken, don't fix it"? Please do not indulge in this self-centered approach any longer, as you have come to lately; we expect you to be the reliable source of information you were before.

      • alphadogg
      • 7 years ago

      Could have made your points without so much vitriol. However, I agree that a little more varied hardware would have been a nicer touch, just in case the card plays a role.

      • moose17145
      • 7 years ago

      Read the rest of the comments. They did test another card to see if it was specific to just this one. It wasn’t.

      [url<]https://techreport.com/discussion/24022/does-the-radeon-hd-7950-stumble-in-windows-8?post=693899[/url<]

      There is the link to Damage's reply to someone else who asked a similar "was it specific to just this card" question. Perhaps it should have been mentioned in the article that they did test another card to verify that this behavior was not specific to just this one card.

      I can understand your unwillingness to WANT to accept these results... I feel the same way. I have been running Radeons exclusively in my personal machines since the 9800 Pro, so this test was quite a surprise to me as well. I was almost certain that the initial problems had to be because of Windows 8. But... that just isn't the case, and I have to accept that, even if I do not like it. As someone who likes to consider themselves a person of science, I have to be willing to accept outcomes that go against what I was expecting, or even hoping for.

      But TR took my and the other commenters' suggestions (some of which were extremely rude) and went back and retested Skyrim in the old in-game test location in Whiterun. The card performed almost identically to the initial 7950 and 660 Ti reviews. It would seem to me that the card simply has problems with outdoorsy areas but handles in-town areas more gracefully. I am unsure why this should shock anyone. We have seen behavior like this from previous cards before, from both ATI/AMD and NVidia.

        • mtcn77
        • 7 years ago

        Negative votes indicate that at least some people have read what I composed; I thank all of you anyway.
        Single-card microstutter is unheard of (yes, I am biased).
        Is it too complicated to include the RadeonPro tool in any one of these review series, since the results are presented with such blunt apathy? It would at least have all the options on record.
        Consider this: calculate the average FPS, take its reciprocal, set that as the frame latency goal, and then improve on that as a starting point. How hard could it be, given the review goes so far as to call the 7950 unplayable?

          • moose17145
          • 7 years ago

          I am guessing that TR isn't using RadeonPro because it is a third-party utility. What they want to focus on is out-of-the-box performance with what AMD gives you. There is a very good argument for doing things this way, not the least of which is that if they use RadeonPro, which is supposed to boost the Radeon's performance, then they also have to find a similar utility for the NVidia cards; otherwise it's not a fair comparison. I understand why people would want TR to include results with RadeonPro, but I think that would likely only cause more problems and controversy than just leaving it as is, i.e. out-of-the-box performance with only whatever AMD/NVidia officially provides you: in this case the card and the latest drivers.

          Also micro stutter is hardly a new phenomenon, even on single GPU cards. I am not sure that it’s really micro stutter though as much as just that the radeons are flat out having a hard time producing consistent frame rates. At any rate, micro stutter does happen on single GPU cards from time to time. Usually when there is a driver bug or something along those lines.

          Something to keep in mind is that the GCN architecture these things are built on is still a very new architecture. As a result, AMD is still finding new ways to fully utilize the potential of these chips, and along this path we are likely to experience some performance problems, and maybe even some rendering issues, while they get their drivers fully optimized for this architecture. I remember when NVidia released the FX series of cards. Those were on a new architecture, and they initially had all kinds of issues with making sure scenes were being rendered correctly while they were getting their drivers optimized. It happens every time a wholly new architecture gets released. People like to think that because the architecture has been out for a year or so, the drivers should already be fully optimized for it. But that just isn't the case. Typically it takes a few generations before the drivers are optimized to the point of reaching diminishing returns. New video cards might come out every 18 months or so… but the underlying architectures of the chips that power them are usually the same (with minor tweaks and refinements from generation to generation, along with a node shrink).

    • Chrispy_
    • 7 years ago

    [quote<]...we pinged AMD to ask if they could explain the Radeon's struggles in recent games. AMD spokesman Antal Tungler told us that our article had "raised some alarms" internally at the company, and he said they hoped to have some answers for us "before the holiday."[/quote<]

    Thanks Scott. You guys have identified an issue, asked AMD for input, and retested on an old OS, old benchmark scenarios, and old drivers. I don't think you could actually do any more. AMD's new GCN drivers exhibit microstuttering in single-GPU solutions.

    There's an entire internet full of people acknowledging, high-speed-camera-filming, and complaining about multi-GPU microstuttering issues, which have been acknowledged officially by both AMD and Nvidia. The fact that AMD's cards are currently doing it on a single GPU is very important, and if anything it just reiterates how worthless average FPS figures are.

    It is likely drivers, and it may not be occurring on all platforms. As I commented in your original article, I run Borderlands 2 at the same settings as you tested, and I don't [i<]feel[/i<] the stuttering. I disabled vsync and logged frame times, then used a spreadsheet to calculate the number of frames rendering for longer than the 16.6ms and 33.3ms thresholds. I don't see results like yours. Neither of our results are wrong; all that can be concluded is that on different configurations AMD cards exhibit wildly different results.

    I look forward to the AMD response. Whilst I don't suffer from this issue myself, it clearly needs to be acknowledged by them and fixed - AMD flailing in the GPU market and losing ground to Nvidia is bad for [b<]EVERYONE[/b<] (except perhaps Nvidia shareholders).
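    For anyone who wants to repeat that check without a spreadsheet, here's a rough Python equivalent. It assumes a plain text file with one frame time in milliseconds per line, which may not match your logging tool's actual output format.

[code<]
# Rough equivalent of the spreadsheet approach described above: count how
# many logged frames took longer than 16.7 ms and 33.3 ms. Assumes a plain
# text file with one frame time (in milliseconds) per line; adjust the
# parsing for whatever your logging tool actually writes out.
def count_slow_frames(path, thresholds_ms=(16.7, 33.3)):
    with open(path) as f:
        frame_times = [float(line) for line in f if line.strip()]
    return {t: sum(1 for ft in frame_times if ft > t) for t in thresholds_ms}

# Example usage with a hypothetical log file name:
# print(count_slow_frames("borderlands2_frametimes.txt"))
[/code<]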

    • superjawes
    • 7 years ago

    Again, excellent work. I’m still happy that I’m not buying any hardware this season (those games coming with the 7950 would be far too tempting).

    But does this mean that we’ll see a 7950 revisited revisited review if AMD publishes a driver fix?

    • lilbuddhaman
    • 7 years ago

    Legitimate question:
    Is there anything that can be done to remove or stop micro (and not-so-micro) stutter with ATI cards without reducing graphics settings?

    In Far Cry 3, for example, I'm sitting at 55-70 FPS, yet I get a very substantial amount of stutter. The current solution on the forums is to drop to DX9 mode (less eye candy), but even that doesn't stop it for everyone.

      • kenageboy
      • 7 years ago

      Try installing MSI afterburner and running the “On Screen Display” program that comes included. In that program, add far cry 3 to the list of applications, and then with far cry 3 selected, click on the wrench and try limiting your fps to 60. People have said this helped with micro stutters for skyrim.

      • clone
      • 7 years ago

      The worst part about frame stutter is that it can come from multiple sources: the video card driver, the network, CPU load, which network chip is being used, latency, lag, the number of people in the room, their locations, and their connections can all cause it on the fly.

      Back in the day, Mechwarrior 3 lag used to be so bad you had to try to adjust for it in every game, and then you had to compensate for players whose toons would stop for seconds at a time, leaving you with no idea where they'd go or where to shoot.

      So in answer to your first question, there is absolutely no way to stop microstutter on Nvidia or AMD graphics cards in multiplayer, because the causes are typically beyond their purview.

      In cases where it is a driver issue, it's usually a matter of time before it gets resolved. Even back when drivers didn't get updated monthly, every issue with my Radeon 8500 that affected me personally eventually got fixed, just like every issue with my 8800 GTX that affected me personally eventually got solved, save some legacy support that really didn't matter anyway.

      P.S. Given the PR hit AMD is going to take from this article, I suspect they'll have some fixes in the pipe sooner rather than later.

      • Bensam123
      • 7 years ago

      Set PowerTune to +20%, then stop back here with the results.

    • Parallax
    • 7 years ago

    I really wonder what latency plots would look like showing all major driver revisions over the past year from both vendors. Even if just for a single game, it would give some idea if latency is improving on each release or not.

    • ET3D
    • 7 years ago

    Good to see that Windows 8 actually improves frame latency most of the time.

    • chuckula
    • 7 years ago

    Curses, foiled again!
    Time for a new conspiracy theory!

    • l33t-g4m3r
    • 7 years ago

    The better question is do you care about those games? If not, this isn’t a problem. AMD is just running behind on new releases, like always. *cough* Rage *cough* Maybe they’ll learn and beef up their driver team, but I doubt there’s going to be any immediately visible improvements. This is how AMD’s drivers have behaved for years, and anyone who’s used them knows this. Just give it another month and it’ll be fine.

    This entire benchmark fiasco is merely a clever jab, or cheap shot (depending on how you look at it), at AMD/ATI’s slow driver team. That’s it in a nutshell.

    Also, the drivers didn’t used to be this bad before AMD bought them, and they instituted a number of changes including discontinuing monthly releases. Seemed like cost cutting measures to me, and this is the result.

      • sweatshopking
      • 7 years ago

      i think cash might be tight. makes beefy driver teams tough.

        • l33t-g4m3r
        • 7 years ago

        Right, but that’s the one area they shouldn’t skimp on. Cut the CEO/Management’s pay. They don’t need a golden parachute when they’re running the company into the ground and going bankrupt.

    • ThorAxe
    • 7 years ago

    As the current owner of a 4870X2, 2 6870s, a GTX 680, 2 GTX 570s, 2 8800GTXs and having owned numerous other cards such as the 12mb Voodoo2s in SLI, the original GeForce Pro, 9800 Pro, x1800xt, x1900xt, X800 Pro etc… I find this constant clutching at straws by Radeon apologists ridiculous.

    The article clearly demonstrates that Nvidia offers smoother gameplay. Just deal with it and move on.

      • swaaye
      • 7 years ago

      I don’t know how anyone can not run hardware from both companies. It lets you get the most out of each game and it saves you from nonsense like the Rage debacle. I think it’s a no-brainer when you have more than one game PC/notebook.

    • bfar
    • 7 years ago

    Does this apply to the 7970 too? I hope AMD can deal with this, as I had been giving serious consideration to a 7970, or even waiting to see what the 8000 series can do. Lately the AMD cards have seemed much more tempting from a value point of view.

    My last Radeon was an X1900 XT in 2006, and while it was excellent value, it was not an entirely smooth experience.

    There's no actual fanboyism here, as is often accused; this is about dropping $300-500 on a single product, and familiarity with a brand plays a large part. Switching brands requires reassurance, particularly when the only two viable brands have such patchy track records.

    • Silus
    • 7 years ago

    Great write-up, but I have to ask: will this be a common practice now? Fanboys complain that their brand isn't winning, so you do another article to "please" them and prove your previous point?

    I have to admire that you actually listen to your readers and if they request it, you do so (within reason)….but I don’t think this is the best approach. You tested it once and those were your results. Stick with them! Sure you now have Windows 7 results too, but that shouldn’t matter, since the previous review was for Windows 8. This one with Windows 7 only exists because there was an uproar over the fact that some people’s favorite brand wasn’t winning as they “wanted” to…Just because those people want to live in their fantasy world where their brand wins everything doesn’t mean you have to re-test everything…

      • sweatshopking
      • 7 years ago

      i think the concern was that it didn’t match previous results, nor the information coming out of other review sites. it’s not so much the brand that’s winning.

        • Silus
        • 7 years ago

        But different reviews on different sites should never be compared directly. They usually use different systems, with different methodologies, and sometimes even different settings. This is even more apparent with a benchmarking method as different as TR's.
        [H] did a similar thing a while back. They didn't just provide graphs with average framerates; they provided framerate fluctuations over a time period, along with the reviewer's feedback on the "feel" of the gameplay in a given game. Sometimes even though the framerate was high, the experience wasn't that good.
        TR goes even deeper than that with latency analysis, which is entirely different from 95% of the other reviews out there. Comparing them on the basis of framerates is absurd.

    • Arag0n
    • 7 years ago

    [quote<]The question you have to ask is what matters to you. Do you want the graphics card that scores best in the FPS beauty pageant, or do you want that one that gives you the smoothest gaming experience when you fire up a freshly downloaded game this Christmas? [/quote<]

    I remember having that discussion years ago with a collage in school. He kept saying that it was so important that games ran over 60fps, that he could definitely see the difference... then I said to him, you know, movies are played at 24fps and you never notice the movie to flicker or anything, why? Because the frame rate is consistent. As long as frame rate is consistent, over 30fps the difference can be noticed but is meaningless. The thing you are really seeing with cards that go well beyond 60fps is that those latency spikes become less common, and consequently playability seems smoother. He almost said I was crazy...

    But he is the same guy who, in 2009, fought me so hard because I told him that phones were eventually going to be good enough for most people, and the market for dedicated point-and-shoot cameras was going to shrink to the point that no one would manufacture or sell them. He kept shouting that I had no idea about the technology behind cameras, that modern cameras take wonderful photos, and that no professional was going to drop his camera for a phone. I then said, you know, no professional uses a point-and-shoot either; they use a DSLR... It is funny sometimes how some people can be SO wrong, and still think they are SO right.

    He also said that people were tricked into changing from CRT to LCD, because LCD resolution and refresh rates were lower than CRT... but he kept denying that it was impossible for people to have a 24" CRT on a normal desk without dropping everything else off the table... and he kept saying "given the space, a CRT is better than LCD." Maybe he was right on that point, but pure image quality is not the whole reason people buy a monitor, and CRT had something like 80 years of evolution versus how much for LCD? 10 years?

    Some people will simply deny that they are looking at things wrongly, and they will never accept or acknowledge that they were looking at things from the wrong point of view. They are going to fight you back, they will never acknowledge their mistake, and they will never accept that they were approaching the problem from the wrong angle. Especially people who have been doing the same things for several years.

    I believe the TR way of reviewing GPUs is the right one. Higher frame rates are meaningless as long as those rates are not consistent. On a different point, I like the geometric average; it limits the impact of spikes. It's a much better approach than removing games that underperform or overperform. Why? Because if a game overperforms because AMD or NVIDIA worked hand in hand with the developer, it's still a bonus point, and if drivers have issues with a given game, it's still a bad one. It should not change the final graph of expected performance from a GPU by much, but it MUST be reflected. I said that when AMD had a game dropped from one GPU review because it overperformed, and I will say it today, now that AC3 was dropped because of low AMD performance. Averages that delete those games are misleading at best.

      • TNStrangelove
      • 7 years ago

      “you know, movies are played at 24fps and you never notice the movie to flicker or anything, why? Because the frame rate is consistent. As long as frame rate is consistent, over 30fps the difference can be noticed but is meaningless.”

      This comparison is bogus. Movie frames are shot with a relatively long exposure so that they blend into one another, which helps create the illusion of fluidity. On top of that, although you may not notice them flicker, you can see the difference in high-framerate movies, though exactly how much difference it makes is up for debate.

      If your point is that consistency is more important (beyond a certain point) than FPS, I don't dispute it, but higher FPS is not meaningless, and comparison to TV or movie framerates is not the best way to make the point.

        • spuppy
        • 7 years ago

        Also movies can work at low fps because they are not receiving user input. And it’s not that you can’t notice the flicker, it’s that we’re just used to it now. That’s what gives movies the “cinematic” feel compared to video.

          • MFergus
          • 7 years ago

          Not to mention movies have a natural motion blur that games can never have; 30 fps is not enough for certain types of games with fast movement. There is a huge difference between 30 fps and 60 fps for FPShooter games :p

            • Scrotos
            • 7 years ago

            I think *someone* forgot about the Voodoo5’s T-Buffer! Motion-blur Quake 3 wooooo!

        • Arag0n
        • 7 years ago

        I didn't say higher framerates can't be noticed; I just meant that most of the time, playing at a constant 30fps can provide a great gaming experience. Can it be better? Sure, but latency spikes are the thing that makes your gaming experience poorer more than anything else, by far. You can have an inconsistent 60fps card and another one at a stable 30fps, and gameplay is going to feel much smoother and better on the 30fps one.

        I know movies and games do not behave the same; that's why I said 30 and not 24fps. Movies keep some kind of continuity, but that at best fakes some extra frames, not triple the amount.

          • TNStrangelove
          • 7 years ago

          I was mainly responding to the claim that beyond 30fps is meaningless, this is plainly not true as the jump from 30-60 makes a big difference. When I said you could see the difference in movies beyond 24fps I was responding to your statement about seeing no flicker, i.e. that visible flicker is not the only effect of low framerates. I probably agree that consistency plays a far larger role than is often recognised however.

          As for how much difference the high exposure makes for TV and movies, I’m not sure how to quantify it except to say it adds more than a few extra frames.

      • Bensam123
      • 7 years ago

      A high-action scene would completely put this to shame. Even if you're getting a consistent 24fps, a high-action scene at 60fps is quite a bit more fluid because things in the scene change so fast. This can be measured by the distance objects move per frame, which has an essentially direct correlation with fluidity (see the sketch below).
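      Here's the rough arithmetic behind that, with an arbitrary example pan speed:

[code<]
# Rough arithmetic behind the "distance per frame" point: an object panning
# across the screen at a fixed speed jumps further between frames at lower
# frame rates. The speed and frame rates here are arbitrary examples.
speed_px_per_s = 1200.0
for fps in (24, 30, 60, 120):
    print(f"{fps:3d} FPS -> {speed_px_per_s / fps:5.1f} px between frames")
[/code<]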

      “It is funny sometimes how some people can be SO wrong, and still think they are SO right.”

      Careful when saying something like this. Once you do, you can no longer maintain an objective point of view and are instead looking at the other person in a condescending way (how could you possibly be wrong?). You are and were doing the same thing from the opposite side of the argument. You have to consider both answers as possibly right or wrong, including your own.

      Screen flicker isn't the same thing as perceived fluidity in films, either. Screen flicker originally dealt with how fast each frame changed, to the point where the film itself (not the scene) was not fluid and people could see the frames changing. But now that we have faster and faster action scenes, the 29.97 or 24fps that was fine when everything was shot in black and white and someone walked into the room to deliver some dialogue isn't enough.

      The faster a scene changes, the more frames need to be added in order to account for this. Ideally we should have a variable frame rate (along with variable bit rate), so low action scenes have all of maybe 10 fps, which would be enough to be fluid, while high action scenes have upwards of 240fps (possibly more). You don’t even need a static 24fps (so people don’t see the film grain) for a lot of theaters as they use a digital projection instead of film.

        • Arag0n
        • 7 years ago

        I always try to frame debates and discussions as my point of view or personal guess. I never think my opinion is the only valid one. When I say people are SO wrong, I mean people who have been shown and proven wrong time and again, where all the clues point to the fact that they are wrong, and they still want to believe they are right. It's like some AMD fanboys trying to prove that the new AMD chips consume less power than Intel ones. Trust me, there are some like that.

        They are not SO wrong just because they have a different opinion, but because time proved them wrong. At that time, on that day, it was not possible for me to say he was totally wrong; I was just sharing my vision, and he was saying I was crazy for thinking like that. For me it was crystal clear, but I still accepted that I might be wrong and that things might not turn out as I said, whereas he simply thought I was crazy for thinking it. As soon as someone behaves like that, IMHO, he is denying the possibility that things may not be as he expects, and at the very minimum he is unable to see the reasons WHY things may not be as he expects.

        See the difference between his way and my way, with you and PowerTune: I think you are trying to find a reason that makes AMD less at fault, but I still don't deny it could be a PowerTune issue; I just say it's not a likely explanation, even if plausible. Let's remember Occam's Razor: given two plausible explanations, the simplest one is the most likely cause. That doesn't mean the explanation that seems to fit your observations best is automatically the correct one, but you also should not bet on an unlikely explanation just because it fits better with your goals, likings, beliefs, or whatever.

        When someone calls another person crazy for giving his opinion of how things will evolve, it means he is so damned sure that it will not happen. That's why I look at him with condescension: not because I think he is wrong and doesn't see it, but because he is not even considering the possibility that he may be wrong.

        Just so you understand how stubborn that person is, he came back to me after a year saying that some phone camera or other was worse than some regular camera or other, trying to prove he had been right in that discussion from years ago… and I'm pretty sure you could ask him today and he would still tell you that point-and-shoot cameras can't be replaced by phone cameras.

        A good non-personal example of someone being SO wrong but not acknowledging it may be BlackBerry. Most people see that it is a big mistake to keep pursuing their own platform, because they have no time, no resources, and no market share with which to build a viable ecosystem and recover the lost ground. That does not mean it is impossible; it just means it's a very unlikely thing to happen. They are just being stubborn and refusing to acknowledge the facts.

        Kodak is also a good example of a company that kept thinking digital quality was lower than chemical, and consequently did not invest in being a top-notch digital camera producer until it was too late.

        Kodak proved to be SO wrong, but at the time they believed they were SO right. Time and further research are usually the only things that can prove what and who is right or wrong. And sometimes no one ever gets a real final answer.

          • Bensam123
          • 7 years ago

          How did you get rated up when you didn't address anything about FPS in films, and when it was basically hypocrisy? You were so entrenched you weren't able to argue objectively… which was also what you accused him of.

          You can write seven paragraphs explaining why you think you're right, but you're still wrong about the issue you spoke of if you are indeed wrong, which is what happened with FPS and what people are pointing out.

            • Arag0n
            • 7 years ago

            As I pointed out, my point was not that FPS over 30fps is meaningless, but that FPS over 30fps provides little additional value to the gaming experience. You can try that yourself by plugging in your computer via HDMI; you will be allowed to set your screen to just 30Hz. You will notice that while you can see the difference, the difference is not that important. Most Xbox games actually run at 30fps and plenty of people never notice. I didn't want to deny that 60fps is better than 30fps, and I'm sure some people will notice that 120fps is better than 60fps, and people may even be able to notice that 240fps looks better than 120fps. The point is, the real pain related to PC gaming has always been flickering and stuttering, and that problem is not related to average FPS but to latency spikes and inconsistency of FPS.

            To put it in perspective, it's the same as mobile phone screen resolution. Sure, more is better, but past a certain point, while the screen looks better, it offers fewer advantages. The bump from 400×240 screens to 800×480 improved quality by a huge margin; the bump from 800×480 to 1280×720 improved the clarity of small text fonts especially. But the jump from 1280×720 to 1920×1080 seems to just improve contrast and perceived sharpness. It may improve the readability of Chinese and Japanese characters, maybe. The point is, while people would be right to dump a 400×240 phone nowadays as totally outdated, 800×480 should not be discarded, and 1920×1080 should not be the automatic deciding factor when buying. It is not just about higher being better; it's also about perceived improvements. For most people, color reproduction, reflections, outdoor readability, black levels, and so on matter far more than the bump from 800×480 to 1280×720, and for most people more than 1280×720 to 1920×1080. More is better, but after 800×480, resolution is not the biggest issue anymore. Some shoddy LG, Huawei, ZTE, and other phones prove that higher-resolution screens can look worse than lower-resolution ones, but usually the only data you have in phone comparison tables is the screen resolution, and plenty of people will be tricked into thinking it is the single most important factor for quality. And then there are the other issues around higher resolutions, such as higher GPU load, higher power needs, and battery usage. Most people won't, but it's important to understand those trade-offs and think about whether, for a given person, the trade-off is worth it. Nothing comes for free.

            • MrJP
            • 7 years ago

            I agree with what you're saying about screen quality vs. resolution, but I don't think you're right that 60fps isn't significantly better than 30fps for interactive games. Having said that, I think you're right that consistency is more important than the overall framerate. Also, where did you get the idea that HDMI is limited to 30Hz?

            • Arag0n
            • 7 years ago

            HDMI is not limited to 30Hz, but most GPU drivers will let you use a 30Hz screen refresh over HDMI, while VGA or DVI won't. Anyway, if you can set your GPU to refresh at 30Hz no matter which cable you use, you can also do the test.

            As for 60fps being significantly better than 30fps, it depends on the game and the person. I don't deny 60fps is better than 30fps; I just mean that smoothness is far more important than 60fps. Smooth at 30fps or stuttering at 60fps? A smooth 30fps will provide the better experience.

            • Bensam123
            • 7 years ago

            You do understand that refresh rate isn’t the same as FPS right? Just the same as response time isn’t the same as refresh rate.

            I honestly haven’t tried this, but I have played on a 120hz monitor before and the levels of fluidity are amazing. I can only imagine the eye cancer of playing at 30hz. You have to have really dead eyes to see no difference between 30hz and 60hz if I can easily see the difference between 60hz and 120hz. Even TR mentioned a huge difference between the two.

            That depends entirely on how much 60fps stutters. Stuttering doesn’t have a quantifiable number associated with it. For instance I use a 7870 which has the stuttering this article talks about and I don’t notice it for the most part. Some games like GW2 had a bit of it, but that was only in big cities or the like. By that we’re talking about a hitch maybe every 30 minutes, not every 1 second, and it wasn’t big enough to ruin the experience.

            • Arag0n
            • 7 years ago

            As I said, I never said you can't notice it. I just said that the image at 30fps already looks smooth, and as I said before, I'm sure people (or some at least) can see a difference between 60 and 120, 120 and 240, and beyond. I just mean that above 30fps, stuttering is a worse problem than frame rate.

            And yes, refresh rate and FPS are not the same, but there is an option called V-Sync that will limit your game's FPS to the screen refresh rate, and consequently let you see what it's like to play a game at a stable, smooth 30fps if you set your monitor refresh rate to 30Hz.

            You are free to agree or disagree with my statement, and it might not be true for you, but at least do a blind test: set your monitor to 30Hz and let someone who doesn't know you did that play like that. Afterward, you can ask them if they found anything strange.

            To reach a real final conclusion, you would need a test group of 20 to 30 people playing games at 30fps without knowing it, and then ask them how their experience was. My bet is that fewer than 50% of the answers would say the game was not smooth.

      • jonjonjon
      • 7 years ago

      TN lcd monitors are horrible. especially older lcd monitors. i would take a sony trinitron crt widescreen monitor over a garbage TN lcd monitor any day. so you are wrong again. funny how that works. the people who think they are always right are usually the people who are always wrong.

        • sweatshopking
        • 7 years ago

        his point was that it’s not just about image quality. you’re obviously unusual, since most people now buy crappy tn panels, and did when they first came out (if they could afford them)

          • Arag0n
          • 7 years ago

          Glad you got my point. LCDs did not win the battle against CRTs because of image quality, but because they were much more convenient and allowed many more people to have bigger screens. Even though the first LCDs had much worse image quality than CRTs, people were willing to trade quality for space and convenience. I believe current flat-panel technology (LCD, OLED, or whatever) can now provide the same or better image quality than the last CRTs ever manufactured. That was not true then, but as I said, I never claimed LCDs won because the image was dramatically better; they won because they solved the problem of being a screen better than CRTs did for most people. It's the same with camera phones. Most if not all of them still don't take pictures as good as point-and-shoot cameras, but people prefer not carrying a dedicated camera over having better-quality photos. When they really want a perfect photo, they carry a DSLR anyway.

          Another good example is MP3 versus CD. CD quality is far better than most of the MP3s people had years ago, but convenience triumphed over quality. Now people are starting to pay more attention to MP3 quality, but back then that was not the reason people chose it over portable CD players.

      • alphadogg
      • 7 years ago

      It’s hard to take a guy who had a “discussion years ago with a COLLAGE in school” seriously. A painting, maybe, but collages rarely make sense. 🙂

      • l33t-g4m3r
      • 7 years ago

      You sound like Carmack back in the Doom 3 days, though he's since rightfully changed his mind. People CAN see over 60 fps, but it's meaningless unless you're playing a really fast twitch shooter. You're not going to notice 60 fps in a strategy game or third-person RPG.

      Side note: I used to be able to see the blinking of 60hz monitors, but I don’t notice it so much anymore after years of lcd use. I think it’s a use it or lose it type ability.

      Additionally, I do notice that my S-PVA monitor is not very capable of displaying 60 fps smoothly. It might do ~45 fps, but after that there’s issues. Dunno if vsync would help, but I rarely use it because of the associated mouse lag.

        • Arag0n
        • 7 years ago

        Did you try disabling triple buffering? Triple buffering means you're not interacting with the current frame but with one three frames in the future. At 60fps that means a delay of 50ms. Add mouse input lag, maybe some internet latency, etc., and it may go up to 100ms. You can try disabling triple buffering and using double or single buffering; it may help with the perceived input lag, especially at low frame rates, since at 30fps your input will not take effect until 100ms later.

        Still, input latency and frame smoothness are not related, and my conversation back then had nothing to do with input lag; he never mentioned that point. He was just trying to argue that frame rate was the most important thing for smooth image perception. As I said, you can notice higher frame rates, but a consistent frame rate is far more important. 30fps can feel smooth if it is consistent.

        Another way to improve input lag without disabling triple buffering is to disable V-Sync, as you pointed out, because it lets the frame rate go way higher than your monitor's refresh rate; in some games, at 200fps, it cuts the delay to only 15ms. Disabling triple buffering can decrease the smoothness of your framerate and give you spikes and flickering, but it should reduce input lag. So turning off V-Sync can be the better solution for games where your frame rate is much higher than 60fps, while disabling triple buffering is the only solution for games with low frame rates.
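        Here's a back-of-the-envelope version of those delay figures, using the comment's simplifying assumption that queuing N frames ahead adds roughly N frame times of input-to-screen delay (real pipelines are messier):

[code<]
# Back-of-the-envelope delay estimate under the simplifying assumption that
# rendering N frames ahead adds roughly N frame times of input-to-screen lag.
def buffered_delay_ms(fps, frames_ahead):
    return frames_ahead * 1000.0 / fps

for fps in (30, 60, 200):
    print(f"{fps:3d} FPS, 3 frames ahead: {buffered_delay_ms(fps, 3):.0f} ms")
[/code<]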

          • XTF
          • 7 years ago

          Single-buffer isn’t an option as you’d see the scene being rendered and it’s not pretty (or usable). 😉

            • Arag0n
            • 7 years ago

            Mmm, are you sure? I don't remember now, but I think cards used to have double-buffer and triple-buffer settings besides the "normal one." So I always thought that, given there was a "normal one," double and triple meant the number of frames in memory besides the current one.

        • MadManOriginal
        • 7 years ago

        You sound like the geocentrists before Galileo.

        😀

    • moose17145
    • 7 years ago

    HOLY S#!T!! It’s like TR did EXACTLY what I asked for in my super long comment in the 7950 vs 660 Ti article!

    Here it is…
    [url<]https://techreport.com/discussion/23981/radeon-hd-7950-vs-geforce-gtx-660-ti-revisited#metal[/url<]

    Thank you!!!! A very enlightening article that highlighted many interesting things and answered a great many questions for me!!! Thank you again for the hard work and effort!!!

      • HisDivineOrder
      • 7 years ago

      It’s all because of you, I’m sure. They read your response. They said, “Well, now we have the answer to the eternal question that governs our lives, ‘WWMD?’ Let’s do what he said do.”

      I’m sure one of them argued, “No, I want to rearrange my sock drawer instead!”

      But they stood firm. “We must do what Moose17145 would do.”

      The world must thank you, Moose17145. We all owe you a debt of gratitude. Thank you. Everyone, we need to get this man a holiday named in his honor. There must be parades, balloons, festive decorations, present-exchanging, and possibly romantic interludes.

      Stretching our creative mental muscles, we shall call it, “Moose17145 Day.”

        • Srsly_Bro
        • 7 years ago

        Praise be to Moose17145 in the highest!

          • moose17145
          • 7 years ago

          lol well I wasn’t trying to make it out like I was the reason they retested it the way they did… I was just surprised that their retest so closely matched what my comment was and what I was hoping for.

            • superjawes
            • 7 years ago

            No need to be modest, Sir moose17145!

            • HisDivineOrder
            • 7 years ago

            Indeed Mouse37337! Your name will live on long after we have all passed away into Google recycle bins…

            • indeego
            • 7 years ago

            I want to thank Indeego for suggesting and testing this stuff on behalf of TR staff!

            • swaaye
            • 7 years ago

            ssk saved Christmas!

            • superjawes
            • 7 years ago

            Not yet! In order for SSK to really save Christmas, he needs to get a well into a Malawi village!

            Help him (Josh P) by voting here:
            [url<]https://www.facebook.com/Rogers/app_103139599851953[/url<]

    • raghu78
    • 7 years ago

    Is it a faulty card? With power control maxed to +20% in AMD CCC, are these results the same? TR's methodology was the same when they concluded the HD 7970 GHz was the fastest card back in late June. Are they now going to say that the GTX 680 is smoother and better than the HD 7970 GHz? This at a time when the rest of the sites are unanimous in calling the HD 7970 GHz the current fastest single-GPU card.
    By saying the HD 7950 Boost tragedy is one of wasted potential, does the editor mean to say the HD 7900 series is one of wasted potential? At the same clocks the HD 7950 is 3-6% slower than the HD 7970.

    [url<]http://hexus.net/tech/reviews/graphics/34761-amd-hd-7950-vs-hd-7970-clocks/[/url<]

    So whatever issues apply to the HD 7950 also apply to the HD 7970. This review has posted some of the most unbelievably low FPS scores for the HD 7950 Boost. I am not talking about frametimes, just FPS numbers. Every other review has the HD 7950 Boost destroying the GTX 660 Ti in games like MOH: Warfighter and Skyrim. To put it bluntly, the data is not trustworthy. The last paragraph of the article reveals the author's true intent: it's not objectivity, it's vindictiveness. If people can't spot that, they are dumb.

    [url<]https://techreport.com/blog/23638/amd-attempts-to-shape-review-content-with-staged-release-of-info[/url<]

    The author has no scientific proof for his testing methodology. The frames are not measured at the monitor, which is where they are perceived. This is one of the major reasons why other reviewers avoid this method; it's not scientifically based. Then there is the question of how much of this is perceivable stutter. In a game like Hitman, if one frame on the HD 7950 Boost takes 10ms and the next takes 40ms, while on the GTX 660 Ti each takes 25ms, does the user see any difference between the two cards? And when the HD 7950 Boost runs at almost 10ms less frametime per frame than the GTX 660 Ti for the majority of the test and has a 50% higher FPS average, does the reviewer say that the user cannot perceive that either?

      • MFergus
      • 7 years ago

      TR doesn’t care what card gives more FPS, they care about which one is smoother, the 7970 will give more fps but no idea if its smoother than a 680 or not.

      7950 for most games will give more fps than 660ti, its just not as smooth.

        • BestJinjo
        • 7 years ago

        “HD7970 will give more fps but no idea if its smoother than 680 or not.”

        That’s incorrect. TR did test HD7970Ghz this summer against GTX680 and GTX680 AMP! edition.

        Based on frame times, not frame rates alone, here is their conclusion:

        “We’re relying on our 99th percentile frame time metric for our performance summation….The Radeon HD 7970 GHz Edition has indeed recaptured the single-GPU performance title for AMD; it’s even faster than Zotac’s GTX 680 AMP! Edition.”
        [url<]https://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/11[/url<]

        If you look at the results of games like Skyrim: [url<]https://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/7[/url<] or Crysis 2: [url<]https://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/9[/url<] there is no unusual latency penalty for the HD7970 / HD7970 GHz cards against the 680 / 680 AMP! Yet the conclusions in this review imply that AMD cards have felt less smooth all along, clearly contradicting TR's own HD7970 GHz vs. GTX680 review from earlier this year. I think they should retest the HD7970 GHz vs. GTX680 as well to see what's going on. If it's a driver issue, the HD7970 GHz should turn into a stutter-fest too.

      • Arclight
      • 7 years ago

      Fastest average =/= smoothest gameplay, that was the point…..

        • BestJinjo
        • 7 years ago

        Ya, that’s true. Still, read my post above. If these latency issues existed all this time, how come HD7970 and HD7970 Ghz were smoother than GTX680/680 AMP in TR’s June review? Those cards don’t show latency issues, but now HD7950 has serious problems. Perhaps they need to retest HD7970Ghz vs. 680 and use Cats 12.7 Beta vs. Cats 12.11 b11s. Need to get to the bottom of this mess.

      • MrJP
      • 7 years ago

      I think you’re totally off-base with accusations of bias against TR. If you’d been here any time you’d see that AMD, Nvidia, and Intel have all been subject to criticism when their sleazy marketing behaviour or poor product performance have warranted it.

      However I think there is a fair point in there about the validity of the frametime measurements. I know Scott has touched on this in the past (though I couldn’t find the right article in a quick search), but ultimately the frametimes reported by FRAPS are not necessarily the same as the frametimes delivered to the screen, even ignoring issues with monitor refresh rate. While the whole push to evaluate graphics cards based on smoothness rather than raw fps is completely correct, it still worries me that the measurement being used to evaluate this may not be completely representative of the real gaming experience.

      I remember some discussion earlier in the year about using a high-speed camera to confirm the frametime measurements, and I wonder whether that could be used to check whether the differences measured here are truly apparent at the screen.

      At the same time, it’s encouraging that this article includes several comments about the feel of the game, because ultimately that’s what really matters. I just wonder how clear the differences are, and whether there’s any danger in perception being affected by the knowledge of the measured results. I know it’s yet more work, but ultimately this could push you towards the blind testing approach as used with some success in the sound card reviews.

      I hope this doesn’t come over as critical or negative, because I strongly believe that what TR are trying to do is absolutely the right thing and lifts these reviews head and shoulders above what’s available elsewhere. The fact this second article has been produced at all speaks volumes about the commitment to rigorous testing, and I’m just wondering whether there are further steps that could be taken to remove the few lingering doubts.

      P.S. Thanks again Scott. I can now move to Windows 8 with my 7950 with slightly less trepidation, though I might spend a bit more time experimenting with whatever drivers are available.

      Edit – I found the article with Scott’s concerns about the FRAPS numbers [url=https://techreport.com/review/22890/nvidia-geforce-gtx-690-graphics-card/3<]here[/url<]. It was the GTX 690 review, with particular concern regarding multi-GPU results.

    • Pantsu
    • 7 years ago

    Thank you for the W8/W7 comparisons, really good info!

    Would've also been nice to see whether the 7970 and 7870 exhibit similar frame time issues. This is still too limited an amount of testing to support broad conclusions about Nvidia having better frame delivery. We need more test scenarios to ascertain how widespread the issue is.

    I've been a proponent of stable frame delivery rather than raw FPS numbers ever since I tried 6950 CrossFire, which was god-awful without a frame limiter. I'm still waiting for the day AMD/Nvidia come up with a better way of stabilizing frame delivery. Adaptive Vsync was a step in the right direction, but not really a complete solution.

    • spuppy
    • 7 years ago

    Although you’d think this would close the case on whether there is a methodology issue, the fact that another site that many people deem trustworthy is flat out denying frame time measurements will still give people reason to dismiss it. And that’s what they are doing in other discussions such as reddit.

    First it was “why did he use Windows 8” and “he has a beef with AMD”

    Now it is “anandtech says this methodology is useless” and “he has a beef with AMD”

      • MadManOriginal
      • 7 years ago

      Stupid people will be stupid regardless. There's such an easy thought experiment to show that frame times matter that I don't even understand how it can be questioned. The sad part is that [i<]frame times and FPS are the same thing[/i<], just expressed differently. I blame science and math illiteracy for anyone who doubts this methodology.

      Scott explains it in the article lead-in, but to use a more extreme example: two cards with an average FPS of 50. One consistently delivers 50 FPS; the other alternates between 1 and 101 FPS every other frame. Which one would deliver the better gaming experience? Obviously the former.

      The best part about "Anandtech saying this is useless" (I'm not sure if Anand himself said it, or maybe another site writer) is that they just began doing the [b<]exact[/b<] equivalent thing for SSDs by exploring IO consistency. So it matters for SSDs but not for graphics cards?
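      Putting rough numbers on that thought experiment (taking "average FPS" as the mean of each frame's instantaneous rate, as the comment implies):

[code<]
# Numbers behind the thought experiment above. Both sequences average ~50 FPS,
# but their frame-time profiles are nothing alike.
steady      = [50.0] * 100        # 50 FPS every frame
alternating = [1.0, 101.0] * 50   # 1 FPS, then 101 FPS, repeated

def summarize(name, fps_per_frame):
    frame_times = [1000.0 / fps for fps in fps_per_frame]   # ms per frame
    avg_fps = sum(fps_per_frame) / len(fps_per_frame)
    worst_ms = max(frame_times)
    print(f"{name}: avg {avg_fps:.0f} FPS, worst frame {worst_ms:.0f} ms")

summarize("steady     ", steady)        # avg 50 FPS, worst frame 20 ms
summarize("alternating", alternating)   # avg 51 FPS, worst frame 1000 ms
[/code<]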

        • Arclight
        • 7 years ago

        You explained it quite well, this is just another example of people being stupid and not trying to think for themselves..

      • danny e.
      • 7 years ago

      Anyone who says the methodology is useless doesn't know anything. I'm also baffled by those complaining about game selection. It'd be one thing if all the other games didn't show the exact same thing, but they do. I have only ever owned one nVidia card and probably still won't buy nVidia unless ATI has nothing decent out in 6-8 months or so, when I'm in the market again.

      Edit: I just checked out this "reddit" you mention. What a crappy place. People actually go to that site? My opinion of "discussions" on that site would be about as high as my opinion of the people on "America's Dumbest" TV programs.

        • HisDivineOrder
        • 7 years ago

        People also watched Jerry Springer, the Twilight movies, and Jersey Shore.

        News at 11.

      • spuppy
      • 7 years ago

      Why am I being down voted for this? I was just pointing out what the ignorant people are saying (call them stupid if you prefer, but they are the majority)

        • MadManOriginal
        • 7 years ago

        People may have been mistaking your post as implicitly agreeing with people who are dismissing it.

        Lesson is: don’t worry about +/- thumbs 🙂

          • spuppy
          • 7 years ago

          I’m not worried about the thumbs themselves, I was just wondering if my point was being misunderstood, and it looks like it was.

          I still think it’s worth noting that people are dismissing this method… although it’s to be expected when their favourite brand “loses” because of a new testing method, the fact that other sites are actively dismissing it is an issue…

          Not an issue for this site and its readers of course, but for everyone who trusts one source over another

          Whether this is due to brand loyalty only, I can’t say for sure.

      • superjawes
      • 7 years ago

      -“Tech Report just published an article seeing if they could eliminate the performance problems they had with their 7950.”
      -“HE MUST HATE AMD!!!”

      Because logics.

    • Bensam123
    • 7 years ago

    Well, that clears up W7 vs. W8, but as was stated in the other thread, some of these spikes may have a lot to do with PowerTune. I've personally upped it to +20% and it increases perceived fluidity in some games quite a bit, including GW2. I would highly suggest you guys at least do a subjective test with it and then add to this article if you find anything.

    GeForces don't have a PowerTune or equivalent option in their drivers, so it's entirely possible that GeForces are overrunning the power envelope of the PCIe spec during peak situations in order to maintain performance. Nvidia supposedly has its own version of PowerTune, but no one has actually tested it to make sure it's doing what Nvidia claims it's doing.

    These spikes most definitely look like the Radeons are choking at moments when they try to draw more power, causing a spike. If GeForces almost never choke, perhaps that's a good indication that they're instead simply drawing more than they're supposed to (or they don't run into the PCIe power envelope limitations, which is still possible).

    Measuring peak power spikes would be rather hard to do, because you can't see exactly how much power a card is drawing through the system beyond looking at wall numbers. So this may be hard to put into practice.

    Edit: This may actually be something you can look at in depth using GPU-Z. GPU-Z reports power draw for cards (as long as the cards report it properly).
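    As a very rough illustration of the idea, a sketch like the following could skim a GPU-Z sensor log for sudden jumps in reported board power. The column name below is a placeholder, since GPU-Z's sensor names vary by card and version, and this only works if the card reports power at all.

[code<]
# Very rough sketch: flag moments in a GPU-Z sensor log where the reported
# board power jumps sharply between samples. "TDP %" is a placeholder column
# name; check your own log's header and adjust to match what your card logs.
import csv

def find_power_spikes(path, column="TDP %", jump_threshold=15.0):
    spikes = []
    previous = None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                value = float(row[column])
            except (KeyError, ValueError):
                continue   # skip rows without a usable reading
            if previous is not None and value - previous > jump_threshold:
                spikes.append(row)
            previous = value
    return spikes

# Hypothetical usage:
# for row in find_power_spikes("gpuz_sensor_log.csv"):
#     print(row)
[/code<]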

      • Bensam123
      • 7 years ago

      -1 for suggesting TR look at powertune? I don’t even…

        • BestJinjo
        • 7 years ago

        Love that TR revisited W7 vs. W8 comparison. Good job!

        Without PowerTune, HD7950 boost cards cannot maintain constant clocks. Unfortunately, AMD’s implementation of its boost function needs a lot of work. The clocks are never going to be constant for Boost cards without adjusting PowerTune. This is a fact:
        [url<]http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7950-mit-925-mhz/11/[/url<]

        The performance hit is also measurable. An HD7950 V2 Boost is 4% slower than a manually overclocked HD7950 at the same speed. This is not a big deal, but maybe the frame delivery is also less consistent.
        [url<]http://www.computerbase.de/artikel/grafikkarten/2012/test-amd-radeon-hd-7950-mit-925-mhz/12/[/url<]

        While I am not disputing the frame time results in this review, the actual framerates in MOH:W and Skyrim are the opposite of nearly every recent review. It would be one thing to see properly high FPS in those two games for the HD7950 Boost and then conclude that it felt less smooth, but the FPS are lower.

        The Skyrim results alone cannot be explained. No other review shows the GTX660Ti getting higher FPS in this game at the resolution and AA settings TR tested. In fact, many reviews show the HD7950 V2 not only outperforming the GTX670 in FPS in Skyrim but even the GTX680. Frame times aside, the frame rate advantage should remain. You’d be hard pressed to find any recent review where the HD7950 Boost doesn’t dominate the GTX660Ti in Skyrim at high resolutions. > 40% faster:
        [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/22.html[/url<]

        The HD7970GHz also has no problem beating the GTX680 in FPS in Skyrim:
        [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/57413-amd-12-11-never-settle-driver-performance-11.html[/url<]

        There are more than a dozen professional reviews online, and every single one shows HD7950 Boost cards beating the GTX660Ti in Skyrim with the latest drivers at high rez. TR’s frame time testing is unique, but the FPS still don’t look normal to anyone who plays this game and has tried it on an HD7900 vs. a GTX600 series card. I understand TR’s frame time testing perfectly, but where did the HD7950’s advantage, which shows up in every other review, disappear to?

          • Bensam123
          • 7 years ago

          Interesting… I was never accusing TR of faulty testing, but techpowerup is rather reputable. And I agree: while frame times show quite a bit more, the average FPS is so far off that this doesn’t look right.

          Conversely, looking at Anand, they report Nvidia being faster than AMD in Skyrim…

          [url<]http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/15[/url<]

          (Although that article is from when the 680 first came out.) This almost makes me wish for the days when most games had a benchmark built in. Obviously this could change a lot based on where you’re walking around in Skyrim, as Geoff noted. There is almost too much variance to call these numbers reliable for benchmarking in Skyrim.

            • BestJinjo
            • 7 years ago

            That’s a March 2012 review; it’s not particularly useful for judging Skyrim. Both NV and AMD have improved their drivers since then, and AMD in particular has improved Skyrim performance significantly. The card in that AT review you linked is only an 800MHz 7950, not a 950MHz version, either. If you are going to look at proper Skyrim benchmarks, the review would need to have been done at least as of June 21st, when the Catalyst 12.7 drivers that focused on Skyrim improvements arrived. Also, AT didn’t even install the high-resolution texture pack for Skyrim, which makes their testing worthless regardless of drivers.

            I am not saying TR’s testing is flawed. I just don’t understand Skyrim’s numbers. Here are TechSpot’s – HD7950 Boost is beating a 680 at high rez with MSAA:
            [url<]http://www.techspot.com/review/603-best-graphics-cards/page9.html[/url<]

            Here is Tom’s Hardware – HD7950’s minimums are ~ GTX660Ti’s average at 1600p:
            [url<]http://www.tomshardware.com/reviews/geforce-gtx-660-ti-benchmark-review,3279-9.html[/url<]

            Here is HardOCP – HD7950 OC is beating GTX660Ti OC by 36% and even beat a GTX670 @ 1300mhz!
            [url<]http://www.hardocp.com/article/2012/08/23/galaxy_gtx_660_ti_gc_oc_vs_670_hd_7950/2[/url<]

            The Skyrim frames-per-second results in this review seem odd. I can’t seem to reconcile them.

            • Bensam123
            • 7 years ago

            Yup, I said the article was dated, but I was looking for websites that everyone has heard of and considers reputable.

      • Arag0n
      • 7 years ago

      Most of the time W8 is better than W7… but it seems there is a driver issue on AMD’s side that gives it spikes in W8. Clearly, NVIDIA does not have those issues, so it’s AMD’s fault and not W8’s fault.

      • HisDivineOrder
      • 7 years ago

      “No, it’s Windows 8.”

      “No, it’s the games you’re testing…”

      “No, it’s PowerTune…”

      “No, it’s the drivers…”

      What matters is what end users are going to get when they slap the card in and install the drivers. It’s AMD’s job to make that as painless and smooth as possible.

      The latter seems to be a real problem for them.

        • Bensam123
        • 7 years ago

        There can be more than one cause of a symptom. I mentioned three different things in the original review. It’s the job of scientists (benchmarking is arguably a science) to get to the bottom of questions. That’s part of the scientific method.

        [url<]http://en.wikipedia.org/wiki/Scientific_method[/url<]

        Taking things a bit further, not only was I saying that powertune may be responsible for these results, but Nvidia may not even have a proper version of powertune on their cards. So they’re in essence cheating to get better results by not adhering to the PCIE power spec while AMD is. Or they could be operating below the PCIE power envelope, so their cards never bump into a throttling situation.

        Part of figuring this out is getting to the bottom of the oscillations, which aren’t normal; the other part is a hypothesis which could prove true… That would completely change the landscape of how both AMD and Nvidia are viewed as far as performance goes, similar to when they both did LoD and mipmap cheating. Figuring out what is causing the oscillations is just as important as figuring out that they are there in the first place.

          • Arag0n
          • 7 years ago

          The problem here is that you are trying to find a potential explanation that exculpates AMD. Clearly it’s a driver issue. You can argue that NVIDIA’s architecture this generation is similar enough to previous ones that it’s easier for them to tune, while AMD introduced a new one that takes time to refine. Anything else is just excuses, IMHO. There is a cause, and it has to be found, but it can definitely be solved. If the problem isn’t solved, it’s AMD’s problem.

          That’s why I said before that AMD and NVIDIA deserve credit for both: when they deliver faster game performance by working closely with game developers, and when they fail to deliver a quality driver. Any score, no matter how screwed up or how insanely good, should be considered for the final average.

          IMHO, my guess is that the issue is periodic. There is a problem that repeats every given number of frames, so it seems related to memory management or the buffer queue. Something seems to be running out of space for its calculations, cleaning up some memory or a buffer, and then moving on to the next frames.

            • Bensam123
            • 7 years ago

            I’m curious how you get to dismiss potential causes as ‘excuses’. By that standard, how is my ‘excuse’ of powertune different from your ‘excuse’ of driver issues?

            I don’t think either of them is an excuse, yet you’re making it seem like I’m making excuses whereas you already know what the problem is… Which you don’t, because neither has been proven true or false yet. A hypothesis isn’t an excuse. You seem to be confusing my intent, or the lack thereof… or you’ve been attending Chuckula’s sermons.

            • Waco
            • 7 years ago

            This. Trying to find the cause of a problem is not finding “excuses” — it’s finding the problem.

            If a simple fix makes the AMD card faster why do people insist on hating on AMD anyway? I’ve got no great love for either company but I don’t get the animosity people show towards either.

            • Bensam123
            • 7 years ago

            People like having an excuse to bag on something and tell other people how wrong they are, regardless of context or putting it into real world terms. There isn’t really a whole lot wrong with AMD, even their processors at their price points.

            A lot of people are seemingly becoming highly polarized for some reason. The Intel/AMD scenario being one of them. I’m sure you’ve seen the posts on the AMD financial front news page articles.

            • Arag0n
            • 7 years ago

            I’m not saying my definition of the problem is the only true explanation. I just mean that you are trying to find an explanation that suits your ideals and absolves AMD of fault. It may be powertune, it may not, but that doesn’t make AMD any less at fault. Could NVIDIA be cheating the boards to draw more power than it should? Could NVIDIA be underusing the board’s power source? Maybe, but if that makes the cards more stable and reliable, then it’s a good solution.

            But you know, try applying your scientific method to this point: why does the same card have different latency spikes depending on the OS? If both OSes mimicked each other’s behaviour you could have a point, but the fact is that they do not, and the behaviour under W8 and W7 is inconsistent, meaning there are driver differences that make the cards unpredictable. Given that the same game can have different latency behavior under a different OS, with a slightly different driver model, that should be enough to tell you the issue is driver related and not powertune.

            • Bensam123
            • 7 years ago

            I think you’ve been attending the church of Chuckula.

            Here check out this helpful link I posted in a response to XTF explaining powertune:

            [url<]http://www.legitreviews.com/article/1488/4/[/url<]

            Do you notice the trend when powertune reaches its maximum threshold? It causes the card to cut back, so it stutters. I personally own a 7870 and I can tell you, from my own experience, that this is what happens. Increasing the threshold to +20% helps fluidity in a lot of games, GW2 especially. You can also see the difference in power consumption when monitoring it with GPUZ, as I mentioned.

            It’s important to note that the power ‘load’ numbers TR gives are the same as simply giving average FPS for games: one number standing in for an entire distribution (although TR doesn’t state if it’s the average or the peak). GPU power consumption isn’t static when the card is operating in 3D mode, at max clocks, or even at 100% utilization. It varies entirely based on workload, just like your processor. Yet we can’t see what sort of distribution Nvidia or AMD cards have, so we can’t tell if there is a correlation between power draw, power peaks, and FPS. The two graphs could even be overlaid, which would be insanely helpful both for diagnosing this situation and for seeing how the two cards compare in peak situations.

            There were really two different points to my main post. One was that powertune may be responsible for what is happening to AMD cards (it doesn’t matter whose fault it is; we’re looking for answers to why AMD cards are oscillating). Point two was that Nvidia doesn’t exhibit this behavior, so they may be overriding the PCIE power envelope, which AMD is respecting, or they never bump into a situation where the card has to throttle back (which is unlikely, as both Nvidia and AMD cards draw a lot of power relative to the PCIE standard).

            It’s entirely possible for a card to have the same GPU utilization and use more power. So in two different scenarios, an AMD card could be operating at 100% power usage and either run into the power envelope limitations or not. Before the 6xxx and 7xxx AMD series, both Nvidia and AMD simply overrode the PCIE spec… That’s why you have two six-pins going into most cards: they suck the extra juice from the PSU instead of from the slot. This particular generation may end up drawing more juice than the last one, or it’s simply stricter about actually adhering to the PCIE spec. This could be tested by comparing a 6970 or 6950 to a 7950 or 7970.

            Either way, +20% alleviates this bottleneck (which relates to point one), subjectively, from my own personal testing. Unfortunately I’m not TR, and a subjective test isn’t the same as TR going through and looking at power consumption in depth. It’s entirely possible I could be wrong and this is a placebo, but a lot of other AMD owners are saying the same thing. Sure, they could also test the 12.1 drivers against the 12.11 drivers and see if it’s a driver issue.
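
            To make the “overlay the two graphs” idea concrete, here is a rough sketch of plotting a FRAPS frametimes dump and a GPU-Z power log on a shared time axis with two y-axes. The file names, column layout, the once-per-second sampling interval, and the assumption that both logs start at the same moment are illustrative only; in practice the timestamps would need to be aligned by hand.

            ```python
            # Rough sketch: overlay frame times (FRAPS frametimes CSV) and GPU power
            # draw (GPU-Z sensor log) on one chart. File names, column layout, sampling
            # interval, and the shared starting point are illustrative assumptions.
            import csv
            import matplotlib.pyplot as plt

            def read_frame_times(path):
                """FRAPS frametimes: second column is the cumulative timestamp in ms."""
                stamps = []
                with open(path, newline="") as f:
                    reader = csv.reader(f)
                    next(reader)                        # skip header row
                    for row in reader:
                        stamps.append(float(row[1]))
                frame_ms = [b - a for a, b in zip(stamps, stamps[1:])]
                return stamps[1:], frame_ms

            def read_power(path, column="Power Consumption (W)", interval_ms=1000.0):
                """GPU-Z sensor log, assumed to be sampled once per second."""
                watts = []
                with open(path, newline="") as f:
                    for row in csv.DictReader(f):
                        try:
                            watts.append(float(row[column].strip()))
                        except (KeyError, TypeError, ValueError):
                            pass
                return [i * interval_ms for i in range(len(watts))], watts

            t_frames, frame_ms = read_frame_times("gw2_frametimes.csv")    # hypothetical file
            t_power, power_w = read_power("gpu-z_sensor_log.txt")          # hypothetical file

            fig, ax1 = plt.subplots()
            ax1.plot(t_frames, frame_ms, linewidth=0.5, color="tab:blue")
            ax1.set_xlabel("benchmark time (ms)")
            ax1.set_ylabel("frame time (ms)", color="tab:blue")
            ax2 = ax1.twinx()                           # second y-axis for board power
            ax2.plot(t_power, power_w, color="tab:red")
            ax2.set_ylabel("board power (W)", color="tab:red")
            plt.title("Frame times vs. GPU power draw")
            plt.show()
            ```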

            • Arag0n
            • 7 years ago

            As I said, I’m not denying that the problem can be powertune related. I’m just saying that if previous cards and NVIDIA can avoid the issue, it means there is a solution for it. Maybe it’s a driver issue, maybe it’s a firmware issue, or maybe it’s a design issue. Anyway, it’s an AMD mistake that needs to be fixed, and I’m sure they will eventually fix it. You can’t expect people to think that NVIDIA is being unfair because AMD is the one sticking to the power spec. I agree with everything you’re saying; I just think the powertune issues can be solved via drivers, and consequently it’s a driver issue.

            • Bensam123
            • 7 years ago

            Or, if you read through my post, Nvidia may simply be overriding the PCIE power envelope, which would be a no-no if AMD cards are respecting it. Both manufacturers are supposed to abide by the TDP limits of the PCIE spec.

            If AMD is and Nvidia isn’t and +20% removes that limitation, it turns into an entirely different ball game as you’re actually comparing apples to apples.

            So, now powertune is a driver issue too? XD

            Letting users override the PCIE spec isn’t the same as doing it automatically.

            • MadManOriginal
            • 7 years ago

            It is highly unlikely that nVidia is overriding the PCIe power spec, for a number of reasons. But all you have to do to confirm this is look at actual card DC power draw (techpowerup.com measures this) to see that the power inputs available on cards are used quite conservatively. i.e.: cards with one PCIe 6-pin in addition to the slot (good for 150W total) draw ~100W, those with two PCIe 6-pins (good for 225W total) draw ~150W, and so on.
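
            The arithmetic behind that generalization is simple: 75 W from the x16 slot, plus 75 W per 6-pin and 150 W per 8-pin connector. A tiny sketch of the budget check, with made-up example draws purely for illustration:

            ```python
            # Tiny sketch of the PCIe power-budget arithmetic described above:
            # 75 W from the x16 slot, 75 W per 6-pin, 150 W per 8-pin connector.
            # The example draw figures are illustrative, not measurements.
            SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

            def budget_w(six_pins=0, eight_pins=0):
                """Total board power allowed by the slot plus the connectors."""
                return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

            cards = [
                # (label, number of 6-pins, number of 8-pins, example draw in W)
                ("slot + one 6-pin", 1, 0, 100),
                ("slot + two 6-pins", 2, 0, 150),
                ("slot + 6-pin + 8-pin", 1, 1, 210),
            ]

            for label, six, eight, draw in cards:
                limit = budget_w(six, eight)
                verdict = "within spec" if draw <= limit else "EXCEEDS spec"
                print(f"{label}: {draw} W of {limit} W available -- {verdict}")
            ```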

            • Bensam123
            • 7 years ago

            You’re estimating how many watts a card draws based on the assumption that all cards draw the same amount of wattage through 0, 1, or 2 six-pin power connectors? That wouldn’t even be measuring; that would be ridiculous guesstimation.

            TR provides load numbers as well, but that’s just one number, and it doesn’t say whether it’s the average or the peak. You have no idea what’s happening inside that distribution. TR also only measures at the socket and doesn’t use something like GPUZ to get actual data from the card.

            Looking at Techpowerup from a random article, they don’t appear any better:

            [url<]http://www.techpowerup.com/reviews/Club_3D/HD_7870_jokerCard_Tahiti_LE/26.html[/url<]

            • MadManOriginal
            • 7 years ago

            No, of course I don’t assume that # of power pins = power draw; that would be the backwards way of doing it. I scanned over some power draw numbers and then generalized from the data, which is the correct, evidence-based way of doing it. If you look in detail at the individual cards and allow for some variation, you will find it’s a reasonable generalization. Even if you don’t like the generalization, based on some of the ‘average’ and ‘peak’ numbers from techpowerup, which come from running actual games, no cards come close to exceeding the PCIe power spec for their provided total power inputs. The ‘maximum’ numbers (which use artificial steady-state loads like Furmark) come quite close in some cases but don’t exceed them either.

            [url=http://www.keithley.com/products/data/datalogger/?mn=2700<]Here's the test equipment techpowerup uses[/url<], it’s not some cheap $15 multimeter, and I don’t doubt they are getting very accurate power numbers.

            However, [b<]you[/b<] assume that nVidia is exceeding the power envelope of the PCIe spec with no data to back it up, and when there is high-quality data refuting your assumption. In fact, nVidia has had power draw limiters for a number of generations, specifically because of ‘power virus’ programs, and AMD does as well. You’re grasping at straws in order to figure out why AMD’s frame times are worse, and it’s ridiculous to assume, without evidence, that nVidia would blatantly exceed standard specifications when it could be, and has been, easily tested.

            If you’re just saying that AMD’s GPUs are too aggressive in their power states or that PowerTune makes them ‘act funny’, that’s a very real shortcoming of AMD’s GPUs or drivers, not an unfair advantage for nVidia.

            *To be clear, I think it would be worth testing with Powertune turned up to 20%, in case AMD’s power management is dodgy and causes these issues. I just don’t think that nVidia is cheating the PCIe power specs.

            • Bensam123
            • 7 years ago

            …dude, correlation doesn’t imply causation. Even if you can generalize based on how many six-pins a GPU card has, that doesn’t hold true at all once you start to look at things below a certain threshold. Conversely, GPU manufacturers usually add more six-pins when the card requires more power… although that hasn’t always been true, and some card manufacturers even remove power connectors when they feel they’re unnecessary. So the amount of variance in there doesn’t make that a reliable estimate at all, completely putting aside different GPUs having completely different power draw characteristics.

            I didn’t claim that Nvidia was exceeding the power envelope, I said they might be… just as I said powertune may be causing these microstutters. I asked for more testing on both subjects; that’s what my original post was about. I don’t know how a likely scenario, one even you conceded is worth testing, amounts to ‘grasping at straws’.

            I’m rather curious how techpowerup measured the PCIE slot power… It may even be possible that not all of the power comes through the six-pins, and that AMD is regulating PCIE slot power whereas Nvidia isn’t… As silly as that sounds, that’s why thorough testing is done. Although it’s not possible to differentiate sources in GPUZ.

            Graphing power consumption along with frame rates is still a good idea.

            • BestJinjo
            • 7 years ago

            If there is a problem with memory management or buffering, why wasn’t it there in, say, the Skyrim testing or the overall results of the HD7970GHz review in June? The 7970GHz beat the GTX680 in smoothness in the 99th-percentile frame time testing, using the exact same methodology.

            [url<]https://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/11[/url<]

            If AMD’s drivers always had this problem, shouldn’t the HD7970GHz have easily lost to the GTX680 in that review? If you look, even the HD7970 tied the GTX670 AMP! edition. Surely if an HD7950 at 950MHz lost to the GTX660Ti in smoothness, the GTX670 AMP! should have had no problem beating a 925MHz HD7970? The GTX670 AMP! is actually nearly as fast as a stock 680. It just seems strange that in the summer smoothness wasn’t a factor and now it is.

          • XTF
          • 7 years ago

          PowerTune isn’t (just) about PCI-E power specs, is it?

            • Bensam123
            • 7 years ago

            That’s its main purpose. Here’s a link to an article explaining some of its attributes:

            [url<]http://www.legitreviews.com/article/1488/4/[/url<]

            Nvidia has their own version, I just don’t remember what it’s called, and there is no way to actually adjust it. Whereas with powertune you can adjust it in the CCC.

            • XTF
            • 7 years ago

            Where exactly does it state Powertune’s main purpose? AFAIK it’s to keep the chip within its own limits / TDP.

            Can Powertune based throttling be easily detected and logged?

            • Bensam123
            • 7 years ago

            Click the link. Yes, powertune’s main purpose is to keep the card inside the PCIE power envelope (which is what I said in the post above yours).

            I don’t know about logging the actions of powertune. That’s why I said it would be a good idea to graph power consumption along with frame rates. A correlation could then be formed between the two and we could see if powertune may be causing the stutters. +20% is a very easy way to see if powertune is causing it in the first place. You may get results without even needing to look at power consumption.

            • XTF
            • 7 years ago

            I have clicked the link, read the page and still couldn’t find where it makes that claim.

    • cygnus1
    • 7 years ago

    Please allow me to summarize the story (at least how I would have written it):

    Hey doubters, go F yourself. Have a nice day.

      • willmore
      • 7 years ago

      Oh, yes, because it would be better if we didn’t question things and just did as we were told. </sarcasm>

      TR is a great news site not just because they do good work, but because they have a large, involved fan base and because they take feedback. We wouldn’t have this article without those last two, and the article wouldn’t have been worth reading if not for the former. So, by all means, keep questioning, doubting, and challenging what you read here. We will all be better for it. But do keep it civil, please.

    • SSJGoku
    • 7 years ago

    No more micro-stuttering on the 7990 with Dynamic V-sync enabled, according to tomshardware.

    link [url<]http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-11.html[/url<]

      • Pantsu
      • 7 years ago

      That’s only useful if you hit the frame limit constantly. The moment performance drops below it, vsync turns off and the stuttering resumes. Neither Nvidia nor AMD has yet developed a proper adaptive vsync that balances frame output regardless of performance.

        • Arclight
        • 7 years ago

        Hold my beer, I developed the proper solution: playing every game at 640×480.

    • lilbuddhaman
    • 7 years ago

    I refuse to believe these results, and I will not be upgrading (my OS).

    I WILL be giving Nvidia a chance next purchase after 3 generations of Ati though…

    • StuG
    • 7 years ago

    Very nicely done. You took what your readers had to say and followed up on it. That’s part of the reason I always come back to TR for the final say on things. I am a bit surprised by the results; the HD7950 is arguably performing worse in some regards than it did with earlier drivers… but that is all part of the value proposition. I actually am going Green Team for the first time after selling my HD7970; my GTX680 is in the mail. Reasons like this, among others, are what convinced me to make the switch.

    Thanks again TR!

    • brucethemoose
    • 7 years ago

    This is why I love TR. Kudos to y’all; good, persistent tech journalism is hard to find these days.

    The frame time graphs in the two nominally identical Skyrim tests are really interesting… I think y’all underestimated the difference between past and present.

    [url<]https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/7[/url<]
    [url<]https://techreport.com/review/24022/does-the-radeon-hd-7950-stumble-in-windows-8/9[/url<]

    First, look at the frame time graphs for the 7950 in both reviews. For whatever reason, the 7950 has regressed more than “a little”. The frame times aren’t nearly as tightly packed, and it’s hitting 20 ms a whole lot more than it used to. According to the other graph, it’s hitting 16.7 ms 50% more often in Win7 than it used to.

    Now switch over to the 660 Ti tab in the old review. See how much tighter the 660 Ti is? See how closely it resembles the 670 now?

    So, as far as Skyrim goes, Nvidia’s drivers have gotten significantly better, while AMD’s have gotten slightly worse. All this happens at high AA/2560x1440, which should give the 7950 an advantage. Pity.
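
    For anyone who wants to poke at numbers like these themselves, here is a minimal sketch of the two latency metrics being discussed (99th-percentile frame time and time spent beyond 16.7 ms), computed from a FRAPS-style frametimes dump. The filename and the assumption that the second column holds cumulative timestamps in milliseconds are illustrative.

    ```python
    # Minimal sketch of TR-style latency metrics from a FRAPS "frametimes" CSV
    # (assumed columns: frame number, cumulative time in ms). Filename is hypothetical.
    import csv

    def load_frame_times(path):
        """Return per-frame times (ms) from a cumulative-timestamp log."""
        stamps = []
        with open(path, newline="") as f:
            reader = csv.reader(f)
            next(reader)                      # skip the header row
            for row in reader:
                stamps.append(float(row[1]))  # cumulative ms since start
        # Frame time = difference between consecutive timestamps.
        return [b - a for a, b in zip(stamps, stamps[1:])]

    def percentile_99(frame_times):
        ordered = sorted(frame_times)
        return ordered[int(0.99 * (len(ordered) - 1))]

    def time_beyond(frame_times, threshold_ms=16.7):
        """Total time (ms) spent working on frames beyond the threshold."""
        return sum(ft - threshold_ms for ft in frame_times if ft > threshold_ms)

    if __name__ == "__main__":
        times = load_frame_times("skyrim_frametimes.csv")   # hypothetical file
        print("avg FPS:             %.1f" % (1000.0 / (sum(times) / len(times))))
        print("99th pct frame time: %.1f ms" % percentile_99(times))
        print("time beyond 16.7 ms: %.0f ms" % time_beyond(times))
    ```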

    • b_naresh
    • 7 years ago

    Nice article as usual. Typo: you’ve mentioned Max Payne instead of Agent 47 on page 6 (Hitman: Absolution).

      • drfish
      • 7 years ago

      Pssst… It was a joke. 😉

        • Scrotos
        • 7 years ago

        Doh! Missed this before hitting my comment. Yeah, made me do a double-take but I didn’t think it was on porpoise.

          • superjawes
          • 7 years ago

          You can always audit your cement.

        • b_naresh
        • 7 years ago

        Oops…I get it now! 🙂

    • BIF
    • 7 years ago

    Thank you for raising the red flag, especially for people like me who are preparing for a new build WITH Windows 8.

    I might still buy a 7950, but I’m happy to know what might be in store for me.

    More visibility may get the card makers to apply pressure to AMD. Sure, it’s probably a driver issue. And it’ll probably get fixed. Unless AMD files for bankruptcy first.

    But I’d rather know the risks. Always.

    • Srsly_Bro
    • 7 years ago

    And Seventhly, Windows 8 stumbles with the 7950. I don’t even need to write a fancy pants article to know that!

      • sweatshopking
      • 7 years ago

      WINDOWS 8 [b<] NEVER [/b<] STUMBLES!!!!! HAVEN'T I TAUGHT YOU [b<] ANYTHING!?!?!? [/b<]

        • Srsly_Bro
        • 7 years ago

        im unlurnible

      • HisDivineOrder
      • 7 years ago

      A Windows never stumbles. It walks in a manner exactly as it means to. Even if that manner is trippy and potentially dangerous.

      It totally MEANT to nearly fall flat on its face.

    • Novuake
    • 7 years ago

    I have that exact card and I am happy… If you start cranking up AA at 1080p, a GTX660Ti’s performance will drop like a stone thanks to its smaller memory bus. The GTX660Ti won’t do multiple HD monitors so well either… Thankfully there is more than just pure FPS and your percentile chart to consider. But still a very enlightening article. Now I am considering Win8! Curses… Haha…

      • erwendigo
      • 7 years ago

      “If you start cranking AA in 1080p a GTX660ti performance will drop like a stone thanks to its smaller memory bus.”

      Yeah, like the results of this review. You do realize these tests were run at 1080p, with very high quality settings, AND AA?

      If it makes you happy, you can hang on to the “GTX660ti won’t do multiple HD monitors so well either…” part, but your first sentence is false.

      Because the card whose performance drops like a stone at 1080p with AA and all the rest is the HD7950.

        • mtcn77
        • 7 years ago

        This is actually quite false. The 7950 has three times the “blend” performance of the 660Ti and of the rest of the Kepler GeForce series. Its memory subsystem is quite good, which is why AMD hardware has “supersampling” support whereas Nvidia’s does not.

    • Derfer
    • 7 years ago

    I’m not surprised the results were more or less the same. My friend had been on AMD since the HD3870 at my recommendation. I always thought they were a good bang for the buck and that he wouldn’t really care about the feature advantages of nvidia. For him (and me) the 7950 was the last straw. Critical driver bugs that seemed unresolvable and all around bad performance in the games he played.

    When he switched to the 670 it wasn’t just the performance, low noise, and features that he praised, he remarked that for the first time since the HD3870 he didn’t have micro stutter (and this is across many system configs.) It would seem AMD has been putting out hollow frame rates for quite some time now. I’ve officially blacklisted their cards from my client builds and can’t say I’m missing these issues. I hope they keep selling well to keep nvidia prices down, but otherwise I’ve slowly grown to detest their products.

      • odizzido
      • 7 years ago

      That’s funny, because when I switched to AMD all of the driver issues I had with Nvidia weren’t there.

      The mouse cursor bug in SC2 is(was?) pretty annoying, though that was the only issue I had. I wonder if they ever fixed that?

      But yeah, I play a lot of games from ’97 to the early 2000s, so I am an atypical user.

      • brucethemoose
      • 7 years ago

      Congratulations, you’ve become an Nvidia fanboy!

      Unconditionally concluding that AMD < Nvidia isn’t a good idea. I agree that, as a current AMD user, Nvidia’s driver improvements give them the upper hand now, even with the OC potential of the 7950. But they trade blows all the time.

      I can almost guarantee that AMD will best Nvidia’s frame times with the 8000 series, though we will undoubtedly get driver glitches. Nvidia will come back and outdo AMD’s frame times with Kepler V2, and they’ll have their own wonderful set of driver bugs.

        • Derfer
        • 7 years ago

        You accuse me of fanboyism and then claim AMD will miraculously fix this multi-generational issue with as-yet-unreleased cards? I would question why you think that if I were you. I don’t believe in being a “fan” of a brand. I go with experience and data I deem trustworthy, like what’s being presented here. It doesn’t take a polarized view to conclude AMD is putting out duds.

          • Bensam123
          • 7 years ago

          I don’t agree with the way bruce worded it as it seems to be the opposite end of things, but you’re simply concluding that Nvidia is the best no matter what and you’re letting your emotions impact your ‘unbiased’ view of AMD. Just as he doesn’t know how the 8000 series fares, neither do you.

          But from my perspective I’ve had none of the issues you’ve mentioned which makes your conclusion about AMD seem rather flawed, especially when you’re attempting to sway everyone else by it too. Read my post below for my take on them.

          I’ve owned Radeon series cards since the 8500 and I’ve played on my friends computers as well and they’ve always been Nvidia fans.

      • spuppy
      • 7 years ago

      “Hollow frame rates” that’s a good way to put it actually

      • Bensam123
      • 7 years ago

      Placebo? I don’t know where low noise comes from. That highly depends upon which card you buy, for both Nvidia and AMD. The only microstuttering I’ve actually witnessed with AMD cards (having used and compared them to Geforces my friends have used) has been with the latest generation and that’s simply cleared up by moving the powertune option to +20%. If you’re talking about Sli and Crossfire that changes the picture for both Nvidia and AMD though.

      What critical bugs do you speak of, too? AMD cards haven’t had any real show-stopping bugs since pre-Catalyst days. The exception to this being *nix, but I’m sure your friend isn’t running *nix.

      I’m curious what feature advantages you’re talking about as well. AMD actually one-upped Nvidia this time around with ZeroCore. So from a feature perspective, AMD is actually ahead.

      • HisDivineOrder
      • 7 years ago

      Traditionally, across generations of old, I noticed ATI cards were never quite as “smooth” as nVidia cards. Back in the day, I didn’t value “smooth” as much as I do now (probably because we didn’t have cards that overwhelmed the games being sold like we do now), but it was pretty consistent across generations. Every single time I tried an ATI card, I regretted it because of the sense of a “smoother” framerate that nVidia had.

      But all we ever had to measure games with was FPS, so that always fell by the wayside. I suspect this is just ATI’s bad driver coding finally being measured as inferior in a way it always was, but used to get away with invisibly. They snuck by this issue for years, and now it’s suddenly been brought into the light of day.

      Hence, AMD going on high alert. The secret is out. The Emperor is in fact quite naked. So much for performance increases in their drivers…

        • BestJinjo
        • 7 years ago

        Your post is so biased and generalizes to such an extreme that it’s not even funny. “ATI cards were never quite as smooth as NV cards”? So far we can only conclude this is true for this set of games, tested between the HD7950 and a GTX660Ti OC version. This was a great article.

        TR’s exact testing showed that the HD7970GHz was in fact smoother in games than the GTX680 was:
        [url<]https://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/11[/url<]

        Let’s not start overreaching and generalizing left, right, and center based on HD7950 vs. GTX660Ti and extrapolating it to all past AMD/ATI generations of cards, as if this were some conspiracy ATI/AMD had been hiding for 10+ years. You realize many of us have played with NV and AMD/ATI cards for years and didn’t have the stuttering issues the 7950 showed in this review? You think people would keep buying AMD/ATI cards if all their gameplay was jerky after they had previously owned an NV card? No, they’d just stop buying AMD/ATI cards in general until the issues were fixed. Your entire post even contradicts TR’s own HD7970GHz testing, where it was in fact smoother in actual gameplay, in 99th-percentile frame times, compared to the 680.

    • wizpig64
    • 7 years ago

    Great follow up, looking forward to AMD’s response. Could the frame-time issue be unique to the Vapor-X rather than all 7950s?

      • Damage
      • 7 years ago

      No, I did some testing with another 7950 to be sure. Same results on a reference card.

        • Damage
        • 7 years ago

        Err, sorry, it was an MSI card without Boost that I tested, just in case Boost was the problem.

    • sweatshopking
    • 7 years ago

    Great SECOND article on this subject. This kind of work is why people come (plus my tight abs)

      • gigafinger
      • 7 years ago

      I may have to change my vote as I misread “abs”

        • sweatshopking
        • 7 years ago

        what did you misread it as?

          • Srsly_Bro
          • 7 years ago

          I’m here for the abs, and maybe the misread abs.

    • UberGerbil
    • 7 years ago

    Good job (and thanks for) following up on exactly the question lots of people were asking.

      • papou
      • 7 years ago

      Hi

      Regarding latency and stuttering, the most obvious location for a slowdown would be in the software driver (especially in areas having to do with texture loading, garbage collection, the use of fixed- vs. floating-point arithmetic, or an optional post-processing quality pass that gets run even though it shouldn’t be).
      I believe that in FRAPS one can get a report on the resources being used, or set the maximum amount of memory used at any time.

      I am under the impression that the stuttering doesn’t happen without the high-res texture pack?
      (Double negative!)

      Yet it may be interesting to investigate hardware throttling at
      – graphics card level (power or heat)
      – bus level (shared interrupt)
      – CPU level (HT enabling, dynamic speed setting, core parking and the like).

      I have dabbled in the past with DPC Latency Checker, disabling CPU SpeedStep, changing the priority of system processes, and setting CPU core affinity to solve streaming and game stuttering (a rough sketch of that kind of tweak follows below).

      I am going to try to set up Skyrim again on my 7950 at 2560x1440 with my aging Q6600 @ 3.0GHz.
      I am interested in what you have tried so far.
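
      Since process priority and core affinity come up above, here is a minimal, Windows-oriented sketch of that kind of tweak using the psutil library. The process name is a hypothetical placeholder; this only illustrates the mechanism, not a recommended fix.

      ```python
      # Minimal sketch of the priority/affinity tweaks described above, using the
      # psutil library on Windows. The process name is a hypothetical placeholder.
      import psutil

      GAME_EXE = "TESV.exe"   # hypothetical process name for the game being tested

      for proc in psutil.process_iter(["name"]):
          if proc.info["name"] == GAME_EXE:
              proc.cpu_affinity([0, 1, 2, 3])          # pin to the first four cores
              proc.nice(psutil.HIGH_PRIORITY_CLASS)    # raise priority (Windows-only)
              print(f"pinned and boosted PID {proc.pid}")
              break
      else:
          print(f"{GAME_EXE} is not running")
      ```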
