You may be interested to see the side-by-side high-speed videos we captured of our Skyrim test scenario, which illustrate the differences in smoothness between the 7950 and GTX 660 Ti.
Let's pick up where we left off. Last week, we published a rematch of the classic battle between the Radeon HD 7950 and the GeForce GTX 660 Ti. We tested with a host of the latest games, including Borderlands 2, Hitman: Absolution, and Guild Wars 2.
Going into the contest, we thought the Radeon HD 7950 was the heavy favorite for a number of reasons. Among them is the fact that AMD has released new graphics drivers in recent weeks that promise a general performance increase. Also, the firm has worked closely with the developers of many of the holiday season's top games, even bundling several of them in the box with the Radeon HD 7950.
To our surprise, though, the Radeon didn't fare particularly well in our tests. Although it cranked out FPS averages that were competitive with the GeForce, the 7950 produced those frames at an uneven pace—our frame time plots for the Radeon were riddled with latency spikes. As a result, the Radeon's scores were rather poor in our distinctive, latency-oriented performance metrics. Not only that, but our seat-of-the-pants impressions backed that up: play-testing on the GeForce felt noticeably smoother in many cases.
We noted that we'd taken various steps to ensure that the Radeon's disappointing results weren't caused by a hardware or software misconfiguration. Confident that our results were correct, we concluded that the most likely cause of the 7950's poor showing was our transition to all-new games and testing scenarios. Our tentative conclusion: playing the latest games on the Radeon HD 7950 just isn't as good an experience as playing on the GeForce GTX 660 Ti.
However, we did note the possibility that having upgraded our test rigs to Windows 8 might have played a role. I suppose we should have known that many of our readers would call for more testing in order to confirm our results. Quite a few of you asked us to run the same tests in Windows 7, to see if the Radeon's performance problems are the product of the transition to a new operating system. Others asked us to re-run a familiar test scenario from one of our older articles to see whether the Radeon with the latest drivers would experience latency spikes in places where it previously hadn't.
Those seemed like reasonable requests, and we were curious, too, about the cause of the 7950's unexpected troubles. So we fired up our test rig, installed Windows 7 with the same drivers we'd used with Win8, and got to testing.
The results, naturally, are enlightening. Read on to see what we found.
Wait, er, latency what? What about FPS?
If you're confounded by our talk of latency-focused performance metrics, you're probably not alone. Gaming performance has been measured in frames per second since the dawn of time, or at least since the 1990s, when today's PC enthusiast scene was first starting to form. However, FPS averages, as they're typically computed and reported, have some very big flaws. We first explored this problem in the article Inside the second: A new look at game benchmarking. I recommend you read it if you want to understand the issues well.
For those too lazy to do the homework, though, let me extract a quick passage from that article that explains why FPS averages don't tell the whole story of gaming performance.
Of course, there are always debates over benchmarking methods, and the usual average FPS score has come under fire repeatedly over the years for being too broad a measure. We've been persuaded by those arguments, so for quite a while now, we have provided average and low FPS rates from our benchmarking runs and, when possible, graphs of frame rates over time. We think that information gives folks a better sense of gaming performance than just an average FPS number.
Still, even that approach has some obvious weaknesses. We've noticed them at times when results from our FRAPS-based testing didn't seem to square with our seat-of-the-pants experience. The fundamental problem is that, in terms of both computer time and human visual perception, one second is a very long time. Averaging results over a single second can obscure some big and important performance differences between systems.
To illustrate, let's look at an example. It's contrived, but it's based on some real experiences we've had in game testing over the years. The charts below show the times required, in milliseconds, to produce a series of frames over a span of one second on two different video cards.
GPU 1 is obviously the faster solution in most respects. Generally, its frame times are in the teens, and that would usually add up to an average of about 60 FPS. GPU 2 is slower, with frame times consistently around 30 milliseconds.
However, GPU 1 has a problem running this game. Let's say it's a texture upload problem caused by poor memory management in the video drivers, although it could be just about anything, including a hardware issue. The result of the problem is that GPU 1 gets stuck when attempting to render one of the frames—really stuck, to the tune of a nearly half-second delay. If you were playing a game on this card and ran into this issue, it would be a huge show-stopper. If it happened often, the game would be essentially unplayable.
The end result is that GPU 2 does a much better job of providing a consistent illusion of motion during the period of time in question. Yet look at how these two cards fare when we report these results in FPS:
Whoops. In traditional FPS terms, the performance of these two solutions during our span of time is nearly identical. The numbers tell us there's virtually no difference between them. Averaging our results over the span of a second has caused us to absorb and obscure a pretty major flaw in GPU 1's performance.
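The arithmetic behind that "whoops" is easy to reproduce. Here's a minimal sketch in Python; the frame times are made up to match the shape of the contrived example above (mostly ~16 ms frames with one huge stall for GPU 1, steady ~30 ms frames for GPU 2), not actual benchmark data.

```python
# GPU 1: mostly fast ~16 ms frames, but one catastrophic ~465 ms stall
gpu1 = [16.2] * 33 + [465.4]
# GPU 2: slower but perfectly consistent ~30 ms frames
gpu2 = [30.3] * 33

def avg_fps(frame_times_ms):
    """Average FPS over the window: frames rendered / total seconds."""
    return len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

# Both cards land in the low-to-mid 30s FPS, even though GPU 1
# froze for almost half a second mid-run.
print(f"GPU 1: {avg_fps(gpu1):.1f} FPS, worst frame {max(gpu1):.0f} ms")
print(f"GPU 2: {avg_fps(gpu2):.1f} FPS, worst frame {max(gpu2):.0f} ms")
```

The averages come out within about one FPS of each other, while the worst-case frame times differ by more than an order of magnitude. That worst-case number is exactly what the per-second average throws away.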
The bottom line is that producing the highest FPS average doesn't prove very much about the fluidity of in-game animation. In fact, even fancy-looking graphs that show second-by-second FPS averages over time don't tell you very much, because tools like Fraps average the results over one-second intervals. Those plots simply make the same mistake over and over again in sequence.
Once we realized the nature of the problem, we decided to test performance more like game developers do: by focusing on the time in milliseconds required to produce each frame of the animation—on frame latencies, in other words. That way, if a momentary slowdown happens, we'll know about it. This sort of analysis requires new tools and methods, which we've developed and refined over the past year. This article and other recent ones here at TR show those methods in action. We think they provide better insights into gameplay fluidity and real-time graphics performance than a traditional FPS-based approach.
The question of the hour is: Why does the Radeon HD 7950 struggle on this front in the current crop of games? Can switching back to Windows 7 alleviate the problem? Let's have a look.