
The Elder Scrolls V: Skyrim
Now for something completely different.

Yep, it's time for some game benchmarking, but not, perhaps, as you know it. We tested performance using Fraps while taking a stroll around the town of Whiterun in Skyrim. We used fairly high image quality settings: basically the "ultra" presets at 1920x1080, but with FXAA instead of MSAA. Our test sessions lasted 90 seconds each, and we repeated them five times per CPU.

The thing is, as we tested, we were recording the time required to produce every single frame of the animation in the game. Our reasoning behind this madness is explained in my article, Inside the second: A new look at game benchmarking. Much of what we said in that article was oriented toward GPU testing, but the same methods of game benchmarking can apply to CPUs, as well. This is our first chance to give those methods a try in the context of a CPU review, so we're excited to see what happens.
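As a rough illustration of how raw frame-time data like this can be handled, here's a minimal Python sketch. It assumes Fraps' benchmark logging has produced a two-column CSV of frame index and cumulative timestamp in milliseconds; the file name is just a placeholder, not an actual log from our runs.

```python
# Minimal sketch: turn a per-frame timestamp log into per-frame render times.
# Assumes a two-column CSV (frame index, cumulative timestamp in ms) with a
# one-line header; the file name below is hypothetical.
import numpy as np

def load_frame_times(path):
    # Keep only the cumulative timestamp column, skipping the header row.
    timestamps = np.loadtxt(path, delimiter=",", skiprows=1, usecols=1)
    # The per-frame time is the gap between successive timestamps.
    return np.diff(timestamps)

frame_times = load_frame_times("skyrim_frametimes_run1.csv")
print(f"{len(frame_times)} frames, mean {frame_times.mean():.1f} ms")
```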

Frame time (ms)    FPS rate
8.3                120
16.7               60
20                 50
25                 40
33.3               30
50                 20

Here's a crack at explaining the reasons behind our new testing methods. The constant stream of images produced by a game engine as you play creates the illusion of motion. We often talk about gaming performance in terms of FPS, or frames per second, but most of the tools that measure gaming performance actually average out frame production over an entire second in order to give you a result. That's not terribly helpful. If you encounter a delay of half a second, or 500 ms, for a single frame surrounded by a stream of lightning-quick 16.7-ms frames, that entire second will still average out to about 31 FPS: one long frame plus roughly 30 quick ones. Most folks will look at that FPS number and think the performance was reasonably acceptable, if not stellar. (Because, hey, a stream of frames at a constant 31 FPS wouldn't be half bad.) They will, of course, be very wrong. Even a shorter interruption of, say, 200 ms while playing a game will feel like an eternity, destroying the illusion of motion and any sense of immersion, and possibly getting your character killed.
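The arithmetic behind that example is easy to check. Here's a tiny sketch, using made-up frame times, that shows how a one-second average hides a single enormous hitch.

```python
# A one-second window containing one 500 ms hitch plus enough 16.7 ms frames
# to fill the rest of the second. The frame times here are illustrative.
import numpy as np

frame_times_ms = np.array([500.0] + [16.7] * 30)   # ~1,001 ms of animation
seconds = frame_times_ms.sum() / 1000.0
avg_fps = len(frame_times_ms) / seconds            # ~31 FPS for the interval
worst_frame_fps = 1000.0 / frame_times_ms.max()    # the hitch itself: 2 FPS

print(f"average: {avg_fps:.0f} FPS, worst frame: {worst_frame_fps:.0f} FPS")
```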

Fortunately, we have the tools to measure and quantify gaming performance in much greater detail, and we can bring those to bear in considering CPU performance. Let's start by looking at plots of the time required to produce individual frames during one of our test runs. (We've used just one run for the visualizations, but the rest of our results take all five runs into account.) Remember, since we're looking at frame times, lower is better in these plots. Also, if you want to convert to the more familiar FPS numbers, you can refer to the table above.

As you can see, the raw data show some clear differences in performance between the CPUs. The faster processors tend to produce more frames, of course. There are spikes in frame times for all of the processors, but the sizes of the spikes tend to be larger in certain cases. Some frames take quite a bit of time to produce, which isn't good. The AMD chips especially seem to struggle during the opening moments of our test run, where we're up by the Jarl's castle, looking out over Whiterun and the mountains beyond.

The traditional FPS average gives us a sense of the performance differences. Obviously, the Core i7-3770K acquits itself well in this test, as do all of the Intel CPUs. The AMD processors are all quite a bit slower. However, even the slowest one averages over 60 FPS. Doesn't that mean all of the processors are more than adequate for this task?

Not necessarily, as those spikes in frame times tend to show.

Another way of thinking about gaming performance is in terms of real-time frame latencies. That is, after all, what smooth animation relies upon. We've borrowed a bit from the transaction latency measurements in the server benchmarking world and suggested that a look at the 99th percentile frame latency might be a good starting point for this approach. This metric offers a single piece of information: 99% of all frames were produced in x milliseconds or less. It's a simple way of summarizing overall frame delivery.
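For the curious, the metric itself is straightforward to compute. Here's a minimal sketch using made-up frame times in place of a real test run.

```python
# Sketch of the 99th-percentile frame latency metric: the time within which
# 99% of all frames in a run were delivered. The data below are toy values.
import numpy as np

def percentile_latency(frame_times_ms, pct=99.0):
    return np.percentile(frame_times_ms, pct)

# Toy data: mostly quick frames with a handful of slow ones mixed in.
rng = np.random.default_rng(0)
frame_times_ms = np.concatenate([rng.normal(17.0, 1.5, 5000),
                                 rng.normal(35.0, 5.0, 50)])
print(f"99th percentile: {percentile_latency(frame_times_ms):.1f} ms")
```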

Here, all of the Intel processors again perform very well. They're cranking out 99% of all frames in the 17-18 ms range, not far from the 16.7-ms frame time that equates to a steady 60 FPS. The 99th percentile frame latencies for the AMD chips are nearly double that.

Then again, this metric only captures a single point in the distribution, the one where 99% of all frames have been produced. We can look at the entire latency picture for each CPU by plotting the latency curve from the 50th percentile up.
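The curve is simply the same percentile calculation swept across the upper half of the distribution. A short sketch, again on toy data rather than our real runs:

```python
# Sketch of the latency curve: frame time at each percentile from the 50th up.
# Toy frame times stand in for a real test run.
import numpy as np

rng = np.random.default_rng(0)
frame_times_ms = np.concatenate([rng.normal(17.0, 1.5, 5000),
                                 rng.normal(35.0, 5.0, 50)])

percentiles = np.arange(50.0, 100.0, 0.5)
curve = np.percentile(frame_times_ms, percentiles)
for p, ms in zip(percentiles[::10], curve[::10]):
    print(f"{p:5.1f}th percentile: {ms:5.1f} ms")
```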

The contest between the Intel processors is incredibly tight. For most intents and purposes, they are all evenly matched.

Things become more interesting when we look at the AMD CPUs. The Phenom II X6 and the FX-8150 are essentially tied in both the average FPS and 99th percentile results. However, a funny thing happens to the FX-8150 while it's rendering the toughest 5% of the frames, on the right edge of the plot: its frame times shoot up above the Phenom II X6's. That outcome is likely the result of a unique characteristic of the Bulldozer architecture: its relatively low per-thread performance in many cases. When this real-time system, the Skyrim game engine, runs into a trouble spot, the FX-8150 doesn't have the per-thread oomph to power through. I'd say the Phenom II X6 is a better Skyrim companion than the FX-8150, as a result. (Although Lydia is still the best.)

We are, of course, splitting hairs a bit here, just because we can. Even frame latencies in the 30-plus millisecond range are relatively decent. One reality check we can give ourselves is to consider the worst-case scenarios, those long-latency frames that are most likely to ruin the sense of smooth motion. We've done that in the past, with GPUs, by looking at the amount of time spent rendering frames beyond a threshold of 50 milliseconds. 50 ms equates to 20 FPS, and we figure if you dip below 20 FPS, most folks are going to notice. However, none of these CPUs deliver frames that slowly. Our next obvious step down is 33.3 milliseconds, or 30Hz. If you have vsync enabled while gaming on a 60Hz monitor, frames that take longer than 33.3 milliseconds won't be shown until two full display refresh cycles have passed.

None of these CPUs spend much time at all working on frames that take longer than our 33.3 ms threshold. However, we can ratchet things down one more time, to 16.7 milliseconds or a constant 60 FPS, and see what happens then.
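To make those "time spent beyond a threshold" figures concrete, here's a minimal sketch of one way to compute them: for each frame slower than the cutoff, count only the portion of its frame time past the cutoff, then sum those excesses over the run. The frame times below are toy values, not measurements from our test systems.

```python
# One way to tally time spent beyond a frame-time threshold: sum only the
# excess over the cutoff for each frame that misses it.
import numpy as np

def time_beyond(frame_times_ms, threshold_ms):
    excess = frame_times_ms - threshold_ms
    return excess[excess > 0].sum()

frame_times_ms = np.array([15.2, 16.9, 41.0, 17.3, 35.5, 16.4])  # toy data
for cutoff in (50.0, 33.3, 16.7):
    print(f"time beyond {cutoff} ms: {time_beyond(frame_times_ms, cutoff):.1f} ms")
```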

If you are looking for glassy smooth animation in Skyrim, any of these Intel CPUs will deliver it. Interestingly enough, the Ivy Bridge chip with its slightly improved per-clock performance has an ever-so-slim lead over even the mighty Core i7-3960X. The AMD processors, meanwhile, spend quite a bit of time working on frames beyond 16.7 ms. They're not poor performers here, but the Intel processors ensure more consistent low-latency frame delivery.