First up is my favorite game of the year so far, Borderlands 2. The shoot-n-loot formula of this FPS-RPG mash-up is ridiculously addictive, and the second installment in the series has some of the best writing and voice acting around.
As you may know, our game benchmarking methods are different from what you'll find elsewhere, in part because they're based on chunks of gameplay, not just scripted sequences. We're trying something different this time around: embedding videos of typical gameplay sessions in the article. Below is a look at our 90-second path through the "Opportunity" level in Borderlands 2.
As you'll note, this session involves lots of fighting, so it's not exactly repeatable from one test run to the next. However, we took the same path and fought the same basic contingent of foes each time through. The results were pretty consistent from one run to the next, and the final numbers we've reported are the medians from five test runs.
We used the game's highest image quality settings at the 27" Korean monitor resolution of 2560x1440.
Our first result is a simple plot of the time needed to render each frame during one of our test runs. The frame render times are reported in milliseconds, and since a shorter render time means a frame is delivered more quickly, lower numbers are better. Note that, although you may see FPS-over-time plots elsewhere, those usually are based on averaging FPS over successive one-second intervals; as a result, they tend to mask momentary slowdowns almost entirely. Our plots are sourced from the raw frame time data instead.
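To illustrate why the one-second averaging matters, here's a small sketch in Python. The frame times are invented for demonstration, not taken from our benchmark data:

```python
# Hypothetical 60-frame sample, mostly smooth but with one 120 ms hitch.
frame_times_ms = [16.7] * 55 + [120.0] + [16.7] * 4

# A conventional FPS average: frames rendered divided by elapsed seconds.
total_s = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_s
print(round(avg_fps))  # prints 54 -- a healthy-looking average

# The raw frame times tell the real story: one frame took 120 ms,
# a stutter the average smooths away almost entirely.
print(max(frame_times_ms))  # prints 120.0
```

A single 120 ms frame barely dents the average, yet it's a hitch a player would likely feel. That's the slowdown a raw frame-time plot makes visible.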
Right away, this approach gives us some insights. The GTX 660 Ti's frame times tend to be very low, generally under 20 ms and rarely ranging above that mark. By contrast, the Radeon HD 7950's plot is riddled with spikes twice that long or longer.
A traditional FPS average doesn't really capture the difference in how these two cards perform in Borderlands 2. Yes, the Radeon's average is lower, but it's still over the supposedly golden 60-FPS mark. Usually, producing an average that high would be considered quite good, but we felt the difference between the 7950 and the GTX 660 Ti clearly while testing.
We think gamers would be better served by skipping the FPS average and instead taking a latency-focused approach to frame delivery, if they really want to understand gaming performance. One alternative method is to consider the 99th percentile frame time, which is simply the threshold below which 99% of all frames have been generated. In the chart above, the Radeon HD 7950 has delivered 99% of the frames in 31.7 milliseconds or less. That means all but the last one percent of frames were produced at a rate of 30 FPS or better—not too shabby.
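For the curious, computing a 99th percentile frame time is straightforward. This sketch uses a nearest-rank percentile and made-up frame times, not our actual benchmark data:

```python
import math

def percentile_ms(frame_times, pct):
    """Nearest-rank percentile: the smallest frame time such that at
    least pct% of frames rendered in that time or less."""
    s = sorted(frame_times)
    rank = math.ceil(pct / 100.0 * len(s))  # 1-based rank
    return s[rank - 1]

# Hypothetical sample: 95% fast frames, 5% slow ones.
frame_times_ms = [15.0] * 95 + [40.0] * 5
p99 = percentile_ms(frame_times_ms, 99)
print(p99)  # prints 40.0 -- all but the slowest 1% at 25 FPS or better
```

The conversion to an equivalent frame rate is just `1000 / p99`, which is how a 31.7 ms threshold works out to roughly 30 FPS.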
Compared to the GeForce, though, the Radeon isn't doing so well. The GeForce delivers 99% of frames in under 20 milliseconds, which is the equivalent of about 50 FPS. That's why playing the game on the GeForce feels perceptibly smoother. I think these 99th percentile numbers more accurately convey the sense of things one gets from studying those initial frame time plots—and from playing the game on both cards.
Our 99th percentile cutoff has proven to be a pretty good choice for capturing a sense of comparative performance. However, we have to be careful, because it's just one point along a curve. We can plot the entire frame latency curve, using data taken from all five runs, in order to get a better sense of overall performance. Over time, I've grown accustomed to reading these curves, and they're now my favorite way to illustrate gaming performance.
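The curve itself is simple to construct: sort every frame time from all the runs, then pair each one with the fraction of frames rendered at or below it. A sketch with invented numbers:

```python
# Hypothetical pooled frame times: mostly smooth, with a slow tail.
frame_times_ms = sorted([16.0] * 90 + [18.0] * 5 + [25.0, 30.0, 35.0, 40.0, 45.0])
n = len(frame_times_ms)

# Each sorted frame time pairs with the percentage of frames
# delivered at or below that time.
curve = [(100.0 * (i + 1) / n, t) for i, t in enumerate(frame_times_ms)]

print(curve[89])  # prints (90.0, 16.0) -- flat through 90% of frames
print(curve[99])  # prints (100.0, 45.0) -- the tail shoots upward
```

Plotting percentile on the x-axis against frame time on the y-axis yields exactly the shape described above: flat where a card is consistent, rising sharply wherever it struggles with its slowest frames.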
As you can see above, the Radeon's performance is very close to the GeForce's much of the time. The two cards' latency curves are similar up to about 90% of the frames rendered. Once we reach the last 10% or so, though, they begin to diverge, with the Radeon's curve shooting upward sooner, to higher reaches. For some reason, the 7950 struggles to render a portion of the frames as quickly as its GeForce counterpart. We know from the initial frame time plots that those difficult frames are distributed throughout the test run as intermittent and fairly frequent spikes.
Our 99th percentile metric rules out the last one percent of frames, instead focusing on the general latency picture. That's helpful, as we've seen, but we also want to pay attention to the worst delays, the ones that are likely to impact the fluidity of gameplay. After all, a fast computer is supposed to curtail those big slowdowns.
We can measure the "badness" of long frame times by adding up all of the time spent working on frames beyond a given threshold. In this case, we've picked 50 milliseconds as our cutoff. That's equivalent to 20 FPS, and we figure any animation moving slower than 20 FPS will probably be noticeably halting and choppy. Also, 50 ms is equivalent to three vertical refresh intervals on a 60Hz display.
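One plausible way to compute this "badness" measure is to sum, for each frame over the cutoff, only the portion of its render time beyond the threshold. A sketch with hypothetical frame times:

```python
# 50 ms cutoff: equivalent to 20 FPS, or three 16.7 ms refresh
# intervals on a 60Hz display.
THRESHOLD_MS = 50.0

# Hypothetical run: mostly smooth, with three long-latency frames.
frame_times_ms = [16.7] * 100 + [80.0, 65.0, 120.0]

# Sum only the time spent past the threshold on each slow frame.
time_beyond = sum(t - THRESHOLD_MS for t in frame_times_ms if t > THRESHOLD_MS)
print(time_beyond)  # prints 115.0 -- ms spent beyond the 20 FPS mark
```

Counting only the excess, rather than the whole frame time, keeps a frame at 51 ms from weighing nearly as heavily as one at 120 ms.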
These results are somewhat heartening. Although the Radeon does spend twice as long above our threshold as the GeForce, neither card wastes much time at all working on especially long-latency frames. In other words, both cards offer pretty good playability in this test scenario. Subjectively, I prefer the smoother gameplay produced by the GeForce, but the Radeon doesn't struggle too mightily. Still, the gap between them is much larger than the 64-to-72 difference in FPS averages would seem to suggest.