We tested Battlefield 3 with all of its DX11 goodness cranked up, including the "Ultra" quality settings with both 4X MSAA and the high-quality version of the post-process FXAA. Our test was conducted in the "Kaffarov" level for 60 seconds, starting at the first checkpoint.
Let me apologize in advance for what follows, because it is a bit of a data dump. We're about to do some unusual things with our test results, and we think it's best to show our work first. The plots below come from one of the five test runs we conducted for each card.
Yep, those plots show the time required to produce each and every frame of the test run. Because we're reporting frame times in milliseconds, lower numbers are better. If you're unfamiliar with our strange new testing methods, let me refer you to my article Inside the second: A new look at game benchmarking for an introduction to what we're doing.
The short version is that we've decided traditional FPS averages aren't a very good indicator of the fluidity of animation in real-time graphics. The problem isn't with reporting things as a rate, really. The problem is that nearly every utility averages frame production rates over the course of a full second—and a second is a very long time. For example, a single frame that takes nearly half a second to render could be surrounded by frames that took approximately 16 milliseconds to render—and the average reported over that full second would be 35 FPS, which sounds reasonably good. However, that half-second wait would be very disruptive to the person attempting to play the game.
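If you'd like to see the math behind that example, here's a quick sketch in Python (purely illustrative, not our actual test tooling) showing how a one-second average can bury a single long frame:

```python
# One simulated second of frame times, in milliseconds: a single
# 456 ms hitch surrounded by smooth 16 ms frames.
frame_times_ms = [16.0] * 34 + [456.0]

total_time_s = sum(frame_times_ms) / 1000.0   # 34 * 16 + 456 = 1000 ms
avg_fps = len(frame_times_ms) / total_time_s  # 35 frames in 1 second

print(f"Average: {avg_fps:.0f} FPS")                 # 35 FPS -- sounds playable
print(f"Worst frame: {max(frame_times_ms):.0f} ms")  # 456 ms -- a visible hitch
```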
In order to better understand how well a real-time graphics system works, we need to look closer, to use a higher-resolution timer, if you will. Also, rather than worrying about simple averages, we can consider the more consequential question of frame latencies. What we want is consistent production of frames at low latencies, and there are better ways to quantify that sort of thing. Since we're going to be talking about frame times in milliseconds, here's a handy table of conversions from milliseconds to FPS (the rate is simply 1000 divided by the frame time):

|Frame time|FPS|
|16.7 ms|60 FPS|
|20 ms|50 FPS|
|25 ms|40 FPS|
|33.3 ms|30 FPS|
|50 ms|20 FPS|
|100 ms|10 FPS|
We'll start by reporting the traditional FPS average. As you can see, the Radeon HD 7970 GHz Edition just outperforms the stock GeForce GTX 680 in this metric, although the difference is too small to worry about, really.
If we're thinking in terms of frame latencies, another way to summarize performance is to look at the 99th percentile frame time. That simply means we're reporting the threshold below which 99% of all frames were rendered by each card. We've ruled out that last 1% of outliers, and the resulting number should be a decent indicator of overall frame latency. As you can see, the differences between the top few cards are even smaller by this measure.
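Computing that threshold from a list of per-frame times is simple enough. Here's a minimal Python sketch (illustrative, not our actual analysis scripts):

```python
import numpy as np

def percentile_frame_time(frame_times_ms, pct=99):
    """Return the frame time below which pct% of all frames were rendered."""
    return np.percentile(frame_times_ms, pct)

# Example: a mostly smooth run with spikes confined to the last 1% of frames
times = [16.0] * 990 + [70.0] * 10
print(percentile_frame_time(times))  # ~16.5 ms; the outliers barely register
```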
A funny thing happens, though, to our two legacy cards, the GeForce GTX 570 and the Radeon HD 6970. Although the GTX 570 has a slightly higher FPS average, its 99th percentile frame time is much higher than the 6970's. Why? Well, the 99th percentile is just one point on a curve, so we shouldn't make too much of it without putting it into context. We can plot the tail end of the latency curve for each card to get a broader picture.
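For anyone who wants to build that sort of curve from raw frame-time data, the recipe is just to sort the times and plot the slowest few percent. A rough Python sketch, with made-up sample data standing in for real cards:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_latency_tail(frame_times_ms, label, tail_pct=5):
    """Plot frame time versus percentile for the slowest tail_pct% of frames."""
    sorted_times = np.sort(frame_times_ms)
    pctile = np.arange(1, len(sorted_times) + 1) / len(sorted_times) * 100
    mask = pctile >= 100 - tail_pct            # keep only the slowest few percent
    plt.plot(pctile[mask], sorted_times[mask], label=label)

# Synthetic stand-ins: a consistent card and a spiky one
rng = np.random.default_rng(0)
smooth = rng.normal(30, 2, 3600)               # steady ~30 ms frames
spiky = np.where(rng.random(3600) < 0.03,      # ~3% long-latency spikes
                 rng.normal(90, 10, 3600),
                 rng.normal(28, 2, 3600))

plot_latency_tail(smooth, "smooth card")
plot_latency_tail(spiky, "spiky card")
plt.xlabel("Percentile")
plt.ylabel("Frame time (ms)")
plt.legend()
plt.show()
```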
The GeForce GTX 570 is faster than the Radeon HD 6970 most of the time, until we get to the last 3% or so of the frames being produced. Then the GTX 570 stumbles, as frame times shoot toward the 100-millisecond mark. Scroll up to the frame time plots above, and you can see the problem. The GTX 570's plot is spiky, with a number of long-latency frames interspersed throughout the test run. This is a familiar problem with older Nvidia GPUs in BF3, though it appears to have been resolved in the GK104-based cards.
In fact, all of the newer cards are nearly ideal performers, with nice, straight lines in the high 20- and low 30-millisecond range. They only curve up modestly when we reach the last one or two percentage points.
You'll recall that our 99th percentile frame time measurement ruled out the last 1% of long-latency frames. That's useful to do, but since this is a real-time application, we don't want to ignore those long-latency frames entirely. In fact, we want to get a sense of how bad it really is for each card. To do so, we've concocted another measurement that looks at the amount of time spent working on frames for which we've already waited 50 milliseconds. We've chosen 50 ms as a threshold because it corresponds to 20 FPS, and somewhere around that mark, the illusion of motion seems to be threatened for most people. Also, 50 ms corresponds to three full vertical refresh intervals on a 60Hz display. If you're gaming with vsync enabled, any time spent beyond 50 ms is time spent at 15 FPS or less, given vsync's quantization effect.
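Expressed as code, the measurement simply sums the portion of each frame's time beyond the threshold. A minimal Python sketch (again illustrative, not our actual tooling):

```python
def time_spent_beyond(frame_times_ms, threshold_ms=50.0):
    """Total time (ms) spent working on frames past the threshold.

    A 70 ms frame contributes 20 ms: the time we kept waiting
    after the first 50 ms had already elapsed.
    """
    return sum(max(0.0, t - threshold_ms) for t in frame_times_ms)

# Example: a roughly 60-second run with a handful of 120 ms hitches
times = [30.0] * 1990 + [120.0] * 10
print(time_spent_beyond(times))  # 700.0 ms spent beyond the 50 ms mark
```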
Predictably, only the two legacy cards spend any real time beyond 50 ms, and only the GeForce GTX 570 really has a problem. Its issue is real but not devastating; it spends 1.4 seconds of our 60-second test session working on long-latency frames, and our play-testing sessions on this card felt sluggish and clumsy. Remember, though: when we started, the GTX 570 had a higher FPS average than the Radeon HD 6970. That FPS number just turns out to be pretty meaningless.