Developed by Crystal Dynamics, this reboot of the famous franchise features a more believable Lara Croft who, as the game progresses, sheds her fear and vulnerability to become a formidable killing machine. I tested Tomb Raider by running around a small mountain area, which is roughly 10% of the way into the single-player campaign.
This is a rather impressive-looking game that's clearly designed to take full advantage of high-end gaming PCs. The Ultra and Ultimate detail presets were too hard on these cards, so I had to settle for the High preset and leave the game's TressFX hair physics disabled. Testing was done at 1080p.
Let's preface the results below with a little primer on our testing methodology. Along with measuring average frames per second, we delve inside the second to look at frame rendering times. Studying the time taken to render each frame gives us a better sense of playability, because it highlights issues like stuttering that can occur—and be felt by the player—within the span of one second. Charting frame times shows these issues clear as day, while charting average frames per second obscures them.
To get a sense of how frame times correspond to FPS rates, check the table on the right.
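For readers who prefer a formula to a table, the relationship is simple: at a perfectly steady frame rate, FPS is just 1000 divided by the frame time in milliseconds. A quick sketch (the function name is ours, purely for illustration):

```python
# Illustrative conversion between frame time (ms) and frames per second,
# assuming a perfectly steady frame rate.
def frame_time_to_fps(ms):
    return 1000.0 / ms

# A few of the landmark thresholds used throughout this review.
for ms in (16.7, 33.3, 50.0):
    print(f"{ms:4.1f} ms -> {frame_time_to_fps(ms):.1f} FPS")
```

So 16.7 ms corresponds to roughly 60 FPS, 33.3 ms to 30 FPS, and 50 ms to 20 FPS.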
We're going to start by charting frame times over the totality of a representative run for each system. (That run is usually the middle one out of the five we ran for each card.) These plots should give us an at-a-glance impression of overall playability, warts and all. You can click the buttons below the graph to compare our protagonist to its different competitors.
Right away, it's clear that the Radeon HD 7790 is much closer to the 7850 1GB than to the 7770, whose plot shows frequent spikes above 30 ms. However, the 7790's plot is still a little higher than that of the 7850 and the more expensive GTX 650 Ti 2GB AMP! Edition, which suggests that it's not quite as fast.
We can slice and dice our raw frame-time data in several ways to show different facets of the performance picture. Let's start with something we're all familiar with: average frames per second. Average FPS is widely used, but it has some serious limitations. Another way to summarize performance is to consider the threshold below which 99% of frames are rendered, which offers a sense of overall frame latency, excluding fringe cases. (The lower the threshold, the more fluid the game.)
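To make the two summary statistics concrete, here's a minimal sketch of how each can be derived from a run's raw frame times. The frame times below are invented sample data, not measurements from this review:

```python
# Sketch: summarizing one run's frame times (in milliseconds) two ways.
# These values are made-up sample data for illustration only.
frame_times = [15.2, 16.8, 14.9, 31.0, 16.1, 17.3, 45.5, 16.0, 15.7, 16.4]

# Average FPS: total frames divided by the total duration of the run,
# not the mean of per-frame FPS values.
avg_fps = len(frame_times) / (sum(frame_times) / 1000.0)

# 99th percentile: the frame time below which 99% of frames complete.
sorted_times = sorted(frame_times)
idx = min(int(len(sorted_times) * 0.99), len(sorted_times) - 1)
p99 = sorted_times[idx]

print(f"Average FPS: {avg_fps:.1f}")
print(f"99th-percentile frame time: {p99:.1f} ms")
```

Note how one slow 45.5 ms frame barely dents the average but dominates the 99th-percentile figure, which is exactly why the latter is the better smoothness indicator.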
The average FPS and 99th-percentile results confirm our appraisal of the frame time plots. However, the performance difference between the 7790 and its faster rivals isn't that big, especially in the 99th-percentile metric, which gives us a better indication of seat-of-the-pants smoothness and playability than average FPS.
Now, the 99th-percentile result only captures a single point along the latency curve, but we can show you that whole curve, as well. With single-GPU configs like these, the right-hand side of the graph—and especially the last 5% or so—is where you'll want to look. That section tends to be where the best and worst solutions diverge.
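The curve itself is nothing exotic: sort the frame times and plot each one against its percentile rank. A quick sketch, again using invented sample data rather than our measurements:

```python
# Sketch: building the full frame-time percentile curve from one run.
# sample_times is illustrative data, not measurements from this review.
sample_times = [16.0, 15.5, 17.2, 16.4, 33.1, 16.8, 15.9, 48.0, 16.2, 16.6]
sorted_times = sorted(sample_times)
n = len(sorted_times)

# One (percentile, frame time) point per frame; slow cards betray
# themselves in the steep tail at the right-hand end of this curve.
curve = [((i + 1) / n * 100, t) for i, t in enumerate(sorted_times)]
for pct, t in curve[-3:]:
    print(f"{pct:5.1f}th percentile: {t:.1f} ms")
```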
Finally, we can rank the cards based on how long they spent working on frames that took longer than a certain number of milliseconds to render. Simply put, this metric is a measure of "badness." It tells us about the scope of delays in frame delivery during the test scenario. Here, you can click the buttons below the graph to switch between different millisecond thresholds.
None of the cards spend much time beyond our most important threshold of "badness" at 50 milliseconds—that means none of them dip below the relatively slow frame production rate of 20 FPS for long. In fact, except for the Radeon HD 7770, none of our cards spend a significant amount of time working on frames that take longer than 33.3 ms to render. That should mean pretty fluid gameplay from each of them.
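The "badness" tally described above can be sketched as follows: for each frame slower than the cutoff, we accumulate only the portion of its render time that exceeds the cutoff. This is our reading of the metric, and the frame times below are invented sample data:

```python
# Sketch of a "time spent beyond threshold" tally: for every frame slower
# than the cutoff, accumulate the excess time past the cutoff.
def time_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Invented sample run, not measurements from this review.
run = [16.0, 35.0, 16.5, 52.0, 17.0, 16.2]
print(time_beyond(run, 33.3))  # time past the 33.3 ms (30 FPS) mark
print(time_beyond(run, 50.0))  # time past the 50 ms (20 FPS) mark
```

A card can post a respectable average FPS and still rack up a large total here, which is why this view catches stutter that the averages hide.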