I told myself I'd try to keep pace with any developments across the web related to our frame-latency-based game benchmarking methods, but I've once again fallen behind. That's good, in a way, because there's lots going on. Let me try to catch you up on the latest with a series of hard-hitting bullet points, not necessarily in the order of importance.
- First, I totally missed this when it happened, but not long after we posted our high-speed video of Skyrim, the folks at NordicHardware included a frame latency analysis with slow-mo video in one of their video card reviews. They tested the GTX 680 vs. the Radeon HD 7970 GHz Edition, and they saw the same issues we did with the Radeon HD 7950 in Borderlands 2 and Skyrim. I'm not sure, but I think I may have talked with one of these guys at a press event a while back. Infected them successfully, it seems.
Also, a word on words. Although I'm reading a Google translation, I can see that they used the word "microstuttering" to describe the frame latency issues on the Radeon. For what it's worth, I prefer to reserve the term "microstuttering" for the peculiar sort of problem often encountered in multi-GPU setups where frame times oscillate in a tight, alternating pattern. That, to me, is "jitter," too. Intermittent latency spikes are problematic, of course, but aren't necessarily microstuttering. I expect to fail in enforcing this preference anywhere beyond TR, of course.
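To make that terminology distinction concrete, here's a minimal sketch of how the two patterns differ in the data. The function name, thresholds, and sample frame times are all my own illustrative assumptions, not anyone's published detection method: multi-GPU microstuttering shows up as frame times that keep flipping between short and long, while a latency spike is a lone outlier in an otherwise steady stream.

```python
# Hypothetical sketch: distinguishing multi-GPU-style microstuttering
# (tight, alternating frame times) from an intermittent latency spike.
# Frame times are in milliseconds; the thresholds are illustrative.

def looks_like_microstutter(frame_times_ms, min_swing_ms=4.0):
    """True if frame times oscillate in an alternating short/long pattern."""
    if len(frame_times_ms) < 4:
        return False
    deltas = [b - a for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    # Microstutter: consecutive deltas keep flipping sign, with real amplitude.
    flips = sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)
    swing = sum(abs(d) for d in deltas) / len(deltas)
    return flips >= 0.8 * (len(deltas) - 1) and swing >= min_swing_ms

# Alternating 10/20 ms pattern: the classic multi-GPU signature.
print(looks_like_microstutter([10, 20, 10, 20, 10, 20, 10, 20]))  # True
# Steady frames with one big spike: a latency spike, not microstutter.
print(looks_like_microstutter([16, 16, 16, 60, 16, 16, 16, 16]))  # False
```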
- Next, Mark at AlienBabelTech continues to progress with latency-based performance analysis. He asks the question: does an SSD make for smoother gaming? (Answer: only sometimes, not often.) Then he straight up pits the GeForce GTX 680 vs. the Radeon HD 7970 in a range of games. Among other things, he saw frame time spikes on the 7970 in Hitman: Absolution similar to what we saw with the 7950. Mark says more is coming, including results from the rest of his 30-game benchmark suite.
- The guys at Rage3D have gotten a start on their version of latency-based testing in this review. The text takes some time to explain their approach. There are some interesting ideas in there, including a "smoothness index" that could become useful with further refinement (including a clear sense of how specific, knowable amounts of time matter more than percentages in real-time systems based on display refresh rates and human perception). I get the sense James and I see the world in very different ways, and I'm happy to have him join the conversation.
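Here's a toy illustration of the point about absolute time versus percentages. The function and the 16.7 ms and 50 ms cutoffs are my own assumptions for the example (one 60 Hz refresh interval, and a hitch big enough to be plainly visible), not Rage3D's actual formula: counting milliseconds spent past a fixed threshold ties the score directly to the display and to perception, where a percentage of some average would not.

```python
# Illustrative only: total time spent beyond fixed frame-time thresholds,
# in the spirit of anchoring "smoothness" to refresh rate and perception
# rather than to percentages. Thresholds are assumptions, not Rage3D's.

def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds spent past the threshold, summed over all frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [16, 16, 40, 16, 70, 16]
# Beyond 16.7 ms (one 60 Hz refresh): the 40 ms and 70 ms frames contribute.
print(time_spent_beyond(frames, 16.7))
# Beyond 50 ms (a plainly visible hitch): only the 70 ms frame counts.
print(time_spent_beyond(frames, 50))  # 20
```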
- Ryan at PCPer has continued his vision quest on the matter of "frame rating," after offering part one just before CES. For the uninitiated, he's using a video capture card and colored overlays to record and analyze each frame of animation output to the display. In part two, he shows how stepping through the captured frames allows him to identify and pinpoint frame delivery slowdowns, which he calls "stutter." (Bless him for not adding the "micro" prefix.)
The colored overlays that track frame delivery are nifty, but I'm pleased to see Ryan looking at frame contents rather than just frame delivery, because what matters to animation isn't just the regularity with which frames arrive at the display. The content of those frames is vital, too. As Andrew Lauritzen noted in his B3D post, disrupted timing in the game engine can interrupt animation fluidity even if buffering manages to keep frames arriving at the display at regular intervals.
- To take that thought a step further, I recently realized—much later than I probably should have—that the possibility for timing disruptions at both ends of the rendering pipeline means there may never be a single, perfect number to characterize smooth gaming performance. At least, that number would likely have to be the result of a complex formula that accounts for the game engine simulation time, the time when the frame reaches the display, and the relationship between the two.
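One hedged way to picture the two ends of the pipeline: any such formula would have to compare per-frame simulation timestamps from the game engine against arrival timestamps at the display. The sketch below is hypothetical, with made-up numbers; it just shows that delivery can be perfectly regular while the content of the frames still judders, because the engine's simulation steps were uneven.

```python
# Hypothetical sketch: even if display delivery is perfectly regular,
# animation can still judder when the engine's simulation steps are not.
# Timestamps in milliseconds; both series are illustrative made-up data.

def interval_jitter(timestamps_ms):
    """Max deviation of frame-to-frame intervals from their mean, in ms."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    mean = sum(intervals) / len(intervals)
    return max(abs(i - mean) for i in intervals)

sim_times = [0, 10, 35, 45, 70, 80]              # engine steps: uneven
display_times = [0, 16.7, 33.3, 50, 66.7, 83.3]  # delivery: regular

print(interval_jitter(display_times))  # small: frames arrive on schedule
print(interval_jitter(sim_times))      # large: the animation itself judders
```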
Those folks who are still wary of using Fraps because it writes a timestamp at a single point in the process will want to chew on the implications of that statement for a while. Another implication: we'll perhaps always need to supplement any quantitative results with qualitative analysis in order to paint the whole picture. So... this changes nothing!
- On a tangentially related note, Nvidia's Timothy Lottes, the FXAA and TXAA guru, has taken to his personal blog to discuss the issue of game input latency. I mention his post in part because our talk of frame rendering latency has caused some folks to think about that other latency-oriented problem, input lag. Frame rendering times are important to the CPU and GPU reviews we do, but frame times are just one piece of the larger puzzle when you're talking input lag. Timothy's post explains the sources of input latency and how GPU rendering fits into the picture. I expect we'll be hearing more about input lag as things like Oculus Rift move toward becoming products.
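To underline why frame rendering is just one piece of the input-lag puzzle, here's a back-of-the-envelope sketch. The stage names and numbers are my own assumptions for illustration, not figures from Timothy's post: the GPU's frame time is only one slice of the chain from button press to photons.

```python
# Illustrative sum of input-latency stages. Stage names and millisecond
# values are assumptions for the sake of example, not measured figures.

STAGES_MS = {
    "input sampling":   8,   # USB polling plus OS delivery
    "game simulation": 16,   # one engine tick
    "render queue":    16,   # frames buffered ahead of the GPU
    "gpu render":      16,   # the frame time we usually benchmark
    "scan-out":         8,   # display refresh and panel response
}

total = sum(STAGES_MS.values())
print(f"end-to-end input lag in this example: {total} ms")  # 64 ms
print(f"gpu render share: {STAGES_MS['gpu render'] / total:.0%}")  # 25%
```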
Although it may be confusing to some folks, we will probably keep talking about frame rendering in terms of latency, just as we do with input lag. That's because I continue to believe game performance is fundamentally a frame-latency-based problem. We just need to remember which type of latency is which—and that frame latency is just a subset of the overall input-response chain.
- Finally, this may be old news to most of you, but those who are new to the subject may be interested to see that our frame-latency-based game testing methods apply to CPU performance, too.
That's all for now, folks. More when it happens.