Vsync starts taking night courses
You're probably familiar with vsync, or vertical refresh synchronization, if you've spent much time gaming on a PC. With vsync enabled, the GPU waits for the display to finish updating the screen—something most displays do 60 times per second—before flipping to a new frame buffer. Without vsync, the GPU may flip to a new frame while the display is being updated. Doing so can result in an artifact called tearing, where the upper part of the display shows one image and the lower portion shows another, creating an obvious visual discontinuity.

Tearing is bad and often annoying. In cases where the graphics processor is rendering frames much faster than the display's refresh rate, you may even see multiple tears onscreen at once, which is pretty awful. (id Software's Rage infamously had a bad case of this problem when it was first released, compounded by the fact that forcing on vsync via the graphics driver control panel didn't always work.) Vsync is generally a nice feature to have, and I'll sometimes go out of my way to turn it on in order to avoid tearing.

However, vsync can create problems when GPU performance becomes strained, because frame flips must coincide with the display's refresh rate. A 60Hz display updates itself every 16.7 milliseconds. If a new frame isn't completed and available when an update begins, then the prior frame will be shown again, and another 16.7 ms will pass before a new frame can be displayed. Thus, the effective frame rate of a system with a 60Hz display and vsync enabled will be 60Hz, 30Hz, 20Hz, and so on—in other words, frame output is quantized. When game performance drops and this quantization effect kicks in, your eyes may begin to notice slowdowns and choppiness. For this reason, lots of folks (especially competitive gamers) tend to play without vsync enabled, choosing to tolerate tearing as the lesser of two evils.
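If you want to see the arithmetic play out, here's a tiny C sketch (with made-up GPU frame times, purely illustrative) of how vsync rounds each frame's time on screen up to a whole number of 16.7-ms refresh intervals:

    /* Illustrative only: how vsync on a 60Hz display quantizes frame delivery. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double refresh_ms = 1000.0 / 60.0;               /* ~16.7 ms per refresh */
        const double render_ms[] = { 10.0, 17.0, 25.0, 40.0 }; /* hypothetical GPU frame times */

        for (int i = 0; i < 4; i++) {
            /* A finished frame can only be flipped at the next refresh, so its time
               on screen rounds up to a whole number of refresh intervals. */
            double shown_ms = ceil(render_ms[i] / refresh_ms) * refresh_ms;
            printf("GPU takes %4.1f ms -> frame shown for %4.1f ms (~%2.0f FPS effective)\n",
                   render_ms[i], shown_ms, 1000.0 / shown_ms);
        }
        return 0;
    }

Note how a frame that misses the deadline by even a fraction of a millisecond gets held for a full extra refresh; that's the quantization at work.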

Nvidia's new adaptive vsync feature attempts to thread the needle between these two evils with a simple policy. If frame updates are coming in at the display's refresh rate or better, vsync will be enforced; if not, vsync will be disabled. This nifty little compromise will allow some tearing, but only in cases where the smoothness of the on-screen animation would otherwise be threatened.


An illustration of adaptive vsync in action. Source: Nvidia.
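In rough pseudocode terms, the policy boils down to something like the C sketch below. The two present_* functions are hypothetical stand-ins for whatever the driver actually does; this is the decision as Nvidia describes it, not their implementation:

    #include <stdio.h>

    /* Hypothetical stand-ins for the driver's two present paths. */
    static void present_on_vblank(void)   { puts("wait for vertical retrace, then flip"); }
    static void present_immediately(void) { puts("flip now; tearing is possible"); }

    /* Enforce vsync only while the GPU is keeping pace with the display. */
    static void present_adaptive(double last_frame_ms, double refresh_ms)
    {
        if (last_frame_ms <= refresh_ms)
            present_on_vblank();     /* at or above the refresh rate: behaves like vsync on */
        else
            present_immediately();   /* below the refresh rate: behaves like vsync off */
    }

    int main(void)
    {
        const double refresh_ms = 1000.0 / 60.0;
        present_adaptive(12.0, refresh_ms);  /* fast frame: synchronized, no tearing */
        present_adaptive(22.0, refresh_ms);  /* slow frame: tear rather than stall to 30 FPS */
        return 0;
    }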

If the idea behind this feature sounds familiar, perhaps that's because we've heard it before. Last October, in an update to Rage, id Software added a "Smart vsync" option whose description should ring a bell:

Some graphics drivers now support a so called "swap-tear" extension. You can try using this extension by setting VSync to SMART. If your graphics driver supports this extension and you set VSync to SMART then RAGE will synchronize to the vertical retrace of your monitor when your computer is able to maintain 60 frames per second and the screen may tear if your frame rate drops below 60 frames per second. In other words the SMART VSync option trades a sudden drop to 30 frames per second with occasional screen tearing. Occasional screen tearing is usually considered less distracting than a more severe drop in frame rate

So yeah, nothing new under the sun.
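For the curious, the "swap-tear" extension id refers to (WGL/GLX_EXT_swap_control_tear) is exposed to applications as a negative swap interval. Here's a rough C/SDL2 sketch of how a game might request it and fall back to regular vsync if the driver declines; treat it as a sketch rather than production code:

    /* Sketch (SDL2): request adaptive vsync via a negative swap interval, which maps
       to the swap_control_tear extensions, and fall back to plain vsync if unsupported. */
    #include <SDL2/SDL.h>

    int main(void)
    {
        if (SDL_Init(SDL_INIT_VIDEO) != 0)
            return 1;

        SDL_Window *win = SDL_CreateWindow("swap-tear sketch",
                                           SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                           1280, 720, SDL_WINDOW_OPENGL);
        if (!win)
            return 1;
        SDL_GLContext ctx = SDL_GL_CreateContext(win);

        if (SDL_GL_SetSwapInterval(-1) != 0) {   /* -1 = adaptive vsync ("late swap tearing") */
            SDL_Log("Adaptive vsync unsupported; using regular vsync instead");
            SDL_GL_SetSwapInterval(1);           /* 1 = conventional vsync */
        }

        /* ... a real render loop would go here, ending each frame with SDL_GL_SwapWindow(win) ... */

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }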

In the past, games have avoided the vsync quantization penalty via buffering, or building up a queue of one or two frames, so a new one is generally available when it comes time for a display refresh. Triple buffering has traditionally been the accepted standard for ensuring smoothness. When we asked Nvidia about the merits of triple buffering versus adaptive vsync, we got a response from one of their software engineers, who offered a nice, detailed explanation of the contrasts, including the differences between triple buffering in OpenGL and DirectX:

There are two definitions for triple buffering. One applies to OGL and the other to DX. Adaptive v-sync provides benefits in terms of power savings and smoothness relative to both.

  • Triple buffering solutions require more frame-buffer memory than double buffering, which can be a problem at high resolutions.

  • Triple buffering is an application choice (no driver override in DX) and is not frequently supported.

  • OGL triple buffering: The GPU renders frames as fast as it can (equivalent to v-sync off) and the most recently completed frame is displayed at the next v-sync. This means you get tear-free rendering, but entire frames are effectively dropped (never displayed) so smoothness is severely compromised and the effective time interval between successive displayed frames can vary by a factor of two. Measuring fps in this case will return the v-sync off frame rate which is meaningless when some frames are not displayed (can you be sure they were actually rendered?). To summarize - this implementation combines high power consumption and uneven motion sampling for a poor user experience.

  • DX triple buffering is the same as double buffering but with three back buffers which allows the GPU to render two frames before stalling for display to complete scanout of the oldest frame. The resulting behavior is the same as adaptive vsync (or regular double-buffered v-sync=on) for frame rates above 60Hz, so power and smoothness are ok. It's a different story when the frame rate drops below 60 though. Below 60Hz this solution will run faster than 30Hz (i.e. better than regular double buffered v-sync=on) because successive frames will display after either 1 or 2 v-blank intervals. This results in better average frame rates, but the samples are uneven and smoothness is compromised.

  • Adaptive vsync is smooth below 60Hz (even samples) and uses less power above 60Hz.

  • Triple buffering adds 50% more latency to the rendering pipeline. This is particularly problematic below 60fps. Adaptive vsync adds no latency.

Clearly, things get complicated in a hurry depending on how the feature is implemented, but generally speaking, triple buffering has several disadvantages compared to adaptive vsync. One, the animation may not be as smooth. Two, there's substantially more lag between user inputs and screen updates. Three, it uses more video memory. And four, triple buffering can't be enabled via the graphics driver control panel for DirectX games. Nvidia also contends that adaptive vsync is more power-efficient than the OpenGL version of triple buffering.
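To put a number on the latency point, here's one simple way to read the engineer's 50% figure, under the assumed (and admittedly simplified) model that each buffer ahead of scanout holds a finished frame for roughly one extra refresh interval before it reaches the screen:

    #include <stdio.h>

    int main(void)
    {
        const double refresh_ms = 1000.0 / 60.0;   /* 60Hz display */

        /* Assumed model: each buffer ahead of scanout adds about one refresh of delay. */
        const int double_buffered = 2;             /* front buffer + one back buffer  */
        const int triple_buffered = 3;             /* front buffer + two back buffers */

        double lag2 = double_buffered * refresh_ms;
        double lag3 = triple_buffered * refresh_ms;

        printf("double buffering: ~%.0f ms from render to display\n", lag2);
        printf("triple buffering: ~%.0f ms from render to display (+%.0f%%)\n",
               lag3, 100.0 * (lag3 - lag2) / lag2);
        return 0;
    }

Real pipelines are messier than this, of course, but it illustrates why a deeper swap chain tends to translate into more input lag.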

Overall, we find these arguments compelling. However, we should note that we're very much focusing on the most difficult cases, which are far from typical. We somehow managed to enjoy fast-action games for many years, both with and without vsync and triple buffering, before adaptive vsync came along.

Whether or not you'll notice a problem with traditional vsync depends a lot on the GPU's current performance and how the camera is moving through the virtual world. Petersen showed a demo at the Kepler press day involving the camera flying over a virtual village in Unigine Heaven where vsync-related stuttering was very obvious. When we were hunting for slowdowns related to vsync quantization while strafing around in a Serious Sam 3 firefight, however, we couldn't perceive any problems, even though frame rates were bouncing around between 30 and 60 FPS on our 60Hz display.

With that said, we noticed only very occasional tearing in Sam 3 with adaptive vsync enabled, and performance was generally quite good. We've also played quite a bit of Borderlands on the GTX 680 with both FXAA and adaptive vsync, and the experience has consistently been excellent. If you own a video card capable of adaptive vsync, I'd recommend enabling it for everyday gaming. I'm convinced it's the best current compromise.

We should note, though, that this simple optimization isn't really a solution to some of the more vexing problems with ensuring consistent frame delivery and fluid animation in games. Adaptive vsync doesn't address the issues with uneven frame dispatch that lead to micro-stuttering in multi-GPU solutions, as we explored here, nor is it as smart about display synchronization as the HyperFormance feature in Lucid's upcoming Virtu MVP software. We'll have more to say about Kepler and micro-stuttering in the coming weeks, so stay tuned.

Because adaptive vsync is a software feature, it doesn't have to be confined to Kepler hardware. In fact, Nvidia plans to release a driver update shortly that will grant this capability to some older GeForce products, as well.