A first look at Nvidia's G-Sync display tech


We lean too far toward the screen, fall in, and don't want to come out
— 5:55 PM on January 2, 2014

We got our first look at G-Sync a couple of months ago at an Nvidia press event in Montreal, and we came away impressed with the handful of demos being shown there. Now, we've had the chance to spend some quality time with early G-Sync hardware within the comfy confines of Damage Labs, and we have much more to say about the technology. Read on to see what we think.

So what is G-Sync?
In order to understand G-Sync, you have to understand a little bit about how current display technology works. If you've been hanging around TR for any length of time, you probably have a sense of these things. Today's display tech is based on some fundamental assumptions borrowed from ye olde CRT monitors—as if an electron gun were still scanning rows of phosphors inside of today's LCDs. Among the most basic of those assumptions is the refresh cycle, where updates are painted on the display at rapid but fixed intervals. Most monitors are refreshed at a rate of 60 times per second, or 60Hz. Going a little deeper, most LCDs still paint the screen much like a CRT: updating rows of pixels from left to right, starting at the top of the screen and scanning down to the bottom.

Updating the screen at fixed intervals can be a fine way to create the illusion of motion. Movies and television do it that way, and so do video games, by and large. However, most motion picture technologies capture images at a fixed rate and then play them back at that same rate, so everything works out nicely. The rich visuals produced by graphics processors in today's video games are different. Graphics chips produce those images in real time by doing lots of math very quickly, crunching through many billions of floating-point operations each second. Even with all of that power on tap, the computational workloads vary widely as the camera moves through a dynamic, changing game world. Frame rendering times tend to fluctuate as a result. This reality is what has driven our move to frame-time-based performance testing, and we can draw an example from one of our recent GPU reviews to illustrate how frame rendering times vary. Here's a look at how one of today's faster graphics cards produces frames in Battlefield 4.

The plot above shows individual rendering times for a handful of frames. There's really not tons of variance from frame to frame in this example, but rendering times still range from about 16 to 23 milliseconds. Zoom out a bit to look at a longer gameplay sequence, and the range of frame times grows.

The crazy thing is that the stem-winding plot you see above illustrates what we'd consider to be very decent performance. No single frame takes longer than 50 milliseconds to produce, and most of them are rendered much quicker than that. In the world of real-time graphics, that's a nice looking frame time distribution. As you can imagine, though, matching up this squiggly plot with the regular cadence of a fixed refresh rate would be pretty much impossible.
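If you want to see how we boil a plot like that down to numbers, here's a quick sketch in Python. The frame times below are invented for illustration; they're not our actual Battlefield 4 data.

    # Summarizing a frame-time plot. These frame times are invented for
    # illustration, not taken from our actual benchmark runs.
    frame_times_ms = [16.2, 17.8, 22.9, 18.4, 31.0, 16.6, 48.7, 19.3, 17.1, 24.5]

    average = sum(frame_times_ms) / len(frame_times_ms)
    worst = max(frame_times_ms)
    over_50_ms = sum(1 for t in frame_times_ms if t > 50)

    print(f"average frame time: {average:.1f} ms (~{1000 / average:.0f} FPS)")
    print(f"worst frame time:   {worst:.1f} ms")
    print(f"frames over 50 ms:  {over_50_ms}")

The point isn't the exact numbers but the shape of the distribution: averages hide the occasional long frame, which is exactly what frame-time-based testing is meant to expose.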

Here's what's crazy: that impossibility sits at the heart of the interaction between GPUs and displays, playing out with every frame that's produced. GPU rendering times vary, and display refresh rates do not. At its lowest level, the timing of in-game animation is kind of a mess.

For years, we've dealt with this problem by choosing between two different coping mechanisms, neither of them particularly good. The usual default method is a technology called vsync, or vertical refresh synchronization. Vsync involves storing completed frames in a buffer and only exposing a fresh, buffered frame when the time comes to paint the screen. This technique can work reasonably well when everything else in the system cooperates—when frames are coming out of the GPU at short, regular intervals.

Frame rendering times tend to vary, though, as we've noted. As a result, even with some buffering, the system may not have a frame ready at the start of each new refresh cycle. If there's no new frame to be displayed when it's time to paint the screen, the fallback option is to show the preceding frame once again and to wait for the next refresh cycle before flipping to a new one.
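To make that fallback concrete, here's a bare-bones simulation of a vsync'd display deciding, at each refresh, whether to flip to a new frame or repeat the old one. This is just my own sketch in Python with invented frame completion times, not anything resembling real driver code.

    # At each 60Hz refresh tick, show the newest completed frame; if nothing
    # new has finished, repeat the frame already on screen.
    REFRESH_MS = 1000 / 60                 # ~16.7 ms per refresh cycle
    frame_done_at = [10, 27, 45, 80, 95]   # invented GPU completion times (ms)

    shown = -1                             # index of the frame on screen
    for tick in range(8):
        now = tick * REFRESH_MS
        ready = [i for i, t in enumerate(frame_done_at) if t <= now]
        if ready and ready[-1] != shown:
            shown = ready[-1]
            print(f"{now:5.1f} ms: flip to frame {shown}")
        elif shown >= 0:
            print(f"{now:5.1f} ms: no new frame ready, repeat frame {shown}")
        else:
            print(f"{now:5.1f} ms: nothing rendered yet")

The interesting ticks are the ones where no new frame has arrived in time: the display just paints the previous frame again and waits for the next cycle.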

This wait for the next refresh interval drops the effective frame rate. The usual refresh interval for a 60Hz display is 16.7 milliseconds. Turn in a frame at every interval, and you're gaming at a steady 60 FPS. If a frame takes 16.9 milliseconds to render—and is just 0.2 ms late to the party—it will have to wait the remaining 16.5 ms of the current interval before being displayed. The total wait time for a new frame, then, will be 33.3 ms—the equivalent of 30 FPS.

So the consequences for missing a single refresh interval are dire: half the performance and presumably half the perceived smoothness. Things get worse from there. Missing two intervals, with a frame that requires just over 33.3 ms to produce, stretches the display update to 50 ms (equivalent to 20 FPS). Missing three intervals takes you to 66.7 ms, or 15 FPS. Those are your choices: 60 FPS, 30 FPS, 20 FPS, 15 FPS, and so on.
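If we assume each frame's display slot simply gets rounded up to the next refresh boundary, a simplification that nonetheless captures the gist, the stair-step falls out of a few lines of Python:

    import math

    REFRESH_MS = 1000 / 60   # 16.7 ms refresh interval at 60Hz

    def vsynced_display_interval(render_ms):
        """Round a frame's render time up to a whole number of refresh cycles."""
        intervals = max(1, math.ceil(render_ms / REFRESH_MS))
        return intervals * REFRESH_MS

    for render_ms in (16.0, 16.9, 33.5, 50.2):
        shown = vsynced_display_interval(render_ms)
        print(f"{render_ms:5.1f} ms render -> displayed every {shown:5.1f} ms "
              f"(~{1000 / shown:.0f} FPS)")

Run that, and the 16.9-ms frame lands at 33.3 ms (30 FPS), the 33.5-ms frame at 50 ms (20 FPS), and the 50.2-ms frame at 66.7 ms (15 FPS): the same stair-step described above.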

Now imagine what happens in action, as vsync works to map a wavy, up-and-down series of rendered frames to this stair-step series of effective animation rates. Hint: it ain't exactly ideal. Here are a couple of examples Nvidia has mocked up to illustrate. They're better than the examples I failed to mock up because I'm lazy.


Source: Nvidia.

This stair-step effect is known as quantization, and it's the same effect that, in digital audio, can cause problems when mapping an analog waveform to a fixed sampling rate. Heck, I'm pretty sure we're hearing the effects of intentionally exaggerated quantization in today's autotune algorithms.

Quantization is not a friend to smooth animation. The second scenario plotted above is fairly common, where frame rendering times range above and below the 16.7-ms threshold. The oscillation between update rates can lead to a halting, uneven sense of motion.

That's true not just because of the quantized update rates alone, but because of the side effects of delaying frames. When buffered frames waiting in the queue are finally displayed at the next refresh interval, their contents will be temporally out of sync with their display time. After all, as frames are generated, the game engine has no knowledge about when they'll be displayed. Also, buffering and delaying frames adds latency to the input-response feedback loop, reducing the immediacy of the experience. You'll wait longer after clicking the mouse or pressing a key before you begin to see the corresponding action taking place onscreen.

Nvidia calls this quantization effect stuttering, and I suppose in a sense it is. However, I don't think that's a helpful term to use in this context. Display refresh quantization is a specific and well-understood problem, and its effects are distinct from the longer, more intermittent slowdowns that we usually describe as stuttering.

The downsides of vsync are bad enough that many gamers simply disable it instead. Turning off vsync is faster and more immediate, but it means the GPU will flip to a new frame while the display is being drawn. Thus, fragments of multiple rendered frames will occupy portions of the screen simultaneously. The seams between the frames are sometimes easy to see, and they create an artifact called tearing. If you've played a 3D game without vsync, you've probably seen tearing. Here's a quick example from Borderlands 2:

Tearing is a huge penalty to pay in terms of visual fidelity. Without any synchronization between GPU render times and frame display times, tearing is likely to be happening somewhere onscreen almost all of the time—perhaps multiple times per refresh cycle, if the GPU is pumping out frames often enough. As with quantization, the type of game and the nature of the motion happening in the game world will influence how readily one perceives a problem.
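For the curious, here's a simplified model of where a tear line lands when the GPU flips buffers partway through a 60Hz scanout of a 1080-row screen. This is my own back-of-the-envelope sketch; it ignores blanking intervals and everything else a real display controller deals with.

    # Where does the tear appear? Roughly at whatever row the display had
    # scanned out when the buffer flip happened. (Blanking is ignored.)
    REFRESH_MS = 1000 / 60
    SCREEN_ROWS = 1080

    def tear_row(flip_time_ms):
        progress = (flip_time_ms % REFRESH_MS) / REFRESH_MS
        return int(progress * SCREEN_ROWS)

    for flip in (3.0, 9.5, 14.2):
        print(f"flip at {flip:4.1f} ms into the refresh -> "
              f"tear near row {tear_row(flip)}")

Flip early in the refresh cycle and the seam sits near the top of the screen; flip late and it sits near the bottom. Flip more than once per refresh, and you get multiple seams.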

Like I said, neither of these coping methods is particularly good. G-Sync is intended to be a better solution. G-Sync's goal is to refresh the display when the GPU has a frame ready, rather than on a fixed schedule. One could say that G-Sync offers a variable refresh rate, but it's more about refresh times than rates, since it operates on a per-frame basis.

On a fast display with a 144Hz peak refresh rate, G-Sync can vary the refresh interval between 6.9 and 33.3 ms. That first number, 6.9 milliseconds, is the refresh interval at 144Hz. The second is equivalent to 30Hz or 30 FPS. If a new frame isn't ready after 33.3 ms, G-Sync will paint the screen again with the prior frame. So the refresh interval isn't infinitely variable, but it does offer pretty wide leeway.
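Here's how I'd model that refresh window in Python. To be clear, this is just my sketch of the behavior described above, not Nvidia's actual logic.

    # On a 144Hz G-Sync panel, the time between refreshes can stretch from
    # ~6.9 ms (144Hz) to 33.3 ms (30Hz). Past the cap, the prior frame is
    # repainted. This is a simplified model, not Nvidia's implementation.
    MIN_INTERVAL_MS = 1000 / 144   # ~6.9 ms
    MAX_INTERVAL_MS = 1000 / 30    # ~33.3 ms

    def refresh_after(render_ms):
        if render_ms < MIN_INTERVAL_MS:
            return MIN_INTERVAL_MS   # can't refresh faster than 144Hz
        if render_ms <= MAX_INTERVAL_MS:
            return render_ms         # refresh as soon as the frame is ready
        return MAX_INTERVAL_MS       # cap hit: repaint the previous frame

    for render_ms in (5.0, 12.5, 21.0, 40.0):
        print(f"{render_ms:4.1f} ms render -> screen refreshes after "
              f"{refresh_after(render_ms):.1f} ms")

Within that window, the display update simply tracks the GPU; only at the extremes does the old fixed-interval behavior reassert itself.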

In theory and in practice, then, G-Sync is easily superior to the alternatives. There's no tearing, so the visual integrity of displayed frames isn't compromised, and it provides almost immediate display updates once a frame is ready. Even though GPU frame rendering times vary, G-Sync's output looks smoother than the quantized output from traditional vsync. That's true in part because each frame's contents correspond more closely to its display time. G-Sync also reduces the wait time imposed by the display refresh cycle, cutting input lag.

G-Sync isn't a perfect solution by any means. It doesn't eliminate the left-to-right, top-to-bottom motion by which displays are updated, for instance. The 33-ms frame time cap is a little less than ideal, too. Still, this is a far sight better than the antiquated approaches we've been using for years.