A few thoughts on Nvidia's G-Sync


— 11:34 AM on October 21, 2013

On the plane home, I started to write up a few impressions of the new G-Sync display technology that Nvidia introduced on Friday. However, that attempt pretty quickly turned into a detailed explanation of refresh rates and display technology. Realistically, I'll have to finish that at a later date, because I have another big graphics-related project hogging my time this week.

For now, I'll just say that whatever its drawbacks—which are mainly related to its proprietary nature—the core G-Sync technology itself is simply The Right Thing to Do. That's why Nvidia was able to coax several big names into showing up to endorse it. Because G-Sync alters conventional display tech by introducing a variable refresh rate, and because any video you'd watch online plays back at a fixed rate, there's no easy way to demonstrate the impact in a web-based video. This is one of those things you'll have to see in person in order to appreciate fully.

I've seen it, and it's excellent.

You may remember that I touched on the possibility of a smarter vsync on this page of my original Inside the Second article. In fact, AMD's David Nalasco was the one who floated the idea. We've known that such a thing was possible for quite some time. Still, seeing Nvidia's G-Sync implementation in action is a revelatory experience. The tangible reality is way better than the theoretical prospect. The effect may seem subtle to some folks in some cases, depending on what's happening onscreen, but I'll bet that most experienced PC gamers who have been haunted by tearing and vsync quantization for years will appreciate the improvement pretty readily. Not long after, you'll be hooked.

In order to make G-Sync happen, Nvidia had to build a new chip to replace the one that goes inside most monitors to do image scaling and such. You may have noticed that the first version of Nvidia's solution uses a pretty big chip. That's because it employs an FPGA that's been programmed to do this job. The pictures show that the FPGA is paired with a trio of 2Gb DDR3 DRAMs, giving it 768MB of memory for image processing and buffering. The solution looks to add about $100 to the price of a display. You can imagine Nvidia could cut costs pretty dramatically, though, by moving the G-Sync control logic into a dedicated chip.
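
Incidentally, the capacity figure works out; here's a quick bit of arithmetic as a sanity check, nothing Nvidia-specific:

```python
# Three 2Gb (gigabit) DDR3 chips sit alongside the FPGA on the G-Sync module.
chips = 3
gigabits_each = 2
total_mb = chips * gigabits_each * 1024 // 8  # gigabits -> megabytes
print(total_mb, "MB")  # 768 MB of buffer memory
```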

The first monitors with G-Sync are gaming-oriented, and most are capable of fairly high refresh rates. That generally means we're talking about TN panels, with the compromises that come along with them in terms of color fidelity and viewing angles. However, the G-Sync module should be compatible with IPS panels, as well. As the folks who are overclocking their 27" Korean IPS monitors have found, even nice IPS panels sold with 60Hz limits are actually capable of much higher update rates.

G-Sync varies the screen update speed between the upper bound of the display's peak refresh rate and the lower bound of 30Hz, or one update every 33 ms. Beyond 33 ms, the prior frame is painted again. Understand that we're really talking about frame-to-frame updates that happen between 8.3 ms and 33 ms, not traditional refresh rates between 120Hz and 30Hz. G-Sync varies the timing on a per-frame basis. I'd expect many of today's IPS panels could range down to 8.3 ms, but even ranging between, say, 11 ms (equivalent to 90Hz) and 33 ms could be sufficient to make a nice impact on fluidity.
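
To put those numbers in one place, here's a minimal sketch of the arithmetic relating refresh rates to frame-to-frame intervals, with a simple clamp standing in for the panel's update window. It's just the math from the paragraph above, not Nvidia's actual logic.

```python
def interval_ms(refresh_hz):
    """Frame-to-frame interval for a given fixed refresh rate."""
    return 1000.0 / refresh_hz

def clamp_to_window(frame_time_ms, floor_ms=8.3, ceiling_ms=33.3):
    """Clamp a frame's delivery time to the panel's supported update window.
    A frame slower than the ceiling means the prior frame gets painted again."""
    return max(floor_ms, min(frame_time_ms, ceiling_ms))

for hz in (120, 90, 60, 30):
    print(f"{hz} Hz -> {interval_ms(hz):.1f} ms between updates")

print(clamp_to_window(45.0))  # 33.3 -> too slow; the panel repaints the old frame
print(clamp_to_window(6.0))   # 8.3  -> faster than the panel can refresh
```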

G-Sync means Nvidia has entered into the display ASIC business, and I expect them to remain in that business as long as they have some measure of success. Although they could choose to license this technology to other firms, G-Sync is just a first step in a long line of possible improvements in GPU-display interactions. Having a graphics company in this space driving the technology makes a lot of sense. Going forward, we could see deeper color formats, true high-dynamic range displays enabled by better and smarter LED backlights, new compression schemes to deliver more and deeper pixels, and the elimination of CRT-oriented artifacts like painting the screen from left to right and top to bottom. Nvidia's Tom Petersen, who was instrumental in making G-Sync happen, mentioned a number of these possibilities when we chatted on Friday. He even floated the truly interesting idea of doing pixel updates across the panel in random fashion, altering the pattern from one frame to the next. Such stochastic updates could work around the human eye's very strong pattern recognition capability, improving the sense of solidity and fluid motion in on-screen animations. When pressed, Petersen admitted that idea came from one Mr. Carmack.
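
Just to illustrate that last notion, here's a toy sketch of what a per-frame randomized scanline order might look like. It's purely hypothetical, not anything Nvidia or Carmack has described in detail:

```python
import random

def stochastic_scanout_order(num_lines, seed):
    """Purely illustrative: pick a different pseudo-random order of scanlines
    to update each frame, instead of always painting top to bottom."""
    order = list(range(num_lines))
    random.Random(seed).shuffle(order)
    return order

# A different update pattern for each of three successive frames (8-line "panel")
for frame in range(3):
    print(stochastic_scanout_order(8, seed=frame))
```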

AMD will need to counter with its own version of this tech, of course. The obvious path would be to work with partners who make display ASICs and perhaps to drive the creation of an open VESA standard to compete with G-Sync. That would be a typical AMD move—and a good one. There's something to be said for AMD entering the display ASIC business itself, though, given where things may be headed. I'm curious to see what path they take.

Upon learning about G-Sync, some folks have wondered whether GPU performance will continue to matter now that we have some flexibility in display update times. The answer is yes; the GPU must still render frames in a timely fashion in order to create smooth animations. G-Sync simply cleans up the mess at the very end of the process, when frames are output to the display. Since the G-Sync minimum update rate is 30Hz, we'll probably be paying a lot of attention to frames that take longer than 33.3 ms to produce going forward. You'll find "time beyond 33 ms" graphs in our graphics reviews from the past year or so, so yeah. We're ready.
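
As a refresher on how that metric works, here's a minimal sketch of the "time beyond 33 ms" calculation over a list of frame times. The sample numbers are made up, and this isn't our exact tooling.

```python
THRESHOLD_MS = 33.3  # roughly the 30Hz floor of G-Sync's update window

def time_beyond_threshold(frame_times_ms, threshold=THRESHOLD_MS):
    """Sum the portion of each frame time that exceeds the threshold."""
    return sum(t - threshold for t in frame_times_ms if t > threshold)

# Hypothetical frame times (ms) for a short run
sample = [16.2, 17.0, 41.5, 15.8, 60.1, 16.5]
print(f"{time_beyond_threshold(sample):.1f} ms spent beyond {THRESHOLD_MS} ms")
# -> 35.0 ms
```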

G-Sync panels will not work with FCAT, of course, since FCAT relies on a standards-based video capture card. We can use FCAT-derived performance data to predict whether one would have a good experience with G-Sync, but ultimately, we need better benchmarking tools that are robust enough to make the transition to new tech like 4K and G-Sync displays without breaking. I've been pushing both Nvidia and AMD to expose the exact time when the GPU flips to a new frame via an API. With that tool, we could capture FCAT-style end-of-pipeline frame times without the aid of video captures. We'd want to verify the numbers with video capture tools whenever possible, but having a display-independent way to do this work would be helpful. I think game engine developers want the same sort of thing in order to make sure their in-game timing can match the display times, too. Here's hoping we can persuade AMD, Nvidia, and Intel to do the right thing here sooner rather than later.
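
If the vendors did expose flip times through a driver API, turning them into FCAT-style frame times would be straightforward. Here's a minimal sketch, with the flip timestamps entirely hypothetical since no such API exists today:

```python
def frame_times_ms(flip_timestamps_ms):
    """Convert a list of display-flip timestamps (ms) into per-frame intervals,
    i.e., FCAT-style end-of-pipeline frame times."""
    return [b - a for a, b in zip(flip_timestamps_ms, flip_timestamps_ms[1:])]

# Hypothetical flip timestamps reported by a driver API (purely illustrative)
flips = [0.0, 16.7, 33.9, 70.2, 86.5]
print([round(t, 1) for t in frame_times_ms(flips)])  # [16.7, 17.2, 36.3, 16.3]
```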
