Weighing the trade-offs of Nvidia DLSS for image quality and performance
While Nvidia has heavily promoted the ray-traced effects possible with its GeForce RTX 2080 and RTX 2080 Ti graphics cards, the deep-learning super-sampling (DLSS) tech that those cards' tensor cores unlock has proven a more immediate and divisive point of discussion. Gamers want to know whether it works and what trade-offs it makes between image quality and performance.
Eurogamer's Digital Foundry has produced an excellent deep dive into the tech with side-by-side comparisons of TAA versus DLSS in the two demos we have available so far, and Computerbase has even captured downloadable high-bit-rate videos of the Final Fantasy XV benchmark and Infiltrator demo that reviewers have access to. (We're uploading some videos of our own to YouTube, but 5-GB files take a while to process.) One common thread of those comparisons is that both outlets are impressed with the potential of the technology, and I count myself as a third set of eyes that shares that excitement.
While it's good to be able to look at side-by-side still images of the two demos we have so far, I believe that putting your nose in 100% crops of captured frames is not the most useful way of determining whether DLSS is effective. You can certainly point to small differences between rendered images captured this way, but I feel the more relevant question is whether these differences are noticeable when images are in motion. Displays add blur that can obscure fine details when they're moving, and artifacts like tearing can significantly reduce the perceived quality of a moving image for a game.
Before I saw those stills, though, I would have been hard-pressed to pick out differences in each demo, aside from a couple of isolated cases, like some more perceptible jaggies on a truck mirror in the first scene of the FFXV demo in DLSS mode. To borrow a Daniel Kahneman-ism, I'm primed to see those differences now. It's the "what has been seen cannot be unseen" problem at work.
This problem of objective versus subjective quality is no small issue in the evaluation of digital reproduction of moving images. Objective measurements such as the peak signal-to-noise ratio, which someone will doubtless produce for DLSS images, have been found to correlate poorly with the perceived quality of video codecs as evaluated by human eyes. In fact, the source I just linked posited that subjective quality is the only useful way to evaluate the effectiveness of a given video-processing pipeline. As a result, I believe the only way to truly see whether DLSS works for you is going to be to see it in action.
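Part of why PSNR gets reached for anyway is that it's trivial to compute: it's nothing more than a log-scaled mean squared error between a reference image and a processed one. A minimal sketch of the math, with flat pixel lists standing in for real images (the function name and sample values are my own illustration, not any outlet's tooling):

```python
import math

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio between two pixel sequences, in dB.

    Higher is "better" by this metric, but as noted above, it
    correlates poorly with how humans actually judge moving images.
    """
    # Mean squared error across corresponding pixels.
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return math.inf  # identical images: no noise at all
    return 10.0 * math.log10(max_value ** 2 / mse)
```

A single number like this says nothing about when or where errors occur in a sequence of frames, which is precisely why it misses the temporal artifacts that bother human viewers.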
This fact may be frustrating to folks looking for a single objective measurement of whether DLSS is "good" or not, but humans are complex creatures with complex visual systems that defy easy characterization. Maybe when we're all cyborgs with 100% consistent visual systems and frames of reference, we can communicate about these issues objectively.
What is noticeable when asking a graphics card—even a powerhouse like the RTX 2080 Ti—to render a native 4K scene with TAA, at least in the case of the two demos we have on hand, is that frame-time consistency can go in the toilet. As someone who lives and breathes frame-time analysis, I might be overly sensitive to these problems, but I find that any jerkiness in frame delivery is far, far more noticeable and disturbing in a sequence of moving images than any tiny loss of detail from rendering at a lower resolution and upscaling with DLSS, especially when you're viewing an average-size TV at an average viewing distance. For reference, the setup I used for testing is a 55" OLED TV about 10 feet away from my couch (three meters).
The Final Fantasy XV benchmark we were able to test with looks atrocious when rendered at 4K with TAA—not because of any deficit in the anti-aliasing methods used, but because it's a jerky, hitchy mess. Whether certain fine details are being rendered in perfect crispness is irrelevant if you're clawing your eyes out over wild swings in frame times, and there are a lot of those when we test FFXV without DLSS.
Trying to use a canned demo with scene transitions is hell on our frame-time analysis tools, but if we ignore the very worst frames that accumulate as a result of that fact and consider time spent beyond 16.7 ms in rendering the FFXV demo, DLSS allows the RTX 2080 to spend 44% less time working on those tough frames and the RTX 2080 Ti to cut its time on the board by 53%, all while looking better than 95% the same to my eye. Demo or not, that is an amazing improvement, and it comes through in the smoothness of the final product.
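For readers new to our frame-time methods, "time spent beyond 16.7 ms" tallies, for every frame that misses the 60-Hz deadline, only the excess time past that threshold. A sketch of the idea (the function and sample numbers are illustrative, not our actual tooling):

```python
def time_beyond_threshold(frame_times_ms, threshold_ms=16.7):
    """Total milliseconds spent past the threshold across all frames.

    A frame at or under the threshold contributes nothing; a 20-ms
    frame against a 16.7-ms threshold contributes 3.3 ms.
    """
    return sum(max(0.0, t - threshold_ms) for t in frame_times_ms)
```

Comparing this figure between TAA and DLSS runs of the same demo is what yields the percentage reductions quoted above, and it naturally weights long hitches far more heavily than frames that just barely miss the mark.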
At least with the quality settings that the benchmark uses, you're getting a much more enjoyable sequence of motion to watch, even if not every captured frame is 100% identical in content from TAA to DLSS. With smoother frame delivery, it's easier to remain immersed in the scenes playing out before you rather than be reminded that you're watching a game on a screen.
Some might argue that Nvidia's G-Sync variable-refresh-rate tech can help compensate for any frame-time consistency issues with native 4K rendering, but I don't agree. G-Sync only prevents tearing across a range of refresh rates—it can't smooth out the sequence of frames from the graphics card if there's wild inconsistency in the timing of the frames it's asked to process. Hitches and stutters might be less noticeable with G-Sync thanks to that lack of tearing, but they're still present. Garbage in, garbage out.
The same story goes for Epic Games' Infiltrator demo, which may actually be a more relevant point of comparison to real games because it doesn't have any scene transitions to speak of. With DLSS, the RTX 2080 cuts its time spent past 16.7 ms on tough frames by a whopping 83%. The net result is tangible: Infiltrator becomes much more enjoyable to watch. Frames are delivered more consistently, and major slowdowns are rare.
The RTX 2080 Ti doesn't enjoy as large a gain, but it still reduces its time spent rendering difficult frames by 67% at the 16.7 ms threshold. For minor differences in image quality, I don't believe that's an improvement that any gamer serious about smooth frame delivery can ignore entirely.
It's valid to note that all we have to go on so far for DLSS is a pair of largely canned demos, not real, interactive games with unpredictable inputs. That said, I think any gamer displeased with the smoothness and fluidity of their gaming experience on a 4K monitor, even a G-Sync monitor, is going to want to try DLSS for themselves when more supporting games come to market. They can then judge whether the minor image-quality trade-offs other reviewers have established are noticeable to their own eyes against the major improvement in frame-time consistency and smooth motion we've observed thus far.