Another look at Nvidia’s DLSS
The Aorus GeForce GTX 1080 Ti card used throughout this review is my own personal graphics card, one I’ve had for some time now. This review actually marks the first time I’ve had a GeForce RTX card in my own hands to play with. As a result, this is the first time I’ve gotten to spend much time fooling around with RTX (more on that in a bit), and it’s also the first time I’ve seen DLSS in action.
If you’re not familiar, then first, go read Jeff’s write-up on DLSS from last year. The AI-powered upscaling technology was created by Nvidia for … well, I’m not completely sure why it was created. If someone were to ask me for my personal opinion, I’d say it was probably an attempt to give some purpose to the tensor cores aboard Nvidia’s Turing-based GeForce GPUs. Let me elaborate on (or perhaps belabor) that point.
As Nvidia CEO Jensen Huang has stated, Nvidia is a “one-architecture company.” That means that the company develops and supports one architecture for all of its products at any given time. Nvidia now services a fairly wide range of markets with its graphics and compute products, but one market is by far both the most lucrative and the most demanding: high-performance computing (HPC).
Accordingly, Nvidia’s processors are largely designed with an eye toward what will best serve the HPC market and then adapted for other markets, like gaming graphics. Much of HPC these days is concerned with artificial intelligence and neural networks. As a result, the latest GeForce GPUs dedicate a significant amount of silicon (and thus, compute capability) to their AI-oriented tensor cores. These cores are not utilized in, or even useful for, typical gaming workloads.
It’s very likely, then, that someone at Nvidia was tasked with finding a way to make the tensor cores useful for games. This is pure conjecture on my part, but it’s not difficult to visualize the short mental hop from “AI-powered image upscaling” to what DLSS ultimately is: AI-powered image upscaling, done in real-time.
When Nvidia first introduced DLSS, the company described two versions of the technology: DLSS 2X and the “standard” DLSS. DLSS 2X is a lot closer to what its creators seem to have envisioned when they came up with the name: the game is rendered at native resolution, and then the pre-trained neural network upscales the image to a higher resolution.
That’s still upscaling, not super-sampling, but it’s a heck of a lot closer to super-sampling than anything the extant DLSS implementations actually do. When you enable DLSS in Final Fantasy XV or Monster Hunter World, the game drops the rendering resolution below the one you selected (without telling you it is doing this) before applying the DLSS filter. To be clear, when we say “4K DLSS,” we’re not talking about an image rendered at 3840×2160; we’re talking about a lower-resolution image that has been upscaled to 3840×2160.
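In rough pseudocode terms, the difference works out to something like the sketch below. This is a conceptual illustration only: render_frame and dlss_upscale are made-up placeholder names rather than any real Nvidia or game API, and the internal resolution shown is an assumption on my part, since the game never reports the figure it actually uses.

```python
from typing import Tuple

Resolution = Tuple[int, int]

def render_frame(resolution: Resolution) -> str:
    """Placeholder for the game's rasterizer; returns a descriptive label only."""
    return f"frame rendered at {resolution[0]}x{resolution[1]}"

def dlss_upscale(frame: str, target: Resolution) -> str:
    """Placeholder for the tensor-core inference pass that produces the output image."""
    return f"{frame}, then upscaled by the neural network to {target[0]}x{target[1]}"

selected = (3840, 2160)   # the resolution you picked in the game's menu
internal = (2560, 1440)   # hypothetical sub-native render; the real figure isn't disclosed

# What you might reasonably expect "4K with DLSS" to mean:
expected = render_frame(selected)

# What the shipping DLSS titles actually do: quietly render below the selected
# resolution, then hand the frame to the DLSS filter to reach the target.
actual = dlss_upscale(render_frame(internal), selected)

print(expected)
print(actual)
```

The only point of the sketch is the order of operations: the resolution drop happens before the neural network ever runs.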
Jeff was frustrated last year by the fact that the only software he had with which to test DLSS was a couple of canned demos that didn’t play nice with our performance profiling tools. So, to see the performance impact of DLSS in a real game, I tested Monster Hunter World on the RTX 2070 Super card. I ran the game through the same benchmark as before at 2560×1440, at 3840×2160, and then at 3840×2160 with DLSS enabled.
Certainly, setting DLSS to “On” improves the game’s performance. Make no bones about it: 4K with DLSS runs much better than without. The problem I have is that, as discussed above, 4K with DLSS isn’t really 4K. In fact, I feel the proper comparison is to the lower rendering resolution. Considered that way, DLSS actually has a seriously deleterious effect on performance.
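To put rough numbers behind that judgment, here is a quick back-of-the-envelope pixel-count comparison. It is only arithmetic, and it makes no claim about the actual internal resolution DLSS uses, since the game doesn’t report it.

```python
# Pixels shaded per frame at each rendering resolution.
native_4k = 3840 * 2160      # 8,294,400 pixels
qhd       = 2560 * 1440      # 3,686,400 pixels

print(f"Native 4K shades {native_4k / qhd:.2f}x the pixels of 2560x1440")
# -> Native 4K shades 2.25x the pixels of 2560x1440

# Whatever internal resolution "4K with DLSS" actually renders at, it sits
# below 3840x2160, so its shading workload lands closer to the 2560x1440
# column than to the native-4K one.
```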
I don’t know what the base resolution of the DLSS’d image is (the game doesn’t tell me), but “4K with DLSS” certainly runs a lot worse than 2560×1440 without it. The problem is that, at least to my eyes, it doesn’t look that much better, either. Below are three images taken in Monster Hunter World’s Research Base, a richly detailed area with lots of complicated geometry.
I enthusiastically encourage you to download these images and display each one full-screen in a photo viewer like IrfanView or XnView, blowing up the smaller image as necessary. You don’t have to use a 4K monitor for this, but obviously that’s the best option. Looking carefully at the full-resolution shots, you can clearly see that the DLSS image has less aliasing than the 2560×1440 shot, yet it also muddies detail in distant areas even more than the lower-resolution image does. There’s no comparison to be made with the native 4K image.
Furthermore, DLSS looks bizarre in motion. I didn’t produce a video because the encoding on YouTube or another service would surely crush the critical details, but others have described the effect as being “like an oil painting,” and I find myself intuitively agreeing with that assessment. In motion, DLSS adds a strange “shimmering” that makes details shift and swirl on static surfaces. It’s subtle, and I might not even have noticed it if I hadn’t been looking for it, but either way, the final product looks nothing like “real” native 4K rendering.
I have a lot of reservations about DLSS. I don’t approve of the way it is marketed or the way it is implemented. The drop in rendering resolution is plainly there for the user to see, yet the software doesn’t communicate it at all. Worse, I simply don’t think the effect is convincing, at least in Monster Hunter World. I do think the technology is fascinating, and I applaud Nvidia’s ingenuity, but I don’t think it achieves what Nvidia wanted. I’ll probably just stick to the lower resolution without DLSS and enjoy more consistent performance.