Weighing the trade-offs of Nvidia DLSS for image quality and performance

While Nvidia has heavily promoted the ray-traced effects its GeForce RTX 2080 and RTX 2080 Ti graphics cards make possible, the deep-learning super-sampling (DLSS) tech that those cards' tensor cores unlock has proven a more immediate and divisive point of discussion. Gamers want to know whether it works and what tradeoffs it makes between image quality and performance.

Eurogamer's Digital Foundry has produced an excellent deep dive into the tech with side-by-side comparisons of TAA versus DLSS in the two demos we have available so far, and ComputerBase has even captured downloadable high-bit-rate videos of the Final Fantasy XV benchmark and Infiltrator demo that reviewers have access to. (We're uploading some videos of our own to YouTube, but 5-GB files take a while to process.) One common thread of those comparisons is that both of those outlets are impressed with the potential of the technology, and I count myself as a third set of eyes that's excited about DLSS' potential.

While it's good to be able to look at side-by-side still images of the two demos we have so far, I believe that putting your nose in 100% crops of captured frames is not the most useful way of determining whether DLSS is effective. You can certainly point to small differences between rendered images captured this way, but I feel the more relevant question is whether these differences are noticeable when images are in motion. Displays add blur that can obscure fine details when they're moving, and artifacts like tearing can significantly reduce the perceived quality of a moving image for a game.

Before I saw those stills, though, I would have been hard-pressed to pick out differences in each demo, aside from a couple of isolated cases like some more perceptible jaggies on a truck mirror in the first scene of the FFXV demo in DLSS mode. To borrow a Daniel Kahneman-ism, I'm primed to see those differences now. It's the "what has been seen cannot be unseen" problem at work.

This problem of objective versus subjective quality is no small issue in the evaluation of digital reproduction of moving images. Objective measurements such as the peak signal-to-noise ratio, which someone will doubtless produce for DLSS images, have been found to correlate poorly with the perceived quality of video codecs as evaluated by human eyes. In fact, the source I just linked posited that subjective quality is the only useful way to evaluate the effectiveness of a given video-processing pipeline. As a result, I believe the only way to truly see whether DLSS works for you is going to be to see it in action.
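
As a concrete illustration of what a metric like PSNR actually measures, here is a minimal sketch in Python; the `psnr` function and the 8-bit-frame assumption are mine for illustration, not taken from any particular analysis tool.

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio between two same-sized 8-bit frames.

    Higher is "better" by this metric, but as noted above, it can
    correlate poorly with what human viewers actually perceive.
    """
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)  # mean squared error over all pixels and channels
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10((max_value ** 2) / mse)

# Example: compare a TAA capture against a DLSS capture of the same frame,
# both loaded as HxWx3 uint8 arrays from whatever capture tool you prefer.
# print(psnr(taa_frame, dlss_frame))
```

A pair of captures can score similarly by this measure yet feel very different in motion, which is the crux of the objection above.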

This fact may be frustrating to folks looking for a single objective measurement of whether DLSS is "good" or not, but humans are complex creatures with complex visual systems that defy easy characterization. Maybe when we're all cyborgs with 100% consistent visual systems and frames of reference, we can communicate about these issues objectively.

What is noticeable when asking a graphics card—even a powerhouse like the RTX 2080 Ti—to render a native 4K scene with TAA, at least in the case of the two demos we have on hand, is that frame-time consistency can go in the toilet. As someone who lives and breathes frame-time analysis, I might be overly sensitive to these problems, but I find that any jerkiness in frame delivery is far, far more noticeable and disturbing in a sequence of moving images than any tiny loss of detail from rendering at a lower resolution and upscaling with DLSS, especially when you're viewing an average-size TV at an average viewing distance. For reference, the setup I used for testing is a 55" OLED TV about 10 feet (three meters) away from my couch.

FFXV with DLSS

FFXV with TAA

The Final Fantasy XV benchmark we were able to test with looks atrocious when rendered at 4K with TAA—not because of any deficit in the anti-aliasing methods used, but because it's a jerky, hitchy mess. Whether certain fine details are being rendered in perfect crispness is irrelevant if you're clawing your eyes out over wild swings in frame times, and there are a lot of those when we test FFXV without DLSS.

Trying to use a canned demo with scene transitions is hell on our frame-time analysis tools, but if we ignore the very worst frames that accumulate as a result of that fact and consider time spent beyond 16.7 ms in rendering the FFXV demo, DLSS allows the RTX 2080 to spend 44% less time working on those tough frames and the RTX 2080 Ti to cut its time on the board by 53%, all while looking better than 95% the same to my eye. Demo or not, that is an amazing improvement, and it comes through in the smoothness of the final product.
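
For readers unfamiliar with the "time spent beyond X" metric, the bookkeeping is simple: every frame that takes longer than the threshold to render contributes only its excess over that threshold to the total. Here is a minimal sketch of the idea; the function and the example numbers are illustrative, not our actual tooling.

```python
def time_beyond_threshold(frame_times_ms, threshold_ms=16.7):
    """Sum the portion of each frame time that exceeds the threshold.

    A 25-ms frame against a 16.7-ms threshold contributes 8.3 ms;
    frames at or under the threshold contribute nothing.
    """
    return sum(max(0.0, ft - threshold_ms) for ft in frame_times_ms)

# Example: a mostly smooth run with a couple of hitches
frame_times = [16.0, 16.5, 33.4, 16.2, 45.0, 16.1]
print(time_beyond_threshold(frame_times))  # ~45 ms accumulated past the 60-Hz budget
```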

At least with the quality settings that the benchmark uses, you're getting a much more enjoyable sequence of motion to watch, even if not every captured frame is 100% identical in content from TAA to DLSS. With smoother frame delivery, it's easier to remain immersed in the scenes playing out before you rather than be reminded that you're watching a game on a screen.

Some might argue that Nvidia's G-Sync variable-refresh-rate tech can help compensate for any frame-time consistency issues with native 4K rendering, but I don't agree. G-Sync only prevents tearing across a range of refresh rates—it can't smooth out the sequence of frames from the graphics card if there's wild inconsistency in the timing of the frames it's asked to process. Hitches and stutters might be less noticeable with G-Sync thanks to that lack of tearing, but they're still present. Garbage in, garbage out.

The Epic Infiltrator demo with DLSS. Vsync was on to permit better image quality evaluation

The Epic Infiltrator demo with TAA. Vsync was on to permit better image quality evaluation

The same story goes for Epic Games' Infiltrator demo, which may actually be a more relevant point of comparison to real games because it doesn't have any scene transitions to speak of. With DLSS, the RTX 2080 cuts its time spent past 16.7 ms on tough frames by a whopping 83%. The net result is tangible: Infiltrator becomes much more enjoyable to watch. Frames are delivered more consistently, and major slowdowns are rare.

The RTX 2080 Ti doesn't enjoy as large a gain, but it still reduces its time spent rendering difficult frames by 67% at the 16.7 ms threshold. For minor differences in image quality, I don't believe that's an improvement that any gamer serious about smooth frame delivery can ignore entirely.

It's valid to note that all we have to go on so far for DLSS is a pair of largely canned demos, not real and interactive games with unpredictable inputs. That said, I think any gamer who is displeased with the smoothness and fluidity of their gaming experience on a 4K monitor—even a G-Sync monitor—is going to want to try DLSS for themselves when more games that support it come to market, if they can. They can then judge whether the minor image-quality tradeoffs other reviewers have established are noticeable to their own eyes versus the major improvement in frame-time consistency and smooth motion we've observed thus far.

Comments closed
    • Drifter
    • 1 year ago

    What is this rubbish, Jeff? Frametime comparison? Why not a frametime comparison of taa vs msaa? Any difference you see is probably just a result of it having a temporal requirement.

    No one anywhere is concerned with the frametime of temporal anti-aliasing. I dunno mate, if you really are interested in all this stuff I reckon you should read the white papers and do the degree and get work in a similar field.

    It’s very interesting stuff, these empirical machine learning algorithms they’ve set up. There’s a few papers on the Nvidia website. Just a lot of d/dx iterated to one output.

    I probably would’ve, but they fucking disabled me, so they took that future from me.

      • Voldenuit
      • 1 year ago

      [quote<]What is this rubbish, Jeff? Frametime comparison? Why not a frametime comparison of taa vs msaa? Any difference you see is probably just a result of it having a temporal requirement.[/quote<]
      Whoa, hold your horses there. AFAIK, the FFXV demo available to reviewers to test only has TAA and DLSS as available options, so unless I'm mistaken, don't throw anyone under the bus just yet.

    • DavidC1
    • 1 year ago

    Previous post-AA methods seem to blur images, and looking at some videos, DLSS seems like it’s no exception.

    Even traditional methods like FSAA or MSAA are known to introduce blur. Blur = Loss of definition and often not desirable.

    I can see Ray Tracing eventually gaining adoption, as there are definite advantages to it, and the current over-the-top implementation is going to be toned down to get more realistic images and better performance.

    DLSS? Just add it to the ever-growing pile of AA methods. All methods seem to have advantages and disadvantages.

    • Andrew Lauritzen
    • 1 year ago

    I agree that smoothness is king and that DLSS is a good trade-off to that end. I also agree that native shading @ 4k is generally a waste of resources.

    But even in the Infiltrator demo, there are several instances of swimming, flickering, and general undersampling/aliasing that occur with DLSS that don’t with the “native” shading rate, and I don’t think it’s good to entirely paper over those. In a lot of cases it’s probably still a good trade-off, but DLSS is not immune to the fact that it is rendering at 1440p or whatever.

    Furthermore I’d really like to see a comparison with a nicely tuned TAA-based upsampling from 1440p -> 4k. That DLSS is faster than native 4k shading should surprise no one, but the real question is whether it delivers significantly more quality than conventional temporal upsampling-based filters, given that it is effectively proprietary and has a fair amount of overhead itself, even on hardware with ML acceleration.

    My guess is that it improves quality somewhat compared to a good TAA-based upsampling solution rendering at the same resolution, but is somewhat slower. That’s a far more interesting comparison than NVIDIA effectively trying to sell the tech by just convincing people that smart temporal upsampling can look good at 4k… this is something we already know 😛

      • Chrispy_
      • 1 year ago

      I’d like to see 1440p-to-4K TAA upsampling vs DLSS as well.

      Of course DLSS is faster than native 4K, but it seems a lot slower than 1440p ought to be!

    • Phartindust
    • 1 year ago

    How does DLSS compare to other AA besides TAA?

    • Voldenuit
    • 1 year ago

    I’d be interested to know what happens when you try to use DLSS on a game where you’re running a ReShade mod. Will it break? Does it depend on where in the pipeline changes are made? How extreme the mod is? What if you’re not even using a ReShade mod but, say, a single player mod that adds new enemy units, terrain or visual effects?

      • ptsant
      • 1 year ago

      I don’t know what a ReShade mod is, but it would most likely break if you use it with a texture patch that it hasn’t been trained for and is radically different from the original game.

      • auxy
      • 1 year ago

      Reshade is not likely to work on any game that implements DLSS. However, if you get it working, it will probably apply after DLSS is applied and so might be OK.

      Any mod that adds new content will break DLSS as it will not understand what it is “supposed” to look like. Likewise any mod that changes existing content.

      Of course, game developers don’t want you modding their games anymore anyway. (See: lack of mod tools, Bethesda Creation Club, Denuvo DRM). This is the future Microsoft wants, too, with its Windows Store and UWP completely locking down games from any kind of modification. Buy DRM-free! (/・ω・)/

    • Jeff Kampman
    • 1 year ago

    Both demos now have DLSS and TAA videos in 4K—sorry for the upload/processing delay.

      • ptsant
      • 1 year ago

      Can you please add some paired screenshots? We know which is more fluid, the numbers are clear. Image quality is the issue.

    • Pville_Piper
    • 1 year ago

    It has been my biggest question about ray tracing… When I’m playing a game like Battlefield am I going to use it? Am I going to notice a big difference in it?

    Will using it introduce lag in the game where it struggles to get to 60 Hz, so much that I get killed by anybody running a 144 Hz setup, 100+ fps, and medium settings?

    When I was running my GTX 970 on Ultra (or even High) with Battlefield 1, my computer would run between the mid 50s and 90 fps. I felt like I was getting killed a lot of times when I shouldn’t have. I just felt like I was getting beat; the screen said one thing, the game another. By lowering my settings so that the GPU could render at 100+ fps (preferably 144), I still died, but I didn’t have that feeling like the monitor was out of sync with the game. G-sync doesn’t fix that kind of lag issue. I was convinced that you couldn’t play at Ultra.

    But when I upgraded to the GTX 1080 for my VR games, I tried Ultra, which the GTX 1080 could run at 144 fps and no lag issues. I love the way the game looks and it is more immersive because of the higher graphics quality.

    I just don’t believe that I could play Battlefield V with a card that struggles to make 60 fps and enjoy it, no matter how good it looks, because of the lag issues it will likely introduce.

      • fellix
      • 1 year ago

      The problem with this first-gen RTX implementation (as with any other tech) is that it always compromises on performance. When AMD was first on the market with a DX11-compatible GPU, the marquee feature (tessellation) was slow enough to be barely usable in any serious capacity, and later Fermi just mopped the floor with it. The issue with ray-tracing is no different: all we see now is only a partial (hybrid) implementation, as an afterthought, in existing games and engines, with rather costly performance drops. 7-nm tech just can’t come soon enough.
      Ray-tracing (and particularly path-tracing) is fundamentally underdeveloped for real-time applications, since little or nothing was invested in that direction for the past 30 years, in contrast to direct rasterization. And now it has to play catch-up at a fast pace, with many things bound to go sideways.

      • Jeff Kampman
      • 1 year ago

      DLSS is using different on-chip resources than ray-tracing effects, so we can’t say anything about ray-traced effects performance from the DLSS data we have so far.

        • Voldenuit
        • 1 year ago

        Jeff, doesn’t the de-noiser for ray tracing use the Tensor cores?

          • DoomGuy64
          • 1 year ago

          Yes. But I would like some clarification on what these “RT cores” really are, and how having 3 different cores works in practice. Sounds like a bottleneck to me, especially considering how the RT core is “inside the Turing SM”. Can we isolate raytracing to a lower resolution and framerate from the rest of the system, and not take such a massive hit across the board?

          Either way, DLSS uses the tensor cores too, and performance will likely take an additional hit using DLSS with ray tracing, although the benefits of upsampling would outweigh rendering ray tracing at a higher resolution. But that all depends on how well the tensor cores handle multiple simultaneous tasks. In theory, it shouldn’t be a problem, but in practice it is.

          There could be potential, but it’s going to need massively better optimization from what currently exists. Ray tracing needs to be separated from the rest of the graphics pipeline, in order to not take such a massive hit.

            • psuedonymous
            • 1 year ago

            [quote<]But I would like some clarification on what these "RT cores" really are, and how having 3 different cores works in practice. [/quote<]

            CUDA cores: general purpose INT and FP operations. Adds, subtracts, divides, multiplies, etc.

            Tensor cores: Matrix FMA operations. If you cannot pack all your work into two matrices to be multiplied then added to a third matrix, the Tensor cores are useless to you. Luckily, NN operations work really well when expressed as FMA operations. A single Tensor core is about 32x the speed of a single CUDA core at doing a 4x4x3 matrix FMA operation.

            RT cores: These do rapid binary tree traversal. If you have a problem that is not "what node in a tree is this item?" these are no good for you. Luckily, traversing binary trees is what needs to be done every time a ray is cast, so these get a significant workout. Direct operation speedup is unknown, as we only have comparisons between the 1080 Ti with Pascal CUDA cores and the 2080 Ti with RT cores active (we would need a comparison of Turing CUDA cores vs. RT cores), but at least 10x is likely if not more.

            The reason for ‘wasting’ silicon on units that can’t do general purpose operations is because these special-purpose units operate dramatically faster than the CUDA cores in their specific operations.
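
            To make the Tensor-core point above concrete, here is a toy NumPy sketch of the matrix fused multiply-add pattern (D = A×B + C) that such units accelerate; it only shows the shape of the work on 4x4 tiles, not the mixed-precision details or throughput of the real hardware, and none of the names come from any Nvidia API.

```python
import numpy as np

# Tensor cores accelerate a fused multiply-add on small matrix tiles:
# D = A @ B + C. On Turing the inputs are typically FP16 with FP32
# accumulation; here we only demonstrate the operation itself.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C

# A neural-network layer is many such tiles at once: a (batch x in) activation
# matrix times an (in x out) weight matrix plus a bias, which is why inference
# work like denoising or DLSS-style upscaling maps so well onto this unit.
print(D)
```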

            • auxy
            • 1 year ago

            You can do ray-tracing at a lower resolution. The BFV developers at DICE told Digital Foundry this during the initial reveal: they could rasterize at, say, 4K, but do ray-tracing at 1080p.
            [quote<]Ray tracing needs to be separated from the rest of the graphics pipeline, in order to not take such a massive hit.[/quote<]
            This doesn’t make any sense. Hybrid rasterization with RTX relies on the fact that the GPU already has the scene set up, and it uses that geometry to perform the ray casting. (´Д⊂ヽ

            • DoomGuy64
            • 1 year ago

            I know that, and it’s not what I’m saying. The new cards are raster monsters, but raytracing absolutely cripples them when it’s tied to the main pipeline. Raytracing needs to be rendered completely independently from rasterization. Let the game run at 60 fps, while raytracing runs at 30 fps. It’s an unnecessary bottleneck that is holding back the performance of the rest of the card’s resources. If you can’t make raytracing independent of rasterization, there’s no point to buying this card for raytraced games, because the rest of the GPU will be completely held up by the RT cores.

            edit:
            Think of this like animated textures in a game. Run raytracing like an animated texture, while the rest of the game runs at full speed.

            • Redocbew
            • 1 year ago

            Maybe the reason you think ray tracing “cripples” the card is because ray tracing is hard? Having specialized hardware for it doesn’t reduce the amount of work that needs to be done. It just provides tools better suited to the job. It sounds like you’re confusing the mixing of procedural and heuristic techniques that Andrew has been talking about elsewhere in this thread with this idea that ray tracing needs to somehow be moved somewhere else within the GPU.

            • Voldenuit
            • 1 year ago

            This works fine until you run into a game using RT for global illumination and point light sources. Having the lights blink on and off every other frame is gonna be distracting. Also, light reflecting off objects turning off and on? Either the game is haunted, or it’ll break immersion.

            • Redocbew
            • 1 year ago

            Speaking of haunted games, if someone remade the original FEAR so it supported ray tracing I’d play the hell out of that.

            • DoomGuy64
            • 1 year ago

            Why would you have lights turn off? Just leave it on until the next update, and update RT lighting at a reduced rate. You don’t need to raytrace the entire screen on top of rasterizing. It’s called hybrid rendering for a reason. Use raytracing for [i<]specific effects[/i<], and not the entire screen. You can already do global illumination and other effects via DX11, so doing everything in RT is sub-optimal and inefficient.

            Here’s an example: BF:V reflecting an explosion. Run the reflection @ 30 fps like an animation, and don’t bottleneck the rest of the engine to the RT core’s performance capability. You then get a full 60 fps game with 30 fps reflections. Raytracing is simply not going to perform acceptably unless you do this.

            The RTX architecture supports variable rate shading. I’m simply suggesting that you apply VRS to ray tracing. Here is a link for inside-the-box thinkers who cannot wrap their minds around such a concept unless it’s handed to them: [url<]https://www.tomshardware.com/reviews/nvidia-turing-gpu-architecture-explored,5801-9.html[/url<]

            • Voldenuit
            • 1 year ago

            I can picture the sun stuttering across the sky at 30 fps, or light/shadow rays from windows doing the same. I’d be okay with decoupling lighting/raster at 120/60 fps, but 60/30? I’ll wait for the 3080 Ti.

            • DoomGuy64
            • 1 year ago

            That’s not how it works in reality. If you’re doing it right, you don’t over-emphasize the effect, and you use [i<]motion blur[/i<] like a video camera. It’s not distracting or unrealistic if you do it right. Variable rate shading also means you can throw more resources at the section you’re directly looking at for faster framerates.

            You seem to be living in this imaginary world where the 2080 Ti hasn’t been reported to get 1080p 30 FPS for raytracing, making it useless without some level of compromise. That, or the 3080 Ti comment means you don’t care about making it perform acceptably on the 2080 Ti at all. The only possible way to get playable framerates with existing technology is to use every cheat available to maximize efficiency. So you should stop your infantile complaining about quality loss, because no matter how you do it, you’re still getting 30 FPS with raytracing. At least with my suggestion, the rest of the game can run at higher framerates and resolution.

            • Voldenuit
            • 1 year ago

            I never said that developers shouldn’t try to decouple raster from raytrace, just that it may be more complicated to do so and have more considerations than at first glance. If (remember, [i<]if[/i<]) a given implementation creates more problems than it solves, then I’d personally turn it off on my end until the hardware and software catch up.

            I do believe that RTRT is going to be the future of game graphics, but ho-lee it takes a lot of horsepower. If that constitutes infantile complaining, then so be it.

            • DoomGuy64
            • 1 year ago

            This time isn’t, but your earlier posts were overly confrontational and certainly seemed like it.

            Considering how Nvidia is directly working with developers to include raytracing, I don’t think it would be more complicated for them to implement. It would just require the Nvidia engineers to compile the optimizations into the raytracing API and make those optimizations easily accessible to the game developers. The game developers can then choose what level of optimization they want to implement, and it would be even better if those options were ultimately made accessible to the end user, so that 2070 users could max out optimizations while Ti users could enable higher quality. The absolute worst-case scenario is that this is never done at all, and you would have to wait for the 3080 series to use raytracing.

            • Redocbew
            • 1 year ago

            You must not have read Auxy’s post yet if you think this is confrontational.

            • Voldenuit
            • 1 year ago

            [quote<]This time isn't, but your earlier posts were overly confrontational and certainly seemed like it. [/quote<]
            Frankly, I thought I was much more polite and reasoned than common decency warrants to someone who called me an ‘infantile complainant’.

            • DoomGuy64
            • 1 year ago

            What a history revisionist. The posting system doesn’t work in reverse, and you were definitely being juvenile.

            “I can picture the sun stuttering across the sky” was [i<]before[/i<] the more reasonable, “I never said that developers shouldn’t try”. Don’t sit there and pretend you weren’t being ridiculous all the way up until that later post. Hell, you went right back to being rude, so that one post was the only time you were being reasonable.

            If you don’t like being called rude, stop acting ridiculous and posting in such a bipolar manner. It’s either that, or you’re attempting to gaslight. I can’t tell because I’m not a mind reader. If it’s the latter, that tactic does not work on me, and has the opposite effect. If you’re going to be reasonable, be reasonable. If not, drop the pretense. I can recognize a faker a mile away.

            • Redocbew
            • 1 year ago

            Dude, this is a very strange and speculative hill to die on, no? In fact, I’m pretty sure it’s not even confirmed to be a hill. I wouldn’t call it that, but what’s below a hill? A mound? A slight incline? Whatever it is, let it go man.

            • DoomGuy64
            • 1 year ago

            It’s the other way around, dude. My point was never Voldenuit, it was optimization. His point was dismissing the argument and me. I call it out, he pretends like he wasn’t doing it, and then immediately afterwards continues to do it. There’s your hill, but I’m not the one standing on it.

            My only point in this whole thing is that Nvidia has the ability to decouple raytracing from rasterization, and it should be done to improve performance. Games don’t need to be fully raytraced either, and using it where it counts is more efficient than raytracing the entire scene with diminishing returns.

            Make of that what you will. Nvidia doesn’t have to do anything, and they probably won’t, because it will sell more 3080s if they never optimize. Of course, the 3080s will be using optimization along with increased performance, because by then AMD might have RT-capable hardware and Nvidia will need to compete.

            I’m just mentioning that the capability for optimization already exists in the 2080, and it can be done. Which is probably too controversial for people drunk on Kool-Aid and stock prices. Nvidia optimizing the 2080 too much could hurt long-term investments, so sometimes it’s hard to tell whether or not some of this knee-jerk reaction is fanboyism or trolling to keep stock up via repeat sales. Either way is ridiculous for people not invested in the system, and it should be a requirement for posters to disclose whether or not they own Nvidia stock before making posts. Why else would anyone dismiss calls for optimization in favor of the 3080? Ridiculous. There’s no point to raytracing if it can’t run acceptably.

            • psuedonymous
            • 1 year ago

            Not if combined with another (hardware accelerated on Turing) feature: texture-space shading. Rather than storing shading results in a screen-space buffer, you effectively paste them onto a texture mapped to the object. If that object then moves or is otherwise transformed in the next frame, you can still grab that shaded texture sample in the correct new location without needing to perform a screen-space estimated transformation (or storing a ‘transform log’ and performing a double coord conversion + transform). Shading rate can thus be decoupled from update rate, at the peril of the output being inaccurate (e.g. specular highlights not immediately shifting as the object shifts), though no worse than the same update rate applied to the whole frame.
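
            As a rough sketch of the decoupling described above, here is a toy Python loop that shades into an object-space texture at half the display rate and samples that cache every presented frame; `shade_texel` and `present` are invented stand-ins for illustration, not any real engine or Nvidia API.

```python
# Toy sketch: texture-space shading with the shade rate decoupled from
# the present rate (e.g. shade at 30 Hz while presenting at 60 Hz).

SHADE_INTERVAL = 2   # re-shade the cache every other presented frame
TEX_SIZE = 4         # tiny stand-in for an object's texture atlas

def shade_texel(u, v, frame):
    # Stand-in for expensive lighting or ray-traced shading work.
    return (u + v + 0.01 * frame) % 1.0

def present(texture, frame, shaded_at):
    # Stand-in for rasterizing the object at its *current* position while
    # sampling the cached, possibly stale, shading results.
    print(f"frame {frame}: presented with shading computed at frame {shaded_at}")

def render(num_frames=4):
    texture = [[0.0] * TEX_SIZE for _ in range(TEX_SIZE)]
    shaded_at = 0
    for frame in range(num_frames):
        if frame % SHADE_INTERVAL == 0:
            # Refresh the object-space shading cache at the reduced rate.
            for i in range(TEX_SIZE):
                for j in range(TEX_SIZE):
                    texture[i][j] = shade_texel(i / TEX_SIZE, j / TEX_SIZE, frame)
            shaded_at = frame
        # Motion stays at the full present rate; only the shading is stale by
        # up to SHADE_INTERVAL - 1 frames (e.g. a specular highlight lags a bit).
        present(texture, frame, shaded_at)

render()
```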

            • DoomGuy64
            • 1 year ago

            This. Combine this with variable rate shading and you have full framerate gameplay with raytracing used only where it makes a meaningful impact, like reflections.

          • Jeff Kampman
          • 1 year ago

          It’s one way developers can de-noise an image, but Nvidia notes that it’s hardly the only way and that we’ll see a variety of approaches.

      • Srsly_Bro
      • 1 year ago

      CS:GO does it with 64-tick servers, where high refresh rates don’t give you the advantage. I do much better in 128-tick servers, too, with a 144 Hz monitor.

        • Puiucs
        • 1 year ago

        Even with a 64-tick server when playing CS:GO, high FPS still helps a lot in reducing input lag (although it helps even more if you also have a high-refresh-rate monitor).

    • chrcoluk
    • 1 year ago

    DLSS is the properly exciting new tech and RT a clear gimmick, but the performance potential from DLSS is very nice.

    However, the problem is I think it will gain limited traction at best.

    CSAA is long expired.
    MFAA barely had any impact on games after the feature launched, because MSAA is now mostly abandoned by developers.
    I expect DLSS will appear in games where Nvidia gives the devs a nudge to add it, so we will see it in some games, but not many, and within two card generations something will replace it and its traction will be gone.

    This is the problem: it’s basically Nvidia’s track record. They don’t make their technology last, and the lack of longevity means I don’t consider DLSS worth the extra 40% cost of a GPU for the special cores.

    • ptsant
    • 1 year ago

    I think to the extent that you can just turn it on/off, there is nothing NOT to like about DLSS. Just choose what you prefer. The only question is how much of a $$$ bonus you are going to give for DLSS support over a 10X0. I would probably pay 10% extra for the hope that my GPU will be future proof…

    EDIT: Wow, anyone care to explain why you are downvoting? DLSS is an option, right? You’re not losing TAA. So you either like and pay some $$ for it or don’t like it.

      • derFunkenstein
      • 1 year ago

      I’m not among your anonymous detractors, but it seems to be you’re equating the price hikes with only the RTX features. In reality, performance [url=https://techreport.com/review/34112/nvidia-geforce-rtx-2080-graphics-card-reviewed/12<]scales roughly linearly[/url<] with price and the ray-tracing and DLSS features are a bonus.

        • ptsant
        • 1 year ago

        Maybe I wasn’t very clear. I was thinking about 1080 Ti vs 2080 (or 1080 vs 2070, depending on how benchmarks come out). For example, in local prices I can find the 1080 Ti at $800 but the 2080 starts at $950. Which is probably a bit more than I would pay for the extra performance, but I could be tempted by the RTX/DLSS features.

        Obviously, if you are after the 2080 Ti then there is no 10×0 equivalent.

          • Freon
          • 1 year ago

          There have been some good deals on the 1080 Ti, like <$600 if you hunt hard enough, while the 2080 is $790+ if you can find one in stock. TR is using what appears to be the most recent official MSRP, and while that’s a reasonable way to approach this, it isn’t the whole story.

            • ptsant
            • 1 year ago

            Indeed, that’s what I’m seeing here too. I always have a hard time extrapolating with USA prices but we’re pretty close to a $200 difference here. Oh, and the ASUS 2080 Ti hit an astronomical $1499 in the biggest national e-shop.

      • synthtel2
      • 1 year ago

      I didn’t downvote; maybe it’s “future proof”? That implies cards without it will be substantially hampered by the lack of it as time goes on, which I highly doubt. (Variable rate shading might be a good bet for that though, particularly when it comes to VR.)

        • ptsant
        • 1 year ago

        It’s too early to tell. Based on the perf estimates the RTX features might need a couple of generations to mature. I mean if the perf penalty is huge (which we don’t know) then it might only become viable in later iterations of the hardware. It may really take 50 GRays/s to make actual use of the technology in games, for example.

        I had the exact same problem when I bought one of the first cards that had the “3D” sticker back in 1996-97, the S3 ViRGE. It was supposed to be a 3D accelerator, but the performance was barely adequate and only used for a few games (Descent, Tomb Raider). A bit later, when the Voodoo and Verite came out, they were a massive success and my “future-proof” 3D card was essentially only useful for 2D.

        All that being said, the perf that is already on the table is undeniably phenomenal. I wouldn’t hesitate to buy if I needed that today (and could afford it). DLSS is a nice cherry on top, as I wrote above.

          • djayjp
          • 1 year ago

          Nah the new Metro game uses it as its new general lighting solution. It definitely impacts the framerate but it’s still decent for a beta. Seems very usable today.

      • rahulahl
      • 1 year ago

      It’s an option, but if you are gonna leave it off anyway, then you just purchased all those tensor cores that are doing nothing. Those cores could have been replaced by more traditional cores, and your performance would have gone up in all games without the need for options in certain games.

      I like the idea of DLSS. In fact I love the idea of DLSS. However it does not change the fact that you are getting gimped if you purchase this hardware and intend to leave all the tensor core options off and unused. You are literally paying for stuff which you will never use.

      I am personally waiting to buy the next series. Hopefully they will have worked out the pricing a bit better by then.

        • ptsant
        • 1 year ago

        What I’m saying is that this is a game-by-game decision. So, maybe you use it a couple of times. Also, it might depend on what monitor you have. I have a 1440p monitor, so I probably don’t need to consider this, but if I upgrade to a shiny 4K I might be tempted to turn it on.

    • synthtel2
    • 1 year ago

    Fire up DOOM, set resolution to 70%, use TSSAA 8x, and compare. It isn’t as good, but it’s close and a lot cheaper.

    This isn’t magic.

    If anyone but Nvidia tried this, everyone would be up in arms about how DLSS 2x should be the baseline variant to be used in performance comparisons. It’s good that they’re getting people to realize that shading 8 million pixels per frame at full quality is a bit ridiculous (and that better tech to let us not do that is a thing we should care about), but I can’t help but be a bit salty at how they seem to have convinced half the internet that they’re playing in a whole different league.

      • auxy
      • 1 year ago

      I think Doom’s TSSAA 8x looks crap, and applying it to a lower-resolution image certainly isn’t going to help. It’s blurry and imprecise, and I totally get that this looks “better” to some people because it shaves off sharp aliasing artifacts but to me it just looks blurry. Just throw a blur filter over the whole screen like the old Quincunx AA method if that’s what you’re after. ( ;一一)
      [quote<]It's good that they're getting people to realize that shading 8 million pixels per frame at full quality is a bit ridiculous (and that better tech to let us not do that is a thing we should care about)[/quote<]
      On the one hand I totally get what you're saying. If we can do less work for functionally-the-same results then great! (*'▽') And I do totally understand that "accuracy" isn't necessarily preferable to "looks good" (see: discussion of pSNR in video encoding).

      HOWEVERRRRRRRR, this whole "rendering 4K at full detail is a waste" thing that you and Andrew Lauritzen seem to be taking not only as truth but in fact a foregone conclusion is BEYOND batshxt to me. Even on a 24" display (higher PPI), 4K reso is not high enough to avoid aliasing artifacts in any game, even old stuff like Quake. Talking about it like it's "too much" is so far beyond crazy it's past "negligent" and into "irresponsible" IMO. We can't stop pushing resolution here, we're not even halfway to that point.

      I'm interested in ray-tracing but DLSS just seems like it is a solution in search of a problem. That is to say that I think Nvidia created these chips as the one-architecture company that they are, and then said "hmm, all these tensors are going to go totally unused in games" and then came up with a situation where they could actually apply the tensors to game visuals. Given that it requires game support from the developers, game support from Nvidia, and GeForce Experience I think it's very clear and safe to say DLSS is actually even more of a gimmick than the raytracing stuff is at this time.

      Just like NV's TSAA and MFAA and CSAA and Quincunx before it, you'll see a few games using it this generation and then it'll never be heard from again. That's my prediction. ;つД`)

        • Srsly_Bro
        • 1 year ago

        “1080P with full details ought to be enough for everyone” – tech report 2k18

        • synthtel2
        • 1 year ago

        I agree that it looks like crap, I just think DLSS barely looks better, and they’re doing the same basic job (render at a lower res and they’ll spit out an image that can sort-of pass for full res).

        As far as present-day tech, yeah, 4K is pretty nifty. The difference is probably that Andrew and I are looking ahead to what’s achievable by developers in the near future, rather than what’s achievable by gamers in the present. One way or another, there’s an impressive amount of math that could be cut out of a typical 4K render without making it look much worse, if we could figure out how to make such cuts better. There’s no shortage of algos that could be useful, but the industry seems to have standardized on TAA as a good enough solution for the moment. DLSS should at least get people thinking about it again.

        • ptsant
        • 1 year ago

        If you think that 64x SS is great, then that is what they are using as ground truth to train DLSS and, if their training/expertise is right, DLSS should give you something reasonably close. That’s the whole point, but it remains to be proven in actual games and in carefully selected screenshots.

        I absolutely agree that they most likely (a) spent countless hours optimizing the demos and may not be as careful in actual games and (b) are quite likely to decide, down the road, that we need DLSS2 or super-DLSS a couple years later, abandoning the current version, as they’ve done in the past for the N iterations of their “special” AA method.

          • auxy
          • 1 year ago

          I really wasn't making any commentary about DLSS aside from that I think it's a gimmick due to its limitations. I'm sure that, for people who use GeForce Experience, and for at least some (if not most or even all) of the titles that support it, it could be a great option for image quality. Of course, that's quite a few caveats. I don't play AAAs in general, for example, so it's fairly unlikely that I will [i<]ever[/i<] play [i<]anything[/i<] that supports it. And I don't use GFE anyway. I don't really have too much doubt about the technique itself; it's more about the circumstances of it.

          All the talk about resolution is mainly just me expressing my frustration that people seem to think that 4K is "enough" or even "too much" resolution. Even if you have a 1080p screen, rendering in 4K and downsampling gives TREMENDOUS visual upgrades. (*'▽')

          Basically what I'm saying is that all of this "let's try to get 95% of the visuals with 50% of the work" seems premature given that, in my opinion, our visuals are only stacked up to about 25% of where they need to be. ('ω') Of course, DLSS is supposed to be able to give those visuals without the performance hit and I hope that's true, but again: most games won't support it, and it requires GeForce Experience. And it may not ultimately work all that well because it really depends on the quality of the trained model. ┐(゚~゚)┌

          So I hope DLSS is good, but ultimately it's not likely to matter much to me. (;´・ω・) Especially since I just got another 1080 Ti. LOL

            • djayjp
            • 1 year ago

            Curious: what non-AAA games do you play that require two 1080 Tis? :O

            • auxy
            • 1 year ago

            Hehe, I don’t have two. I sold my other 1080 Ti (a Zotac AMP! Extreme) for the same price I paid for it brand new a few months ago after my friend (Zak, a writer for TR here) did the same thing. I felt like it was a pretty smart idea to sell the old GPU for full price at the time, but then at that time we both thought the new Nvidia GPUs would be out any day.

            I just picked up one for $300 less than I paid last time, so I’m pretty happy about it. Of course, it’s used, and a crappy Asus Turbo blower model, but it works fine and I don’t care about the noise. Slam the fan to 100% and leave it there, that’s the way I roll! (*’▽’)

            I do use my 1080 Ti with a 1920×1080 display. It’s a 240 Hz monitor (BenQ XL2546), and I frequently use it in 3840×2160 mode with Nvidia DSR, particularly for single-player games and games where I can still get over 120 FPS consistently in 4K. The only games I play at native resolution are things where I really can’t run them in 4K at all, like Doom (2016), which is probably the last AAA I played for any length of time. I tried to play Prey but I refunded it because it is a terrible game made by terrible people. (・へ・)

            • djayjp
            • 1 year ago

            Ohhh gotcha. Yeah that was a good way to make a cool $300! Heh I still remember Nvidia (maybe even Huang) saying that their next gen won’t be out for a long time (but that was only about 6 months ago ha).

            Interesting about DSR and high framerate. I’m sure you’ll love DLSS2x! You should have a similar quality result as running in 4k with DSR but with the performance impact of running at 1080p! 😀

            Seems I should be thankful I’ve never played Prey lol

          • synthtel2
          • 1 year ago

          Hoping for something reasonably close to 64x SSAA out of DLSS is like thinking CSI’s “enhance image” scenes actually make any sense.

        • techguy
        • 1 year ago

        Calling outlets like TR negligent and irresponsible because they looked at DLSS and didn’t immediately rip into it – that’s absurd.

        You don’t want to use it? Don’t. Nothing is being forced down your throat. DLSS is an OPTION. If a gamer wants to sacrifice some image quality (arguably undetectable in motion) for higher FPS – who are you to say they shouldn’t?

          • auxy
          • 1 year ago

          I was not talking about TR. You failed to comprehend my post.

            • techguy
            • 1 year ago

            You failed to read the article:

            [i<]The Final Fantasy XV benchmark we were able to test with looks atrocious when rendered at 4K with TAA—not because of any deficit in the anti-aliasing methods used, but because it's a jerky, hitchy mess. Whether certain fine details are being rendered in perfect crispness is irrelevant if you're clawing your eyes out over wild swings in frame times, and there are a lot of those when we test FFXV without DLSS.[/i<]

            Said another way: it's a waste of resources. So yeah, when you attack people who have stated this opinion, you are attacking TR because the editor has voiced the same opinion.

            • auxy
            • 1 year ago

            Holy crap, your reading comprehension is terrible! Jeff says FFXV looks bad in 4K+TAA because it runs like toasted butthole. He says DLSS runs more smoothly. That’s all he says there. He doesn’t say “rendering in 4K+TAA is a waste of resources”, he simply says that it’s not a pleasant experience because the hardware can’t manage it.

            Stop replying to me. I’ve supported TR with my readership and also as a subscriber for years. Literally my best friend is one of the writers here. Piss off you ignorant cretin. (;･`д・´)

            • Redocbew
            • 1 year ago

            Don’t get me wrong Auxy, I often enjoy your posts, but having known many a firefighter once upon a time I can tell you that a toasted butthole can run pretty quickly.

            More seriously(heh), the way this is turning out I kind of feel bad for the people who worked on DLSS. There’s probably some pretty clever stuff involved there, but with the direction Nvidia has taken to market the stuff that’s probably just going to get lost in the noise. Maybe it will make more of an impact among developers though.

            • Voldenuit
            • 1 year ago

            [quote<]Don't get me wrong Auxy, I often enjoy your posts, but having known many a firefighter once upon a time I can tell you that a toasted butthole can run pretty quickly.[/quote<]
            Yes, but frame time analysis tells us he’s running pretty stiffly.

            • Redocbew
            • 1 year ago

            No doubt. Fortunately that’s one of those applications where speed matters more than latency.

            • techguy
            • 1 year ago

            Pretty sure there’s a terms of service here, like any other comment section, and stuff like this is highly frowned upon.

            • techguy
            • 1 year ago

            Being a subscriber and knowing someone that works for TR do not give you the right to speak this way to another person in a civilized society without consequences. You have every right to believe what you want, and even express it, you don’t have the right to be free from consequences though.

            Being downvoted also doesn’t make me wrong.

            • Voldenuit
            • 1 year ago

            I agree with you in principle, but I also think that, like the [url=https://sites.google.com/site/h2g2theguide/Index/g/704396<]great telephone and ventilation riots of SrDt 3454[/url<], everyone should be allowed to call one other person a ‘cretinous idiot’ at least once a month.

            EDIT: Forgot /s tag. I think Kampman or the TR staff should step in here. There’s a fine line between excitability and toxicity, and I personally agree that some people may have crossed the line in this thread, so it’s probably not a bad idea for TR to remind the community to act as civil as possible even when disagreeing with others.

            • Redocbew
            • 1 year ago

            [url<]https://www.youtube.com/watch?v=6ZUpfqTUgyQ[/url<]

            • Spunjji
            • 1 year ago

            You’re correct about this point very specifically.

            Unfortunately, if you want to pound on etiquette, it’s worth noting that you escalated this with your rather nakedly derisory “You failed to read the article…” response. I wouldn’t call Auxy’s response measured but I would definitely say it was appropriate.

          • Voldenuit
          • 1 year ago

          There were some very impressive (if subtle) ray-tracing effects in the Metro demo, like light reflecting off a hidden surface onto a second, visible surface.

          This is something that can’t be faked using screenspace reflections, and even if you used other methods to fake it, it would have to be manually baked in to every map. Instead, it “just works” with Ray Tracing due to materials and geometry.

          At the very least, RT sounds like a boon to developers when designing, testing, and iterating level design. See how the map looks with GI and ray-tracing, then bake in lighting in the final product for users and platforms without RT.

        • Andrew Lauritzen
        • 1 year ago

        > Even on a 24″ display (higher PPI), 4K reso is not high enough to avoid aliasing artifacts in any game, even old stuff like Quake.

        Right but you’re missing the point so let me make it clear: increasing the sampling rate is not the most efficient way to eliminate the remaining artifacts, aliasing or otherwise. There’s a HIGHLY diminishing return on throwing more samples at the problem, which is why GPUs mostly stopped at 4x MSAA and such as well. (And a uniform grid – i.e. increasing the resolution – is generally the worst possible way to increase the sample count as well.)

        The question is for a given performance budget, what is the most efficient way to make pretty pixels. Increasing the sampling rate makes sense up to some resolution/pixel density, but before it hits 4k we pass that point where the performance would have better been used somewhere else to create objectively better pixels.

        Whether you can point to a game that does this today is actually beside the point: the reason 4k shading has even been an option in games has nothing to do with a graphics engineer sitting down and deciding “this is how I best want to use my performance budget” and everything to do with “it’s no additional work to let people with faster machines get some additional quality”.

        As the mainstream shifts more towards 4k monitors and TVs and more engineering effort begins to be focused towards that use case, you’ll see much smarter uses of that power. Checkerboard, dynamic resolution rendering, TAA and DLSS are all examples, but I guarantee they won’t be the last 🙂 Indeed the rendering pipeline is likely to become even more decoupled to the point that even trying to talk about what resolution something is rendering in won’t make much sense, because it’s going to be a mix of adaptively generating samples in various locations based on error metrics and heuristics and reconstructing an image from that irregular data set.
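
        As a hedged aside on the diminishing-returns point above, a small contrived experiment makes it visible: estimate a pixel's coverage of a randomly placed hard edge with an n-by-n uniform sample grid (i.e. brute-force supersampling) and watch the error fall far more slowly than the sample count grows. Everything below is invented purely for illustration.

```python
import numpy as np

def coverage_error(samples_per_axis, trials=2000, seed=0):
    """Mean error when estimating a pixel's coverage by a random straight edge
    using an n x n uniform sample grid (brute-force supersampling)."""
    rng = np.random.default_rng(seed)
    n = samples_per_axis
    xs = (np.arange(n) + 0.5) / n          # sample positions inside the pixel
    px, _ = np.meshgrid(xs, xs)
    errors = []
    for _ in range(trials):
        t = rng.uniform(0.0, 1.0)          # edge at x = t; true coverage is t
        estimate = float(np.mean(px < t))  # fraction of samples inside the edge
        errors.append(abs(estimate - t))
    return float(np.mean(errors))

for n in (1, 2, 4, 8, 16):
    print(f"{n * n:4d} samples/pixel -> mean edge error {coverage_error(n):.4f}")
# Each 4x increase in sample count only roughly halves the edge error --
# the classic diminishing return on brute-force uniform supersampling.
```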

          • auxy
          • 1 year ago

          You’re doing this thing that people do where you go “you don’t agree with me, so you obviously just don’t understand.” [b<]You need to not do that. I understand.[/b<] I just don't agree.

          [quote<]Increasing the sampling rate makes sense up to some resolution/pixel density, but before it hits 4k we pass that point where the performance would have better been used somewhere else to create objectively better pixels.[/quote<]
          This is where we fundamentally disagree, and I said this in my last post. I've seen checkerboard rendering, I've seen upsampling+TAA, I've seen dynamic resolution, I've seen (lossless full-resolution video of) DLSS, and none of them look as good as native 4K shading. They just don't. I'm sorry.

          As I said before, "95% of the way for 50% of the cost" is only 95% of the way. It's not good enough. I'm sorry. And in my opinion it's more like 75% of the way, anyway. Or worse, in the case of dynamic resolution. (・へ・)

            • Andrew Lauritzen
            • 1 year ago

            > I understand. I just don’t agree.

            I mean no offense, but you haven’t demonstrated to me that you understand the difference between what you seem to think I’m saying (which is indeed a matter of opinion), and what I’m actually saying (which is not really). The examples you keep giving to support your point are precisely why I’m trying to inject a bit of that subtlety into your generalizations.

            Here’s a perfect example:
            > I’ve seen checkerboard rendering, I’ve seen upsampling+TAA, I’ve seen dynamic resolution, I’ve seen (lossless full-resolution video of) DLSS, and none of them look as good as native 4K shading.
            Cool… I don’t even disagree with that. But as they say, the plural of anecdote is not data. Particularly in this case where we’ve barely started to see games experiment with anything interesting, the fact that you aren’t convinced that the techniques that are faster than “native 4k shading” look as good as it doesn’t really mean anything. Comparisons that don’t fix performance or quality are of very limited utility in the first place.

            You’re absolutely responsible for forming your own opinions on where your quality/performance tradeoff lies, but that’s tangential to the discussion. Saying “native 4k shading is the best quality of the options I’ve seen” is not the same as saying “native 4k shading is the best use of FLOPS at a fixed performance target”. The latter is frankly an absurd statement, particularly if you have any experience in offline rendering where even the use of the term “native” would be sorta laughed at 🙂 I don’t believe you are making that latter statement which is why I’m extending the benefit of the doubt, but you seem to believe I’m saying something different than I am.

            So to summarize, I don’t even disagree with you on your quality examples above. But don’t misrepresent the statement that shading a 4k uniform grid and presenting it to the user is not the best way to get high image quality iso-perf as something that relates to those examples because it’s more subtle than that. Let’s remember how this exchange started:

            > “HOWEVERRRRRRRR, this whole “rendering 4K at full detail is a waste” thing that you and Andrew Lauritzen seem to be taking not only as truth but in fact a foregone conclusion is BEYOND batshxt to me.”
            Can you see how that statement is at best a strawman in the context of the above?

            • auxy
            • 1 year ago

            [quote<]Can you see how that statement is at best a strawman in the context of the above? [/quote<]
            No. Because it isn't. You're really hung up on this semantic thing, ergo:

            This statement: [quote<]shading a 4k uniform grid and presenting it to the user is not the best way to get high image quality iso-perf[/quote<] is not functionally different in this context from this statement: [quote<]rendering 4K at full detail is a waste.[/quote<]

            Literally, technically, yes, they do mean different things; you're saying that more "intelligent" rendering techniques like sparse shading and foveated rendering can be employed to make better use of the compute capability available to a rendering pipeline. I know, okay? But functionally, practically, as part of this conversation, I think you knew [i<]that I knew[/i<] what you were talking about, and I think you're taking this little objection to my phrasing as an "out", an excuse to pretend that I don't know what I'm talking about. Because you're doing that thing again where you think because I disagree with you I must simply not know what I'm talking about.

            Yeah, maybe in the future at some point the whole scene will be generated by a neural network and talking about resolution will be irrelevant. That seems like it's a long way away, a time measured in decades. [b<]RIGHT NOW[/b<] in all but the barest handful of titles there is absolutely no way to increase image quality faster or more efficiently than using more samples (higher resolution). You can throw on all the AA you want and it will never, ever look anywhere near as good as simply bumping up the resolution. This has always been true and it will likely continue to be true for a very long time because every attempt at producing something that "makes better use" of the available compute resources to produce prettier pixels simply fails compared to putting the same amount of processing power toward increasing the sample count. That's a fact. Don't bother slathering on AA because most AA methods these days (since nobody uses MSAA anymore) generally make things look [i<]worse.[/i<] You might as well just literally smear vaporub on your screen and then ram it up your nose too. What a waste.

            And as far as things like sparse shading go, no thanks? Yeah, ok, maybe I wouldn't notice it during gameplay. What about during video playback? What about people watching me play? What about screenshots? If I can notice any "more efficient alternative" to full-resolution uniform shading at any point, it's not acceptable. (As a note, if games still supported "demo" recording, I could play with sparse shading or whatever and then turn it off to take screenshots or make videos. But games don't do that anymore.)

            Instead of spending all this effort on questionable AA methods, what someone should have done a long time ago is create a "reverse dynamic resolution" -- one that will actually supersample the game when the hardware resources outstrip the game's requirements. But I guess everything is made for consoles and they don't have the juice to even think about something like that. Ugh. This post ruined my mood entirely.

            • Andrew Lauritzen
            • 1 year ago

            > But functionally, practically, as part of this conversation, I think you knew that I knew what you were talking about, and I think you’re taking this little objection to my phrasing as an “out”, an excuse to pretend that I don’t know what I’m talking about.

            I’m actually trying to give you an out to be honest… but if you refuse to take it, that’s on you 🙂

            > RIGHT NOW in all but the barest handful of titles there is absolutely no way to increase image quality faster or more efficiently than using more samples (higher resolution). You can throw on all the AA you want and it will never, ever look anywhere near as good as simply bumping up the resolution.

            Being charitable here again, is it possible that you’re just not very familiar with anything beyond current games in this context? How familiar are you with graphics research or offline rendering for instance? I’m really doing my best to be helpful here, but this stuff is not “decades” away – indeed we knew most of it in the late 1980s and certainly by the 2000s we have a pretty firm grasp.

            > This has always been true and it will likely continue to be true for a very long time because every attempt at producing something that “makes better use” of the available compute resources to produce prettier pixels simply fails compared to putting the same amount of processing power toward increasing the sample count. That’s a fact.

            Sorry, but I’m gonna just have to pull rank on you here… find me a single person in the graphics field who agrees with this statement, please. Pro-tip: movies don’t just render their frames at 10x the resolution and down-sample 😉 Yes, they generally use more samples than real-time rendering, but it turns out that beyond 16x or so (and dear god, not a uniform grid!!) the diminishing returns hit even the amount of hardware they are happy to throw at their rendering (which is largely limited by out-of-core concerns, so more samples are even relatively cheaper than in games).

            > If I can notice any “more efficient alternative” to full-resolution uniform shading at any point, it’s not acceptable.

            For sure, but that’s a pretty damn low bar 😛

            Your Quake example was particularly amusing because some simple analytic geometry AA would produce better quality than *any* amount of super-sampling. And that’s not a subjective statement; the results would be strictly better. Throwing more shading samples at a mipmapped, multi-textured polygon is actually the definition of wasted compute… like, mathematically wasted.
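            As a toy illustration of that analytic-coverage point (a rough sketch with made-up numbers, not the method from any particular paper): for a single straight edge crossing a pixel, the covered area has a closed form, while an N×N grid of shading samples only approximates it.

```python
# Rough sketch: exact coverage of a unit pixel by the half-plane below the
# edge y = m*x + b, versus an n x n grid of point samples. The values of m and
# b are chosen so the edge stays inside the pixel, keeping the closed form simple.

def exact_coverage(m, b):
    # Area under y = m*x + b over x in [0, 1]; exact when 0 <= b and m + b <= 1.
    return b + m / 2.0

def supersampled_coverage(m, b, n):
    # Fraction of an n x n ordered sample grid that falls below the edge.
    hits = sum(
        1
        for i in range(n)
        for j in range(n)
        if (j + 0.5) / n < m * (i + 0.5) / n + b
    )
    return hits / (n * n)

if __name__ == "__main__":
    m, b = 0.3, 0.25
    print(f"analytic coverage:    {exact_coverage(m, b):.4f}")
    for n in (2, 4, 8, 16):
        print(f"{n:>2}x{n:<2} supersampling: {supersampled_coverage(m, b, n):.4f}")
```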

            You keep claiming “you know” all this, but then immediately saying things that indicate that you don’t. Trying to rationalize that away by saying “my opinion is equally valid to anyone else” is beside the point: you have to make arguments about the research if that’s the conversation you want to engage in. And if the conversation you want to engage in is just about your opinion on what games do today (since that’s the only argument you’ve offered so far), that’s what my entire last post was about…

            So despite your rudeness I’m going to extend to you the most charitable interpretation that I can which is that you’re simply only familiar with what games do right now and not any of the actual theory here. That’s fine, but don’t be surprised if people call you out when you make statements that over-reach your expertise, particularly when you start them with inflammatory statements directed at individuals.

            • auxy
            • 1 year ago

            Oh come on. Theory is useless without practical application, and your argument from authority (“a fallacious ad hominem argument to argue that a person presenting statements lacks authority and thus their arguments do not need to be considered”) proves nothing and convinces no one.

            Saying “all this stuff is well-described in the literature of offline rendering” [i<][paraphrase, obviously][/i<] doesn't argue your point because if it's not implemented, it may as well not exist. Nevermind that offline rendering has, practically, nothing to do with real-time stuff. So yeah, I'm talking from a position of practical knowledge. That's why, yeah, I get a little annoyed when you make statements like: [quote<]this stuff is not "decades" away - indeed we knew most of it in the late 1980s and certainly by the 2000s we have a pretty firm grasp.[/quote<] What does imagining something have to do with implementing it? I was talking about having software in my metaphorical hands that does this, not the conceptualization of it. [quote<]Your Quake example was particularly amusing because some simple analytic geometry AA would produce better quality that *any* amount of super-sampling.[/quote<]Where is the implementation? If this is definitely true, where is it? If this is is actually true, why do all the Quake ports simply use super-sampling? Quake, given its simplistic and open-source nature, seems a target ripe for exactly this sort of experimentation. [quote<]Trying to rationalize that away by saying "my opinion is equally valid to anyone else" is besides the point: you have to make arguments about the research if that's the conversation you want to engage in.[/quote<]I would never say anything like this, because my opinion is way more valid than most people's, since it's based on experience and observation, not theories and imaginary software that doesn't exist. So no, I don't want to talk about the research. I want to talk about realities, practicalities. I'm a woman of means, not of maybes. Look, I'm not discarding the value of research. You have to do research before you can develop. It's not for nothing that it's called "R&D." I know that. But from the perspective of someone who has spent the last 25 years playing, tuning, and tweaking PC games as a primary entertainment activity, you're talking about things that just don't exist. It may be true that there are fantastic and amazing things going on in the world of offline rendering, but that has very little relevance to the world of Wintendo, and that's the only thing I've ever been talking about.

            • Andrew Lauritzen
            • 1 year ago

            > Saying “all this stuff is well-described in the literature of offline rendering” [paraphrase, obviously] doesn’t argue your point because if it’s not implemented, it may as well not exist. Nevermind that offline rendering has, practically, nothing to do with real-time stuff.

            All of this is obviously implemented in the tools and prototypes that were used to write the papers and make the comparisons… this isn’t just math on paper in rendering, after all. Much of it is already the standard in high-quality rendering as well. Go take a quick browse through any CAD tool or renderer and you’ll quickly find that even the most basic options are much fancier than “render higher resolution”. Take a look at how any optimized path tracer works and you’ll discover that the *entire* game is about picking smart samples and good reconstruction, not trying to brute force through the terrible asymptotic behavior. There’s nothing imaginary about this… anyone who has taken an intro to graphics course will have become painfully aware of this in their assignments :).

            Offline rendering targets a different place on the curve, and indeed not everything about it applies to real-time rendering, but in most cases it is relatively predictive of future real-time. Ray tracing and reconstruction are both great recent examples. A few years ago there was some question in the research space about this, but no one today really doubts that these are tools that will be used in real-time as well. Our PICA PICA demo is already a small step in that direction, and that was done in a month or two by a very small team of engineers.

            You can argue modern games have already gone down this path, even if you are rendering in “4k”. Plenty of terms are evaluated at different frequencies, and basically every game does some amount of temporal reprojection. A few games entirely decouple shading from the screen sampling (ex. nitrous engine) and that will likely get more popular in the future as it has quite a few advantages. And you might be surprised to know that this has been the norm in movies for over 20 years 😛 (https://en.wikipedia.org/wiki/Reyes_rendering)
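            For a heavily simplified picture of what “decoupling shading from the screen sampling” can mean, here is a toy checkerboard-style sketch; the scheme, names, and static-scene shortcut below are assumptions for illustration, and real engines reproject the reused half along motion vectors rather than copying it verbatim.

```python
# Toy checkerboard example: shade only half the output pixels each frame and
# carry the other half over from the previous frame. With a static scene the
# carried-over half is exact; a real engine would reproject it with motion vectors.
import numpy as np

def checkerboard_frame(shade, previous, frame_index):
    h, w = previous.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = previous.copy()
    mask = (xs + ys + frame_index) % 2 == 0   # alternate which half gets shaded
    out[mask] = shade(ys[mask], xs[mask])     # only ~50% of pixels are shaded per frame
    return out

if __name__ == "__main__":
    shade = lambda y, x: np.sin(0.1 * x) + np.cos(0.1 * y)  # stand-in pixel shader
    frame = np.zeros((8, 8))
    for i in range(2):
        frame = checkerboard_frame(shade, frame, i)
    ys, xs = np.mgrid[0:8, 0:8]
    print(np.allclose(frame, shade(ys, xs)))  # True: full image after two half-cost frames
```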

            > If this is actually true, why do all the Quake ports simply use super-sampling? Quake, given its simplistic and open-source nature, seems a target ripe for exactly this sort of experimentation.

            I actually explained this to you in my first post – because it’s *easy* to just do the dumbest thing possible (i.e. increase resolution).

            In case you’re actually interested, here’s a recent paper on the topic:
            [url<]https://research.nvidia.com/publication/2018-08_Correlation-Aware-Semi-Analytic-Visibility[/url<]. On a scene like something from Quake that would likely run plenty fast, since there's so little geometry. And there's plenty more if you follow the references (ex. [url<]https://research.nvidia.com/sites/default/files/pubs/2015-08_Decoupled-Coverage-Anti-Aliasing/hpg15_DCAA.pdf[/url<] is another good read).

            > It may be true that there are fantastic and amazing things going on in the world of offline rendering, but that has very little relevance to the world of Wintendo, and that's the only thing I've ever been talking about.

            So basically you actually agree with what I said... i.e. you can make whatever comments you like about your opinions on what the current games you have tested offer as options. But ultimately when you use that as your basis to make arguments about how rendering should be done you're stepping outside your expertise, and I will continue to call you out for those comments 🙂

            • auxy
            • 1 year ago

            [quote<]So basically you actually agree with what I said... [...] But ultimately when you use that as your basis to make arguments about how rendering should be done you're stepping outside your expertise, and I will continue to call you out for those comments :)[/quote<]
            Well that's fine. ┐( ̄ヮ ̄)┌ One of the things about talking from experience is that it makes you a little cynical about clever things, because in practice, in life, the simplest solutions often turn out to be the best. However, I'm also occasionally wrong about my conclusions, because sometimes things aren't so simple. In this case, you haven't convinced me (as we were really just talking about different things), but I'll be happy to be wrong in the future. Prettier pixels with less power is always great!

            For the record, I have a great amount of respect for you personally and also for the work that you guys are doing at EA SEED. I'm just very frustrated with the idea that things like checkerboard rendering, dynamic resolution, or DLSS will become "the norm." It's fine if they're implemented by default on hardware that needs it to maintain a stable framerate, but there is a long and storied history of PC games being ruined by their console siblings. I don't want this mindset that it's fine to leave this stuff enabled by default to settle in, because that junk looks crap and I don't want it in my games. In the situation where one solution is "buy more hardware," I want to make sure that solution keeps working, because once image quality compromises are made that I can't fix, I've lost control of the presentation of the game, and that's really half of what PC gaming is all about, after all. (The other half being mods and hacks, hehe.)

            Anyway, good talk. I'm glad we managed to establish that we were coming from different backgrounds and talking about different things. (*'▽')

            • RAGEPRO
            • 1 year ago

            [i<]tsun tsun tsun tsun[/i<]

            • auxy
            • 1 year ago

            [b<](USER WAS CASTRATED FOR THIS POST)[/b<]

            • Andrew Lauritzen
            • 1 year ago

            > I’m just very frustrated with the idea that things like checkerboard rendering, dynamic resolution, or DLSS will become “the norm.” It’s fine if they’re implemented by default on hardware that needs it to maintain a stable framerate, but there is a long and storied history of PC games being ruined by their console siblings.

            I hear you and I share your frustration in a lot of cases. I also think it’s important that gamers *do* continue to demand higher quality out of game developers. I personally want to see more studios consider high end PCs as an actual target that they optimize for though, as I know for a fact we can get a lot more out of a 1080 or Vega 64 or whatever than what you currently get by just ramping up settings somewhat.

            In SEED we’ve had the luxury of considering that class of hardware as a baseline, and indeed optimizing for even much faster stuff (Titan V), and you can do a lot of neat stuff on that level of hardware if you’re not worried about the techniques scaling down to consoles or whatever. Indeed, that’s why I’m more excited about the possibilities with fancier rendering: while you see them as a way to compromise quality to make stuff run at all on consoles, I see it as an opportunity to make images on high-end hardware look *much* better than consoles, rather than mostly similar but just a little sharper and at higher framerates, as we tend to get today.

            I don’t think you need to be concerned that what you’ve seen from those techniques *specifically* becomes the “only option” in games or whatever. As I said, as long as there’s a quality advantage of any sort to just rendering at higher resolutions, I’d expect to see that as an option. Indeed, as people decouple shading a bit more, I’d also expect you to get more options in terms of just ramping up sample counts arbitrarily. Things like the Nitrous engine (Ashes of the Singularity), for instance, can trivially support that kind of thing.

            I’m perhaps not cynical enough, but again the simplicity of those kinds of sliders once stuff is decoupled makes me inclined to say you’ll see them on PC. In any case I fully support and encourage giving developers feedback (polite ideally!) in cases where you feel they have compromised quality in any way on your hardware.

            > In the situation where one solution is “buy more hardware” I want to make sure that solution keeps working

            Hear, hear on that! I’ve recently become spoiled by 144Hz and a super high-end GPU, so any games that lock frame rates make me very sad 🙁

            > Anyway, good talk. I’m glad we managed to establish that we were coming from different backgrounds and talking about different things. (*’▽’)

            Indeed, thanks for the chat!

        • OptimumSlinky
        • 1 year ago

        I think 90% of AA implementations look like crap given the performance hit. I admit I’m weird in that jaggies don’t bother me much, so I end up playing most games (especially competitive shooters like BF4) without AA to ensure that 100+ fps.

      • Chrispy_
      • 1 year ago

      Yeah, it is kind of sickening how much Nvidia is obfuscating what’s actually happening with DLSS.
      [b<]Remember kids, 4K DLSS is 1440p. Do not even pretend it's 4K or try to compare to 4K.[/b<]
      I wouldn't have a problem with it if they called it "DL 4K upscaling", but they're lying by pretending that it's 4K when it's only rendering an image at 1440p. They're intentionally comparing 1440p DLSS against native 4K to make their 2000-series look better than both their own 1000-series and AMD's cards, which is complete BS.

        • Prestige Worldwide
        • 1 year ago

        [quote<]Remember kids, 4K DLSS is 1440p. Do not even pretend it's 4K or try to compare to 4K.[/quote<] WAT?

          • Voldenuit
          • 1 year ago

          In retrospect, we should have predicted this.

          Turning on TAA doesn’t cost a 40% performance hit in modern games.

          How else did Nvidia magically get a 35-40% boost by turning off TAA and turning on DLSS?

          By lying about the resolution, of course.

          • Chrispy_
          • 1 year ago

          According to Eurogamer.net, “4K DLSS” works internally at 1440p; their proof is solid:

          DLSS is a driver/profile/texture-weighted upscaler, similar to the one in the One X and PS4 Pro. Since it’s using temporal AA, the first frame after any scene change is unfiltered, because there’s no previous frame to work with. This allowed Eurogamer to count the pixels from their captures and show that DLSS renders at 1440p.

          [url<]https://www.eurogamer.net/articles/digitalfoundry-2018-dlss-turing-tech-analysis[/url<]

          Nvidia obviously aren't going to call it "upscaling", because people know from TV/DVD/Blu-ray that upscaling from SD to HD isn't as good as native HD content. I just object to them using the term "supersampling", because that's also a lie. It's actually [b<]subsampling[/b<].

          [url=https://i.imgur.com/7DTWwur.jpg<]Here's a good example of DLSS "4K" vs actual 4K.[/url<] The overall result is too much blur (just like strong temporal filtering), and the 'deep learning' algorithms are just as clueless as most other methods when it comes to text.
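          To make the pixel-counting idea concrete, here is a rough sketch (an illustration of the general approach, not Eurogamer's actual tooling; the nearest-neighbour stretch is an assumption): an unfiltered, upscaled frame repeats each source pixel across several output pixels, so the average run length along a scanline gives away the internal resolution. By raw pixel count, 2560×1440 is only about 44% of 3840×2160.

```python
# Rough sketch: estimate the internal render width from one scanline of an
# unfiltered, nearest-neighbour-upscaled capture by counting runs of identical
# values. Purely illustrative; real captures need a hard edge and some
# tolerance for compression noise.
import random

def estimate_internal_width(scanline, output_width=3840):
    runs = 1
    for left, right in zip(scanline, scanline[1:]):
        if left != right:
            runs += 1
    average_run_length = len(scanline) / runs
    return round(output_width / average_run_length)

if __name__ == "__main__":
    # Fake scanline: 2560 source pixels stretched to 3840 output pixels (1.5x).
    source = [random.random() for _ in range(2560)]
    stretched = [source[int(i * 2560 / 3840)] for i in range(3840)]
    print(estimate_internal_width(stretched))  # ~2560, i.e. a 1440p internal render
```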

            • ptsant
            • 1 year ago

            Thanks for the images. This is what we need. Videos are nice, but it’s easier to have a side-by-side image comparison.

            In that case, although the DLSS image is pretty good, it obviously doesn’t have the same resolution; for example, some letters are clearly not readable.

            Still, I would pick DLSS if it let me feed the display native 4K instead of having to scale up from 1440p. I suppose it probably upscales better than the monitor itself. Obviously, as auxy said, I’d rather have true 4K, but in a given situation (given card, monitor, game) I’d rather have the option of turning it on than not.

            • Chrispy_
            • 1 year ago

            Based on videos and screenshots I’ve seen, I’m happy to call DLSS “top quality upscaling”.

            It’s certainly better than your monitor’s scaler. It’s probably better than even a bicubic GPU scaler, but there’s a post here from Andrew Lauritzen questioning how expensive DLSS is. It may be the best upscaling possible, but how expensive is it compared to native 1440p with very cheap temporal AA?

            Synthtel2’s original post that this reply chain refers to is exactly this question: Doom at 70% resolution with temporal AA is [i<]fast[/i<], whilst DLSS is technically only 66% resolution and yet it's expensive.
            [list<]
            [*<]The 1080 and 1080 Ti both run modern games at 1440p around 60-70% faster than they do at 4K.[/*<]
            [*<]The 2080 and 2080 Ti both run modern games at 4K with DLSS around 30-40% faster than native 4K.[/*<]
            [/list<]
            DLSS is, seemingly, [b<]VERY EXPENSIVE[/b<] compared to other upscaling methods that are effectively free.

            I would accept it more willingly if Nvidia were honest about it. If they said [i<]"Hey, we have this new super AA method that combines deep learning and upscaling at the same time. It's expensive to render, but our new cards are powerful enough to handle it anyway."[/i<] - then that would be honest. No, instead they're attempting to trick everyone into thinking it's comparable to 4K. It's obviously not 4K; anyone can see the coarser image, reduced detail, and rendering mistakes, just like frame interpolation of a 60 Hz input on a 120 Hz HDTV isn't really true 120 Hz.
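            Taking the round numbers in that list at face value (a back-of-envelope sketch with an assumed 25 ms native-4K frame, not measured data), the implied cost of the DLSS pass itself works out roughly like this:

```python
# Back-of-envelope: if 1440p is ~65% faster than native 4K on the same GPU,
# but "4K DLSS" (internally 1440p) is only ~35% faster than native 4K, the
# difference is the price of the DLSS pass on top of the 1440p render.
native_4k_ms = 25.0                    # assumed native-4K frame time
plain_1440p_ms = native_4k_ms / 1.65   # ~65% faster than native 4K
dlss_4k_ms = native_4k_ms / 1.35       # ~35% faster than native 4K

print(f"plain 1440p frame: {plain_1440p_ms:.1f} ms")
print(f"'4K DLSS' frame:   {dlss_4k_ms:.1f} ms")
print(f"implied DLSS cost: {dlss_4k_ms - plain_1440p_ms:.1f} ms per frame")
```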

            • Marees
            • 1 year ago

            Techspot did an analysis of DLSS based on the two demos provided by NVIDIA.

            In one of the demos, DLSS 1440p upscaling was equivalent to TAA 1800p upscaling BOTH in terms of quality and performance.

            • djayjp
            • 1 year ago

            Please look more closely at the DF comparison. About 50% of the shots look sharper via DLSS than TAA in FFXV.

            • Chrispy_
            • 1 year ago

            It depends what you’re looking at, exactly. TAA and DLSS are different AA methods that each have their own compromises. A scene that’s heavy on near-orthogonal edges in textures and geometry will probably look much better with DLSS whilst a scene that has texture detail rather than a lot of edge contrast will look better on TAA.

            All AA is a compromise because we don’t have infinite-resolution displays yet. Understanding what DLSS and TAA do is important – as important as remembering that a still image really doesn’t do TAA justice, because it’s a motion filter.

            • djayjp
            • 1 year ago

            Yep agreed Chrispy

            • Voldenuit
            • 1 year ago

            Anything that uses a detailed texture (e.g. text) becomes very soft with DLSS. Look at the ‘Regalia’ badge on the car, for instance. This is because one is being rendered at 1440p and then upsampled, whereas the other is being rendered at 4K, then anti-aliased.

            Although Nvidia hasn’t actually divulged the details of how DLSS actually works, I’m going to wager that it’s just an adaptive subsampling algorithm. The deep-learning training figures out the ‘best’ subsampling algorithm to use in various parts of the image based on a series of neural-node criteria, and writes it out into a set of if/then statements that the Tensor cores just follow with the appropriate FMA operations on the color values of the pixels.

            This is going to be simple rules like ‘IF contrast > x AND B > 128 then SKY = ‘True’, Sample = Bilinear’. The Tensor cores are great at doing series of 4×4 matrix operations (perfect for color and quaternion/rotation operations), but they’re not a magic Skynet neural net that can learn to say, ‘Hasta la vista, baby’ before shooting you with a grenade launcher.
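            For reference, the matrix primitive being described boils down to a small fused multiply-accumulate; the NumPy sketch below only illustrates the shape of that operation (it is not DLSS itself, just the kind of D = A×B + C step a tensor core executes):

```python
# Illustration of the tensor-core primitive: a 4x4 fused multiply-accumulate,
# D = A @ B + C, with half-precision inputs and a wider accumulator. A neural
# network layer is many of these tiled together; this is not DLSS itself.
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)   # fp16 inputs
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)   # fp32 accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```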

            • djayjp
            • 1 year ago

            Haha

            Idk I’ve seen NN literally create visual information out of thin air (well you get my meaning).

            There are some scenes like the swamp shots on DF that have sharper textures with DLSS.

            • Voldenuit
            • 1 year ago

            Are we talking about the Google Dream AI that turns everything into dogs? Because I’d play a CoD game that was entirely shaded with weiner doggies.

            EDIT: Also, on a more serious note, were you talking about real-time AI inferencing at 60 fps? Because there’s constraints to a GPU implementation that aren’t felt on still image or offline image AI.

            • djayjp
            • 1 year ago

            Brings a whole new meaning to dog tags lol. Yeah, I’ve seen those renderings… Crazy! But I was thinking more of AI algorithms that try to fill in the gaps in the information. Supposedly, yeah, that’s how DLSS works to some extent: Nvidia runs a pattern-matching algo/NN on their supercomputer, then you download the result and it runs, optimized, locally and in real time on your shiny new GPU, secret sauce included lol

            • Voldenuit
            • 1 year ago

            I mean, you can make an AI that can fill in blank spaces with data, but that takes a *lot* of training, with lots and lots of data samples. Recreating blank spaces or blurry information in a still image is also easier than in a moving image – for one thing, you don’t have to worry about whether the blanks you fill in one frame are consistent with the interpolated image in the next frame (whether the relevant sections are both bits of missing image, or, more egregiously, whether a future frame has real information that your AI guessed at in a previous frame).

            Deepfake videos are a thing, but they have a large sample of images to refer to, they’re (usually) not real-time, and because they aren’t real-time, they can look both forward and backward across frames.

            Now look at DLSS: it has to run faster than video (60-144 fps vs 24/30), it can’t hide a lack of fidelity behind video compression artefacts, scenes are not static, and crucially, even if you gave it more time or made it faster, there is a constraining limit on the database size. Nvidia will send out the DLSS trained data in driver and GFE updates that are mere megabytes. They have tens or hundreds of megabytes (we’ll have to see how they do it) to encapsulate all the DLSS rules for every representative scene and/or environment for an entire game. I don’t want a 1 GB driver update just for BFV, and another 500 MB update just for COD: WWII, etc., and I imagine neither do most players.
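            For a sense of scale on those download sizes, a network's shipped weight data is roughly its parameter count times the bytes per weight; the figures below are assumptions for illustration, not Nvidia's numbers.

```python
# Back-of-envelope model size: parameters x bytes per weight. At fp16 (2 bytes
# per weight), even a fairly large per-game network stays well under a gigabyte.
def model_size_mb(parameters, bytes_per_weight=2):
    return parameters * bytes_per_weight / (1024 ** 2)

for params in (5e6, 25e6, 100e6):   # assumed parameter counts
    print(f"{params / 1e6:.0f}M parameters -> ~{model_size_mb(params):.0f} MB")
```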

            Because of this, I’m skeptical that the AI ‘Deep Learning’ in DLSS will be especially sophisticated. It’s going to be good enough for games, but it didn’t fool me into thinking that ‘4K’ image was rendered natively at 4K – it looked better than I’d have expected 1440p to be, for sure. But I’m not expecting miracles from it. If I ever get a card with DLSS on it, I’d rather run the DLSS 2X mode that’s supposed to render natively at 1440p and apply DLSS to a 1440p framebuffer, instead of rendering at 1440p and outputting to a 4K framebuffer with DLSS.

        • psuedonymous
        • 1 year ago

        [quote<]Yeah, it is kind of sickening how much Nvidia is obfuscating what's actually happening with DLSS.[/quote<]
        In what way have they 'obfuscated' it? They have been clear about how it functions from the point of announcing it, and in the Turing architecture overview document. It's not exactly a complicated concept to grasp.

        • DoomGuy64
        • 1 year ago

        [url<]https://www.youtube.com/watch?v=VxNBiAV4UnM[/url<]
        It may be 1440p, but you're not going to notice unless you have a TV-sized monitor. 4K is unnecessary for desktop gaming, and only serves to sell ultra-high-end gaming hardware. Anyone with a smaller screen literally won't see the difference. You can see the artifacting more than the pixels, so once that is eliminated, nobody can tell. It's the perfect snake oil solution to a snake oil problem.

          • auxy
          • 1 year ago

          Mr. DoomGuy64, what you’ve just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this thread is now dumber for having read it. I award you many downvotes, and may God have mercy on your soul. (ー人ー ゚)

            • DoomGuy64
            • 1 year ago

            Oh look, it’s a pop culture reference without an argument based on facts. Good job! You totally disproved my point that most people are physically incapable of seeing pixels that small on tiny monitors.

    • Demetri
    • 1 year ago

    I only see DLSS discussed in regard to 4K. Any plans to offer it for other resolutions?

      • Jeff Kampman
      • 1 year ago

      Getting good native rendering performance at sub-4K resolutions is not a problem for sub-$500 graphics cards, so it’s not really a good fit for DLSS “1X” (i.e. the half-res rendering, then AI upscale) mode. DLSS 2X might be a better use of the tech at those resolutions to improve image quality. We’ll have to see.

    • derFunkenstein
    • 1 year ago

    If you like it, you like it, and if the trade-offs are worthwhile (and I always favor framerate and smoothness above all else, so to me they are), then it’s nice to have the option.

      • derFunkenstein
      • 1 year ago

      Well, now that I’ve watched that FF XV video, I’ve come to the conclusion that DLSS is not for me. That’s STILL too jerky to be enjoyable. Watching the demo drive by cars on the road and seeing them jump around is a nothankyou.jpg, even when connected to my (cheaper) 4K TV via TB3-to-HDMI 2.0 cable on my MBP. I’d rather play in 1440 and smooth that out all the way, assuming the PC port isn’t a steaming pile of garbage and actually scales.

        • Jeff Kampman
        • 1 year ago

        Make sure you’re not dropping frames; YouTube uses VP9 for 4K and your machine may not support hardware decoding.

          • derFunkenstein
          • 1 year ago

          I see the same in 1080p. It’s not a huge pause, it’s just stuttering.

          <edit: 51 seconds beyond 16.7 ms doesn’t tell the whole story, because a frame that takes 18 milliseconds is only contributing 1.3 milliseconds to that total. More than half the demo runs at sub-60 frame rates. /end edit>

          The 50th percentile frame time is like 18 or 19 milliseconds. Since I prefer vsync to frame tears, that means at least half the time the demo would be playing at 30fps or less. So that’s why I’d rather run the game without AA if it’s smooth at 4K, or run 1440p with AA if it’s not. The game would still look good on my slightly smaller, slightly farther from the couch 4K TV. It would also be smoother.
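          As a sketch of the two metrics being contrasted here (using made-up sample data, not the actual capture): "time spent beyond 16.7 ms" only accumulates each frame's excess, so a steady stream of 18 ms frames barely registers even though nearly every one of them misses 60 Hz.

```python
# Toy frame-time analysis: compare "time spent beyond 16.7 ms" with the
# percentile view, using a fabricated capture where most frames take 18 ms.
def time_beyond_threshold_ms(frame_times_ms, threshold_ms=16.7):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

def percentile_ms(frame_times_ms, pct):
    ordered = sorted(frame_times_ms)
    return ordered[int(len(ordered) * pct / 100)]

if __name__ == "__main__":
    frames = [18.0] * 3000 + [16.0] * 1000 + [40.0] * 50   # assumed sample data
    print(f"time beyond 16.7 ms: {time_beyond_threshold_ms(frames) / 1000:.1f} s")
    print(f"50th-percentile frame time: {percentile_ms(frames, 50):.1f} ms")
    slow_share = sum(1 for t in frames if t > 16.7) / len(frames)
    print(f"frames slower than 60 Hz: {slow_share:.0%}")
```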

          The whole resolution/settings thing is a matter of taste and the tested settings are not for me. Those settings are perfect for lots of other people, though, and there’s no argument that the performance is a huge improvement in smoothness over other AA methods.

          That’s what’s cool about PC gaming, and why most people who come to TR are here. If you played FF XV on PS4, the game would have [url=https://www.eurogamer.net/articles/digitalfoundry-2017-can-final-fantasy-15-hit-60fps-on-ps4-pro<]sub-4K resolutions at 30 frames per second[/url<] in high performance mode and even the XBox One X [url=https://www.eurogamer.net/articles/digitalfoundry-2017-final-fantasy-15-xbox-one-x-report<]uses dynamic resolution scaling[/url<].

            • djayjp
            • 1 year ago

            This lack of smoothness is more an issue of the lag-happy GameWorks features being forced on. I bet the results would be similar at 1440p without DLSS (DLSS renders at half resolution).

            • derFunkenstein
            • 1 year ago

            It could be the case, but every test on the internet these days seems to be about “turn on all the things!” rather than figuring out what’s affecting performance.

        • auxy
        • 1 year ago

        Yah, I am in the same place. Neither option is nearly fast enough for me, although I would probably turn off TAA too. Regular old FXAA does a pretty decent job with a native 4K render, at least at ~180 PPI. Both of those videos are really gross. (´・ω・)

    • NTMBK
    • 1 year ago

    The thing is, canned demos are a [i<]perfect[/i<] showcase for this tech. You know exactly what scenes you will see, so you can perfectly train a neural network to give fantastic results on this one case. It's a useless toy benchmark. Until we can see results in a full game, it's all academic.

      • Jeff Kampman
      • 1 year ago

      The FFXV demo has dynamic elements in its most extended sequence; the image quality remains consistent there. I’ve probably watched it a dozen times.

        • NTMBK
        • 1 year ago

        But it’s still basically the same scene composition, right? It’s not some totally novel, difficult to categorize scene.

        EDIT: Put it this way, training a network on 20 runs of one demo scene is orders of magnitude simpler than training it on an entire game.

          • Freon
          • 1 year ago

          I wonder if they can or will swap out weights/biases for different scenes or levels – i.e., art direction shifts and such, or even different camera angles or types (imagine Skyrim having different data for dialog). I don’t imagine streaming these onto the GPU is that big of a deal. I can’t imagine these are that huge, maybe dozens or a couple hundred MB.

          Creating the training data is a single up front cost for the most part, and downloading a few hundred MB of extra data is sort of trivial in the age of 40GB games. Transfer learning probably greatly reduces the training resources. Do it once for one game and that’s probably 80-90% of the training needed for all future sets.

      • NovusBogus
      • 1 year ago

      This is a very good point. AAA games’ famous one-dimensionality may wind up working in their favor.

        • NTMBK
        • 1 year ago

        Hah, yeah, I bet this would work well with 360 era games. “Neural net says add more brown”

      • daniel123456
      • 1 year ago

      Yes. It will be very fascinating indeed to see how DLSS performs under “live” gameplay.

      Also, is there any mention of how long or expensive it is to train the neural network for DLSS?
      It would be good to know how easy it is for a developer to implement this feature on their own.

      In the long run, DLSS or similar NN upscaling (essentially) will likely be the norm.
      For now, however, consumers are at the mercy of Nvidia in terms of DLSS support.
      This is probably one of the reasons why enthusiasm for DLSS is slightly dampened 🙁

      • ptsant
      • 1 year ago

      Until then, the public will rave about it and when some minor artifacts appear in an AAA game running at 100+ fps, nobody will notice.

      Public scrutiny is happening now. When all sites have decided that DLSS and TAA are equivalent based on the demos, they will be reluctant to change their minds six months later.

      I will be curious to see follow-up reviews three to six months later.

      • Tirk
      • 1 year ago

      The way Nvidia is hanging its narrative on promises of future performance as a key selling point for RTX seems quite odd coming from a company that usually shouts about current performance as a reason to buy its cards. Pay a lot more than you ever have for this class of GPU, and “we promise you’ll see the benefit in the future.” I’m not buying the narrative.

      That, and the fact that there was absolutely no effort to hold this generation back until it was ready, leaves me suspicious of features only shown in canned demos. They “invented” a whole new measuring system for these cards so people could know that they are great at future ray tracing in games. Yet they couldn’t be bothered to make sure a playable game with such features was launched before these cards were. Let alone DLSS.

      Criticisms were leveled against the Vega advertising at the time of its launch for doing the exact same thing, and those features never fully materialized. It seems warranted to hold Nvidia to the exact same standard, more so considering they are asking you to bet your cash on it now.

        • auxy
        • 1 year ago

        As many others have pointed out, this isn’t new. Nvidia did the same thing with Fermi and tessellation, NV20 and programmable shaders, and even NV10 (GeForce256) and hardware T&L. AMD (ATI) has also done its share of marketing based on the promise of future apps making use of new hardware resources, like TruForm, TrueAudio, and Eyefinity. It’s weird to me that so many people are falling for it this time. (・へ・;)

        • Freon
        • 1 year ago

        There’s some chicken and egg going on. You could cherry pick examples of various tech flopping or not.

        SMP, async, programmable shaders, T&L, tessellation, quincunx AA, Turbocache, displacement mapping.

        Sometimes it just takes time for things to catch on. I’m not disappointed to see new features, only that they’re nailing us hard on price and not offering any improvement on fps/$. This will be a bigger issue for the 2070 and below where the 1080 and such are direct self competition.

          • Tirk
          • 1 year ago

          Oh don’t get me wrong, if these techniques truly make it into the gaming ecosystem and provide tools that will make games going forward better, I’m all for them.

          The price hike and the scarce ability to test these features seem entirely intentional on Nvidia’s part. I believe that for a company with such a large market share, rolling out a product in this way loses consumer trust. If they were an underdog, it would be somewhat understandable to rush out a product to get the features out there and disrupt the market. Nvidia’s market share and finances are strong, so I am going to be critical of them rolling out a product poorly and, at worst, deceptively.
