Nvidia harnesses eye-tracking to improve VR rendering efficiency

Talk to any forward-looking engineer at AMD or Nvidia about VR, and they'll tell you that producing life-like scenes is going to require approaches beyond the brute-force methods we employ today with traditional monitors. One of those approaches is foveated rendering, which improves efficiency by taking advantage of the fact that we see fine detail in only a narrow range of our field of vision. Nvidia has already toyed with a similar idea in its "Multi-res Shading" approach, which renders different portions of the VR frame at different resolutions to improve efficiency.
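As a rough illustration of why shading the periphery at lower resolution pays off, the sketch below models a simplified scheme with a full-resolution center region and a uniformly downscaled periphery. The function name, region split, and default parameters are illustrative assumptions, not Nvidia's actual Multi-res Shading implementation:

```python
def multires_shading_cost(width, height, center_frac=0.6, periphery_scale=0.5):
    """Estimate shaded-pixel count for a simplified multi-res scheme:
    a full-resolution center covering center_frac of each axis, with all
    peripheral pixels shaded at periphery_scale resolution per axis."""
    total = width * height
    center = (width * center_frac) * (height * center_frac)
    # Peripheral area is shaded at periphery_scale in both axes,
    # so its pixel cost shrinks by periphery_scale squared.
    periphery = (total - center) * periphery_scale ** 2
    return center + periphery
```

With the defaults above (a 60% full-resolution center and a half-resolution periphery per axis), roughly 48% of the pixel-shading work disappears, which is the kind of headroom VR rendering is after.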

The most advanced methods of foveated rendering require a VR headset with eye-tracking built in. As it happens, German company SMI has built just such a headset. Nvidia has partnered with SMI to use that hardware in developing a new method of foveated rendering that purports to deliver a major increase in efficiency.

Nvidia says it first discovered that aggressive resolution reductions (or foveation) in the periphery of images tend to produce noticeable artifacting, like what might happen with Multi-res Shading. Blurring the edges of images instead can apparently result in a "tunnel vision" effect, thanks to the reduction in contrast it causes. By adding a contrast-preservation step to the blur approach, Nvidia found that users can tolerate a blur that's as much as twice as strong as would be the case without the heightened contrast.

Nvidia and SMI will be showing off their work in the Emerging Tech section of the SIGGRAPH conference, which opens on July 24.

Comments closed
    • fhohj
    • 3 years ago

    oh look. nvidia in the news for attempting to inject their crappy patented vendor-locking nonsense blackbox-software mafioso family garbage into some new aspect of the games industry.

    if you’ve read about the foveated rendering thing before, then you’ve read that the problem was not with software capability but with hardware limitations. it didn’t need nvidia to come in and design oops I meant buy up a bunch of software to push this forward. what it needed was better HMD capabilities, the current technological limitations of which meant that the extra GPU work required to work around those limitations actually reduced performance. work was being done on implementing this stuff in DX12 and Vulkan that, when paired with proper HMD advancements, would serve just fine.

    it doesn’t surprise me that I read about nvidia creating a special nvidia thingy for this. given that, once all the ducks were in a row, one big implication of this was actually apparently a major reduction in compute power required to nicely render a sophisticated scene (because you’re only rendering a piece of it precisely) when compared relative to what is required for a traditional display environment using conventional methods.

      • MathMan
      • 3 years ago

      So what you’re basically saying, is that Nvidia isn’t allowed to do research into novel graphics-related techniques?

        • fhohj
        • 3 years ago

        you know and I know you aren’t being completely literal and that you didn’t miss my point.

        in the grand nvidia-foveated future, I sincerely hope that in two to four years time, you find yourself unable to toggle on or make best use of the graphics settings in a VR title you bought, because you purchased an AMD or ATi GPU or because you find yourself one nvidia SKU lower than you need to be.

          • floodo1
          • 3 years ago

          AMD should be dead by then and Nvidia will be the only choice so we will all benefit from nvidia X proprietary goodness (-8

        • DoomGuy64
        • 3 years ago

        He’s saying this is Gsync all over again, not that Nvidia isn’t allowed to research graphics tech. Most unbiased people find misappropriated vendor lock-in tech to be unsavory. Just look at how people view oculus’s handling of VR.

        Sure, congrats to Nvidia for implementing this, but we don’t need the vendor specific methods required to use it. Developers won’t bother to implement it outside of gameworks contracts, because it’s extra work to implement something that isn’t universal.

        VR can’t possibly become mainstream when you segment the market worse than it already is. VR right now is in a BAD situation, and the only company trying to save it appears to be Valve. Everyone else is just trying to cash in, which is very unfortunate, and doesn’t help VR adoption in the slightest.

          • DPete27
          • 3 years ago

          Sadly yes. The gaming graphics world would be a better place if all these little tools were not vendor-locked. Like all the features in [url=https://techreport.com/news/30099/vr-funhouse-brings-all-of-vrworks-to-bear-on-an-nvidia-carnival]NVidia's VR Funhouse[/url], this feature probably only works with Nvidia GPUs, limiting market adoption because it requires specific coding that doesn't even benefit all users. At the same time, there's nothing preventing AMD from working up their own coding tools to implement the same features for their GPUs. Just look at AMD's TressFX vs Nvidia's HairWorks. Same feature, but both companies had to develop it themselves. I don't know if these things are a product of competition or simply hardware architecture differences.

            • ET3D
            • 3 years ago

            Difference of course being that even though TressFX didn’t initially work well on NVIDIA, AMD fixed that, then opened the code for everyone. NVIDIA did eventually make source code available, but only to registered developers who agree to a EULA stating, among other things, that any changes you make become the property of NVIDIA, while AMD has TressFX posted publicly under the simplest permissive license.

    • caconym
    • 3 years ago

    Along those lines: since human eyes are effectively disconnected when they move between positions — to avoid disorientation — I wonder if you could also drop to a really low render quality during eye movements without people noticing. Or even just stop rendering and hold the frame buffer for a split second. It might be tough to predict how long a saccade will last. Possibly you could only get away with rendering a couple frames at low detail, in case a movement ends up being really quick.

    It might still be worth it for power/thermal savings though, especially when VR eventually gets more mobile.

      • Voldenuit
      • 3 years ago

      The aliens in Blindsight were able to use saccades to “hide” in plain sight from the protagonist.

        • caconym
        • 3 years ago

        Blindsight is where I first learned the term, actually. That book definitely got me into reading more about vision and free will / consciousness. I love that Watts puts a bibliography at the end of his books!

          • Voldenuit
          • 3 years ago

          Great book. Did you read the sequel, Echopraxia?

          Watts is a marine biologist, and it shows in his writing (in a good way).

            • caconym
            • 3 years ago

            Yeah, although I need to give it a re-read, because both of those books come at you dense and fast. Blindsight is the only book where I’ve finished the last page and then immediately flipped back to chapter one and started again.

      • psuedonymous
      • 3 years ago

      “Along those lines: since human eyes are effectively disconnected when they move between positions — to avoid disorientation — I wonder if you could also drop to a really low render quality during eye movements without people noticing. ”

      Sadly you cannot. This is because your eyes are not ‘disconnected’ during saccades, but merely suppressed. But that suppression is driven by the view of the world rather than an inherent ‘trigger’. The difference comes in that a VR HMD uses low-persistence driving to prevent blur. But that pulsed visual input, while not consciously perceptible, is sufficient to signal the visual system that “hey, things are changing dramatically during the saccade! Inform the visual cortex to process it posthaste!”. The upshot of that is: if you try and change the scene during the saccade blanking interval on a low-persistence display, the changes to the display are consciously detectable.

        • caconym
        • 3 years ago

        That’s really interesting! I’m always happy to be corrected by somebody who knows more than I do. Thanks for sharing.

      • tipoo
      • 3 years ago

      You know cool eyeball things. I never thought they’d un-align when moving.

      Edit: Or did you mean disconnected from the brain, not each other?

        • caconym
        • 3 years ago

        From the visual cortex, but as pseudonymous has pointed out, it’s more complex than I thought.

        • Voldenuit
        • 3 years ago

        Human vision is disconnected from the brain when the eyeballs move.

        You can check it in the mirror by looking at your left and then right eye in quick succession. You never see them move, even though you know they must have to get between the two points.

    • Voldenuit
    • 3 years ago

    Who will be first to market with this?

      • psuedonymous
      • 3 years ago

      Whoever can integrate a sufficiently high quality eyetracker (camera, EOG, or otherwise) into a consumer HMD. The ‘sufficiently high quality’ bit is important: there is a gap between an eye-tracker useful for gaze-targeting only (can be low update rate), an eye-tracker useful for gaze-targeting and optical distortion correction and automatic IPD measurement (needs to do ‘3D’ pupil tracking, through multiple cameras or controlled illumination sources), and an eye-tracker that is suitable for foveated rendering (needs to have a very high update rate, and a VERY low latency).

      The 120fps cameras in the Fove, for example, are not fast enough for Foveated Rendering to have a great effect: the ‘foveal’ circle ends up being so large that the render time gains from the low-detail rendering region are offset by the overhead of having two render regions per eye.
