Oculus asynchronous spacewarp sorcery goes live

Remember Oculus' "asynchronous spacewarp" technology? If you're not up to speed on the topic, the technique uses complicated interpolation and reprojection algorithms to generate extra frames for VR applications. That means you can be looking at 90 FPS while your PC is only churning out 45 FPS. Oculus revealed the technology early last month, but today it's out of testing and officially available in the Oculus 1.10 runtime.
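
For readers who want a feel for what "interpolation and reprojection" means here: the runtime renders at half rate and synthesizes every other frame by shifting the previous frame along estimated motion vectors. Here's a toy sketch of that idea in NumPy (illustrative only; Oculus' actual algorithm is far more sophisticated and runs on the GPU, and all names below are made up for the example):

```python
import numpy as np

def extrapolate_frame(frame, motion, dt=0.5):
    """Synthesize an in-between frame by warping the last rendered frame
    along per-pixel motion vectors (a backward warp: each output pixel
    looks up where its content was dt of a frame interval ago).

    frame:  (H, W) grayscale image, the last fully rendered frame
    motion: (H, W, 2) estimated motion in pixels per rendered frame (dy, dx)
    dt:     how far into the next interval to extrapolate (0.5 = midpoint)
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip((ys - motion[..., 0] * dt).round().astype(int), 0, h - 1)
    sx = np.clip((xs - motion[..., 1] * dt).round().astype(int), 0, w - 1)
    return frame[sy, sx]

# 45 FPS in, 90 FPS out: one synthetic frame between each rendered pair.
frame = np.zeros((4, 4))
frame[1, 1] = 1.0            # a bright pixel...
motion = np.zeros((4, 4, 2))
motion[..., 1] = 2.0         # ...everything drifting right at 2 px per frame
mid = extrapolate_frame(frame, motion)   # the pixel now sits at (1, 2)
```

The real system also has to invent plausible data for the regions the warp uncovers, which is where the visual artifacts mentioned below come from.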

The end result of asynchronous spacewarp is that Oculus was able to relax the stringent requirements for its Rift headset. While the recommended specs still call for GeForce GTX 970 or Radeon R9 290 graphics cards mated to an Intel Core i5-4590 CPU, the company now offers a minimum requirements specification. The spec includes all video cards from Nvidia's GeForce 900 and 1000 series, as well as the whole Radeon RX 400 series. The CPU requirements have also been relaxed—all buyers need is an Intel Core i3-6100 or AMD FX-4350 CPU.

There are a few caveats when using asynchronous spacewarp, of course. The technology requires Windows 8 or 10, so Windows 7 users are out of luck. Visual artifacts can appear, caused by quick brightness changes, rapidly-moving repeating patterns in the environment, and head-locked elements that move too fast to track properly. Oculus also acknowledges that asynchronous spacewarp is a band-aid rather than a real performance optimization. The company still recommends that developers and users alike target a machine closer to the recommended spec than the minimum one.

If you're not sure how well the hardware in your machine can handle high-end VR, you can grab Oculus' compatibility tester. If you already have a Rift and want to know how to use asynchronous spacewarp, congratulations! You're already doing it. With today's launch, the feature is enabled by default on all compatible systems.

Comments closed
    • Topinio
    • 3 years ago

    Link?

    Also, “the whole Radeon RX 400 series”? Including the OEM Oland-based crap?

    Even if not, not sure how a 460 is good enough if a 285 is not, any ideas anyone?

    • DPete27
    • 3 years ago

    I still like Nvidia’s SMP idea better. If your hardware can’t hit 90 FPS using SMP, take a page from the consoles and upscale.

      • zqw
      • 3 years ago

      It just hit Unity betas this week. Remember when it was announced? 🙁

      • synthtel2
      • 3 years ago

      That’s a very different thing with much smaller gains, though I would agree it’s a cooler one.

        • lycium
        • 3 years ago

        Best part is that it should help for more than just VR, for example when rendering shadow maps.

          • synthtel2
          • 3 years ago

          Yeah, it should enable some nice resolution / geometry / map count gains with simple shadow mapping (moving ever-closer to an entirely bandwidth-limited world in Nvidia-land). I doubt it’ll be a big deal for other areas like reflection probes unless a dev has messed something else up, and its usefulness diminishes with more advanced shadow techniques like VSMs, but that’s still the bit I’m most hyped about.

            • synthtel2
            • 3 years ago

            Waitaminute, this opens up some new possibilities for contact hardening shadows, doesn’t it? Why didn’t I notice that earlier? 😀

            • synthtel2
            • 3 years ago

            I forgot another one: foveated rendering! If I’m interpreting Nvidia’s implementation correctly, it’s a really big deal for that.

      • psuedonymous
      • 3 years ago

      Many games already use dynamic framebuffer scaling. SMP/MRS are more a ‘flat’ load-reduction technique (static reduction in pixel draw count), rather than a dynamic one that reacts to changes in framerate.

        • DPete27
        • 3 years ago

        So use them all, SMP, Foveated Rendering, and Spacewarp. Now you can VR game on an Intel IGP!!!

          • psuedonymous
          • 3 years ago

          SMP will not work on Intel (it’s an Nvidia-only feature, using hardware in Pascal and Maxwell to do multiple projection transforms for one geometry call). ASW also does not work on Intel, and is likely also a hardware issue (though it may just be that Intel has never bothered, because its GPUs would never meet even the newer minimum spec). Foveated rendering could potentially work, but nobody has demonstrated an eye-tracker with high enough fidelity AND low enough latency for it to actually be implemented in an HMD (implementing it for a desktop display involves much smaller saccades – and thus lower saccade velocities – to deal with).

          An Intel GPU [i<]could[/i<] pump out the resolution and refresh rate to drive a Rift or Vive. It would only be able to render extremely simple scene geometries though.

            • DPete27
            • 3 years ago

            I’m aware of proprietary Nvidia features. They’re almost always a cancer to the market they’re designed for, because developers don’t want to waste their time coding for a feature that only affects a limited portion of systems, so nobody wins. But they do have some good ideas that should be adopted by all GPU manufacturers. I’d hope Nvidia doesn’t own legal IP rights to the concept of things like SMP (though I wouldn’t be surprised) and that AMD or Intel or an open standards committee could design their own rendition based on the same concept.

            My comment was a theoretical one. Combine all those resource-saving technologies and you wouldn’t need much GPU power at all. The Intel IGP comment was a reference to GPU performance.

            • psuedonymous
            • 3 years ago

            [quote<] I'd hope Nvidia doesn't own legal IP rights to the concept of things like SMP[/quote<]The technique for rendering multiple viewports per display has been around for a long time. Any fulldome system with realtime rendering will be doing it, as will things like Fisheye Quake, which renders a cubemap then samples from it for arbitrary FoVs and distortions. The idea itself is not new, but it's rarely used because of the overhead of rendering multiple viewports (the same reason rendering realtime reflections for mirrors in games all but disappeared over the last decade). The clever bit is in the hardware itself, cutting overhead by rendering multiple viewports without having to dispatch multiple calls. That's not something you can do without a hardware revision.

    • DPete27
    • 3 years ago

    I still like Nvidia’s SMP idea better. If your hardware can’t hit 90 FPS using SMP, take a page from the consoles and upscale.

      • DoomGuy64
      • 3 years ago

      Your auto-spam software double posted.

    • dmitriylm
    • 3 years ago

    I’ve been using this for the past month via a registry hack on the old runtime, and it’s very impressive. The fluidity improvement is massive, and games like Project Cars, which ran poorly on my GTX 970 in VR, are now smooth with many of the settings maxed out.

    • lilbuddhaman
    • 3 years ago

    [quote<]The technology requires Windows 8 or 10, so Windows 7 users are out of luck.[/quote<] So where's the registry entry to disable this limitation?

      • zqw
      • 3 years ago

      Win 8 and higher can manipulate render targets and framebuffers faster. It’s a common requirement, like here: [url<]http://store.steampowered.com/app/382110/[/url<]

    • thesmileman
    • 3 years ago

    And why can’t we use this without VR to help fill in framerate drops?

      • zqw
      • 3 years ago

      It’s not a general solution. It uses motion vectors, so arcing objects do a stair-step move. But in practice it’s easily better than TV motion smoothing, and doesn’t add latency! I expect it will become a general driver feature / OGL extension that engines can support.
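
The stair-step artifact zqw describes falls out of the math: extrapolating along the last frame-to-frame motion vector puts the synthesized frame on a straight-line tangent, so an object moving on an arc alternates between on-curve rendered positions and off-curve guesses. A quick numerical illustration (a hypothetical toy, not any runtime's actual code):

```python
import math

def arc_pos(t):
    """True position of an object moving along a unit circle."""
    return (math.cos(t), math.sin(t))

def extrapolated_mid(t0, t1):
    """ASW-style guess for a midpoint frame: the last rendered position
    plus half the last observed frame-to-frame motion vector."""
    x0, y0 = arc_pos(t0)
    x1, y1 = arc_pos(t1)
    return (x1 + 0.5 * (x1 - x0), y1 + 0.5 * (y1 - y0))

t0, t1 = 0.0, 0.5
gx, gy = extrapolated_mid(t0, t1)   # linear guess for t = 0.75
ax, ay = arc_pos(0.75)              # where the object actually is
err = math.hypot(gx - ax, gy - ay)  # the guess lands off the arc
radius = math.hypot(gx, gy)         # > 1: it steps outside the circle
```

Every rendered frame snaps the object back onto the true path, so the viewer sees it zigzag between the curve and the tangent.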

      • synthtel2
      • 3 years ago

      We can, it’s just not as useful, since it’s purely a smoothness boost and doesn’t get any info from the game world to your eyes faster. VR is the field that’s uniquely sensitive to smoothness problems, so it makes sense for it to show up there first.

      • Entroper
      • 3 years ago

      We don’t need to. Async spacewarp and timewarp are aimed at reducing latency between the HMD’s reported position and orientation and the rendered image. Without VR, there is no adjustment of the rendered image based on the user’s head movements.

        • synthtel2
        • 3 years ago

        There can be adjustment based on camera position in the game world, regardless of the view device. It would take some messing around in the internals of a game, and for some games it really wouldn’t be workable, but there’s nothing inherently wrong with it.

          • Entroper
          • 3 years ago

          Yeah, if you have TrackIR [url=https://www.youtube.com/watch?v=Jd3-eiid-Uw<]or something[/url<], you can make the monitor like a "window" into the virtual world, and then ASW becomes useful. But I would call that a primitive form of VR.

            • synthtel2
            • 3 years ago

            Envision a perfectly conventional gaming rig running a normal game. There’s a good chance you’d need to render a whole frame anyway to figure out what’s actually going on in the viewport, but a whole lot of smoothness isn’t about what’s in the viewport, it’s about movement of the camera, and we can make pretty good guesses about where the camera will be in 10 millis (and those guesses can get a whole lot better with a few more hints from the game logic). As of last frame, the player was strafing right at 3 m/s and the button for that is still being pushed? Well, why don’t we go ahead and reproject to get an intermediate frame with a camera position 3 cm to the right (if interpolating to 100 fps)?

            *If* a game is already rendering a guard band for angular timewarp (whatever that was called), I don’t see much reason to not make this an option too. It’s a cheap algo, and it could be useful.

            The real question is what games would benefit from that guard band and more basic reprojection outside of VR. It could do wonders for mouselook latency (probably 30-45 ms reduced in a typical throughput-oriented pipeline at 60 fps), but there is a very real framerate penalty for it. There would be a couple other shenanigans involved for a dev as far as making everything feel right, but they’re not insurmountable and I think it would be a pretty cool feature.
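
The back-of-the-envelope math in that strafing example checks out: the reprojection offset is just velocity times the time to the synthesized frame. A minimal sketch, with a hypothetical helper name that isn't from any SDK:

```python
def predicted_camera_offset(velocity_mps, output_fps):
    """Camera translation (in meters) to apply when reprojecting an
    intermediate frame, assuming the player's last-known velocity holds
    for the interval between the rendered frame and the synthetic one."""
    frame_interval = 1.0 / output_fps   # e.g. 10 ms at 100 FPS output
    return velocity_mps * frame_interval

# Strafing right at 3 m/s, interpolating to 100 FPS:
offset_m = predicted_camera_offset(3.0, 100.0)   # 0.03 m, i.e. 3 cm
```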

      • jihadjoe
      • 3 years ago

      How?

      The thing with VR is you have a general idea of how to change the frame based on the changes in the head’s position. With a traditional monitor, the viewport is fixed, so there’s nothing to morph to unless you already know what the next frame looks like—and if you know that, you’ve already completed the next frame and might as well display it.

    • chuckula
    • 3 years ago

    I have a solution that [url=https://www.amazon.com/Dramamine-Non-Drowsy-Naturals-Natural-Ginger/dp/B00SD9IE9O<]doesn't require software[/url<].

    • Firestarter
    • 3 years ago

    Is there any indication that the Vive people have something similar on the way?

      • zqw
      • 3 years ago

      They got their version of ATW (rotations but not translations) into the SteamVR beta a few weeks ago, for Nvidia cards only.
