TPCast wireless VR kit lets Oculus Rift owners roam free

Just about everyone agrees that VR is more immersive when it works without a rat's nest of cables. Not worrying about tripping over wires is part of the allure of untethered operation, but reducing the number of connections that must be made to get the experience started is likely just as important. We covered TPCast's wireless interface kit for HTC's Vive VR headset, as well as the Sixa Rivvr add-on that is said to work with HMDs from both HTC and Oculus. HTC also has a partnership with Intel to develop a cord-deletion kit using 802.11ad Wi-Fi. TPCast is now back with an upcoming kit made to work with the Oculus Rift.

RX Module for HTC Vive-specific wireless kit

TPCast claims that its wireless adapters can deliver "2K" video at up to 90 FPS with just 2 ms of latency. John Carmack estimates that anything less than 20 ms of latency is acceptable for most individuals. The battery pack should deliver about five hours of use before a recharge is needed. TPCast's system requires a line of sight between the headset and the transmitter box, but the connection is reportedly difficult to break.
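For perspective, here's a quick back-of-the-envelope sketch using only the figures above (the 2 ms number is TPCast's claim, not an independent measurement):

[code<]
# Back-of-the-envelope math using the figures above: TPCast's claimed 2 ms
# link latency against Carmack's ~20 ms motion-to-photons guideline.
BUDGET_MS = 20.0            # Carmack's rough acceptability ceiling
LINK_MS = 2.0               # TPCast's claimed transmission latency
FRAME_MS = 1000.0 / 90      # one refresh interval at 90 Hz (~11.1 ms)

print(f"Frame time at 90 Hz: {FRAME_MS:.1f} ms")
print(f"Budget left after the wireless hop: {BUDGET_MS - LINK_MS:.1f} ms")
[/code<]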

TPCast's kit for the HTC Vive has been available in China for six months and is now shipping in Europe, though it has yet to go on sale in the US. According to Tom's Hardware, US shipments were supposed to begin last month, but units have yet to be delivered to pre-order customers. TPCast plans to have its Rift-specific kits on sale before the end of 2017. The company didn't provide pricing information, but a since-removed page in Microsoft's online store suggested a price of $299. As a reminder, the price of the Oculus Rift itself was just permanently reduced to $399.

Comments closed
    • Freon
    • 5 years ago

    [url<]https://en.wikipedia.org/wiki/2K_resolution[/url<]

    • psuedonymous
    • 5 years ago

    [quote<] I really don't understand why people can't grasp this fundamental issue![/quote<] Because game-event latency is a massively distant second* to [b<]head movement latency[/b<]. And because you can time events to occur at fixed times prior to scanout occurring, you can pre-time to:

    - Sample the IMU at the last possible point
    - Perform Timewarp/Spacewarp (both operations that are effectively time-invariant and will always complete in the same number of operations, so can be precisely pre-timed)
    - Start scanning out immediately as the warp completes

    This reduces your IMU forward-prediction time to the lowest possible, AND makes it a set forward-prediction time rather than a variable one. On the other hand, if you treat a HMD as 'just another monitor' and go through the compositor, you don't even have the guarantee that the buffer refresh interval will be in phase with the panel refresh, even if you set all displays to the same refresh rate! If you have a HMD at 90Hz and a primary monitor at 60Hz, you're just SOL.

    *Distant third, really: you want to give priority to hand movement too. If you render hands to a separate buffer from the rest of the game world, you can start that render later in the pipeline than the game-view render, clawing back some latency, and possibly even do a translation warp before compositing to correct the remaining position (but not orientation) offset.
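    A minimal sketch of that pre-timed loop, with made-up timings; sample_imu, warp_frame, and begin_scanout are hypothetical stand-ins for illustration, not any real SDK's API:

[code<]
import time

REFRESH_S = 1.0 / 90       # 90 Hz panel
WARP_BUDGET_S = 0.0015     # assumed fixed cost of the (time-invariant) warp

def run_pretimed_frame(next_vsync_s, sample_imu, warp_frame, begin_scanout):
    """Schedule the late IMU sample so the warp finishes right at vsync.

    Because the warp always takes the same number of operations, its start
    time can be fixed in advance, which makes the forward-prediction
    interval constant instead of variable.
    """
    warp_start_s = next_vsync_s - WARP_BUDGET_S
    # Sleep until the last possible moment, then sample the IMU.
    time.sleep(max(0.0, warp_start_s - time.monotonic()))
    pose = sample_imu()               # latest-possible head pose
    frame = warp_frame(pose)          # reproject the already-rendered frame
    begin_scanout(frame)              # scanout starts as the warp completes
    return next_vsync_s + REFRESH_S   # vsync time for the next frame
[/code<]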

    • Chrispy_
    • 5 years ago

    I dunno. 1920 isn’t 2K, and although I’ve come to accept such sloppy adoption of the label “2K” when talking specifically about display resolution, 2K is either 2×1000 or 2×(2^10).

    If I only received 1920/2048 of what I paid for all the time, I’d start to get angry. It’s short by 6.25%.

    • Chrispy_
    • 5 years ago

    I like the way you’re thinking, but it’s not the type of prediction psuedonymous and I are talking about:

    Yes, ATW prediction is generally very good because – as you point out – the human head is easy to predict: it moves relatively slowly compared to limbs and digits, because it hurts to make sudden direction changes too frequently. Whilst ATW uses this to great effect in updating the viewport, it obviously fails on everything else, such as actors and objects in the scene.

    There’s already noticeable judder of actors/objects with ATW, and that is completely separate from the input lag experienced with objects controlled directly by the player. ATW can’t influence anything other than the viewport, and that is why it doesn’t (and can’t) help with input lag. Even assuming that ATW’s prediction does a perfect job (which it doesn’t, but it’s good enough not to worry about), it permits a reduced rendering framerate, which doubles the input lag for other actors/objects. The two most important actors/objects in VR are the player character and the character’s visible limbs/weapons/tools in the viewport.

    The best way to explain input lag in VR is with a traditional FPS on a large screen: you move your head/eyes around to look at different parts of the screen, and that is the bit that ATW helps with when the framerate is lower. The important stuff, like when your character jumps/shoots/dodges, is still subject to display latency. Correlating loosely with Carmack’s 20ms guideline, popular gaming-monitor reviews point out that <16ms of display latency is acceptable, 16-33ms is only for casual gaming, and >33ms will feel laggy for anything.

    That sub-16ms display latency is usually added to ~8ms of game-engine (120Hz) and rendering latency to give a 25ms threshold for the total system latency of acceptable gaming input lag, which is also (not coincidentally) what’s required for the oft-cited framerate that human vision perceives as “smooth” (Simon Cooke, Microsoft Research Group – [url=http://accidentalscientist.com/2014/12/why-movies-look-weird-at-48fps-and-games-are-better-at-60fps-and-the-uncanny-valley.html<]this article[/url<] is an introduction with links to multiple in-depth papers).
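    A literal restatement of those thresholds as a sketch; the cutoffs and the ~8ms engine figure come straight from the comment above, not from new measurements:

[code<]
def rate_display_lag(display_ms):
    """Bucket display latency per the review thresholds quoted above."""
    if display_ms < 16:
        return "acceptable"
    if display_ms <= 33:
        return "casual gaming only"
    return "laggy for anything"

ENGINE_MS = 8  # ~8 ms of game-engine/render latency at 120 Hz, per above

for display_ms in (10, 20, 40):
    total = display_ms + ENGINE_MS
    print(f"{display_ms} ms display: {rate_display_lag(display_ms)}, "
          f"~{total} ms total system latency")
[/code<]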

    • Laykun
    • 5 years ago

    But this is exactly what ATW does, and it’s active on both high-end headsets. Human head motion is actually incredibly predictable, so it’s easy to forecast into the future.

    • Puiucs
    • 5 years ago

    At least they aren’t using 2K wrong here. I’ve seen so many people use “2K” thinking it means 2x 1080p.

    • Chrispy_
    • 5 years ago

    Sadly, prediction makes it worse, because humans recognise latency not from constant motion but from changes in motion, which is exactly where prediction hinders rather than helps.

    The only way to reduce the [i<]feel[/i<] of latency is to actually reduce the latency. This is why a lot of people do everything possible to turn down details and render at 90fps rather than hit 45fps and use ASW.

    • Chrispy_
    • 5 years ago

    It doesn’t even matter if you use Direct Mode. If the Rift refreshes at 90Hz, there’s an 11ms wait until the next refresh, even assuming 0ms response time on the OLEDs.

    If an event happens immediately before a display refresh, the latency from the display alone is 0ms. If it happens immediately after the display refresh, the latency from the display alone is 11ms. That’s ignoring all the other things in the chain that can add delay. The average delay for events to be updated on screen is halfway between 0ms and 11ms, which is 5.5ms.

    There is no magic or voodoo going on here. If the display has just refreshed and some event happens, the next possible refresh of a 90Hz panel is 11ms away. That latency is unavoidable short of an infinitely fast display. I really don’t understand why people can’t grasp this fundamental issue!
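    A tiny simulation makes the same point: for events arriving at random times against a 90Hz refresh, the display-alone delay averages half a refresh interval:

[code<]
import random

REFRESH_MS = 1000 / 90   # ~11.1 ms between refreshes at 90 Hz

# Events land at random moments within the refresh interval, then wait
# for the next refresh before they can appear on the panel.
waits = [random.uniform(0, REFRESH_MS) for _ in range(100_000)]
print(f"mean display-only delay: {sum(waits) / len(waits):.1f} ms")
# Prints ~5.5 ms: half a refresh interval, the average cited above.
[/code<]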

    • psuedonymous
    • 5 years ago

    [quote<]Thanks to the fixed V-Sync rate of current VR headsets, you have an automatic 0-11ms of latency built into the display refresh.[/quote<] This is not the case. By using Direct Mode (rather than treating a HMD as merely an additional monitor), its refresh timings can be controlled separately from the desktop compositor. You can also schedule jobs to run at specific times in advance of VSYNC, because this functionality was added to drivers (and exposed through Windows) at the behest of Oculus during post-DK2 development, prior to the CV1 release. This is what allowed the switch from synchronous Timewarp to Asynchronous Timewarp.

    • Laykun
    • 5 years ago

    This assumes asynchronous time warp doesn’t exist. And I don’t believe the Rift is stuck waiting for v-sync. The way Carmack describes it, because it’s an OLED they can stay just ahead of the raster scan-out when rendering; there’s lots of dark magic happening behind the scenes.

    • Laykun
    • 5 years ago

    I wonder if this 5ms delay could be mitigated with asynchronous space warp, where you basically predict the head motion ahead of time and render for when you expect the image to show up on the device. This would mean the Oculus SDK would have to be aware of the TPCast though.
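    A minimal sketch of that sort of forward prediction, assuming constant angular velocity over the extra delay; extrapolate_orientation and the sample numbers are illustrative, not the Oculus SDK:

[code<]
import numpy as np

def extrapolate_orientation(q, omega, dt):
    """Predict a head orientation dt seconds ahead.

    q     -- current orientation quaternion (w, x, y, z), unit length
    omega -- angular velocity in rad/s (x, y, z), assumed constant over dt
    """
    # Quaternion derivative: q' = 0.5 * q (x) (0, omega)
    w, x, y, z = q
    ox, oy, oz = omega
    dq = 0.5 * np.array([
        -x * ox - y * oy - z * oz,
        w * ox + y * oz - z * oy,
        w * oy + z * ox - x * oz,
        w * oz + x * oy - y * ox,
    ])
    q_pred = q + dq * dt          # first-order step, fine for a few ms
    return q_pred / np.linalg.norm(q_pred)

# Predict 5 ms ahead during a 60 deg/s yaw, roughly the wireless link's delay.
q_now = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, np.radians(60), 0.0])
print(extrapolate_orientation(q_now, omega, 0.005))
[/code<]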

    • willmore
    • 5 years ago

    TPcast? Like throwing rolls of it?

    • Chrispy_
    • 5 years ago

    This.

    Thanks to the fixed V-Sync rate of current VR headsets, you have an automatic 0-11ms of latency built into the display refresh. Average that and you’ve lost 5.5ms from your 20ms without even considering any frame-buffering whatsoever.

    Realistically, you cannot afford any extra latency, and the motion-sensor latency is already high enough compared to a keyboard and mouse that even layman users notice and comment on it (source: client VR open days).

    • Sargent Duck
    • 5 years ago

    Make it a sixth, just to be safe.

    • Voldenuit
    • 5 years ago

    So, the solution is to drink a fifth of vodka to underclock my brain before gaming?

    • psuedonymous
    • 5 years ago

    Shipping for the Vive version has been repeatedly pushed back, and nobody has actually measured the end-to-end latency of their system yet. The SiBeam WHD modules that power the system only spec a 5ms average end-to-end latency.

    Remember that the 20ms motion-to-photons budget is your TOTAL time budget. If you spend 5ms of that on transmission, that means 5ms has to be clawed back from somewhere else in the pipeline. The only part you otherwise have control of is the rendering process, so you lose 5ms of rendering time.

    With the Rift, you have about 17ms of render budget available after fixed overheads (sample, scanout, etc.), and with the Vive about 11ms (SteamVR has a greater latency overhead, lacking some optimisations and often being stuck with protocol translation). A 5ms latency addition from a wireless adapter could mean between 30% and 45% of your available render budget has been lost.

    Worse, transmission latency is added AFTER late-sampling (e.g. TimeWarp) so cannot be compensated for.
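    Putting those numbers in one place (the ~17ms and ~11ms render budgets are the figures quoted above, not independent measurements):

[code<]
# The render-budget loss described above, using the figures quoted there.
LINK_MS = 5.0  # SiBeam's specced average end-to-end transmission latency

for hmd, budget_ms in (("Rift", 17.0), ("Vive", 11.0)):
    print(f"{hmd}: a 5 ms link eats {LINK_MS / budget_ms:.0%} "
          f"of the ~{budget_ms:.0f} ms render budget")
# Rift: ~29%, Vive: ~45% -- the "between 30% and 45%" range above.
[/code<]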
