Virtu goes Universal, tackles vsync

Computex — We sat down with the folks at Lucid yesterday to discuss the company’s plans for its Virtu GPU virtualization software. The big news is that Virtu is no longer restricted to Intel’s integrated graphics platforms. A new version of the software called Virtu Universal is designed to work with a broader range of platforms, including AMD’s upcoming Llano APUs. In addition to demoing the software on a Llano system paired with a GeForce GTX 260, Lucid also had it up and running on a couple of notebooks. Yep, Virtu is going mobile, too.

Virtu Universal will support both i-Mode and d-Mode, allowing users to choose whether to have their IGP or discrete graphics card serve as the primary graphics adapter. We’ve found that d-Mode offers the best performance because it can take advantage of game-specific optimizations built into discrete graphics drivers, making Lucid’s latest Virtu feature a little puzzling.

Dubbed Virtual Vsync, this new addition aims to enable frame rates well in excess of a monitor’s refresh rate without subjecting users to unsightly tearing. The software is ready to go, and we saw it running the Devil May Cry 4 benchmark at close to 100 FPS with Vsync enabled (on a 60 Hz display) and no visual artifacts. Virtual Vsync is meant for folks who want the best performance possible, but it works its magic on the IGP, so you’ll need to be running in i-Mode. That’ll cost you some performance right off the bat, which seems to run counter to Virtual Vsync’s mission. Eliminating tearing at high frame rates is still a neat trick, though, and licensees can choose whether or not to support the feature. Lucid still has no plans to offer Virtu software for sale to the general public, so it’s up to motherboard and notebook makers to bundle the software with their products.
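
Lucid hasn’t published the details of how Virtual Vsync decides which frames to show, but the general trick behind tear-free output above the refresh rate is simple enough: let the fast GPU render continuously while the display side picks up only the most recently completed frame at each vblank. Here’s a minimal Python sketch of that idea, purely illustrative; the render_next_frame, wait_for_vblank, and scan_out callbacks are hypothetical stand-ins, not anything from Lucid’s software.

    # Illustrative only: a renderer thread overwrites a shared "latest frame"
    # slot as fast as it can, while a presenter thread shows whatever finished
    # most recently at each vblank. Frames that never get presented are dropped.
    import threading

    latest_frame = None
    lock = threading.Lock()

    def render_loop(render_next_frame):
        """Discrete GPU side: render flat out, overwriting the shared slot."""
        global latest_frame
        while True:
            frame = render_next_frame()      # may complete 100+ times per second
            with lock:
                latest_frame = frame         # older unseen frames are simply dropped

    def present_loop(wait_for_vblank, scan_out):
        """IGP/display side: once per refresh, show the newest complete frame."""
        while True:
            wait_for_vblank()                # 60 times per second on a 60 Hz panel
            with lock:
                frame = latest_frame
            if frame is not None:
                scan_out(frame)              # whole frames swap at vblank, so no tearing

Because whole frames are swapped only at vblank, scanout never mixes two frames on screen, which is where tearing comes from; the cost is that anything rendered between refreshes gets thrown away.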

Comments closed
    • willmore
    • 11 years ago

    I bow to your superior knowledge.

    • Meadows
    • 11 years ago

    You’d need a *lot* of intermediary frames for what we call “motion blur”, because blending just a few pictures generates either an unsightly blur or clear double/triple vision.

    • odizzido
    • 11 years ago

    I wonder if this would apply vsync to everything? Even with vsync always on in my drivers a lot of stuff just doesn’t sync.

    • willmore
    • 11 years ago

    How about keeping track of what point in time a particular frame was rendered (or even not rendering a whole frame, but instead predicting when in time a portion of a frame will actually be displayed and rendering only that portion) and then blending those frames together, weighted by how close to ‘now’ they are?

    It would be like a sort of temporal oversampling. The blending would be the critical step because it would act as a low-pass filter to prevent temporal aliasing, like wheels appearing to spin backwards due to the mixing of the sampling frequency and the inherent frequency of the observed event.

    (I’m sure this is complete bollocks because I have never done any graphics work. But I have done some signal processing work.)
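
    For what it’s worth, the weighted blend being described might look something like the rough sketch below. It is purely illustrative: the exponential falloff and the tau constant are arbitrary choices, and it assumes frames arrive as numpy arrays with timestamps.

        # Illustrative sketch of a temporally weighted frame blend: frames closer
        # to the display time contribute more to the final image, acting as a
        # crude low-pass filter over time.
        import numpy as np

        def temporal_blend(frames, timestamps, display_time, tau=0.008):
            """Blend frames with exponentially decaying weights (tau in seconds)."""
            weights = np.array([np.exp(-abs(display_time - t) / tau) for t in timestamps])
            weights /= weights.sum()
            out = np.zeros_like(frames[0], dtype=np.float64)
            for frame, w in zip(frames, weights):
                out += w * frame
            return out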

    • cygnus1
    • 11 years ago

    [quote<]From Wikipedia: Another method of triple buffering involves synchronizing with the monitor frame rate. Drawing is not done if both back buffers contain finished images that have not been displayed yet. This avoids wasting CPU drawing undisplayed images and also results in a more constant frame rate (smoother movement of moving objects), but with increased latency.[2] This is the case when using triple buffering in DirectX, where a chain of 3 buffers are rendered and always displayed.[/quote<]

    Sorry for quoting Wikipedia, but I’ve always been under the impression that triple buffering with vsync on renders up to two or three frames ahead and then stops rendering until the buffers are cleared. That delay in rendering can look bad with fluctuating high frame rates and lots of motion, since the amount of change between those rendering stalls ends up looking jittery.

    I think the difference between Virtual Vsync and regular triple buffering (or no vsync at all) is that Virtual Vsync will keep the dedicated card rendering non-stop and drop the excess frames more intelligently, avoiding that jitter while also avoiding tearing when frame rates go above your refresh rate.
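
    To make that distinction concrete, here’s a toy simulation with assumed numbers (a renderer capable of 100 fps in front of a 60 Hz display). One policy stalls the renderer whenever both back buffers hold undisplayed frames; the other keeps rendering and overwrites the older undisplayed frame. It illustrates the two behaviors being discussed, not anything Lucid has actually documented.

        # Toy comparison: "stall when back buffers are full" vs. "keep rendering
        # and drop the older undisplayed frame". Numbers are made up for illustration.
        RENDER_TIME = 0.010      # renderer could manage 100 fps unconstrained
        REFRESH = 1 / 60.0       # 60 Hz display

        def simulate(stall_when_full, duration=1.0):
            t = 0.0                       # renderer clock
            next_vblank = REFRESH
            pending = 0                   # finished-but-undisplayed frames (two back buffers)
            rendered = displayed = dropped = 0
            while next_vblank <= duration:
                # render until the next vblank, respecting the chosen policy
                while t + RENDER_TIME <= next_vblank:
                    if stall_when_full and pending >= 2:
                        t = next_vblank   # renderer idles until a buffer frees up
                        break
                    t += RENDER_TIME
                    rendered += 1
                    if pending == 2:
                        dropped += 1      # overwrite the older undisplayed frame
                    else:
                        pending += 1
                if pending:
                    displayed += 1        # newest available frame scans out at vblank
                    pending -= 1
                next_vblank += REFRESH
            return rendered, displayed, dropped

        # Stall policy: renders roughly as many frames as it displays, none dropped.
        print("stall policy:", simulate(stall_when_full=True))
        # Drop policy: renders ~100 frames, displays ~60, quietly discards the rest.
        print("drop policy: ", simulate(stall_when_full=False))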

    • Damage
    • 11 years ago

    Like… vsync with triple-buffering? 🙂

    • cygnus1
    • 11 years ago

    I’d wager they’re having the IGP simply grab frames and drop the ones that fall outside the timeslot, but doing it intelligently so they don’t let too many frames go by to the point where you’d notice stutter.

    • kamikaziechameleon
    • 11 years ago

    Lucid has some awesome technologies, but they need to make them easier for the average consumer to acquire. They are more often than not bundled with undesirable mobos.

    • willmore
    • 11 years ago

    I’m guessing it might be more like the ‘racing the beam’ type of lag reduction that old scanning cameras and TVs used to use. For them, the scene was ‘observed’ and ‘drawn’ one little spot at a time. If things moved while they were being scanned, they would be drawn as they were seen. This can lead to a number of temporal/spatial artifacts, but ‘tearing’ isn’t one of them.

    Now, we do almost everything more like movies, where we capture a complete scene in one go and display it the same way, which is what vsync ensures. If Lucid were to do something clever (and I think they’ve shown they can be), then they might have found a way to act more like a scanning camera system than a static frame camera system.

    Either way, I’m not sure I see the point of it. 🙂 But I’m not an “OMG, must have!” over-the-top gamer.

    • theonespork
    • 11 years ago

    Not necessarily. Instead of tearing you could end up with smearing. I guess taking the time to find a white paper on this would be most useful, but conjecture is fun, right? Could the software just be allowing some sort of graphic oversampling/supersampling to occur? From the little info here it would make sense that the final export would be on the lesser card, allowing the more powerful card to be some sort of a hot tub time machine, err, slave engine handling a lot of difficult, high horsepower computational work on the raw files.

    Feel free to shred my conjecture in 3…2…1…

    • Meadows
    • 11 years ago

    Contradictory. You either have new information appear as tearing or interleaved motion, or you don’t have new information. You can’t put 100 frames per second into 60 frames per second and eliminate tearing without losing 40 frames. It’s elementary knowledge.

    On the other hand, I’m pretty sure this will eliminate vsync-related hiccups and framerate aberrations (like having it flicker between 45 and 60 if your frame missteps by a millisecond), and lessen the input delay by a good margin. All good things, and that’s what the feature should be advertising.
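
    For the curious, the arithmetic behind those aberrations is easy to sketch with assumed frame times: under double-buffered vsync on a 60 Hz panel, a frame can only be shown at a vblank, so missing the 16.7 ms budget by even a millisecond means waiting a whole extra refresh, and a mix of fast and slow frames averages out to those in-between readings.

        # Assumed numbers, purely illustrative: effective display rate under
        # double-buffered vsync, where a late frame waits for the next vblank.
        import math

        REFRESH = 1 / 60.0                     # 16.7 ms budget per frame

        def displayed_interval(render_time):
            """Time between displayed frames, rounded up to whole refreshes."""
            return math.ceil(render_time / REFRESH) * REFRESH

        fast, slow = 0.016, 0.0177             # 16 ms fits; 17.7 ms misses by ~1 ms
        print(1 / displayed_interval(fast))    # 60 fps
        print(1 / displayed_interval(slow))    # 30 fps: the late frame waits a full refresh
        avg = (displayed_interval(fast) + displayed_interval(slow)) / 2
        print(1 / avg)                         # alternating fast/slow frames averages 40 fps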
