MotionDSP’s vReveal and Nvidia’s CUDA

We already know Nvidia graphics processors can do more than play games. Thanks to Nvidia’s CUDA architecture, GeForces can be used for tasks ranging from game physics acceleration to video transcoding. Today, we’re going to look at another item in the CUDA bag of tricks: video enhancement.

Chances are you haven’t heard of MotionDSP: it’s a Silicon Valley start-up that has made a business out of improving video quality for security firms and government agencies with its Ikena software suite. You know those CSI moments in movies and TV shows when a video is being analyzed, and with the touch of a button, the image can be zoomed and enhanced to reveal the identity of the bad guy? That’s the kind of software MotionDSP produces.

MotionDSP made its first foray into consumer products with FixMyMovie. That tool brought some of the firm’s video enhancement algorithms to mainstream consumers, who are used to working with low-quality video shot on devices like mobile phones. FixMyMovie is no more, however, and in its place MotionDSP has launched vReveal. Like FixMyMovie before it, vReveal is designed to bring the power of enterprise-grade video enhancement software down to consumers—but with a twist.

Consumer apps need to support consumer hardware, and to that end, MotionDSP went to great lengths to optimize its algorithms for mainstream configurations. Then it went one step further by partnering with Nvidia to build in CUDA support, with the intent of vaulting vReveal’s performance to new levels. The pairing seems like a natural fit, since it should theoretically yield the same sort of performance advantages as other GPGPU apps, such as the Badaboom video transcoder.

vReveal’s features

vReveal is not a video editor. It doesn’t compete with iMovie, Adobe Premiere Elements, or even Windows Movie Maker. Instead, vReveal is a video enhancer, offering features not found in traditional video editors. The program’s interface is fairly straightforward, and you’ll feel right at home if you’ve ever used Google’s Picasa.

So, what makes vReveal special? Here’s a breakdown of its different features:

  • Clean — Removes visual artifacts induced by low-quality image sensors. Noise from dim filming conditions can be cleaned up, too, as can macroblocking caused by compression algorithms.
  • 2x Resolution — This is the big one. MotionDSP claims vReveal is the only consumer app with “super-resolution” capabilities, which is basically a fancy marketing term for advanced resizing algorithms. Other programs use various interpolation methods, but vReveal also draws on adjacent video frames to enhance the image. By comparing the contents of preceding and following frames, 2x Resolution can enlarge a video more accurately and fill in the new pixels.
  • Sharpen — Does exactly what you’d expect: sharpens object edges and reduces video blur.
  • Auto Contrast — Once again, the name is pretty self-explanatory. This feature automatically improves washed-out or overly dark content.
  • Stabilize — By cropping a bit of the image on all sides, the software produces a more stable final picture. This feature is similar to image stabilization options in cameras and other video software.
  • Fill Light — Brightens the foreground of a video in the event of an underexposed subject.
  • CUDA support — Accelerates many enhancement effects, offloading work from the CPU onto compatible GPUs to improve performance. Currently, the software lacks support for SLI multi-GPU configs and G80-powered video cards (like the GeForce 8800 GTX, GTS 320, GTS 640, and Ultra). Support for those configurations may arrive down the line, though.
  • Batch modification — vReveal can modify and export a series of video files in one pass to save time.
  • YouTube uploading — Exports videos directly to YouTube.
  • Broad file format support — With the appropriate DirectShow codecs installed, vReveal should import just about any video under the sun, including 3GP, MP4, and MOV.
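MotionDSP doesn’t publish its 2x Resolution algorithm, but the multi-frame idea behind “super-resolution” can be illustrated with a classic shift-and-add sketch. If adjacent frames are offset from one another by sub-pixel amounts (assumed known here; a real system estimates the motion), their samples land on different positions of a finer grid and can be merged into a larger image. The function name and the NumPy approach below are our own illustration, not vReveal’s code:

```python
import numpy as np

def shift_and_add_sr(frames, offsets, scale=2):
    """Naive multi-frame super-resolution by shift-and-add.

    frames  -- list of 2-D arrays (same shape): adjacent video frames
    offsets -- per-frame (dy, dx) motion in high-res pixels, assumed
               known here; a real system estimates them from the video
    scale   -- upscaling factor (vReveal's "2x Resolution" uses 2)
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float64)
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Scatter each low-res sample onto the high-res grid at its
        # motion-compensated position.
        ys = (np.arange(h) * scale + dy) % (h * scale)
        xs = (np.arange(w) * scale + dx) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    # Average wherever multiple frames contributed samples.
    filled = weight > 0
    acc[filled] /= weight[filled]
    return acc, filled
```

With four frames shifted by half-pixel steps, every position of the 2x grid receives a real sample, which is why multi-frame enlargement can beat single-frame interpolation; in practice, imperfect motion estimates mean the result is a better guess, not an exact reconstruction.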

As a mainstream application, vReveal does have some limitations. After all, MotionDSP wouldn’t want to cannibalize Ikena sales. You can import video at any resolution you want, but vReveal only lets you apply enhancements to videos with a vertical resolution of up to 576 lines. The 2x Resolution feature is effectively limited to clips with resolutions of 352×288 and below, which is fine for most camera phones that shoot at 320×240. vReveal’s “super-resolution” algorithm also uses fewer frames than Ikena’s, further limiting the extent to which you can blow up images.

Our testing methods

We had intended to use a Phenom-based system with a GeForce 8800 GTX to test vReveal, but the previously mentioned G80 limitation sunk those plans. Instead, we did our testing with a GeForce 9400M-powered Apple MacBook and a GeForce 8800 GT-based desktop system. As always, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.
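The run-several-times-and-average protocol can be expressed as a tiny timing harness. This is a generic sketch of the method (the function and its names are ours), not code from vReveal or our test scripts:

```python
import statistics
import time

def benchmark(job, runs=3):
    """Run a job several times and average the wall-clock times,
    mirroring the run-at-least-three-times-and-average protocol."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        job()  # e.g. a callable that renders one clip
        times.append(time.perf_counter() - start)
    return statistics.mean(times)
```

Averaging multiple runs smooths out background noise from the OS and disk caches, which matters most when the individual jobs are short, as our test clips were.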

Our test systems were configured like so:

                          MacBook                                       Desktop
Processor                 Intel Mobile Core 2 Duo P7350 2.0GHz          Intel Core 2 Duo E6400 2.13GHz
System bus                1066MHz (266MHz quad-pumped)                  1066MHz (266MHz quad-pumped)
Motherboard               Apple Mac-F42D89C8                            MSI P965 Platinum
BIOS revision             MB51.88Z.0073.B06.0810291326                  1.8
Chipset                   Nvidia GeForce 9400M                          P965 MCH
Chipset drivers           ForceWare 179.48                              INF update, Intel Matrix Storage Manager 8.7
Memory size               2GB (2 DIMMs)                                 4GB (4 DIMMs)
Memory type               2x 1GB Hynix DDR3-1066 SDRAM                  2x 2GB Corsair ValueSelect DDR2-667 SDRAM
CAS latency (CL)          7                                             5
RAS to CAS delay (tRCD)   7                                             5
RAS precharge (tRP)       7                                             5
Cycle time (tRAS)         20                                            15
Command rate              2T                                            2T
Audio                     Realtek ALC885                                Creative Sound Blaster X-Fi XtremeGamer
Graphics                  Nvidia GeForce 9400M                          Zotac GeForce 8800 GT Amp! Edition
                          with ForceWare 179.48 drivers                 with ForceWare 182.06 drivers
Hard drive                Fujitsu 160GB 5400RPM SATA                    2x Western Digital Caviar SE16 320GB SATA in RAID-1
OS                        Windows XP Professional                       Windows Vista Home Premium x64
OS updates                Service Pack 3, latest updates at time        Service Pack 1, latest updates at time
                          of writing                                    of writing

We used the following versions of our test applications:

  • MotionDSP vReveal 1.0

The tests and methods we employ are usually publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

What can vReveal do?

Although vReveal deals with video, we can better study its touch-up features using still screenshots. The only feature that really must be seen in motion is image stabilization. We’ve resized our shots of the original clips to provide a more direct comparison with their higher-resolution, enhanced counterparts.

Unless otherwise noted, all of these videos were shot with a Nokia N82 camera phone at a resolution of 320×240 and 15 frames per second. After all, it wouldn’t make sense to put vReveal through its paces with footage from a high-definition camera, and camera phones are some of the most popular devices for capturing video on the go. First up is a video of some racked billiard balls:


Enhanced with Clean, 2x Resolution, Sharpen, and Auto Contrast

Right off the bat, we can see how vReveal can clean up a typical video from a handheld device. The original video is rife with macroblocking (also known as tiling) as well as a lack of definition around the hard edges of the billiard balls. The entire image is also a little desaturated, which is a common issue with lower-quality imaging sensors.

The enhanced video has noticeably more impact thanks to the automatically adjusted contrast, and the cleaned up macroblocking and sharpened edges have added definition to the image, as well. vReveal’s 2x Resolution algorithm has also improved clarity, particularly with the numbers on the balls. While the video still won’t win the Oscar for best cinematography, vReveal’s got it looking better than it did before. Next up, let’s expose my unabashed fandom for Batman comics:


Enhanced with Clean, 2x Resolution, Sharpen, and Auto Contrast

We saw a hint of this with the numbers on the billiard balls, but vReveal does a surprisingly good job of cleaning up text in low-quality videos. The combination of vReveal’s 2x Resolution algorithm and contrast adjustment generally makes text more legible, and it makes lettering pop out instead of getting lost in macroblocking. I don’t know who would normally shoot a video of comic books, but my Batman graphic novels show what this software can do with a more complicated scene.

vReveal’s algorithms can only go so far to compensate for low-resolution videos, but the text is definitely improved on the covers for The Killing Joke and Dark Victory. High contrast areas receive the most love, with the white lettering and detail in Commissioner Gordon becoming more noticeable. Once again, the enhanced picture is more vibrant than the original.


Enhanced with Clean, 2x Resolution, Sharpen, and Auto Contrast

Behold my keyboard in all of its grody glory! Thankfully, the 320×240 video masks how dirty it really is, and not even vReveal can restore that. The improved contrast is the most noticeable enhancement to this video, but Clean, Sharpen, and 2x Resolution also make hard edges appear more defined. The bottom edge of the space bar (going from black to silver) looks noticeably better, as does the line below it where the wrist rest meets the keyboard. vReveal wasn’t able to restore some of the smaller symbols, although the larger lettering looks slightly clearer.

Our final clip is not one I shot myself. It’s a short 640×480 video of a dog chewing on a bone while lounging on the couch. What a life.


Enhanced with Clean, Sharpen, Auto Contrast, Stabilize, and Fill Light 50%

Because of this original video’s resolution, we were unable to use the 2x Resolution algorithm. That’s all right, though, because this clip’s real problem is the dim lighting. There’s a large amount of noise in the picture, and the entire image is underexposed. Clean and Sharpen cleared up a lot of the artifacts, but the image was still too dark, even after Auto Contrast did its thing. To help, we turned to the Fill Light feature, which did a good job of bringing out detail in the foreground without completely blowing out the brighter background.

The final image has noticeably more detail than the original, though vReveal isn’t able to do anything about the extremely dark areas near the dog’s body—the source video just doesn’t have enough information. You might notice some slight cropping in the enhanced image, and that’s due to the Stabilize feature we used for this video. The resulting clip didn’t have subtle camera shakes anymore, although that came at the cost of some of the picture near the edges. That’s certainly a worthwhile sacrifice, though.

Once again, we’re left with a video that’s been improved just about as far it can be with software. To improve the scene any further, the video would probably need to be shot again with better lighting.

Performance considerations

Of course, one of the standout features of vReveal is its ability to leverage the power of Nvidia GPUs to improve performance. With that in mind, the question becomes: how does GPU-accelerated rendering affect performance?

vReveal provides an easily accessible toggle for GPU acceleration, so benchmarking was surprisingly easy. We simply rendered our clips out to Windows Media Video format using the previously noted enhancements, and we measured encoding times with GPU assistance and with software rendering only. vReveal reported the final encoding time after each job, so there was no need to break out the stopwatch.

Something to keep in mind is that these clips were all very short. The keyboard clip was only seven seconds, the comics video was nine seconds, and the billiards one was the longest at 10 seconds. Meanwhile, the clip of the dog lounging on the couch was the shortest of all at only five seconds long, but it had double the resolution and a higher frame rate. You’ll see shortly how that affected performance. The first three clips all used identical enhancement settings, so they provide an interesting look at how well CUDA can accelerate the same features across different clips. Let’s take a look at the MacBook test system first:

Color me surprised—whatever color that may be (maybe lilac). For the first three videos, GPU-assisted encoding increased performance by an average of roughly 33%. Moving on to the dog-on-the-couch video, we also see some interesting results. Despite being the shortest clip of the bunch, it takes around twice as long to render, likely due to the higher frame rate. Also, if you remember, the couch dog clip couldn’t use the 2x Resolution enhancement, and we had Stabilize and Fill Light enabled. Even with different settings and a higher-quality source video, our integrated GPU managed to provide an 18% boost over software rendering. That’s pretty impressive for a graphics chip with only 16 stream processors, especially when you consider the GeForce GTX 280 and 285 have 240 of those.

Now let’s see what sort of performance boost an inexpensive GeForce 8800 GT with 112 SPs provides over the 9400M:

Perhaps the first observation to get out of the way is that our two test systems’ CPUs performed almost identically. The real star of the graph, however, is the GeForce 8800 GT, which managed to decrease encoding times by around 75% across the board. Seven times the stream processors doesn’t net seven times the performance, and diminishing returns have to kick in somewhere. Nevertheless, it’s impressive to see how well CUDA scales from an integrated graphics solution to a fairly modest discrete graphics card.
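To put those numbers in perspective: a 75% cut in encoding time works out to roughly a 4x speedup, while the 8800 GT has seven times the 9400M’s stream processors (112 vs. 16). A quick sanity check of that arithmetic:

```python
def speedup_from_time_cut(fraction_saved):
    """Convert 'encoding time dropped by X%' into a speedup factor.
    Cutting times by 75% means finishing in a quarter of the time,
    i.e. a 4x speedup, well short of the 7x stream-processor ratio."""
    return 1.0 / (1.0 - fraction_saved)

print(speedup_from_time_cut(0.75))   # 4.0
print(112 / 16)                      # 7.0
```

The gap between 4x and 7x is where the diminishing returns mentioned above live: fixed costs like video decoding, disk I/O, and CPU-side work don’t shrink as stream processors are added.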

I’m also relieved that users can reap noticeable performance gains in CUDA applications without having to break the bank. (A GeForce 9800 GT, which is essentially the same as the old 8800 GT, can be had for as little as $100 at Newegg.)

CPU usage

The Intel Core 2 Duo is no slouch in terms of rendering performance, so it’s impressive to see even a lowly integrated graphics chipset improve performance by such a noticeable margin. But what sort of impact does GPU-assisted video encoding have on CPU usage and multitasking in general?

Software rendering with 2GHz Core 2 Duo P7350

Software rendering with 2.13GHz Core 2 Duo E6400

With GPU assistance disabled, vReveal managed to max out both cores during video rendering, just as it should. Getting anything else done while encoding was a pain, and even web browsing was noticeably affected. Unless you go into the Task Manager and change vReveal’s priority (which can only help so much), it’s a safe bet you won’t be using your computer while it’s rendering movies.

GPU-assisted rendering with GeForce 9400M

GPU-assisted rendering with GeForce 8800 GT

With the GPU switch in vReveal flipped, the CPU usage graph turns out to be somewhat surprising. While GPU-assisted rendering helps complete the task faster, vReveal leaves plenty of CPU cycles unused, letting the user multitask with relative ease. There’s a spike when the file creation process begins, before CPU usage settles down to around 65-70% for the majority of the job.

You might notice differences in the load balance between Windows XP and Vista, with the XP machine more evenly distributing the work, while Vista places the brunt of the load on one core and uses the secondary core more sparingly. This is likely due to Vista’s new thread scheduler, rather than anything vReveal is doing, and it could result in a better multitasking environment.

Evidently, vReveal is using the GPU to do some of the CPU’s work instead of loading up both chips fully. Plenty of users will no doubt appreciate the ability to use their computer while it’s rendering, but I would love to see a high-performance mode that would not only use the GPU for encoding, but also maximize CPU cycles, as well. That assumes, of course, that fully utilizing both the CPU and GPU at the same time could substantially improve rendering times.


Conclusions

MotionDSP has definitely brought an interesting product to market in vReveal. Current mobile video devices certainly produce low-quality content, and by offering a stripped-down version of its enterprise tools, MotionDSP gives users access to video enhancement technology that would otherwise be out of their price range. MotionDSP also managed to package vReveal into a simple, straightforward user interface. And there’s no question that the technology works—videos captured from low-quality sources look better after being run through the app.

The software’s limitations are a bit disappointing, though. The resolution cap on the product’s most compelling feature—the 2x Resolution filter—is a major handicap that could prevent many users from getting the results they desire. Old camcorder videos in standard-def format cry out for this sort of enhancement, but that wedding video probably won’t benefit from vReveal’s best filter—unless it was shot on a camera phone, which raises all sorts of other personal issues. Beyond that, the restriction of vReveal’s remaining image enhancement features to SD video really limits its relevance to the short term. Even camera phones are moving to resolutions of 640×480 and beyond, while YouTube (vReveal’s primary video sharing destination) already supports high-definition content. And most of those other filters are already available in mainstream video editing suites.

Bottom line, vReveal’s two biggest virtues are its almost-magic 2x Resolution filter and its speedy GPU-based video rendering. Those features, along with its batch-processing capabilities, could make it a decent addition to a video editing toolbox that already includes a full-featured program like Premiere Elements or iMovie. In light of its limited role and the resolution caps involved, though, the $49 price of entry seems a little high to me. MotionDSP already has plans for an enhanced version of vReveal with higher-resolution video support, but as one might expect, that will come at an added cost. The higher price tag could put vReveal into direct competition with all-purpose video editing suites, which again seems a little steep for a limited-use consumer application.

As for Nvidia’s CUDA, there’s no question GPU optimization can yield performance gains in the right applications. Of course, since this is image processing, it’s not very far afield from the GPU’s original mission. Perhaps the biggest surprise for us was the extent to which an integrated graphics chipset like the GeForce 9400M can improve performance over a dual-core CPU alone. This reality could open the door to CUDA (and, in the future, OpenCL) applications on a variety of entry-level computers. And a mid-range card like the GeForce 8800 GT or 9800 GT can provide an even more substantial boost in CUDA applications. We’d like to see more of this sort of thing soon, please. For now, though, we’ll have to get by with the handful of apps like this one that demonstrate GPU computing’s potential.

Comments closed
    • pookien
    • 12 years ago

    Turns out that for increasing video resolution, vReveal is not only not the best but one of the worst solutions. There are much better ones, many of them free.

    • Manabu
    • 13 years ago

    Very good for a user-friendly, semi-automated app. But there’s no comparison with, or look at, similar programs.

    Look at what you can do with Avisynth, which is free software:

    For now, there is only one GPU-accelerated filter for Avisynth (it uses OpenGL, not CUDA, so even your ATI 9800 can run it), but certainly more will come as OpenCL arrives. And there are many advanced ways to blow up resolution, not only their technology. Again, look at the video, and see the POWER of Avisynth! ^_^

    • blubje
    • 14 years ago

    re “2x resolution” it’s still interpolation even if it’s using adjacent frames 🙂

    • MadManOriginal
    • 14 years ago

    Yeah but sharing a core between different processes is slower, either for the foreground app or one that you can let run its course like this video editing one, especially when one of the apps pegs cores to 100%.

    • Meadows
    • 14 years ago

    Unlikely, maxing out a core doesn’t mean that every other process will stop like absolute zero degrees, the OS makes sure of that. And Vista/W7 makes it even more sure than XP does.

    • Delphis
    • 14 years ago

    If you have one of AMD’s triple-core thingies then maybe that might be good, since it might just -[

    • Meadows
    • 14 years ago

    I don’t need such a thing, but do you plan to support odd cores? AMD will be displeased.

    • motiondsp
    • 14 years ago

    Check our support forums, paulpod — we have a solution for combining HQ de-interlacing with vReveal enhancement.

    • motiondsp
    • 14 years ago

    Actually, vReveal supports multiple CPU cores: 2, 4, 8, etc. Go ahead and try it — and report back on the performance difference you see. — MotionDSP

    • Meadows
    • 14 years ago

    I think the software supports no more than 2 cores.

    • LoneWolf15
    • 14 years ago

    I’m curious how a quad-core (software rendering) compares with a dual-core plus a CUDA-equipped graphics card. The software seems multithreaded (it heavily loads both cores of a Core 2 Duo), so I think this would make for an interesting comparison.

    • flip-mode
    • 14 years ago

    Um, all that CUDA acceleration is great, but above and beyond that, that vReveal software is sweeeet! That software just shot straight to the very top of my “want to buy” list. I’ll let my CPU do the crunching for now, though. Maybe in the future my Radeon will be able to accelerate it.

    Are there any alternatives for Radeon owners?

    • DrDillyBar
    • 14 years ago

    It totally is. 🙂

    • paulpod
    • 14 years ago

    But what about handling interlaced video?!?!

    They make a lot of bold quality claims but it is not readily apparent from their website whether they properly handle interlaced video. Namely that they perform advanced de-interlace (motion/vector adaptive, etc.), process the 60fps result and then re-interlace.

    The software is useless for old camcorder video without this. Anyone know how well they handle interlaced video?

    (Oh, and it is also useless if you can not set it to force an assumed TFF or BFF interlace on 30 fps AVI files.)

    • UberGerbil
    • 14 years ago

    Done appropriately, it’s a cool effect. Just don’t mistake it for an accurate rendition of the real world (which is what people looking for more saturation in their snapshots are doing).

    • no51
    • 14 years ago

    I was gonna go with Super Troopers.

    • Chryx
    • 14 years ago

    It’s from Blade Runner

    • Tamale
    • 14 years ago

    Sounds like minority report to me.

    • Meadows
    • 14 years ago

    I have no idea what he just said, but I have a hunch it’s a quote from some movie or TV series where they used hilarious gibberish in a forensic scene that they hoped wouldn’t get busted by people with actual know-how.

    • crazybus
    • 14 years ago

    Hey, combine oversaturation with minimal global contrast but ridiculous local contrast, add a copious amount of “sharpening” and voilà, you have “HDR”.

    • SecretMaster
    • 14 years ago

    You feeling okay?

    • Vasilyfav
    • 14 years ago

    Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

    • UberGerbil
    • 14 years ago

    Yeah, people luv them some saturation. Back in the film era there were films with unnatural saturation characteristics that were really popular because they produced pictures with a lot of “pop.” Of course when you compared to the actual scene you realized the colors were completely unnatural, but they match what people think they see (or remember).

    I do recall pointing out to someone that the snapshots of their kids showed them to have a magenta (rather than rosy) flush to their faces. Once they noticed it, it really bugged them 😉

    • UberGerbil
    • 14 years ago

    But with video you don’t just have four pixels. You have pixels extending out in both space and time. That can allow you to factor out things like noise and go beyond the Nyquist limit for any single frame. And you don’t have to stop there, because there is additional information available if the subject moves over the course of many frames, offering different perspectives. This can be particularly useful when trying to extract things like the digits off a license plate, because the text is known to be planar but is rarely aligned with the sensor pixels in any given frame; however if the text rotates through space the sensor may pick up different aspects of the letters/digits in different frames, allowing them to be reconstructed even though no single frame has enough information.

    Of course, that takes a lot of time and processing and has a lot of limitations. The magic “click one button and pull a clear shot of the license plate text from a single pixel” that you see in shows like CSI remains complete bunk. But, like instant DNA screening, it’s sometimes necessary to move the plot along (and often the writers are just lazy).

    • ssidbroadcast
    • 14 years ago

    Ever see /[

    • shiznit
    • 14 years ago

    Looks good but they can save the auto-contrast for the masses, I don’t like my colors messed with.

    • jackaroon
    • 14 years ago

    You’re not thinking 4th dimensionally, Marty!!

    It’s mimicking persistence of vision (or some portion of the human phenomenon). You’ve probably seen a few blurry videos and noticed that the stills from it may look like complete garbage, but while watching it play, you get the “idea” of what someone’s face looks like, because you know it’s the same object, can infer its speed and sub-pixel position of features, and can mentally reconstruct details from the common parts of many blurry images. It’s just making that inference explicit by adding it to the stills. In other words, it’s not 4 pixels, it’s 4 pixels * the number of frames. Like you said, though, you can only take it so far. For another thing, these inferences could be wrong.

    I imagine that there are some situations where it’s better than natural human persistence of vision, and some situations where it is worse. It could be fun to try and develop optical illusions tuned for its algorithms.

    • Meadows
    • 14 years ago

    Of course you can’t. Movies exaggerate to such degrees that it destroys any remaining suspension of disbelief.

    It’s possible however to reconstruct things like license plate numbers from seemingly broken cell phone photos through algorithm reverse-engineering and a bucket of secret sauce.
    Here’s what MotionDSP earns a livelihood with: check the videos and the example images on their site. This is how it’s done in the real world; movies are just nonsense (except back in the day, when series such as The X-Files presented reasonably believable forensic methods throughout).

    • Meadows
    • 14 years ago

    I just thought it would be impressive to show that this software is nearly an order of magnitude more expensive than the latest and greatest Photoshop version.

    • TravelMug
    • 14 years ago

    Of course you can. Seen it done in movies countless times.

    • Forge
    • 14 years ago

    What, no Pirate Bay or Demonoid link?

    Commenters these days! Get off my lawn!

    • Flying Fox
    • 14 years ago

    But can you really beat the laws of (optical) physics? If you only have 4 pixels you can’t seriously believe that you can reconstruct the whole face out of those?

    • Meadows
    • 14 years ago

    You're going to have to save a little bit of money.

    • GFC
    • 14 years ago

    Thanks for the review, I liked reading about it. Hope we will get more CUDA apps soon.

    • cygnus1
    • 14 years ago

    dang, the limits on vReveal kind of suck, how much is Ikena?
