Nvidia unveils Super SloMo deep learning-powered motion interpolation

I'm sure that, like me, you probably smirk or roll your eyes when you see a TV investigator confidently command a subordinate to "enhance" a low-resolution image. Well, we live in the future, folks. Nvidia's just demonstrated a similar sort of technology for slowing down standard-speed video. Check out this video of what the company calls Super SloMo.

Nvidia says that the system runs on Tesla V100 GPUs and uses the PyTorch deep-learning framework. The team that created this technology trained its system on over 11,000 videos shot at 240 FPS. Once trained, the neural network was able to take a regular video and synthesize realistic-looking intermediate frames to produce a higher-frame-rate version.

That higher-frame-rate video can then be played back at the original frame rate to produce a slow-motion effect, even when the source was recorded at a low frame rate. Alternatively, you could watch the interpolated video at its new, higher frame rate—assuming you have a display that can reproduce it, of course.
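To make the arithmetic concrete, here's a minimal sketch of frame-rate upsampling. Note the interpolation itself is just a naive linear cross-fade between neighboring frames—Super SloMo instead uses a neural network to estimate motion and warp pixels, which is what makes its results look realistic—but the bookkeeping of where intermediate frames slot in, and how playback at the original rate yields slow motion, is the same. All function names here are illustrative, not from Nvidia's code.

```python
import numpy as np

def interpolate_frames(f0, f1, n_intermediate):
    """Generate n_intermediate frames between f0 and f1 by linear blending.
    (A stand-in for the learned, motion-aware interpolation in Super SloMo.)"""
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)
        frames.append(((1 - t) * f0 + t * f1).astype(f0.dtype))
    return frames

def upsample_video(frames, factor):
    """Insert (factor - 1) intermediate frames between each pair of
    original frames, multiplying the effective frame rate by `factor`."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.extend(interpolate_frames(a, b, factor - 1))
    out.append(frames[-1])
    return out

# A toy 4-frame "clip"; going from 30 FPS to 240 FPS is an 8x factor.
clip = [np.full((2, 2), v, dtype=np.float32) for v in range(4)]
slow = upsample_video(clip, 8)
# 3 gaps * 8 frames + the final frame = 25 frames; playing those back
# at the original 30 FPS stretches the clip to roughly 8x its duration.
print(len(slow))
```

Played back at the original 30 FPS, the 25-frame result runs about eight times longer than the 4-frame source, which is exactly the slow-motion effect described above.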

The effect, at least as demonstrated in the video above, is incredibly realistic. It's easy to envision folks making use of this technology on a future GeForce product. If we step a bit into the realm of fantasy, it's also easy to imagine this technology—perhaps along with a fixed-function accelerator—being used to improve the smoothness of movies, TV shows, or games.

If you're a developer looking to learn exactly how Super SloMo works, you'll have to wait until Thursday. Nvidia's researchers will be talking about the technique in a presentation at the 2018 Computer Vision and Pattern Recognition conference in Salt Lake City, UT.
