Nvidia has learned to create "ultra-slow-motion" video

Slow-motion video, aka slow-mo, is hugely popular with viewers, but it is extremely difficult to produce. The cameras required are very expensive, and storing footage shot at 300,000 frames per second quickly becomes a problem in itself. A new technology from Nvidia is far better suited to the task.

The method, called "variable-length multi-frame interpolation," relies on machine learning: a neural network analyzes the source material and "guesses" the missing frames. Whether you want an 8x or a 15x virtual slowdown makes no difference; the technique has no hard upper limit, and the system can generate any number of intermediate images that blend seamlessly into the footage. More precisely, the viewer will not notice the trick.
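To make the "any slowdown factor" claim concrete, here is a minimal sketch (our own illustration, not Nvidia's code) of how an N-times slowdown maps onto intermediate timestamps: for each pair of consecutive source frames, N - 1 new frames are synthesized at evenly spaced times t between 0 and 1.

```python
def intermediate_timestamps(slowdown: int) -> list[float]:
    """For an N-x slowdown, synthesize N - 1 frames between each pair of
    originals, at evenly spaced times t in the open interval (0, 1)."""
    return [i / slowdown for i in range(1, slowdown)]

print(intermediate_timestamps(8))   # 7 new frames: t = 0.125, 0.25, ..., 0.875
print(intermediate_timestamps(15))  # 14 new frames, same method, no upper limit
```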

There are actually two neural networks at work. The first analyzes a pair of consecutive frames and estimates the optical flow between them in both the forward and backward directions, producing a plan for where the virtual frames' pixels should come from. The second network refines that interpolated flow and predicts visibility maps that suppress crooked pixels, ghosting, and other "artifacts." From this data the system generates an arbitrary number of warped ("distorted") versions of the first and second frames, blends each pair into a new intermediate frame, and inserts the results between the originals to "stretch" the video to the desired length.
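The final warp-and-blend step can be sketched in a few lines of PyTorch. This is a hedged illustration of the general idea, not Nvidia's implementation: `backward_warp`, `blend_intermediate`, and the visibility-map variable are our own names, and the flow fields and visibility map are assumed to come from the two networks described above.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp a frame (N, C, H, W) along a flow field (N, 2, H, W) with
    bilinear sampling, the standard way to "distort" a source frame."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # displaced y coordinates
    # grid_sample expects coordinates normalized to [-1, 1]
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(frame, grid, align_corners=True)

def blend_intermediate(frame0, frame1, flow_t0, flow_t1, vis0, t):
    """Fuse two warped source frames into the virtual frame at time t.
    vis0 is a per-pixel visibility map in [0, 1]; occluded pixels get a
    low weight, which is what suppresses ghosting artifacts."""
    warped0 = backward_warp(frame0, flow_t0)
    warped1 = backward_warp(frame1, flow_t1)
    w0 = (1.0 - t) * vis0          # trust frame0 more near t = 0
    w1 = t * (1.0 - vis0)          # trust frame1 more near t = 1
    return (w0 * warped0 + w1 * warped1) / (w0 + w1 + 1e-8)
```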

The technology was implemented on Nvidia Tesla V100 graphics cards using the PyTorch deep learning framework accelerated by cuDNN. According to the creators, this means a commercial version will not appear any time soon, and when it does, most of the computation will have to be offloaded to the cloud. But the result is stunning: the video comes out remarkably smooth, and even footage that is already super-slow-motion can be slowed down further.
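For reference, checking that PyTorch can actually use a CUDA GPU and cuDNN (the stack named above) takes only a few lines. This generic check is our own addition, not part of Nvidia's release.

```python
import torch

# PyTorch calls into cuDNN automatically when running on Nvidia GPUs;
# these checks confirm the stack the researchers describe is available.
print(torch.cuda.is_available())            # True on a CUDA-capable machine
print(torch.backends.cudnn.is_available())  # True if cuDNN was found
torch.backends.cudnn.benchmark = True       # let cuDNN pick the fastest kernels
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```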