Re-Cinematography
Re-Cinematography is video stabilization taken to the next level: rather than just removing some of the jitter, the method tries to figure out what camera movements a professional with good equipment might have made, and then alters the video to look like that.
If you don't believe me, look at the video examples below!
This article presents an approach to postprocessing casually captured videos to improve apparent camera movement. Re-cinematography transforms each frame of a video such that the video better follows cinematic conventions. The approach breaks a video into shorter segments. Segments of the source video where there is no intentional camera movement are made to appear as if the camera is completely static. For segments with camera motions, camera paths are keyframed automatically and interpolated with matrix logarithms to give velocity-profiled movements that appear intentional and directed. Closeups are inserted to provide compositional variety in otherwise uniform segments. The approach automatically balances the tradeoff between motion smoothness and distortion to the original imagery. Results from our prototype show improvements to poor quality home videos.
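The keyframed camera paths described above can be sketched concretely: two keyframe transforms are blended in the matrix-log domain, so the in-between transforms follow a geodesic rather than a naive per-entry average. This is a minimal illustrative sketch, not the paper's implementation; the function and matrix names are assumptions.

```python
import numpy as np
from scipy.linalg import expm, logm

def interp_homography(H0, H1, t):
    """Interpolate between two 3x3 camera transforms at parameter t in [0, 1]
    by blending in the matrix-log domain (illustrative sketch only).

    To get a velocity-profiled (ease-in/ease-out) movement, t can be
    pre-warped, e.g. t = 3*t**2 - 2*t**3, before calling this function.
    Note logm can have branch issues for transforms with rotations near
    180 degrees; keyframes are assumed to be closer together than that.
    """
    # Relative transform taking the frame under H0 to the frame under H1.
    R = H1 @ np.linalg.inv(H0)
    # Fractional power via matrix log/exp: R**t = expm(t * logm(R)).
    Rt = expm(t * logm(R)).real
    return Rt @ H0
```

At t = 0 this reproduces the first keyframe exactly and at t = 1 the second; intermediate values trace a smooth path between them, which is the property that makes the interpolated motion look intentional rather than averaged.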
The final journal paper:
The original conference paper (which won best in its track). Note: the journal paper (above) really is a big improvement. Several methods from the original paper were significantly improved, and new material was added.
All source video was taken with a Sanyo Xacti C5 digicam (MPEG4, 640x480) with its default image stabilization on. So all of these examples can be considered comparisons with the state of the art. Most videos include a side-by-side comparison with the source, as well as a 2X comparison.
Note that the examples are all full frame: we don't crop artifacts at the edges (like we did in some later work). The problem is that the method often has to guess what is off the frame, and these artifacts come from guessing badly.
There is a "video paper," almost 9 minutes long, that explains what was in the paper (it was targeted at reviewers). It includes the examples below, though not in the order shown on this page.
If you look at the videos, you will undoubtedly notice artifacts. The biggest ones come from the fact that when we move the camera viewpoint, a portion of the frame becomes uncovered. In re-cinematography, we choose to "fill in" these portions of the frame from other frames - in the future or the past. Here's a dramatic example from the "swing" video above:
Notice that the woman in the red hat doesn't appear in the source image - we had to pull her from another part of the video!
Our system uses a simple method for fill-in; however, it will look up to 3 seconds into the past or future to find content to use. The alternative would be to crop the frame - and make the resulting video tiny. That is what we did in the later 3D stabilization work. It's a tradeoff.
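The idea of filling uncovered pixels from temporally nearby frames can be sketched as follows. This is a hedged simplification, not the paper's system: the `candidates` interface (nearby source frames already warped into the current stabilized frame's coordinates, each with a validity mask and a time offset) is an assumption for illustration.

```python
import numpy as np

def fill_uncovered(frame, hole_mask, candidates):
    """Fill uncovered pixels of a stabilized frame from nearby frames.

    frame      : 2D array, the stabilized frame with holes.
    hole_mask  : boolean array, True where pixels are uncovered.
    candidates : list of (time_offset, warped_frame, valid_mask) tuples -
                 nearby source frames warped into this frame's coordinates.
                 (Hypothetical interface; the actual system searches up to
                 +/- 3 seconds for usable content.)
    """
    out = frame.copy()
    holes = hole_mask.copy()
    # Prefer frames closest in time, whether in the past or the future.
    for _, warped, valid in sorted(candidates, key=lambda c: abs(c[0])):
        usable = holes & valid
        out[usable] = warped[usable]
        holes &= ~usable
        if not holes.any():
            break
    return out
```

Cropping instead of filling would simply shrink the frame to the intersection of all valid regions; the sketch above shows why filling keeps the full frame at the cost of occasional bad guesses.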
In follow-on work, we are developing better methods for fixing camera motions. Our first is a "3D warping" method (SIGGRAPH 2009 paper). It provides better results when it works: Re-Cinematography gets decent-to-good results on hard examples, while 3D warping gets great results on not-so-hard examples. For some technical (and some pragmatic) reasons, we haven't been able to run both methods on the same examples.