Light Field Video Stabilization
ICCV 2009
Abstract
We describe a method for producing a smooth, stabilized video from the shaky input of a hand-held light field video camera—specifically, a small camera array. Traditional stabilization techniques dampen shake with 2D warps, and thus have limited ability to stabilize a significantly shaky camera motion through a 3D scene. Other recent stabilization techniques synthesize novel views as they would have been seen along a virtual, smooth 3D camera path, but are limited to static scenes. We show that video camera arrays enable much more powerful video stabilization, since they allow changes in viewpoint for a single time instant. Furthermore, we point out that the straightforward approach to light field video stabilization requires computing structure-from-motion, which can be brittle for typical consumer-level video of general dynamic scenes. We present a more robust approach that avoids input camera path reconstruction. Instead, we employ a spacetime optimization that directly computes a sequence of relative poses between the virtual camera and the camera array, while minimizing acceleration of salient visual features in the virtual image plane. We validate our novel method by comparing it to state-of-the-art stabilization software, such as Apple iMovie and 2d3 SteadyMove Pro, on a number of challenging scenes.
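To make the acceleration-minimizing objective above concrete, the sketch below smooths a set of 2D feature trajectories by penalizing their second differences (i.e., their acceleration in the image plane) while keeping them close to the original tracks. This is only an illustrative analogue in plain NumPy, not the paper's optimization, which solves for relative poses between the virtual camera and the camera array; the function and parameter names (smooth_trajectories, smoothness) are ours.

```python
# Illustrative sketch only (not the paper's solver): smooth 2D feature
# trajectories by penalizing their acceleration in the image plane.
import numpy as np

def smooth_trajectories(tracks, smoothness=50.0):
    """tracks: (T, N, 2) array of N tracked feature positions over T frames.
    Returns an array of the same shape with smoothed trajectories."""
    T = tracks.shape[0]
    # Second-difference (acceleration) operator for a length-T signal.
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    # Normal equations of: min_X ||X - X0||^2 + smoothness * ||D2 X||^2
    A = np.eye(T) + smoothness * (D2.T @ D2)
    flat = tracks.reshape(T, -1)           # (T, 2N): x/y of every track
    smoothed = np.linalg.solve(A, flat)    # one solve handles all tracks
    return smoothed.reshape(tracks.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 100)
    # One synthetic track duplicated five times, then shaken with noise.
    clean = np.stack([np.stack([100 * t, 50 + 10 * np.sin(4 * t)], axis=-1)] * 5, axis=1)
    shaky = clean + rng.normal(scale=3.0, size=clean.shape)
    stabilized = smooth_trajectories(shaky)
    print("mean deviation from the clean path:", np.abs(stabilized - clean).mean())
```

Increasing smoothness yields steadier trajectories at the cost of larger deviation from the original tracks, mirroring the usual stabilization trade-off between smoothness and the amount of re-rendering or cropping required.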
Paper
Brandon M. Smith, Li Zhang, Hailin Jin, Aseem Agarwala. Light Field Video Stabilization. IEEE International Conference on Computer Vision (ICCV), Sept. 29 - Oct. 2, 2009. [PDF 5.3 MB]
3D stereoscopic video stabilization (demo at ICCV 2011).
Acknowledgement
This work is supported in part by Adobe Systems Incorporated and by National Science Foundation grants IIS-0845916 and IIS-0916441.
Video
Download [MP4 60.4 MB]
Datasets
The following four datasets contain five-view PNG image sequences. The frame rate is 25 fps, and the image size is 480 x 360. The images have been corrected to remove radial distortion. Intrinsic and extrinsic camera parameters for each of the five cameras are available here.
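For those who want to experiment with the data, the sketch below shows one way to read all five views captured at a single time instant. The per-camera folder layout (cam0 through cam4 containing sorted PNG frames) and the name load_views are assumptions made for illustration, not the dataset's documented structure; adjust the paths to match the actual download.

```python
# Illustrative loader; the cam0..cam4 folder layout is an assumption,
# not the dataset's documented structure.
import glob
import os
import cv2  # OpenCV, used only to read the PNG frames

def load_views(root, frame_idx, num_views=5):
    """Return the five 480x360 views captured at one time instant.
    The images are already corrected for radial distortion."""
    views = []
    for cam in range(num_views):
        frame_paths = sorted(glob.glob(os.path.join(root, f"cam{cam}", "*.png")))
        views.append(cv2.imread(frame_paths[frame_idx], cv2.IMREAD_COLOR))
    return views
```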
Presentation (PowerPoint 2007)
Slides with embedded videos [ZIP (PPTX + AVI files) 206.6 MB]
Slides only [ZIP 6.7 MB, PDF 0.66 MB]
Poster [PPTX 7.7 MB, PDF 1.0 MB]
Selected results
Each column shows one dynamic scene from our experiments. The top row shows a frame from the original video. The middle row shows the same frame overlaid with green lines representing point trajectories traced over time. The bottom row shows the corresponding frame from the stabilized video, in which the point trajectories are significantly smoother. These examples demonstrate that our method can handle severe camera shake in complex dynamic scenes with large depth variation and nearby moving targets. Please see the accompanying video for a clearer demonstration.