University of Wisconsin-Madison
This paper extends the classical warping-based optical flow method to achieve accurate flow in the presence of spatially-varying motion blur. Our idea is to parameterize the appearance of each frame as a function of both the pixel motion and the motion-induced blur. We search for the flows that best match two consecutive frames, which amounts to finding the derivative of a blurred frame with respect to both the motion and the blur, where the blur itself is a function of the motion. We propose an efficient technique to calculate the derivatives using pre-filtering. Our technique avoids performing spatially-varying filtering (which can be computationally expensive) during the optimization iterations. In the end, our derivative calculation technique can be easily incorporated into classical flow code to handle video with non-uniform motion blur with little performance penalty. Our method is evaluated on both synthetic and real videos and outperforms conventional flow methods in the presence of motion blur.
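The derivative described in the abstract can be illustrated with the chain rule. In the sketch below, I is a sharp frame, w the flow at a pixel x, and k_w a blur kernel determined by the motion; this notation and the split into two terms are our illustrative simplification, not the paper's exact parameterization (see the PDF for the precise formulation):

```latex
% Blurred, warped appearance of a frame as a function of the flow w:
%   \hat{I}(x; w) = (k_w * I)(x + w)
% Differentiating with respect to w yields two terms, because the blur
% kernel k_w itself depends on the motion w:
\frac{\partial \hat{I}}{\partial w}
  = \underbrace{\nabla (k_w * I)(x + w)}_{\text{motion term}}
  \;+\;
  \underbrace{\left(\frac{\partial k_w}{\partial w} * I\right)(x + w)}_{\text{blur term}}
```

Because k_w varies from pixel to pixel, evaluating these convolutions naively inside every optimization iteration would require spatially-varying filtering; the pre-filtering technique in the paper is what avoids that cost.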
Related project: Video Denoising for Motion-based Exposure Control
Travis Portz, Li Zhang, Hongrui Jiang. Optical Flow in the Presence of Spatially-Varying Motion Blur. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2012. [PDF 3.74MB, Poster PDF 4.3MB]
This work is supported in part by NSF EFRI-0937847, NSF IIS-0845916, NSF IIS-0916441, a Sloan Research Fellowship, and a Packard Fellowship for Science and Engineering. Travis Portz is also supported by a University of Wisconsin-Madison Department of Electrical and Computer Engineering graduate fellowship.
Source code for our C++ implementation with Matlab interface is available under the MIT license. [ZIP 1.8MB]
See the PDF linked above for details on the blurred image derivatives and the grid construction.
In the examples below, we compute flow from a “target” image to a “source” image and warp the source back along the flow to the target.
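The warping step can be sketched as standard backward warping with bilinear interpolation: for each target pixel, follow the flow into the source image and resample. This is a minimal illustration, not the released implementation; the image layout (row-major float arrays), border clamping, and function names are our assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sample a row-major float image at a real-valued location (x, y)
// with bilinear interpolation, clamping coordinates to the image borders.
float sampleBilinear(const std::vector<float>& img, int w, int h,
                     float x, float y) {
    x = std::min(std::max(x, 0.0f), static_cast<float>(w - 1));
    y = std::min(std::max(y, 0.0f), static_cast<float>(h - 1));
    int x0 = static_cast<int>(x), y0 = static_cast<int>(y);
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    float fx = x - x0, fy = y - y0;
    float top = (1 - fx) * img[y0 * w + x0] + fx * img[y0 * w + x1];
    float bot = (1 - fx) * img[y1 * w + x0] + fx * img[y1 * w + x1];
    return (1 - fy) * top + fy * bot;
}

// Warp the source image back to the target frame along the flow (u, v),
// which is defined on the target grid:
//   warped(x, y) = source(x + u(x, y), y + v(x, y))
std::vector<float> warpBackward(const std::vector<float>& src, int w, int h,
                                const std::vector<float>& u,
                                const std::vector<float>& v) {
    std::vector<float> out(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out[y * w + x] =
                sampleBilinear(src, w, h, x + u[y * w + x], y + v[y * w + x]);
    return out;
}
```

With a correct flow, the warped source should closely match the target wherever the motion model holds, which is what the side-by-side comparisons below visualize.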