Warp Propagation for Video Resizing

Yuzhen Niu 1,2, Feng Liu 1, Xueqing Li 2, and Michael Gleicher 1

1Computer Sciences Department, University of Wisconsin-Madison

2School of Computer Science and Technology, Shandong University

Abstract
This paper presents a video resizing approach that provides both efficiency and temporal coherence. Prior approaches either sacrifice temporal coherence (resulting in jitter) or require expensive spatio-temporal optimization. By assessing the requirements for video resizing, we observe a fundamental tradeoff between temporal coherence in the background and shape preservation for the moving objects. Understanding this tradeoff enables us to devise a novel approach that is efficient, because it warps each frame independently, yet avoids introducing jitter. Like previous approaches, our method warps each frame so that the background is distorted similarly to prior frames while distortion of the moving objects is avoided. However, our approach introduces a motion history map that propagates information about the moving objects between frames, allowing for graceful tradeoffs between temporal coherence in the background and shape preservation for the moving objects. The approach handles scenes with significant camera and object motion without introducing jitter, yet warps each frame sequentially for efficiency. Experiments with a variety of videos demonstrate that our approach can efficiently produce high-quality video resizing results.
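To make the idea of a motion history map more concrete, the short Python sketch below shows one plausible way such a map could be maintained and used; it is an illustration under our own assumptions, not the paper's implementation. The function names, the exponential decay scheme, and the use of the map's complement as a weight on background temporal-coherence constraints are all hypothetical.

import numpy as np

# Illustrative sketch (assumed, not from the paper): keep a per-pixel
# "motion history" value that is refreshed where moving objects are
# detected in the current frame and decays elsewhere. Regions with high
# history can then relax the background temporal-coherence constraints
# in favor of preserving the moving objects' shapes.

def update_motion_history(prev_history, object_mask, decay=0.9):
    """prev_history: float array in [0, 1]; object_mask: boolean array."""
    history = decay * prev_history      # fade the influence of past motion
    history[object_mask] = 1.0          # refresh where objects are now
    return history

def coherence_weight(history):
    """Weight for background temporal-coherence constraints:
    low where motion history is high (recent object regions), high elsewhere."""
    return 1.0 - history

# Toy usage on a 4x4 frame grid with an object in the center.
history = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
history = update_motion_history(history, mask)
print(coherence_weight(history))

In this sketch, propagating the history forward frame by frame lets each frame be warped independently while still remembering where objects recently were, which mirrors the tradeoff described in the abstract.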
Paper
Yuzhen Niu, Feng Liu, Xueqing Li, and Michael Gleicher. Warp Propagation for Video Resizing.
IEEE CVPR 2010, San Francisco, CA, USA, June 2010. PDF
Demo
Download it here or watch it on YouTube.