Nonparametric Context Modeling of Local Appearance for Pose- and Expression-Robust Facial Landmark Localization
Brandon M. Smith1, Jonathan Brandt2, Zhe Lin2, Li Zhang1
1University of Wisconsin – Madison 2Adobe Research |
Abstract We propose a data-driven approach to facial landmark localization that models the correlations between each landmark and its surrounding appearance features. At runtime, each feature casts a weighted vote to predict landmark locations, where the weight is precomputed to take into account the feature's discriminative power. The feature voting-based landmark detection is more robust than previous local appearance-based detectors; we combine it with nonparametric shape regularization to build a novel facial landmark localization pipeline that is robust to scale, in-plane rotation, occlusion, expression, and most importantly, extreme head pose. We achieve state-of-the-art performance on two especially challenging in-the-wild datasets populated by faces with extreme head pose and expression.
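To make the voting step described in the abstract concrete, below is a minimal Python sketch (not the authors' implementation) of weighted feature voting for a single landmark: each local feature contributes candidate offsets toward the landmark, each vote is scaled by a precomputed weight reflecting that feature's discriminative power, and the votes are accumulated into a heatmap. The function name, data layout, and Gaussian smoothing are assumptions made for illustration; the nonparametric shape regularization stage is not shown.

```python
# Illustrative sketch of weighted feature voting for landmark localization.
# Assumed data layout (hypothetical, not from the paper):
#   feat_xy : (N, 2) positions of detected local features, as (x, y)
#   offsets : (N, K, 2) candidate offsets from each feature to the landmark
#   weights : (N, K) precomputed vote weights (discriminative power)
import numpy as np
from scipy.ndimage import gaussian_filter

def vote_landmark(feat_xy, offsets, weights, image_shape, sigma=3.0):
    """Accumulate weighted votes into a heatmap and return the peak (x, y)."""
    h, w = image_shape
    heatmap = np.zeros((h, w), dtype=np.float64)

    # Each feature predicts K candidate landmark positions.
    votes = feat_xy[:, None, :] + offsets  # (N, K, 2)

    # Add each weighted vote to the heatmap (nearest-pixel binning).
    for (vx, vy), wgt in zip(votes.reshape(-1, 2), weights.ravel()):
        ix, iy = int(round(vx)), int(round(vy))
        if 0 <= ix < w and 0 <= iy < h:
            heatmap[iy, ix] += wgt

    # Smooth so nearby votes reinforce each other, then take the peak.
    heatmap = gaussian_filter(heatmap, sigma)
    peak_y, peak_x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return peak_x, peak_y
```

In the full pipeline, one such heatmap would be produced per landmark, and the resulting candidate locations would then be regularized jointly by the nonparametric shape model.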
Publication Brandon M. Smith, Jonathan Brandt, Zhe Lin, Li Zhang. Nonparametric Context Modeling of Local Appearance for Pose- and Expression-Robust Facial Landmark Localization. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), June 2014. [PDF 1.4 MB]
Acknowledgements This work is supported in part by NSF IIS-0845916, NSF IIS-0916441, a Sloan Research Fellowship, a Packard Fellowship for Science and Engineering, and Adobe Systems Incorporated.
Supplementary Results Download [PDF 10.1 MB]
Supplementary Video Download [MOV 36.7 MB]