Collaborative Facial Landmark Localization

Brandon M. Smith | Li Zhang

Abstract
In this paper we make the first effort, to the best of our knowledge, to combine multiple face landmark datasets with different landmark definitions into a super dataset, with a union of all landmark types computed in each image as output. Our approach is flexible, and our system can optionally use known landmarks in the target dataset to constrain the localization. Our novel pipeline is built upon variants of state-of-the-art facial landmark localization methods. Specifically, we propose to label images in the target dataset jointly rather than independently, and to exploit exemplars from both the source datasets and the target dataset. Our approach integrates nonparametric appearance modeling, nonparametric shape modeling, and graph matching to achieve this goal.
Publication
Brandon M. Smith, Li Zhang. Collaborative Facial Landmark Localization for Transferring Annotations Across Datasets, European Conference on Computer Vision (ECCV), September 2014. [Author's version: PDF 942 KB]
The original publication is available at www.springerlink.com.
Acknowledgements
This work is supported in part by NSF IIS-0845916, NSF IIS-0916441, a Sloan Research Fellowship, and a Packard Fellowship for Science and Engineering.
Supplementary AFLW Landmarks
A prime target dataset for our approach is the Annotated Facial Landmarks in the Wild (AFLW) dataset, which contains roughly 25,000 in-the-wild face images from Flickr, each manually annotated with up to 21 sparse landmarks (many landmarks are missing in each image). Our approach is well-suited to automatically supplementing AFLW with additional landmarks. We use the following source datasets: CMU Multi-PIE, Helen, and several annotated datasets (LFPW, AFW, IBUG) from the 300 Faces In-The-Wild Challenge (300-W). We provide 64 supplementary landmark types, and we fill in missing landmarks among the 21 types that AFLW defines, for a total of 85 landmark types. Our supplementary landmarks are available for download below. Please see the included readme.txt file for more information.
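As a minimal illustration of the merging step described above (not the authors' code, and not the download's actual file format), the sketch below combines a per-image AFLW annotation with supplementary landmarks so that the result holds the union of annotated landmark types; the landmark names and coordinates are hypothetical placeholders.

```python
# Illustrative sketch only: form the union of landmark types for one image.
# Supplementary landmarks fill in types that are missing (None) in the
# original annotation; original annotations take precedence where present.

def merge_landmarks(original, supplementary):
    """Return a dict mapping landmark-type name -> (x, y), combining
    the original annotation with supplementary landmarks."""
    merged = dict(supplementary)  # start from the supplementary types
    # Keep any landmark the original annotation actually provides.
    merged.update({k: v for k, v in original.items() if v is not None})
    return merged

# Hypothetical example: AFLW defines up to 21 types per image; here the
# nose tip is missing (None) and is filled in by the supplementary set.
original = {"left_eye_center": (120.0, 85.5), "nose_tip": None}
supplementary = {"nose_tip": (131.2, 110.7), "chin_contour_3": (98.4, 160.0)}

merged = merge_landmarks(original, supplementary)
# merged now contains the union of the annotated landmark types
```

In the full setting, applying the same union per image across all 25,000 AFLW faces yields up to 85 landmark types per face (the 21 AFLW types plus the 64 supplementary types).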
Download [ZIP 25.5 MB] Added September 11, 2014 |