Learning to Segment Under Various Forms of Weak Supervision

Jia Xu¹   Alexander G. Schwing²   Raquel Urtasun²
¹University of Wisconsin-Madison   ²University of Toronto
Abstract
Despite the promising performance of conventional, fully supervised algorithms, semantic segmentation remains an important yet challenging task. Because complete pixel-level annotations are scarce, it is of great interest to design segmentation methods that exploit weakly labeled data, which is readily available at a much larger scale. In contrast to the common practice of developing a different algorithm for each type of weak annotation, we propose a unified approach that incorporates various forms of weak supervision, namely image-level tags, bounding boxes, and partial labels, to produce a pixel-wise labeling. We conduct a rigorous evaluation on the challenging SIFT Flow dataset under various weakly labeled settings and show that our approach outperforms the state of the art by 12% in per-class accuracy while maintaining comparable per-pixel accuracy.
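To make the annotation types concrete, here is a minimal sketch (Python/NumPy, an illustration only and not the authors' unified formulation) of how image-level tags, bounding boxes, and partial labels can each be read as constraints on which classes a pixel may take. All names below (admissible_labels, Box) are hypothetical.

import numpy as np
from dataclasses import dataclass

@dataclass
class Box:
    # Bounding-box annotation: class index and corners (row/col, half-open).
    label: int
    top: int
    left: int
    bottom: int
    right: int

def admissible_labels(height, width, num_classes,
                      image_tags=None, boxes=None, partial_labels=None):
    """Return an (H, W, C) boolean mask of the classes each pixel may take.

    image_tags:     iterable of class indices present in the image (image-level tags)
    boxes:          iterable of Box annotations
    partial_labels: (H, W) int array, -1 where unlabeled, class index where labeled
    """
    mask = np.ones((height, width, num_classes), dtype=bool)

    # Image-level tags: classes not tagged cannot appear anywhere in the image.
    if image_tags is not None:
        allowed = np.zeros(num_classes, dtype=bool)
        allowed[list(image_tags)] = True
        mask &= allowed[None, None, :]

    # Bounding boxes: a boxed class may only appear inside (the union of) its boxes.
    if boxes:
        outside = {}
        for b in boxes:
            outside.setdefault(b.label, np.ones((height, width), dtype=bool))
            outside[b.label][b.top:b.bottom, b.left:b.right] = False
        for label, out in outside.items():
            mask[out, label] = False

    # Partial labels: annotated pixels are fixed to their given class.
    if partial_labels is not None:
        fixed = partial_labels >= 0
        mask[fixed, :] = False
        mask[fixed, partial_labels[fixed]] = True

    return mask

# Example: a 4x6 image with 3 classes, tagged {0, 2}, class 2 confined to a box,
# and one pixel hand-labeled as class 0.
partial = -np.ones((4, 6), dtype=int)
partial[1, 1] = 0
mask = admissible_labels(4, 6, 3,
                         image_tags={0, 2},
                         boxes=[Box(label=2, top=0, left=3, bottom=3, right=6)],
                         partial_labels=partial)

In the paper these forms of supervision are handled within a single unified approach rather than as a hard post-hoc mask; the sketch is only meant to illustrate what each annotation type tells us about the pixel-wise labeling.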
Publication
Jia Xu, Alexander G. Schwing, and Raquel Urtasun. Learning to Segment Under Various Forms of Weak Supervision. In CVPR, 2015.
Source code
Email Jia for the link. |
Acknowledgments
We thank NVIDIA Corporation for the donation of GPUs used in this research.
This work was partially funded by NSF RI 1116584 and ONR-N00014-14-1-0232.