
Abundant Inverse Regression using Sufficient Reduction and its Applications

Hyunwoo J. Kim*, Brandon M. Smith*, Nagesh Adluru, Charles R. Dyer, Sterling C. Johnson, Vikas Singh, Abundant Inverse Regression using Sufficient Reduction and its Applications, in European Conference on Computer Vision (ECCV), October 2016.
Hyunwoo J. Kim* and Brandon M. Smith* are joint first authors.

Abstract

Statistical models such as linear regression drive numerous applications in computer vision and machine learning. The landscape of practical deployments of these formulations is dominated by forward regression models, which estimate the parameters of a function mapping a set of p covariates, x, to a response variable, y. The less-known alternative, inverse regression, offers various benefits that are much less explored in vision problems. The goal of this paper is to show how inverse regression in the "abundant" feature setting (i.e., many subsets of features are associated with the target label or response, as is the case for images), together with a statistical construction called sufficient reduction, yields highly flexible models that are a natural fit for model estimation tasks in vision. Specifically, we obtain formulations that provide the relevance of individual covariates used in prediction at the level of specific examples/samples, in a sense explaining why a particular prediction was made. With no compromise in performance relative to other methods, the ability to interpret why a learning algorithm behaves in a specific way for each prediction adds significant value in numerous applications. We illustrate these properties and the benefits of Abundant Inverse Regression (AIR) on three distinct applications.
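To make the forward/inverse contrast concrete, here is a minimal illustrative sketch (not the paper's AIR method) of the inverse-regression idea in the abundant setting: instead of fitting y as a function of x, we model each covariate x_j as a noisy function of y and then invert the fitted model to predict y. All names and the synthetic data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "abundant" data: each of the p covariates is a noisy
# scaled copy of the response y (the inverse-regression viewpoint
# models x given y, rather than y given x).
n, p = 200, 10
y = rng.normal(size=n)
b = rng.uniform(0.5, 1.5, size=p)          # true per-covariate slopes
X = np.outer(y, b) + 0.3 * rng.normal(size=(n, p))

# Inverse regression: fit E[x_j | y] = b_j * y for each covariate j
# by one-dimensional least squares (all p fits share the scalar y'y).
b_hat = (X.T @ y) / (y @ y)                # shape (p,)

# Prediction inverts the fitted inverse model: for a new x,
# y_hat = argmin_y ||x - b_hat * y||^2 = (b_hat . x) / (b_hat . b_hat).
def predict(x):
    return (b_hat @ x) / (b_hat @ b_hat)

# Sanity check on a noiseless covariate vector generated with y = 1.7.
x_new = 1.7 * b
y_hat = predict(x_new)
```

Because each covariate is fit against y separately, the per-covariate residuals |x_j - b_hat[j] * y_hat| directly indicate how much each covariate supports a given prediction, which is the kind of per-example relevance the abstract describes.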


Acknowledgments

This research was supported by NIH grant AG040396 and NSF CAREER award 1252725. Partial support was provided by UW ADRC AG033514, UW ICTR 1UL1RR025011, UW CPCP AI117924, and Waisman Core Grant P30 HD003352-45.
