Gradients as Features for Deep Representation Learning

Fangzhou Mu 1
Yingyu Liang 1
Yin Li 1,2,*

1Department of Computer Sciences
2Department of Biostatistics & Medical Informatics
University of Wisconsin-Madison


Code [GitHub]
ICLR 2020 [Paper] [Slides] [Bibtex]

Figure: (a) An illustration of our parametrization. (b) An overview of our proposed model.


Abstract

We address the challenging problem of deep representation learning--the efficient adaptation of a pre-trained deep network to different tasks. Specifically, we propose to explore gradient-based features. These features are gradients of a task-specific loss with respect to the model parameters, evaluated at a given input sample. Our key innovation is the design of a linear model that incorporates both the gradients and the activations of the pre-trained network. We show that our model provides a local linear approximation to an underlying deep model, and discuss important theoretical insights. Moreover, we present an efficient algorithm for training and inference with our model that avoids explicitly computing the gradients. We evaluate our method on a range of representation learning tasks, across several datasets and network architectures. It achieves strong results in all settings, well aligned with our theoretical insights.
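To make the local linear approximation concrete, below is a minimal PyTorch sketch using torch.func (PyTorch >= 2.0). The toy MLP backbone, the name linearized_forward, and the random training data are our own illustrative choices, not the authors' released code; the gradient term is evaluated as a Jacobian-vector product, in the spirit of (though not necessarily identical to) the paper's efficient algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call, jvp

# Toy stand-in for a pre-trained backbone f(x; theta_0). The paper uses
# large pre-trained CNNs; this small MLP just keeps the sketch runnable.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Frozen reference parameters theta_0.
theta0 = {k: v.detach() for k, v in backbone.named_parameters()}

# Learnable parameter offset dtheta. The linearized model is
#   f(x; theta_0 + dtheta) ~= f(x; theta_0) + <dtheta, grad_theta f(x; theta_0)>,
# i.e. an activation term plus a gradient term that is linear in dtheta.
dtheta = {k: torch.zeros_like(v, requires_grad=True) for k, v in theta0.items()}

def linearized_forward(x):
    f = lambda params: functional_call(backbone, params, (x,))
    # jvp returns f(x; theta_0) and the directional derivative along dtheta,
    # so the gradient term is obtained without materializing any per-sample
    # gradient vectors. Backprop then flows through the tangent to dtheta
    # by composing forward- and reverse-mode AD.
    out, dout = jvp(f, (theta0,), (dtheta,))
    return out + dout

# Fit the linear model (random data for illustration only).
opt = torch.optim.SGD(dtheta.values(), lr=0.1)
for _ in range(10):
    x = torch.randn(16, 32)
    y = torch.randint(0, 10, (16,))
    loss = F.cross_entropy(linearized_forward(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the Jacobian-vector product is a directional derivative, the model above never forms an explicit gradient feature vector, which is what makes training and inference efficient in practice.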


Acknowledgements

This work was supported in part by grant FA9550-18-1-0166. The authors would also like to acknowledge support provided by the University of Wisconsin-Madison Office of the Vice Chancellor for Research and Graduate Education with funding from the Wisconsin Alumni Research Foundation.