GoogLeNet and ResNet via MMF Graphs

Quantifying the difference between two similarity matrices via the difference in their MMF graphs, i.e., the difference in the sequence of rotations -- and the corresponding differences in the positions of the low and high frequencies -- gives a measure of the difference between two sets of data representations. In other words, given deep representations extracted from a trained network, one can construct the MMF graph on the corresponding class-by-class covariance matrix of these representations. The changes between such graphs coming from two different networks, e.g., GoogLeNet and ResNet, give a novel distance/performance measure of the goodness of the representations. With classical generalization and performance measures failing to fully quantify the success of neural networks, these new summaries (which are an outcome of MMF) provide a way to construct visually comparable generalization measures.
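The pipeline above can be sketched in code. This is a minimal, hypothetical illustration, not the authors' implementation: `feats` stands in for last-hidden-layer features (its shape and the random data are assumptions), and `greedy_mmf` is a simplified greedy MMF in the spirit of Kondor-style multiresolution factorization, using Jacobi (Givens) rotations and retiring one index per step as a wavelet.

```python
import numpy as np

def greedy_mmf(C, n_rotations):
    """Sketch of a greedy MMF: at each step, pick the active pair
    (i, j) with the largest off-diagonal entry, apply the Jacobi
    (Givens) rotation that zeroes C[i, j], and retire the index with
    the smaller diagonal value as a wavelet (a high frequency)."""
    C = C.copy()
    n = C.shape[0]
    active = list(range(n))
    rotations, wavelets = [], []
    for _ in range(n_rotations):
        # most correlated active pair
        i, j = max(((a, b) for ai, a in enumerate(active)
                    for b in active[ai + 1:]),
                   key=lambda p: abs(C[p[0], p[1]]))
        # Jacobi angle that zeroes the (i, j) entry
        theta = 0.5 * np.arctan2(2 * C[i, j], C[j, j] - C[i, i])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = -s, s
        C = G @ C @ G.T
        rotations.append((i, j, theta))
        w = i if C[i, i] < C[j, j] else j  # retire the "high frequency" index
        wavelets.append(w)
        active.remove(w)
    return rotations, wavelets, C

# Hypothetical setup: feats[c] holds last-hidden-layer features for class c.
rng = np.random.default_rng(0)
feats = {c: rng.normal(size=(50, 64)) for c in range(10)}  # 10 classes
means = np.stack([feats[c].mean(axis=0) for c in range(10)])
cov = np.cov(means)  # 10 x 10 class-by-class covariance matrix
rotations, wavelets, core = greedy_mmf(cov, n_rotations=7)
```

The sequence of `rotations` and the order in which classes are retired into `wavelets` are exactly the objects whose differences across networks are compared.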

As a first step in investigating this line of research, we here ask two fundamental questions about GoogLeNet and ResNet -- two of the most widely used neural networks in vision. (1) How does GoogLeNet's MMF graph compare to ResNet's MMF graph? (2) Given GoogLeNet or ResNet, how does the graph evolve from the input layer to the output layer?



Example: A set of animal classes.


10 classes: cow, insectivore, hound, puppy, garden spider, ptarmigan, phalanger, killer whale, green lizard, kangaroo

This is the same set of classes used to characterize the relationships between human and deep representational semantics in Human_vs_Deep.
The two MMF graphs correspond to the graphs computed on the last hidden layer representations of GoogLeNet and ResNet.
Below we look at the MMF graphs for each of these two networks as one moves from the inputs to the outputs. The goal here is to visualize the interactions across the 10 animal classes as inputs are non-linearly transformed into outputs -- thereby getting an interpretable/explainable sneak peek into the landscape of class-wise interactions. Since MMF captures the hierarchical correlations across classes, the corresponding evolution of the MMF graphs is one surrogate for this notion of interpretability/explainability.
The changes between two consecutive MMF graphs -- a change corresponds to one class moving to a higher or lower frequency -- are shown in blue, and the entities that do not move are shown in black.
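One simple way to detect these per-class moves between consecutive graphs can be sketched as follows. The encoding is an assumption made for illustration: each MMF graph is represented as a list of class names ordered from lowest to highest frequency, and the function names and toy layer orderings are hypothetical.

```python
def frequency_moves(order_a, order_b):
    """Classes whose frequency position differs between two MMF graphs.
    Hypothetical encoding: each graph is a list of class names ordered
    from lowest to highest frequency. Moved classes would be drawn in
    blue, unmoved ones in black."""
    pos_a = {c: i for i, c in enumerate(order_a)}
    pos_b = {c: i for i, c in enumerate(order_b)}
    return sorted(c for c in pos_a if pos_a[c] != pos_b.get(c, -1))

# Toy example with four of the animal classes across two consecutive layers:
layer_k  = ["cow", "hound", "puppy", "kangaroo"]
layer_k1 = ["hound", "cow", "puppy", "kangaroo"]
moved = frequency_moves(layer_k, layer_k1)  # ["cow", "hound"]
```

Counting such moves across all consecutive layer pairs gives one scalar summary of how much a network reshuffles class-wise frequencies from input to output.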

A slider view of the two examples can be found at MMF-Evolving-in-GoogLeNet and MMF-Evolving-in-ResNet respectively.


MMF graph evolution in GoogLeNet



MMF graph evolution in ResNet