Understanding Deep Architectures and the Effect of Unsupervised Pre-training

Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The first question this talk addresses is the following: how does unsupervised pre-training work? Answering this question is important if learning in deep architectures is to be further improved. We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.
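As a point of reference for the training scheme the abstract refers to, below is a minimal sketch (in PyTorch, not the authors' exact setup) of greedy layer-wise unsupervised pre-training with auto-encoders followed by supervised fine-tuning. Layer sizes, activations, optimizer, and epoch counts are illustrative assumptions.

```python
# Minimal sketch of greedy layer-wise unsupervised pre-training with
# auto-encoders, followed by supervised fine-tuning. Illustrative only:
# layer sizes, activations, optimizer, and epoch counts are assumptions.
import torch
import torch.nn as nn

def pretrain_layers(x, layer_sizes, epochs=10, lr=0.01):
    """Train each layer as an auto-encoder on the representation below it."""
    encoders, h, in_dim = [], x, x.shape[1]
    for out_dim in layer_sizes:
        enc, dec = nn.Linear(in_dim, out_dim), nn.Linear(out_dim, in_dim)
        opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(epochs):
            recon = dec(torch.sigmoid(enc(h)))          # encode, then reconstruct
            loss = nn.functional.mse_loss(recon, h)     # unsupervised objective
            opt.zero_grad(); loss.backward(); opt.step()
        encoders.append(enc)
        h = torch.sigmoid(enc(h)).detach()              # feed representation upward
        in_dim = out_dim
    return encoders

def fine_tune(x, y, encoders, n_classes, epochs=10, lr=0.01):
    """Stack the pre-trained encoders, add a classifier, and train end to end."""
    layers = []
    for enc in encoders:
        layers += [enc, nn.Sigmoid()]
    layers.append(nn.Linear(encoders[-1].out_features, n_classes))
    net = nn.Sequential(*layers)                        # initialized by pre-training
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(net(x), y)   # supervised objective
        opt.zero_grad(); loss.backward(); opt.step()
    return net
```

The essential point is that each layer is first trained to reconstruct its input without labels, and only afterwards is the full stack trained on the supervised objective; the regularization hypothesis discussed in the talk concerns how this first phase constrains the second.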

The second part of the talk will address the issue of understanding the kind of features that a deep architecture learns and represents. In this work in progress, we would like to gain insights into the high-level invariances represented by the upper layers. To this end, we developed new tools that make it possible to visualize the invariance manifold of upper-layer units. This manifold illustrates several of our intuitions regarding the usefulness of learning deep representations, is interpretable, and allows us to compare various pre-training strategies. Such qualitative assessment can be important for further understanding how and why high-level feature extractors work.
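The invariance-manifold tools themselves are not described here in enough detail to reproduce; as a rough illustration of the general idea of inspecting upper-layer units, the sketch below uses gradient ascent on the input to find a pattern that maximizes a chosen unit's activation. The function name, clamping range, and step counts are hypothetical, and this is a generic visualization technique rather than the talk's specific method.

```python
# Hedged sketch: probe an upper-layer unit by gradient ascent on the input.
# This is a generic visualization technique, not necessarily the talk's
# invariance-manifold tool; all names and hyper-parameters are assumptions.
import torch

def maximize_unit_activation(net, unit_index, input_dim, steps=200, lr=0.1):
    """Search for an input that strongly activates one unit of net's output."""
    for p in net.parameters():
        p.requires_grad_(False)                 # only the input is optimized
    x = torch.rand(1, input_dim, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        loss = -net(x)[0, unit_index]           # ascend the unit's activation
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                  # keep the input in a valid range
    return x.detach()
```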

Speaker Details

Dumitru Erhan obtained his BSc in Electrical Engineering and Computer Science from Jacobs University Bremen (Germany). He pursued his graduate studies at the University of Montreal (Canada), where he completed an MSc in Computer Science, applying collaborative filtering techniques to drug discovery problems. He is currently a PhD student at the University of Montreal, under the supervision of Yoshua Bengio, working on understanding deep architectures and applying them in large-scale settings. He has done research internships in Machine Learning at Helsinki University of Technology, the Max Planck Institute for Biological Cybernetics, Microsoft Research Cambridge, and Google.

Date:
Speakers:
Dumitru Erhan
Affiliation:
University of Montreal (Canada)