Deep Machine Learning: a Panel

This panel session of the 2013 Microsoft Research Faculty Summit looks at deep learning, a sub-field of machine learning that focuses on hierarchical representations of features or concepts, in which high-level, semantic-like features emerge automatically, layer by layer, from low-level features.
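As a concrete illustration of this layer-by-layer idea (a minimal sketch, not material from the panel itself), the following Python snippet greedily trains a small stack of autoencoders, feeding each layer's learned codes into the next; the layer sizes, learning rate, and toy data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=50):
    """Train one autoencoder layer by gradient descent on squared
    reconstruction error; return the learned encoder parameters."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
    V = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
    b = np.zeros(n_hidden)                     # encoder bias
    c = np.zeros(n_in)                         # decoder bias
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                 # hidden code (features)
        R = H @ V + c                          # linear reconstruction
        err = R - X                            # gradient of 0.5*||R - X||^2
        dV = H.T @ err / len(X)
        dc = err.mean(axis=0)
        dH = (err @ V.T) * H * (1 - H)         # backprop through sigmoid
        dW = X.T @ dH / len(X)
        db = dH.mean(axis=0)
        W -= lr * dW; V -= lr * dV; b -= lr * db; c -= lr * dc
    return W, b

# Greedy layer-wise training: each layer learns features of the one below.
X = rng.random((256, 64))          # toy data, e.g. flattened image patches
inp = X
for n_hidden in (32, 16):          # low-level -> higher-level features
    W, b = train_autoencoder(inp, n_hidden)
    inp = sigmoid(inp @ W + b)     # this layer's codes feed the next layer
print("top-layer feature shape:", inp.shape)

In practice, a layer-wise stack like this is typically used to initialize a deep network that is then fine-tuned end-to-end on a supervised task.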

In recent years, deep learning has achieved important successes in a variety of applied artificial intelligence tasks, including speech recognition, computer vision, and natural language processing. The implications of this work have been covered prominently in the media, with both enthusiasm and skepticism. Since 2009, in partnership with academics, Microsoft Research has pursued deep learning research and technology transfer, and has pioneered the development of industry-scale deep learning technology for speech recognition. This panel is an opportunity to share that experience with the wider academic community and to learn from each other. To make the material and research directions accessible to a broader computer science audience, we also offer a tutorial that aims to demystify the “black art” label often attached to deep learning.

Speaker Details

Yoshua Bengio received a PhD in Computer Science from McGill University, Canada, in 1991. After two post-doctoral years, one at M.I.T. with Michael Jordan and one at AT&T Bell Laboratories with Yann LeCun and Vladimir Vapnik, he became a professor in the Department of Computer Science and Operations Research at Université de Montréal. He is the author of two books and around 200 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, natural language processing, and manifold learning. He is among the most cited Canadian computer scientists and is or has been an associate editor of the top journals in machine learning and neural networks. Since 2000 he has held a Canada Research Chair in Statistical Learning Algorithms, since 2006 an NSERC Industrial Chair, and since 2005 he has been a Fellow of the Canadian Institute for Advanced Research. He is on the board of the NIPS foundation and has been program chair and general chair for NIPS. He has co-organized the Learning Workshop for 14 years and co-created the new International Conference on Learning Representations. His current interests center on his quest for AI through machine learning and include fundamental questions on deep learning and representation learning, the geometry of generalization in high-dimensional spaces, manifold learning, biologically inspired learning algorithms, and challenging applications of statistical machine learning. As of early 2013, Google Scholar finds more than 12,000 citations to his work, yielding an h-index of 47.

Honglak Lee is an assistant professor of Computer Science and Engineering at the University of Michigan, Ann Arbor. He received his Ph.D. from the Computer Science Department at Stanford University in 2010, advised by Andrew Ng. His primary research interests lie in machine learning, spanning deep learning, unsupervised and semi-supervised learning, transfer learning, graphical models, and optimization. He also works on application problems in computer vision, audio recognition, robot perception, and text processing. His work received best paper awards at ICML and CEAS. He received a Google Faculty Research Award, and he has served as a guest editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence “Special Issue on Learning Deep Architectures.”

Andrew Ng is an Assistant Professor of Computer Science at Stanford University. His research interests include machine learning, reinforcement learning/control, and broad-competence AI. His group has won best paper/best student paper awards at ACL, CEAS, 3DRR and ICML. He is also a recipient of the Alfred P. Sloan Fellowship.

Ruslan Salakhutdinov is a PhD student at the University of Toronto. His broad research interests involve developing learning and inference algorithms for probabilistic hierarchical models that contain many layers of nonlinear processing. Much of his current research concentrates on the theoretical analysis of Deep Belief Networks and Deep Boltzmann Machines, with applications to information retrieval, visual object recognition, and dimensionality reduction. His other interests include collaborative filtering, large-scale approximate Bayesian inference, and large-scale optimization.

Speakers:
Andrew Ng, Honglak Lee, Ruslan Salakhutdinov, and Yoshua Bengio
Affiliation:
Stanford University, University of Michigan – Ann Arbor, University of Toronto, University of Montreal

Series: Microsoft Research Faculty Summit