Symmetry-Based Learning

Learning representations is arguably the central problem in machine learning, and symmetry group theory is a natural foundation for it. A symmetry of a classifier is a change of representation that does not change the examples' classes. The goal of representation learning is to remove unimportant variations while making important ones easy to detect, and unimportant variations are precisely symmetries of the target function. Exploiting symmetries reduces sample complexity, leads to new generalizations of classic learning algorithms, provides a new approach to deep learning, and is applicable to all types of machine learning. In this talk I will present three approaches to symmetry-based learning: (1) Exchangeable variable models are distributions that are invariant under permutations of subsets of their variables. They subsume existing tractable independence-based models, capture difficult cases like parity functions, and outperform SVMs and state-of-the-art probabilistic classifiers. (2) Deep symmetry networks generalize convolutional neural networks by tying parameters and pooling over an arbitrary symmetry group, not just the translation group. In preliminary experiments, they outperformed convnets on a digit recognition task. (3) Symmetry-based semantic parsing defines a symmetry of a sentence as a syntactic transformation that preserves its meaning; the meaning of a sentence is thus its orbit under the semantic symmetry group of the language. This lets us map sentences to their meanings without pre-defining a formal meaning representation or requiring labeled data in the form of sentence-meaning pairs, and it achieved promising results on a paraphrase detection task. (Joint work with Rob Gens, Chloe Kiddon and Mathias Niepert.)
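To make two of these ideas concrete, here is a minimal Python sketch (an illustration for this page, not code from the talk; the count weights w are invented). The parity function is invariant under every permutation of its inputs, so the full symmetric group is a symmetry group of that target function; and a fully exchangeable distribution over n binary variables is determined by the n+1 probabilities of the possible counts of ones, rather than by 2^n separate numbers.

import itertools
from math import comb

def parity(bits):
    # Target function: 1 if an odd number of bits are set.
    return sum(bits) % 2

# Every permutation of the input leaves the class unchanged,
# i.e. permutations are symmetries of the parity function.
x = (1, 0, 1, 1)
assert all(parity(p) == parity(x) for p in itertools.permutations(x))

# A fully exchangeable distribution over n binary variables:
# all assignments with the same number of ones are equally likely,
# so n+1 weights suffice instead of 2^n probabilities.
n = 4
w = [0.1, 0.2, 0.4, 0.2, 0.1]  # assumed P(sum(x) = k), k = 0..n

def prob(x):
    k = sum(x)
    return w[k] / comb(n, k)  # uniform over the C(n, k) assignments with count k

assert prob((1, 0, 1, 0)) == prob((0, 1, 0, 1))  # permutation invariance

Because prob depends on x only through sum(x), any permutation of the variables leaves it unchanged; this is the invariance that makes such models tractable while still representing dependencies, such as parity, that independence-based models cannot.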

Speaker Details

Pedro Domingos is Professor of Computer Science and Engineering at the University of Washington. His research interests are in artificial intelligence, machine learning and data mining. He received a PhD in Information and Computer Science from the University of California at Irvine, and is the author or co-author of over 200 technical publications.

He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. He was program co-chair of KDD-2003 and SRL-2009, and has served on numerous program committees. He is an AAAI Fellow and has received several awards, including a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, and best paper awards at several leading conferences.

Date:
Speakers:
Pedro Domingos, Jeff Running
Affiliation:
University of Washington

Series: Microsoft Research Talks