Speaker Pedro Domingos
Affiliation University of Washington
Host Ofer Dekel
Date recorded 26 October 2012
Inference is the hardest part of learning. Learning the most powerful models requires repeated intractable inference, and approximate inference often interacts badly with parameter optimization. At inference time, an intractably accurate model can effectively become an inaccurate model because of approximate inference. All these problems would be avoided if we learned only tractable models, but standard tractable model classes - like thin junction trees and mixture models - are insufficiently expressive for most applications. However, in recent years a series of surprisingly expressive tractable model classes have been developed, including arithmetic circuits, feature trees, sum-product networks, and tractable Markov logic. I will give an overview of these representations, algorithms for learning them, and their startling successes in challenging applications.
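The abstract's central claim - that inference in these model classes is tractable - can be illustrated with a minimal sum-product network sketch. The classes and toy structure below are illustrative assumptions, not code from the talk: a valid SPN answers joint and marginal queries exactly in one bottom-up pass, in time linear in network size.

```python
class Leaf:
    """Indicator over a binary variable: 1.0 if the evidence matches
    (or the variable is unobserved), else 0.0."""
    def __init__(self, var, value):
        self.var, self.value = var, value

    def eval(self, evidence):
        observed = evidence.get(self.var)
        return 1.0 if observed is None or observed == self.value else 0.0


class Sum:
    """Weighted mixture of children; weights sum to 1."""
    def __init__(self, children, weights):
        self.children, self.weights = children, weights

    def eval(self, evidence):
        return sum(w * c.eval(evidence)
                   for w, c in zip(self.weights, self.children))


class Product:
    """Factorization over children with disjoint variable scopes."""
    def __init__(self, children):
        self.children = children

    def eval(self, evidence):
        result = 1.0
        for c in self.children:
            result *= c.eval(evidence)
        return result


# Toy SPN over two binary variables X and Y (structure is an assumption).
spn = Sum(
    [Product([Leaf("X", 1), Leaf("Y", 1)]),
     Product([Leaf("X", 0), Leaf("Y", 0)])],
    [0.6, 0.4],
)

print(spn.eval({"X": 1, "Y": 1}))  # joint P(X=1, Y=1) -> 0.6
print(spn.eval({"X": 1}))          # marginal P(X=1): leave Y unobserved -> 0.6
print(spn.eval({}))                # no evidence: distribution sums to 1.0
```

Note that marginalization - normally the intractable step in graphical models - here costs nothing extra: setting a variable's leaves to 1 and re-evaluating the network computes the marginal exactly.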
©2012 Microsoft Corporation. All rights reserved.
People also watched
Machine Learning Workshop - Session 1 - Carlos Guestrin - "GraphLab: Large-scale Machine Learning on Natural Graphs"
Machine Learning Workshop - Session 2 - Jeff Bilmes - "Why Submodularity is Important to Machine Learning"