Machine Learning Workshop – Session 3 – Pedro Domingos – “Learning Tractable but Expressive Models”

Inference is the hardest part of learning. Learning the most powerful models requires repeated intractable inference, and approximate inference often interacts badly with parameter optimization. At inference time, an intractable accurate model can effectively become an inaccurate model due to approximate inference. All these problems would be avoided if we learned only tractable models, but standard tractable model classes – like thin junction trees and mixture models – are insufficiently expressive for most applications. However, in recent years a series of surprisingly expressive tractable model classes has been developed, including arithmetic circuits, feature trees, sum-product networks, and tractable Markov logic. I will give an overview of these representations, algorithms for learning them, and their startling successes in challenging applications.
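To illustrate why models such as sum-product networks are tractable, here is a minimal sketch (not from the talk; the structure and numbers are invented for illustration) of an SPN over two binary variables. Leaves are univariate distributions, internal nodes are weighted sums (mixtures) and products (independent factors), and any joint or marginal query is a single bottom-up pass, i.e. linear in circuit size:

```python
# Minimal sum-product network sketch over binary X1, X2.
# Hypothetical structure: a sum node (mixture, weights 0.6/0.4)
# over two product nodes, each with Bernoulli leaves.

def leaf(p_true):
    """Bernoulli leaf. x=None marginalizes the variable out (leaf returns 1)."""
    def f(x):
        if x is None:
            return 1.0
        return p_true if x == 1 else 1.0 - p_true
    return f

x1_a, x2_a = leaf(0.9), leaf(0.2)   # leaves of mixture component A
x1_b, x2_b = leaf(0.3), leaf(0.8)   # leaves of mixture component B

def spn(x1, x2):
    # Product nodes: variables are independent within a component.
    comp_a = x1_a(x1) * x2_a(x2)
    comp_b = x1_b(x1) * x2_b(x2)
    # Root sum node: convex combination of the components.
    return 0.6 * comp_a + 0.4 * comp_b

p_joint = spn(1, 0)      # P(X1=1, X2=0) = 0.6*0.9*0.8 + 0.4*0.3*0.2 = 0.456
p_marg = spn(1, None)    # P(X1=1), computed exactly in the same single pass
print(p_joint, p_marg)
```

The key point is that marginalization, which is #P-hard in general graphical models, here costs the same one pass as a full joint query: setting a leaf's evidence to "unknown" makes it return 1, and the circuit sums out the variable automatically.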

Speaker Details

Pedro Domingos is an assistant professor in the Department of Computer Science and Engineering at the University of Washington. His research interests are in machine learning and data mining. He received a PhD in Information and Computer Science from the University of California at Irvine, and is the author or co-author of over 70 technical publications in the fields of scalable machine learning, model ensembles, probabilistic learning, model selection, cost-sensitive learning, multistrategy learning, adaptive Web sites, programming by demonstration, data integration, anytime reasoning, computer graphics, and others. He is on the editorial boards of JAIR and IDA, has served on numerous program committees, and is the recipient of an NSF CAREER award, a Fulbright scholarship, an IBM Faculty Award, and best paper awards at KDD-98 and KDD-99.

Speakers:
Pedro Domingos
Affiliation:
University of Washington