Tractable Learning of Structured Prediction Models

Structured prediction is a fundamental machine learning task involving classification or regression in which the output variables are mutually dependent or constrained. Such dependencies and constraints reflect sequential, spatial or combinatorial structure in the problem domain (in part-of-speech tagging, for instance, each word's label depends on the labels of neighboring words), and capturing these interactions is often as important for prediction as capturing input-output dependencies.

Probabilistic graphical models and combinatorial graph-based models are used to represent problem structure across many fields, including computational biology, vision and linguistics. Typical structured models may involve hundreds of thousands of interacting variables and parameters. In general, standard (likelihood-based) learning of such models is intractable because the number of possible joint outcomes grows exponentially with the number of variables.
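To see the blow-up concretely, consider a chain-structured model over n variables with k labels each: the partition function in the likelihood sums over all k^n joint labelings, even though the chain structure admits an O(n k^2) forward recursion. The sketch below is a minimal illustration with made-up unary and pairwise score tables, not code from the talk:

```python
import itertools
import numpy as np

def brute_force_log_partition(unary, pairwise):
    """Enumerate all k**n joint labelings -- exponential in n."""
    n, k = unary.shape
    scores = []
    for labels in itertools.product(range(k), repeat=n):
        s = sum(unary[i, labels[i]] for i in range(n))
        s += sum(pairwise[labels[i], labels[i + 1]] for i in range(n - 1))
        scores.append(s)
    return np.logaddexp.reduce(scores)

def forward_log_partition(unary, pairwise):
    """Exploit the chain structure: an O(n * k**2) dynamic program."""
    alpha = unary[0].copy()
    for i in range(1, len(unary)):
        alpha = unary[i] + np.logaddexp.reduce(alpha[:, None] + pairwise, axis=0)
    return np.logaddexp.reduce(alpha)

rng = np.random.default_rng(0)
unary, pairwise = rng.normal(size=(6, 3)), rng.normal(size=(3, 3))
print(brute_force_log_partition(unary, pairwise))  # sums 3**6 = 729 terms
print(forward_log_partition(unary, pairwise))      # same value, far cheaper
```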

I will present a large-margin learning framework for structured prediction that enables tractable learning for several important classes of models via convex optimization. By exploiting the underlying combinatorial problem structure, I will derive a simple, efficient and scalable learning algorithm. I will demonstrate practical applications of the approach for problems in object recognition, protein folding and machine translation.
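As a rough sketch of the large-margin idea (a generic structured-SVM-style subgradient step, not necessarily the specific algorithm presented in the talk), learning repeatedly finds the highest-scoring "loss-augmented" labeling by dynamic programming and pushes the parameters toward the true labeling. The chain model, Hamming loss, and score tables below are hypothetical, in the same style as the sketch above:

```python
import numpy as np

def loss_augmented_viterbi(unary, pairwise, gold):
    """argmax_y score(y) + Hamming(y, gold), by dynamic programming."""
    n, k = unary.shape
    # Add 1 to every label that disagrees with the gold label (Hamming loss).
    aug = unary + (np.arange(k)[None, :] != np.asarray(gold)[:, None])
    delta, back = aug[0].copy(), []
    for i in range(1, n):
        trans = delta[:, None] + pairwise
        back.append(trans.argmax(axis=0))
        delta = aug[i] + trans.max(axis=0)
    y = [int(delta.argmax())]
    for bp in reversed(back):
        y.append(int(bp[y[-1]]))
    return y[::-1]

def margin_update(unary, pairwise, gold, step=0.1):
    """One subgradient step on the structured hinge loss."""
    y_hat = loss_augmented_viterbi(unary, pairwise, gold)
    if y_hat == list(gold):
        return  # margin constraint already satisfied; hinge loss is zero
    for i, (g, h) in enumerate(zip(gold, y_hat)):
        unary[i, g] += step
        unary[i, h] -= step
    for i in range(len(gold) - 1):
        pairwise[gold[i], gold[i + 1]] += step
        pairwise[y_hat[i], y_hat[i + 1]] -= step

rng = np.random.default_rng(0)
unary, pairwise = rng.normal(size=(6, 3)), rng.normal(size=(3, 3))
gold = [0, 1, 2, 1, 0, 2]
for _ in range(50):
    margin_update(unary, pairwise, gold)
print(loss_augmented_viterbi(unary, pairwise, gold))  # equals gold once margins are met
```

The key point the abstract makes is that the expensive inner step, loss-augmented inference, reuses the same combinatorial structure (here, Viterbi decoding) that makes prediction tractable.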

Speaker Details

Ben Taskar received his Ph.D. in Computer Science from Stanford University, working with Daphne Koller. He is currently a postdoctoral fellow with Michael Jordan in the Computer Science Division at the University of California, Berkeley. His interests include structured model estimation in machine learning, especially for computational linguistics, computer vision and computational biology. Last year, he co-organized a NIPS workshop on this emerging topic. His work on structured prediction has received best paper awards at the NIPS and EMNLP conferences.

Date:
Speakers: Ben Taskar
Affiliation: Stanford University