
CVPR 2010 Tutorial on Higher Order Models in Computer Vision


Speakers: Carsten Rother and Sebastian Nowozin

Slides online now. All slides in PDF format: download. You are welcome to use these slides for talks; please give appropriate credit.

Purpose of this course

Many labelling problems in computer vision, such as image restoration, disparity estimation, and object recognition, are modelled via Markov Random Fields. Most commonly the model has an underlying simple 4(8)-connectivity field. These simple models are very popular, very likely because efficient inference (and learning) techniques exist for them. It is well known that modelling several variables jointly, i.e. with higher-order cliques, considerably improves the modelling power and hence the results. The goal of this tutorial is to analyse and categorize the various types of higher-order random field models that have been considered in the past (e.g. patch-based priors, curvature priors, topology priors, etc.).
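To give a flavour of why higher-order energies can remain tractable, one standard trick (due to Freedman and Drineas) rewrites a third-order pseudo-Boolean term with a negative coefficient as a minimization over one auxiliary binary variable, leaving only pairwise terms. A minimal sketch (function names are illustrative, not from the tutorial slides):

```python
from itertools import product

def cubic_term(x1, x2, x3, a=-1.0):
    """Original third-order term a*x1*x2*x3, with a < 0."""
    return a * x1 * x2 * x3

def pairwise_reduction(x1, x2, x3, a=-1.0):
    """Freedman-Drineas identity: for a < 0,
       a*x1*x2*x3 = min_w a*w*(x1 + x2 + x3 - 2),
    where w is an auxiliary binary variable. Each term in the
    minimand couples at most two variables, so the resulting
    energy is pairwise."""
    return min(a * w * (x1 + x2 + x3 - 2) for w in (0, 1))

# The two energies agree on all 2^3 assignments:
for x in product((0, 1), repeat=3):
    assert cubic_term(*x) == pairwise_reduction(*x)
```

Minimizing over the auxiliary variable jointly with the original ones recovers exactly the higher-order energy, which is what makes graph-cut style pairwise solvers applicable.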
The key question for such powerful models is whether efficient and powerful inference techniques exist. This question is the main focus of the tutorial, and we will review recent work which has shown that inference is indeed tractable, e.g. by transforming the higher-order model into a pairwise one.

Relationship to tutorial at ICCV 2009

We gave a general (full-day) tutorial at ICCV 09 on MAP Inference in Discrete Models. The ICCV tutorial was very general, and higher-order models were not really covered. Given recent advances and interest in this field, we believe that this tutorial will appeal to a large audience. We hope to inspire many people to use more sophisticated higher-order models in the future.

Final Syllabus

1. Introduction to Higher Order Random Field Models
2. Background on inference techniques: pseudo-Boolean optimization, message passing, LP-relaxation techniques, and dual decomposition.
3. Low- to medium-order models
   a. Patch-based models: FoE, pattern-based potentials.
   b. Region-based potentials (label consistency; P^n Potts).
   c. Curvature, etc.
   d. Transformation of a+b to a pairwise model.
   e. Examples: stereo, denoising, object recognition, etc.
4. Global (full-image) models
   a. Topology: connectivity, bounding box, silhouette constraint, etc.
   b. Preserving the distribution of labels (Marginal Probability Field), and global appearance models.
   c. Label cost prior.
   d. Examples: denoising, interactive segmentation, cosegmentation, FilterFlow, etc.
5. Summary and Directions for Future Work

About the Speakers


Carsten Rother received his Diploma degree with distinction in 1999 from the University of Karlsruhe, Germany. He did his PhD at the Royal Institute of Technology, Stockholm, Sweden, supervised by Stefan Carlsson and Jan-Olof Eklundh. Since 2003 he has been a researcher at Microsoft Research Cambridge, UK. He supervises several PhD students, frequently gives invited talks, organizes workshops, and taught a tutorial (MAP Inference in Discrete Models, at ICCV 09). His research interests are in the fields of “Markov Random Fields for Computer Vision”, “Discrete Optimization”, and “Vision for Graphics”. He has published more than 20 high-impact papers (at least 10 citations) at international conferences and in journals. He won the best paper honourable mention award at CVPR ’05, and he was awarded the DAGM Olympus Prize 2009. He serves on the program committees of major conferences (e.g. SIGGRAPH, ICCV, ECCV, CVPR, NIPS), and has been an area chair for BMVC ’08–’10 and DAGM ’10.


Sebastian Nowozin is a researcher in the Machine Learning and Perception group at Microsoft Research Cambridge. He received his Master of Engineering degree from Shanghai Jiaotong University and his diploma degree in computer science with distinction from the Technical University of Berlin in 2006. He received his PhD degree with highest distinction in 2009 for his thesis on learning with structured data in computer vision, completed at the Max Planck Institute for Biological Cybernetics, Tuebingen, and the Technical University of Berlin. His research interests are diverse and include computer vision, machine learning, and continuous and discrete optimization. He organizes the successful “Optimization for Machine Learning” workshop series at NIPS (OPT 2008, OPT 2009) and serves as PC member/reviewer for machine learning (e.g. NIPS, ICML, AISTATS, UAI, ECML, JMLR) and computer vision (e.g. CVPR, ICCV, ECCV, PAMI, IJCV) conferences and journals.
