Hi! I work in the field of Bayesian statistical inference, and I develop efficient algorithms for use in machine learning, computer vision, text retrieval, and data mining. My goal is to make Bayesian inference a standard tool for processing information.
To make Bayesian inference easier to understand, I've written papers that illustrate Bayesian methods on important problems in machine learning, computer vision, and text retrieval.
What makes Bayesian inference special is that it takes into account all possible states of nature, not just the most likely one. At first glance, this seems to require a lot of computation. I've addressed this issue by developing new computational methods, including the Expectation Propagation algorithm. With this algorithm, you can obtain the benefits of Bayesian inference at a typically small additional cost over non-Bayesian methods.
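To see what "all possible states of nature" buys you, here is a toy sketch (the coin biases, prior, and data are made up for illustration and come from no particular paper): the Bayesian prediction averages over every candidate bias weighted by its posterior probability, while a non-Bayesian plug-in commits to the single most probable bias.

```python
# Toy illustration: Bayesian averaging over all states of nature
# versus committing to the single most likely state (MAP plug-in).
# All numbers here are invented for the example.

thetas = [0.2, 0.5, 0.8]            # candidate coin biases ("states of nature")
prior = {t: 1 / 3 for t in thetas}  # uniform prior over the candidates
data = [1, 1, 0]                    # observed flips: 1 = heads, 0 = tails

def likelihood(theta, flips):
    """Probability of the observed flips given a coin bias."""
    p = 1.0
    for x in flips:
        p *= theta if x == 1 else 1 - theta
    return p

# Posterior over biases: prior times likelihood, renormalized.
unnorm = {t: prior[t] * likelihood(t, data) for t in thetas}
z = sum(unnorm.values())
posterior = {t: w / z for t, w in unnorm.items()}

# Bayesian predictive probability of heads: average over ALL biases.
bayes_pred = sum(t * posterior[t] for t in thetas)

# Plug-in prediction: commit to the single most probable bias.
map_theta = max(posterior, key=posterior.get)

print({t: round(p, 3) for t, p in posterior.items()})
print(f"Bayesian predictive P(heads) = {bayes_pred:.3f}")  # ~0.601
print(f"MAP plug-in   P(heads) = {map_theta}")             # 0.8
```

The two answers differ (about 0.60 versus 0.80) because the posterior mass is nearly split between two of the biases; averaging respects that remaining uncertainty instead of discarding it.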
Bayesian inference also requires good models. I've taken two different approaches to this. The first is to visualize data in order to determine an appropriate model. I have developed step-by-step methods for visualizing data, taught in my classes at CMU. My second approach is to analyze successful non-Bayesian methods in computer vision and text retrieval, and determine what model assumptions would lead to those methods. This "reverse-engineering" process is usually quite instructive, and by improving the recovered models you can improve on their results.
My main project these days is Infer.NET, a software library for inference in graphical models. Most of my research in message-passing algorithms goes into it.