Divergence measures and message passing

Tom Minka
Microsoft Research Technical Report (MSR-TR-2005-173), 2005

This paper presents a unifying view of message-passing algorithms, as methods to approximate a complex Bayesian network by a simpler network with minimum information divergence. In this view, the difference between mean-field methods and belief propagation is not the amount of structure they model, but only the measure of loss they minimize ('exclusive' versus 'inclusive' Kullback-Leibler divergence). In each case, message-passing arises by minimizing a localized version of the divergence, local to each factor. By examining these divergence measures, we can intuit the types of solution they prefer (symmetry-breaking, for example) and their suitability for different tasks. Furthermore, by considering a wider variety of divergence measures (such as alpha-divergences), we can achieve different complexity and performance goals.
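
For orientation, a brief sketch of the divergences involved, following what is believed to be the paper's parameterization of the alpha-divergence (the two KL forms are standard; all are stated for distributions p and q):

\[ \mathrm{KL}(q\,\|\,p) = \int q(x)\,\log\frac{q(x)}{p(x)}\,dx \quad \text{(exclusive KL; minimized by mean-field)} \]
\[ \mathrm{KL}(p\,\|\,q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx \quad \text{(inclusive KL; minimized by belief propagation)} \]
\[ D_\alpha(p\,\|\,q) = \frac{\int \alpha\,p(x) + (1-\alpha)\,q(x) - p(x)^{\alpha}\,q(x)^{1-\alpha}\,dx}{\alpha(1-\alpha)} \]

with \(\alpha \to 0\) recovering the exclusive KL and \(\alpha \to 1\) the inclusive KL as limiting cases.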

pdf

The bounds in this paper were extended by Liu and Ihler in their paper "Bounding the Partition Function using Hölder's Inequality".


Divergence measures and message passing
Tom Minka
AI & Statistics 2005, invited talk
Original title: Some intuitions about message passing

Talk slides: pdf, ppt

