Learning Bayesian Networks: The Combination of Knowledge and Statistical Data

D. Heckerman, D. Geiger, and D.M. Chickering

March 1995

We describe a Bayesian approach for learning Bayesian networks from a combination of prior knowledge and statistical data. First and foremost, we develop a methodology for assessing the informative priors needed for learning. Our approach is derived from a set of assumptions made previously, together with the assumption of likelihood equivalence, which says that data should not help to discriminate network structures that represent the same assertions of conditional independence. We show that likelihood equivalence, when combined with the previously made assumptions, implies that the user's priors for network parameters can be encoded in a single Bayesian network for the next case to be seen (a prior network) and a single measure of confidence for that network. Second, using these priors, we show how to compute the relative posterior probabilities of network structures given data. Third, we describe search methods for identifying network structures with high posterior probabilities. We describe polynomial algorithms for finding the highest-scoring network structures in the special case where every node has at most k = 1 parent. For the general case (k > 1), which is NP-hard, we review heuristic search algorithms including local search, iterative local search, and simulated annealing. Finally, we describe a methodology for evaluating Bayesian-network learning algorithms, and apply it to compare various approaches.
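The score-and-search scheme in the abstract can be illustrated with a minimal sketch: greedy local search over single-edge additions and removals, guided by a decomposable score. Note the assumptions: a generic BIC-style score for binary variables stands in for the paper's Bayesian (BDe) metric, and the function names (`local_log_score`, `hill_climb`) are illustrative, not from the paper.

```python
import math

def local_log_score(data, child, parents):
    """BIC-style decomposable score of one node given its parent set.
    A stand-in for the paper's Bayesian metric; binary variables only,
    data is a list of tuples of 0/1 values."""
    n = len(data)
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        cell = counts.setdefault(key, [0, 0])
        cell[row[child]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        tot = n0 + n1
        for c in (n0, n1):
            if c:
                ll += c * math.log(c / tot)
    # one free parameter per observed parent configuration
    return ll - 0.5 * math.log(n) * len(counts)

def ancestors(parents, node):
    """All ancestors of `node` in the DAG encoded as parent sets."""
    seen, stack = set(), list(parents[node])
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return seen

def hill_climb(data, num_vars):
    """Greedy local search: repeatedly apply the single-edge addition
    or removal that most improves the total score, until no move helps."""
    parents = {v: set() for v in range(num_vars)}
    score = {v: local_log_score(data, v, sorted(parents[v]))
             for v in range(num_vars)}
    while True:
        best_gain, best_move = 0.0, None
        for c in range(num_vars):
            for p in range(num_vars):
                if p == c:
                    continue
                if p in parents[c]:
                    cand = parents[c] - {p}        # edge removal
                elif c not in ancestors(parents, p):
                    cand = parents[c] | {p}        # acyclic edge addition
                else:
                    continue                       # would create a cycle
                gain = local_log_score(data, c, sorted(cand)) - score[c]
                if gain > best_gain:
                    best_gain, best_move = gain, (c, cand)
        if best_move is None:
            return parents
        c, cand = best_move
        parents[c] = cand
        score[c] = local_log_score(data, c, sorted(cand))

# Two perfectly correlated binary variables: the search adds one edge.
data = [(0, 0)] * 20 + [(1, 1)] * 20
graph = hill_climb(data, 2)
```

Because the score is decomposable, each move only requires rescoring one node. On this toy data the two orientations of the single edge score identically, echoing the likelihood-equivalence point in the abstract: data alone cannot distinguish structures that encode the same independencies.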


In Machine Learning. All copyrights reserved by Morgan Kaufmann Publishers 1994.

Type | Article |
Publisher | Morgan Kaufmann Publishers |
URL | http://www.mkp.com/ |
Volume | 20 |
Pages | 197-243 |
Number | MSR-TR-94-09 |
Institution | Microsoft Research |
