Speeding up Inference in Statistical Relational Learning by Clustering Similar Query Literals

Lilyana Mihalkova and Matthew Richardson

Abstract

Markov logic networks (MLNs) are a statistical relational learning model that consists of a set of weighted first-order clauses and provides a way of softening first-order logic. Several machine learning problems have been successfully addressed by treating MLNs as a “programming language” where a set of features expressed in first-order logic is manually engineered by the designer and then weights for these features are learned from the data. Inference over the learned model is an important step in this process both because several weight-learning algorithms involve performing inference multiple times during training and because inference is used to evaluate and use the final model. “Programming” with an MLN would therefore be significantly facilitated by speeding up inference, thus providing the ability to quickly observe the performance of new hand-coded features. This paper presents a meta-inference algorithm that can speed up any of the available inference techniques by first clustering the query literals and then performing full inference for only one representative from each cluster. Our approach to clustering the literals does not depend on the weights of the clauses in the model. Thus, when learning weights for a fixed set of clauses, the clustering step incurs only a one-time up-front cost.
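The meta-inference idea described above can be sketched in a few lines. This is an illustrative Python sketch, not the paper's implementation: the literal representation, the `signature` function (standing in for the weight-independent clustering criterion), and the `infer_one` routine are all hypothetical placeholders for whatever inference engine and similarity measure are actually used.

```python
# Hypothetical sketch of the meta-inference scheme: query literals are
# grouped by a signature that does not depend on clause weights, full
# inference runs once per cluster representative, and the resulting
# marginal is shared with every literal in that cluster.
from collections import defaultdict


def cluster_queries(query_literals, signature):
    """Group query literals by a weight-independent signature."""
    clusters = defaultdict(list)
    for lit in query_literals:
        clusters[signature(lit)].append(lit)
    return clusters


def meta_inference(query_literals, signature, infer_one):
    """Run full inference for one representative per cluster and
    copy its marginal probability to the remaining members."""
    marginals = {}
    for _, members in cluster_queries(query_literals, signature).items():
        p = infer_one(members[0])  # expensive step, done once per cluster
        for lit in members:
            marginals[lit] = p
    return marginals
```

For example, with literals represented as tuples and a (deliberately crude) signature that clusters by predicate name, two `Smokes` queries would share a single inference call. When weights are relearned for the same clauses, `cluster_queries` need not be rerun, which reflects the one-time up-front cost noted in the abstract.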

Details

Publication type: TechReport
Number: MSR-TR-2008-72
Pages: 14
Institution: Microsoft Research