Arun Chaganty, Akash Lal, Aditya V. Nori, and Sriram K. Rajamani
In statistical relational learning, one is concerned with inferring the most likely explanation (or world) that satisfies a given set of weighted constraints. The weight of a constraint signifies our confidence in the constraint, and the most likely world that explains a set of constraints is simply a satisfying assignment that maximizes the weights of satisfied constraints. The relational learning community has developed specialized solvers (e.g., Alchemy and Tuffy) for such weighted constraints independently of the work on SMT solvers in the verification community.
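To make the objective concrete, here is a minimal brute-force sketch of "most likely world" inference. The variables, weights, and constraints are an invented toy instance, not from the paper; weight 1 marks a hard axiom that the returned world must satisfy, while soft constraints only contribute their weight when satisfied.

```python
from itertools import product

# Hypothetical toy instance (illustrative only): three boolean variables
# and weighted constraints. Weight 1.0 marks a hard constraint (axiom).
VARS = ["smokes_a", "smokes_b", "friends_ab"]

# Each constraint: (weight, predicate over an assignment dict).
constraints = [
    (0.8, lambda w: w["smokes_a"]),                     # soft: a likely smokes
    (0.6, lambda w: (not w["friends_ab"]) or
                    (w["smokes_a"] == w["smokes_b"])),  # soft: friends smoke alike
    (1.0, lambda w: w["friends_ab"]),                   # hard axiom: a, b are friends
]

def most_likely_world(constraints, variables):
    """Exhaustively find the assignment maximizing the total weight of
    satisfied constraints, subject to every weight-1 axiom holding."""
    best, best_score = None, float("-inf")
    for bits in product([False, True], repeat=len(variables)):
        world = dict(zip(variables, bits))
        if any(wt == 1.0 and not c(world) for wt, c in constraints):
            continue  # axioms are hard: reject any violating world
        score = sum(wt for wt, c in constraints if c(world))
        if score > best_score:
            best, best_score = world, score
    return best, best_score

world, score = most_likely_world(constraints, VARS)
```

Dedicated relational solvers such as Alchemy and Tuffy replace this exponential enumeration with specialized search, but the objective they optimize is the same.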
In this paper, we show how to leverage SMT solvers to significantly improve the performance of relational solvers. Constraints associated with a weight of 1 (or 0) are called axioms because they must be satisfied (or violated) by the final assignment. Such axioms are hard constraints and often create difficulties for relational solvers. Our main contribution is to use CEGAR (counterexample-guided abstraction refinement) in this context: we solve axioms using SMT solvers and only lazily pass information back to the relational solver. Further, this information can either be a subset of the axioms, or even generalized axioms (similar to predicate generalization in verification).
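The lazy refinement loop can be sketched as follows. This is a hypothetical stand-in, not the paper's implementation: the "relational solver" is a brute-force weighted search, the axiom check stands in for the SMT solver, and each iteration feeds back only one violated axiom rather than asserting them all up front.

```python
from itertools import product

# Invented toy instance: two soft constraints and two hard axioms.
VARS = ["p", "q", "r"]
soft = [(0.9, lambda w: w["p"]), (0.5, lambda w: not w["q"])]
axioms = [lambda w: w["q"] or w["r"],        # hard: q or r
          lambda w: (not w["p"]) or w["r"]]  # hard: p implies r

def solve(soft, hard, variables):
    """Stand-in for the relational solver: maximize soft-constraint weight
    subject to the currently active hard constraints."""
    best, best_score = None, float("-inf")
    for bits in product([False, True], repeat=len(variables)):
        w = dict(zip(variables, bits))
        if not all(a(w) for a in hard):
            continue
        score = sum(wt for wt, c in soft if c(w))
        if score > best_score:
            best, best_score = w, score
    return best

def cegar(soft, axioms, variables):
    active = []  # axioms accumulated lazily, CEGAR-style
    while True:
        world = solve(soft, active, variables)
        violated = [a for a in axioms if not a(world)]
        if not violated:
            return world              # candidate satisfies all axioms: done
        active.append(violated[0])    # refine with one counterexample axiom

world = cegar(soft, axioms, VARS)
```

The point of the laziness is that many axioms are never violated by any candidate world and thus never burden the solver; the paper's generalized-axiom feedback goes further by returning strengthened constraints rather than the violated axioms verbatim.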
We also present an implementation of our algorithm in a tool called Soft-Cegar. We empirically compare our approach to state-of-the-art statistical relational learning tools such as Tuffy and Alchemy over four real-world applications. We find that Soft-Cegar significantly outperforms these approaches both in terms of runtime and quality of results.
We hope this work opens the door for further collaboration between relational learning and SMT solvers.
Published in: Computer Aided Verification (CAV)
Publisher: Lecture Notes in Computer Science