Gjergji Kasneci, Jurgen Van Gael, Ralf Herbrich, and Thore Graepel
Current knowledge bases suffer from either low coverage or low accuracy. The underlying hypothesis of this work is that user feedback can greatly improve the quality of automatically extracted knowledge bases. Such feedback helps quantify the uncertainty associated with the stored facts and enables mechanisms for searching, ranking, and reasoning at the entity-relationship level. Most importantly, a principled model for exploiting user feedback to learn the truth values of facts in the knowledge base would be a major step forward in addressing the issue of knowledge base curation.
We present a family of probabilistic graphical models that builds on user feedback and logical inference rules derived from the popular Semantic Web formalism of RDFS. Through internal inference and belief propagation, these models are capable of learning both the truth values of the facts in the knowledge base and the reliabilities of the users who give feedback. We demonstrate the viability of our approach in extensive experiments on real-world datasets, with feedback collected from Amazon Mechanical Turk.
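To give intuition for the kind of joint estimation the abstract describes, here is a minimal, hypothetical sketch (not the paper's actual model or inference algorithm): an EM-style iteration that alternates between inferring the posterior truth of each fact given user reliabilities and re-estimating each user's reliability from the inferred truths. All names and initial values below are illustrative assumptions.

```python
def corroborate(feedback, n_iters=20, prior=0.5):
    """Jointly estimate fact truth probabilities and user reliabilities.

    feedback: list of (user, fact, vote) triples, vote in {True, False}.
    Returns (p_true, reliability) dictionaries.
    """
    users = {u for u, _, _ in feedback}
    facts = {f for _, f, _ in feedback}
    reliability = {u: 0.7 for u in users}  # assumed initial reliability
    p_true = {f: prior for f in facts}

    for _ in range(n_iters):
        # E-step: posterior that each fact is true, given reliabilities.
        for f in facts:
            pt, pf = prior, 1.0 - prior
            for u, f2, vote in feedback:
                if f2 != f:
                    continue
                r = reliability[u]
                pt *= r if vote else (1.0 - r)        # likelihood if fact true
                pf *= (1.0 - r) if vote else r        # likelihood if fact false
            p_true[f] = pt / (pt + pf)
        # M-step: reliability = expected fraction of correct votes.
        for u in users:
            num = den = 0.0
            for u2, f, vote in feedback:
                if u2 != u:
                    continue
                num += p_true[f] if vote else (1.0 - p_true[f])
                den += 1.0
            reliability[u] = num / den
    return p_true, reliability


# Toy usage: two users affirm fact "f1", one denies it; votes on "f2" split 1-2.
feedback = [("alice", "f1", True), ("bob", "f1", True), ("carol", "f1", False),
            ("alice", "f2", True), ("bob", "f2", False), ("carol", "f2", False)]
p_true, rel = corroborate(feedback)
```

The paper's models additionally propagate belief through logical rules (e.g. RDFS entailments), so agreement with derived facts also informs both quantities; this sketch captures only the feedback-reliability coupling.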
Published in: Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD)
All rights reserved by Springer, 2010.
Gjergji Kasneci, Jurgen Van Gael, Ralf Herbrich, and Thore Graepel. Bayesian Knowledge Corroboration with Logical Rules and User Feedback, Microsoft Research, 6 May 2010.