Learning and Inference in Collective Knowledge Bases
- Matthew Richardson
PhD Thesis: University of Washington
Truly intelligent action requires large quantities of knowledge. Acquiring this knowledge has long been the major bottleneck preventing the rapid spread of AI systems. Hand-building comprehensive knowledge bases is slow and costly. Machine learning can be much faster and cheaper, but is limited in the depth and breadth of knowledge it can acquire. The spread of the Internet has made possible a new solution: building large knowledge bases by mass collaboration, combining information from a multitude of sources. While such collective knowledge bases (CKBs) promise a breakthrough in coverage and cost-effectiveness, they can succeed only if the quality, relevance, and consistency of the knowledge are kept at acceptable levels. This dissertation introduces an architecture for collective knowledge bases that addresses these problems. It operates in two loops of interaction: one with users and one with contributors. Knowledge from contributors is used to answer questions from users, and feedback from users is used to evaluate the knowledge from contributors. By informing contributors which knowledge was used, and what may be lacking, the CKB remains relevant to the needs of its users.
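As an illustration only, the two interaction loops can be sketched in a few lines of code. This is a toy model, not the architecture developed in the dissertation: the class and method names (CollectiveKB, contribute, ask, feedback, report) and the multiplicative re-weighting rule are all hypothetical, chosen simply to show how user feedback flows back to evaluate contributed knowledge.

```python
from collections import defaultdict

class CollectiveKB:
    """Toy collective knowledge base: contributors add question/answer
    entries, users ask questions, and user feedback re-weights each
    entry. Hypothetical sketch; not the thesis's implementation."""

    def __init__(self):
        self.entries = {}                       # question -> (answer, contributor)
        self.weight = defaultdict(lambda: 1.0)  # question -> learned quality
        self.usage = defaultdict(list)          # contributor -> questions used

    def contribute(self, contributor, question, answer):
        """Contributor loop, inbound: knowledge flows into the CKB."""
        self.entries[question] = (answer, contributor)

    def ask(self, question):
        """User loop, inbound: contributed knowledge answers a question."""
        if question not in self.entries:
            return None  # a gap: knowledge the CKB is lacking
        answer, contributor = self.entries[question]
        self.usage[contributor].append(question)  # record what was used
        return answer

    def feedback(self, question, helpful):
        """User loop, outbound: feedback evaluates contributed knowledge
        (an assumed multiplicative update, for illustration only)."""
        if question in self.entries:
            self.weight[question] *= 1.1 if helpful else 0.5

    def report(self, contributor):
        """Contributor loop, outbound: which knowledge was used,
        and how users rated it."""
        return {q: self.weight[q] for q in self.usage[contributor]}


kb = CollectiveKB()
kb.contribute("alice", "capital of France?", "Paris")
print(kb.ask("capital of France?"))           # Paris
kb.feedback("capital of France?", helpful=True)
print(kb.report("alice"))                     # alice's entry, re-weighted upward
```

Even in this reduced form, the closed loop is visible: answers are served from contributed knowledge, feedback adjusts each entry's quality estimate, and the usage report tells contributors what was used, keeping the knowledge base aligned with what users actually ask.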