Learning to Construct and Reason with a Large Knowledge Base of Extracted Information

Speaker: William Cohen

Host: Rich Caruana

Affiliation: Carnegie Mellon University

Duration: 01:08:07

Date recorded: 12 July 2013

Carnegie Mellon University's "Never Ending Language Learner" (NELL) has been running for over three years and has automatically extracted millions of facts from the web, concerning hundreds of thousands of entities and thousands of concepts. NELL works by coupling together many interrelated large-scale semi-supervised learning problems. In this talk I will discuss some of the technical problems we encountered in building NELL, and some of the issues involved in reasoning with this sort of large, diverse, and imperfect knowledge base. This is joint work with Tom Mitchell, Ni Lao, William Wang, and many other colleagues.
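To give a flavor of the "coupling" idea mentioned in the abstract, here is a minimal, hypothetical sketch (in Python, not NELL's actual implementation): two bootstrapped learners, one for category instances and one for extraction patterns, each promote new candidates only when the other learner supplies corroborating evidence. The function name, corpus, and promotion threshold are all invented for illustration.

```python
def coupled_bootstrap(seed_instances, seed_patterns, corpus, rounds=3):
    """Toy coupled semi-supervised bootstrapping over (entity, pattern) pairs.

    Illustrative only: each learner promotes a candidate only if the other
    learner's trusted set provides at least two independent pieces of evidence.
    """
    instances, patterns = set(seed_instances), set(seed_patterns)
    for _ in range(rounds):
        # Pattern learner: trust a pattern only if it co-occurs with
        # at least two distinct already-trusted instances.
        new_patterns = {
            p for p in {p for _, p in corpus}
            if len({i for i, q in corpus if q == p and i in instances}) >= 2
        }
        # Instance learner: trust an instance only if it is matched by
        # at least two distinct already-trusted patterns (the coupling).
        new_instances = {
            i for i in {i for i, _ in corpus}
            if len({q for j, q in corpus if j == i and q in patterns}) >= 2
        }
        patterns |= new_patterns
        instances |= new_instances
    return instances, patterns


if __name__ == "__main__":
    # Hypothetical (entity, pattern) co-occurrences; "Python" is a noisy candidate.
    corpus = [
        ("Pittsburgh", "cities such as X"), ("Pittsburgh", "mayor of X"),
        ("Boston", "cities such as X"), ("Boston", "mayor of X"),
        ("Python", "cities such as X"),
    ]
    # The coupling keeps "Python" out of the city category, since only one
    # pattern ever supports it.
    print(coupled_bootstrap({"Pittsburgh", "Boston"}, {"cities such as X"}, corpus))
```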

©2013 Microsoft Corporation. All rights reserved.