Machine reading can be defined as the automatic understanding of text. One way in which human understanding of text has been gauged is to measure the ability to answer questions pertaining to the text. In this paper, we present a brief study designed to explore how a natural language processing component for the recognition of textual entailment bears on the problem of answering questions in a basic, elementary school reader. An alternative way of testing human understanding is to assess one's ability to ask sensible questions about a given text. We survey current computational systems that are capable of generating questions automatically, and suggest that understanding must comprise not only a grasp of semantic equivalence, but also an assessment of the importance of the information conveyed by the text. We suggest that this observation should contribute to the design of an overall evaluation methodology for machine reading.
Publisher: American Association for Artificial Intelligence (AAAI)
Copyright © 2007, AAAI. All rights reserved.