WikiQA: A Challenge Dataset for Open-Domain Question Answering

  • Yi Yang,
  • Scott Wen-tau Yih,
  • Chris Meek

Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

Published by ACL - Association for Computational Linguistics

We describe the WikiQA dataset, a new publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. Most previous work on answer sentence selection focuses on a dataset created using the TREC-QA data, which includes editor-generated questions and candidate answer sentences selected by matching content words in the question. WikiQA is constructed using a more natural process and is more than an order of magnitude larger than the previous dataset. In addition, the WikiQA dataset includes questions for which there are no correct sentences, enabling researchers to work on answer triggering, a critical component in any QA system. We compare several systems on the task of answer sentence selection on both datasets, and also describe the performance of a system on the problem of answer triggering using the WikiQA dataset.
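To make the two tasks concrete: answer sentence selection only ranks the candidate sentences for a question, while answer triggering additionally decides whether any candidate answers the question at all. Below is a minimal sketch of the triggering step, reduced to thresholding the top candidate's score; the scorer and the threshold value are illustrative assumptions, not the paper's method.

```python
from typing import Optional, Sequence

def trigger_answer(sentences: Sequence[str],
                   scores: Sequence[float],
                   threshold: float = 0.5) -> Optional[str]:
    """Return the top-scored candidate sentence only if its score clears
    the threshold; otherwise abstain, treating the question as having no
    correct answer among the candidates."""
    if not sentences:
        return None
    best = max(range(len(sentences)), key=lambda i: scores[i])
    return sentences[best] if scores[best] >= threshold else None
```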

Publication Downloads

Microsoft Research WikiQA Code Package

October 30, 2015

We are releasing the code for some of the models used in our EMNLP-2015 paper, “WikiQA: A Challenge Dataset for Open-Domain Question Answering.” The package includes implementations of two models: a Convolutional Neural Network (CNN), and logistic regression over CNN and count features (CNN-Cnt).
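A rough sketch of how the CNN-Cnt combination can be assembled, using scikit-learn's LogisticRegression as a stand-in for the released implementation; feature details such as stopword filtering are simplified here, and the `idf` table and the CNN scorer are assumed to exist.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def count_features(question: str, sentence: str, idf: dict) -> list:
    """The two count features: how many question words also appear in the
    candidate sentence, and the same count weighted by IDF (stopword
    filtering omitted for brevity)."""
    shared = set(question.lower().split()) & set(sentence.lower().split())
    return [float(len(shared)), sum(idf.get(w, 0.0) for w in shared)]

def cnn_cnt_row(cnn_score: float, question: str, sentence: str, idf: dict) -> list:
    # CNN-Cnt feature vector: the CNN's output score plus the count features.
    return [cnn_score] + count_features(question, sentence, idf)

# Hypothetical usage: X stacks one feature row per question-sentence pair,
# y holds the 0/1 answer labels, and cnn_score comes from the trained CNN.
# clf = LogisticRegression().fit(np.array(X), np.array(y))
```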

WikiQA Corpus

August 28, 2015

The WikiQA corpus is a new publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. To reflect the true information needs of general users, we used Bing query logs as the question source. Each question is linked to a Wikipedia page that potentially has the answer. Because the summary section of a Wikipedia page provides the basic and usually most important information about the topic, we used sentences in this section as the candidate answers. With the help of crowdsourcing, we included 3,047 questions and 29,258 sentences in the dataset, of which 1,473 sentences were labeled as answer sentences to their corresponding questions. In addition, this download includes the experimental results from the paper, an evaluation script for the "answer triggering" task, and the answer phrases labeled by the authors of the paper.
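A minimal loading sketch, assuming the tab-separated layout of the released files (QuestionID, Question, Sentence, and a binary Label column per row; remaining columns are ignored):

```python
import csv
from collections import defaultdict

def load_wikiqa(path: str) -> dict:
    """Group the corpus into questions and their candidate sentences.
    Each row pairs one question with one candidate; Label is 1 when the
    sentence answers the question and 0 otherwise."""
    questions = defaultdict(lambda: {"question": None, "candidates": []})
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            entry = questions[row["QuestionID"]]
            entry["question"] = row["Question"]
            entry["candidates"].append((row["Sentence"], int(row["Label"])))
    return dict(questions)

# Questions whose candidates carry no positive label are the answer
# triggering cases that a pure ranking system cannot handle:
# data = load_wikiqa("WikiQA.tsv")
# no_answer = sum(not any(lab for _, lab in q["candidates"])
#                 for q in data.values())
```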