Crowdsourcing for Book Search Evaluation: Impact of HIT Design on Comparative System Ranking

Gabriella Kazai, Jaap Kamps, Marijn Koolen, and Natasa Milic-Frayling

Abstract

The evaluation of information retrieval (IR) systems over special collections, such as large book repositories, is out of reach of traditional methods that rely upon editorial relevance judgments. Increasingly, the use of crowdsourcing to collect relevance labels has been regarded as a viable alternative that scales with modest costs. However, crowdsourcing suffers from undesirable worker practices and low-quality contributions. In this paper we investigate the design and implementation of effective crowdsourcing tasks in the context of book search evaluation. We observe the impact of aspects of the Human Intelligence Task (HIT) design on the quality of relevance labels provided by the crowd. We assess the output in terms of label agreement with a gold standard data set and observe the effect of the crowdsourced relevance judgments on the resulting system rankings. This enables us to observe the effect of crowdsourcing on the entire IR evaluation process. Using the test set and experimental runs from the INEX 2010 Book Track, we find that varying the HIT design, together with the pooling and document ordering strategies, leads to considerable differences in agreement with the gold-set labels. We then observe the impact of the crowdsourced relevance label sets on the relative system rankings using four IR performance metrics. System rankings based on MAP and Bpref remain relatively unaffected by the different label sets, while Precision@10 and nDCG@10 lead to dramatically different system rankings, especially for labels acquired from HITs with weaker quality controls. Overall, we find that crowdsourcing can be an effective tool for the evaluation of IR systems, provided that care is taken when designing the HITs.
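To make the metric sensitivity concrete, the sketch below implements standard definitions of Precision@k and nDCG@k (binary gains, log2 discount) and shows how two label sets for the same documents can invert the relative ranking of two hypothetical systems. This is an illustrative toy example, not the track's official evaluation code; the document IDs and labels are invented for demonstration.

```python
import math

def precision_at_k(ranked_docs, relevant, k=10):
    """Fraction of the top-k retrieved documents judged relevant."""
    return sum(1 for d in ranked_docs[:k] if relevant.get(d, 0) > 0) / k

def ndcg_at_k(ranked_docs, relevant, k=10):
    """nDCG@k with binary gains and a log2 rank discount."""
    def dcg(gains):
        return sum(g / math.log2(i + 2) for i, g in enumerate(gains))
    gains = [relevant.get(d, 0) for d in ranked_docs[:k]]
    ideal = sorted(relevant.values(), reverse=True)[:k]
    return dcg(gains) / dcg(ideal) if any(ideal) else 0.0

# Two hypothetical label sets over the same pooled documents:
gold  = {"d1": 1, "d2": 1, "d3": 0, "d4": 0}   # editorial gold standard
crowd = {"d1": 1, "d2": 0, "d3": 1, "d4": 1}   # noisy crowdsourced labels

# Two hypothetical system runs (ranked document lists):
run_a = ["d1", "d2", "d3", "d4"]
run_b = ["d4", "d3", "d2", "d1"]

# Under the gold labels system A wins on P@2; under the crowd
# labels system B wins, i.e. the comparative ranking flips.
print(precision_at_k(run_a, gold, k=2), precision_at_k(run_b, gold, k=2))
print(precision_at_k(run_a, crowd, k=2), precision_at_k(run_b, crowd, k=2))
```

Early-precision metrics depend only on the labels of the few top-ranked documents, so disagreement on a handful of labels can flip a pairwise comparison, whereas rank-averaging metrics such as MAP spread the effect of individual label errors over the whole ranked list.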

Details

Publication type: Inproceedings
Published in: The 34th Annual International ACM SIGIR Conference (SIGIR 2011), July 24-28, 2011, Beijing, China
Publisher: ACM