In Search of Quality in Crowdsourcing for Search Engine Evaluation

Gabriella Kazai

Abstract

Crowdsourcing is increasingly looked upon as a feasible alternative to traditional methods of gathering relevance labels for the evaluation of search engines, offering a solution to the scalability problem that hinders traditional approaches. However, crowdsourcing raises a range of questions regarding the quality of the resulting data. What indeed can be said about the quality of the data that is contributed by anonymous workers who are only paid cents for their efforts? Can higher pay guarantee better quality? Do better qualified workers produce higher quality labels? In this paper, we investigate these and similar questions via a series of controlled crowdsourcing experiments where we vary pay, required effort and worker qualifications and observe their effects on the resulting label quality, measured based on agreement with a gold set.
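As an illustration only (not code from the paper), label quality against a gold set is commonly quantified with raw agreement (accuracy) and a chance-corrected measure such as Cohen's kappa. The sketch below assumes simple binary relevance labels and hypothetical label lists.

```python
# Minimal sketch, assuming binary relevance labels (1 = relevant, 0 = not).
# Measures a worker's label quality as agreement with a gold set:
# raw accuracy and Cohen's kappa (chance-corrected agreement).
from collections import Counter

def accuracy(worker_labels, gold_labels):
    """Fraction of items where the worker's label matches the gold label."""
    matches = sum(1 for w, g in zip(worker_labels, gold_labels) if w == g)
    return matches / len(gold_labels)

def cohens_kappa(worker_labels, gold_labels):
    """Agreement with the gold set, corrected for chance agreement."""
    n = len(gold_labels)
    observed = accuracy(worker_labels, gold_labels)
    w_counts = Counter(worker_labels)
    g_counts = Counter(gold_labels)
    # Expected agreement if the worker labelled at random with the same
    # label distribution as observed.
    expected = sum(w_counts[c] * g_counts[c] for c in set(gold_labels)) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Hypothetical example data.
gold = [1, 0, 1, 1, 0, 1]
worker = [1, 0, 0, 1, 0, 1]
print(accuracy(worker, gold))      # 0.833...
print(cohens_kappa(worker, gold))  # ~0.667
```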

Details

Publication type: Inproceedings
Published in: Advances in Information Retrieval. 33rd European Conference on IR Research (ECIR 2011), April 18-21, 2011, Dublin, Ireland
Publisher: Springer