Pairwise Ranking Aggregation in a Crowdsourced Setting

Xi Chen, Paul N. Bennett, Kevyn Collins-Thompson, and Eric Horvitz

Abstract

Inferring rankings over elements of a set of objects, such as documents or images, is a key learning problem for such important applications as Web search and recommender systems. Crowdsourcing services provide an inexpensive and efficient means to acquire preferences over objects via labeling by sets of annotators. We propose a new model to predict a gold-standard ranking that hinges on combining pairwise comparisons via crowdsourcing. In contrast to traditional ranking aggregation methods, the approach learns about and folds into consideration the quality of contributions of each annotator. In addition, we minimize the cost of assessment by introducing a generalization of the traditional active learning scenario to jointly select the annotator and pair to assess while taking into account the annotator quality, the uncertainty over ordering of the pair, and the current model uncertainty. We formalize this as an active learning strategy that incorporates an exploration-exploitation tradeoff and implement it using an efficient online Bayesian updating scheme. Using simulated and real-world data, we demonstrate that the active learning strategy achieves significant reductions in labeling cost.
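To illustrate the general idea of aggregating noisy pairwise comparisons while estimating annotator quality, the sketch below implements a simple iterative reweighting baseline (in the spirit of Dawid-Skene-style consensus methods), not the paper's actual Bayesian model. All function and variable names here are hypothetical; votes are weighted by each annotator's estimated accuracy, and accuracies are re-estimated against the current consensus.

```python
from collections import defaultdict

def aggregate_ranking(labels, n_items, n_rounds=5):
    """Hypothetical sketch, not the authors' model.

    labels: list of (annotator, i, j, vote) tuples, where vote=1 means
            the annotator judged item i to be ranked above item j.
    Returns (ranking, estimated annotator qualities).
    """
    # Assume annotators start somewhat above chance, so the first
    # round reduces to a simple (equal-weight) majority vote.
    quality = defaultdict(lambda: 0.7)
    consensus = {}
    for _ in range(n_rounds):
        # Consensus step: weight each vote by how far the annotator's
        # estimated accuracy is above chance (0.5).
        tally = defaultdict(float)
        for a, i, j, v in labels:
            w = quality[a] - 0.5
            tally[(i, j)] += w if v == 1 else -w
        consensus = {pair: (1 if s >= 0 else 0) for pair, s in tally.items()}
        # Quality step: re-estimate each annotator's accuracy as their
        # agreement rate with the current consensus.
        agree, total = defaultdict(int), defaultdict(int)
        for a, i, j, v in labels:
            total[a] += 1
            agree[a] += int(v == consensus[(i, j)])
        for a in total:
            quality[a] = agree[a] / total[a]
    # Score each item by its number of consensus wins, then rank.
    score = [0.0] * n_items
    for (i, j), v in consensus.items():
        score[i if v == 1 else j] += 1
    ranking = sorted(range(n_items), key=lambda k: -score[k])
    return ranking, dict(quality)

# Toy usage: true order is 0 > 1 > 2; annotators A and C always vote
# correctly, B always votes incorrectly.
labels = [("A", 0, 1, 1), ("A", 0, 2, 1), ("A", 1, 2, 1),
          ("B", 0, 1, 0), ("B", 0, 2, 0), ("B", 1, 2, 0),
          ("C", 0, 1, 1), ("C", 0, 2, 1), ("C", 1, 2, 1)]
ranking, qualities = aggregate_ranking(labels, n_items=3)
```

With these labels the weighted consensus recovers the true order and drives B's estimated quality to zero. The paper's approach goes further by maintaining Bayesian uncertainty over both the ranking and the annotator qualities, which is what enables the joint active selection of annotator and pair.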

Details

Publication type: Proceedings
Published in: Proceedings of the 6th ACM International Conference on Web Search and Data Mining (WSDM '13)
URL: http://research.microsoft.com/en-us/um/people/pauben/papers/wsdm2013-preference-chen-et-al.pdf
Publisher: ACM