Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests

Xi Alice Gao, Yoram Bachrach, Peter Key, and Thore Graepel

Abstract

We examine designs for crowdsourcing contests, in which participants compete for rewards given to superior solutions of a task. We theoretically analyze the tradeoff between the expectation and variance of the principal's utility (i.e., the quality of the best solution) and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is itself crowdsourcing-based and relies on the peer prediction mechanism. Our theoretical analysis characterizes the expectation-variance tradeoff of the principal's utility in such contests through a Pareto-efficient frontier. In particular, we show that the simple 2-author contest and the 2-pair contest have good theoretical properties. Moreover, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal's utility.
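To make the expectation-variance tradeoff concrete, the following is a minimal Monte Carlo sketch (in Python) of the kind of design comparison involved. It is not the paper's model: the exponential quality distribution, the assumed per-author effort rule (effort shrinking as a contest gets more crowded), and all parameters are illustrative assumptions, and the design names simply mirror those in the abstract.

import numpy as np

rng = np.random.default_rng(0)

def simulate(num_contests, authors_per_contest, trials=200_000):
    """Estimate the expectation and variance of the principal's utility
    (the best submission's quality) for a design running `num_contests`
    independent contests with `authors_per_contest` competitors each.
    Toy model, not the paper's: each author's quality is exponential
    with mean equal to an assumed equilibrium effort level of
    1 / authors_per_contest, so effort falls as a contest gets crowded."""
    effort = 1.0 / authors_per_contest
    qualities = rng.exponential(
        scale=effort, size=(trials, num_contests * authors_per_contest))
    utility = qualities.max(axis=1)  # the principal keeps only the best solution
    return utility.mean(), utility.var()

designs = {
    "2-author contest": (1, 2),  # one contest, two authors
    "2-pair contest":   (2, 2),  # two parallel 2-author contests
    "4-author contest": (1, 4),  # one contest, four authors
}
for name, (k, n) in designs.items():
    mean, var = simulate(k, n)
    print(f"{name:18s}  E[utility]={mean:.3f}  Var[utility]={var:.4f}")

Under these toy assumptions, the 2-pair design achieves the highest expected utility: it pools four submissions while keeping each sub-contest small enough to sustain high per-author effort. The paper's actual frontier is derived game-theoretically, and its experiments measure both moments directly.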

Details

Publication type: Inproceedings
Published in: AAAI 2012
Publisher: Association for the Advancement of Artificial Intelligence