Crowdsourcing Contests

We introduced a game-theoretic framework that studies crowdsourcing systems as a collection of competing all-pay auctions. The term crowdsourcing refers to soliciting solutions to tasks via open calls to large-scale communities, which has become a method of choice on the Internet over the last few years. The framework enables reasoning about, and provides insight into, the relation between offered rewards and induced user contributions across projects, and it yields guidelines for the design of crowdsourcing systems.

In our framework, users correspond to players and projects correspond to auctions, or contests. Each user's contribution is interpreted as a bid that the user strategically submits to one of the open contests. The player with the largest bid in a contest wins the prize associated with that contest. This captures the typical design of crowdsourcing systems, where the prize of a project is awarded to the user whose submitted solution is selected as the winner according to some measure of solution quality.
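The winner-take-all accounting described above can be illustrated with a small simulation. The sketch below is not the paper's equilibrium model; the bidding rule (each player sinking a fixed fraction of their skill) is purely an illustrative assumption. It only demonstrates the defining feature of an all-pay contest: every player's bid is sunk, and only the highest bidder receives the prize.

```python
def run_contest(prize, skills, bid_fraction=0.5):
    """Simulate one all-pay contest.

    Each player sinks a bid (illustrative rule: a fixed fraction of
    their skill); only the highest bidder wins the prize.
    Returns (winner_index, payoffs), where payoffs[i] equals the prize
    if player i won, minus player i's bid -- losers pay their bid too.
    """
    bids = [bid_fraction * s for s in skills]
    winner = max(range(len(skills)), key=lambda i: bids[i])
    payoffs = [(prize if i == winner else 0.0) - bids[i]
               for i in range(len(skills))]
    return winner, payoffs

# Three players with skills 1, 2, 3 competing for a prize of 10:
# the highest bidder (player 2) wins, but all three bids are sunk.
winner, payoffs = run_contest(10.0, [1.0, 2.0, 3.0])
```

Under this toy rule the winner's payoff is 10 − 1.5 = 8.5, while the losers end at −0.5 and −1.0: contributions are costly regardless of the outcome, which is what distinguishes an all-pay auction from a standard (winner-pays) auction.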

The model suggests that individual contributors tend to focus their contributions on prizes that match their skill, and to attempt higher prizes only with a smaller probability of winning. On the other hand, the aggregate contribution of the community as a whole tends to increase with the prize of a project, but at a diminishing rate. These hypotheses, suggested by the model, were found to be in good agreement with the observed usage behavior of a popular crowdsourcing site.

The following is our main paper that provided a theoretical framework and empirical validation:


Related Work

Here is a list of some related work, which is by no means exhaustive:

  1. S. Chawla, J. D. Hartline, B. Sivan, Optimal Crowdsourcing Contests, ACM-SIAM SODA 2012.
    • The paper considers revenue optimal contests in the framework of all-pay auctions.
  2. J. J. Horton and L. B. Chilton, The Labor Economics of Paid Crowdsourcing, ACM EC, 2010.
    • This paper considers a model of workers and the estimation of workers' reservation wages.
  3. D. DiPalantino, T. Karagiannis and M. Vojnovic, Individual and Collective User Behavior in Crowdsourcing Services, Microsoft Research Technical Report, MSR-TR-2010-59, May 2010.
    • This paper contains some empirical explorations of a large-scale crowdsourcing site and evaluates some hypotheses from the ACM EC'09 paper.
  4. S. Jain, Y. Chen and D. C. Parkes, Designing Incentives for Online Question and Answer Forums, ACM EC, 2009.
    • The paper analyzes game-theoretic models of online question-and-answer forums in which users behave strategically while competing to answer questions under a reward scheme that awards the best answer to a question.
  5. N. Archak, Money, Glory and Cheap Talk: Analyzing Strategic Behavior of Contestants in Simultaneous Crowdsourcing Contests on TopCoder.com, WWW, 2010.
    • This paper presents an empirical analysis of the determinants of individual performance in multiple simultaneous crowdsourcing contests, using a unique dataset from the world's largest competitive software development portal.
  6. J. Yang, L. A. Adamic and M. S. Ackerman, Crowdsourcing and Knowledge Sharing: Strategic User Behavior on Taskcn, ACM EC, 2008.
    • This is one of the first empirical analyses of a large-scale crowdsourcing system.

Book

  1. J. Howe, Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business, Crown Business, 2008.
    • A book by Jeff Howe, who coined the term "crowdsourcing" in a June 2006 Wired article.

Some Press Articles

  1. Guardian: Technology/Crowdsourcing
  2. T. de Castella, Should we Trust the Wisdom of Crowds?, BBC News, 5 July 2010.