With the advent of crowdsourcing and crowd work, human intelligence can be brought to bear on a great variety of useful tasks on short notice. However, because of the varying attention spans and inevitable errors of human contributors, much research has focused on algorithmic and AI-based approaches to optimizing the quality of results aggregated from human input. In this talk, we present several projects illustrating how human behavior in crowdsourcing can be not only modeled but also managed. We show how the attention of crowd workers can be accurately predicted from a combination of features, allowing for targeted interventions. We present an experiment showing how different financial incentive schemes, even at the same overall level of payment, can produce a trade-off between quality and speed. Finally, we present results comparing paid workers to volunteers and examining the attention span of workers on Amazon Mechanical Turk. Our work sets the stage for richer crowdsourcing algorithms that can adapt to, and even take advantage of, differences in human behavior.
Joint work with Ece Kamar, Eric Horvitz, and Yiling Chen.