Catherine C. Marshall and Frank M. Shipman
Crowdsourcing services such as Amazon's Mechanical Turk (MTurk) provide new venues for recruiting participants and conducting studies; hundreds of surveys may be offered to workers at any given time. We reflect on the results of six related studies we performed on MTurk over a two-year period. The studies used a combination of open-ended questions and structured hypothetical statements about story-like scenarios to engage the efforts of 1,252 participants. We describe the method used in the studies and reflect on what we have learned about identified best practices. We analyze the aggregated data to profile the types of Turkers who take surveys and examine how the characteristics of the surveys may influence data reliability. The results point to the value of participant engagement, identify potential changes in MTurk as a study venue, and highlight how communication among Turkers influences the data that researchers collect.
In Proceedings of WebSci 2013
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
WebSci'13, May 2–4, 2013, Paris, France.
Copyright 2013 ACM 978-1-4503-1889-1....$10.00.