Deterring cheating in online environments

  • Henry Corrigan-Gibbs,
  • Nakull Gupta,
  • Curtis Northcutt,
  • Edward Cutrell,
  • Bill Thies

ACM Trans. Comput.-Hum. Interact. (TOCHI)


Many Internet services depend on the integrity of their users, even when these users have strong incentives to behave dishonestly. Drawing on experiments in two different online contexts, this study measures the prevalence of cheating and evaluates two methods for deterring it. Our first experiment investigates cheating behavior in a pair of online exams taken by 632 students in India. Our second experiment examines dishonest behavior on Mechanical Turk through an online task with 2,378 total participants. Using direct measurements that do not depend on self-reports, we detect significant rates of cheating in both environments. We confirm that honor codes, despite their frequent use in massive open online courses (MOOCs), lead to only a small and statistically insignificant reduction in online cheating behaviors. To overcome these challenges, we propose a new intervention: a stern warning that spells out the potential consequences of cheating. We show that the warning leads to a significant (about twofold) reduction in cheating, consistent across experiments. We also characterize the demographic correlates of cheating on Mechanical Turk. Our findings advance the understanding of cheating in online environments and suggest that replacing traditional honor codes with warnings could be a simple and effective way to deter cheating in online courses and online labor marketplaces.