Not all computerized cheating tasks are equal: A comparison of computerized and non-computerized versions of a cheating task
Research output: Contribution to journal › Journal article › Research › peer-review
Computerized versions of population inferred cheating tasks (C-PICT)—i.e., tasks in which dishonesty is statistically determined on the aggregate by comparing self-reported outcomes with a known probability distribution—have become increasingly popular. To date, no study has investigated whether non-computerized population inferred cheating tasks (PICT) and C-PICT, as well as different implementations of C-PICT, produce similar results. The current study tackles both issues via a well-powered, pre-registered online experiment (N = 3,645) with four conditions. Participants played either a non-computerized coin toss task (CTT) (C1) or one of three computerized CTTs: a computerized CTT provided via an external website (C2), a computerized CTT provided within the survey framework of the study in which participants were explicitly informed that the actual outcome of the CTT was not monitored (C3), or a computerized CTT provided within the survey framework of the study in which participants were explicitly informed that the actual outcome of the CTT was monitored (C4). A priori, we expected the probability of dishonesty to be higher in C1 than in C2, C3, and C4, as well as lower in C4 than in C3 and C2. Results show that the probability of dishonesty is higher in C1 and C2 than in C3 and C4. However, no significant difference was observed between C1 and C2, nor between C3 and C4. Taken together, our results indicate that C-PICT produce results similar to PICT when they are provided via an external website, but not when they are implemented within the survey framework of the study.
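The aggregate inference underlying PICT can be illustrated with a minimal sketch. The assumptions here are not taken from the paper: each participant reports whether they "won" a fair coin toss (honest win probability p = 0.5), and a fraction d of losers misreport a win, so the expected reported win rate is r = p + (1 − p)·d, which can be solved for d.

```python
# Minimal sketch of population-level dishonesty inference in a coin toss
# task (CTT). Illustrative assumption (not from the paper): honest win
# probability is p, and a fraction d of losers misreport a win, so the
# reported win rate is r = p + (1 - p) * d, hence d = (r - p) / (1 - p).

def estimated_dishonesty(reported_win_rate: float, honest_p: float = 0.5) -> float:
    """Aggregate-level estimate of the fraction of losers who misreport."""
    return (reported_win_rate - honest_p) / (1.0 - honest_p)

# Example: if 65% of participants report winning a fair coin toss,
# the implied share of dishonest losers is (0.65 - 0.5) / 0.5 = 0.30.
print(round(estimated_dishonesty(0.65), 2))  # → 0.3
```

Note that this estimate only identifies dishonesty at the population level; no individual participant can be classified as honest or dishonest, which is the defining property of PICT designs.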
|Journal|Journal of Economic Psychology|
|Publication status|Published - Jun 2020|
- Cheating tasks, Computerization, Dishonesty, Monitoring, Online experiment