Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error. / Markussen, Thomas; Putterman, Louis; Wang, Liangjun.

In: Economica, Vol. 90, No. 357, 01.2023, pp. 315-338.


Harvard

Markussen, T, Putterman, L & Wang, L 2023, 'Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error', Economica, vol. 90, no. 357, pp. 315-338. https://doi.org/10.1111/ecca.12443

APA

Markussen, T., Putterman, L., & Wang, L. (2023). Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error. Economica, 90(357), 315-338. https://doi.org/10.1111/ecca.12443

Vancouver

Markussen T, Putterman L, Wang L. Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error. Economica. 2023 Jan;90(357):315-338. https://doi.org/10.1111/ecca.12443

Author

Markussen, Thomas; Putterman, Louis; Wang, Liangjun. / Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error. In: Economica. 2023; Vol. 90, No. 357. pp. 315-338.

Bibtex

@article{03767ea67851411d9f504393477b8fb0,
title = "Algorithmic Leviathan or Individual Choice: Choosing Sanctioning Regimes in the Face of Observational Error",
abstract = "Laboratory experiments are a promising tool for studying how competing institutional arrangements perform and what determines preferences between them. Reliance on enforcement by peers versus formal authorities is a key example. That people incur costs to punish free riders is a well-documented departure from non-behavioural game-theoretic predictions, but how robust is peer punishment to informational problems? We report experimental evidence that reluctance to personally impose punishment when choices are reported unreliably may tip the scales towards rule-based and algorithmic formal enforcement even when observation by the centre is equally prone to error. We provide new and consonant evidence from treatments in which information quality differs for authority versus peers, and confirmatory patterns in both binary decision and quasi-continuous decision variants. Since the role of formal authority is assumed by a computer in our experiment, our findings are also relevant to the question of willingness to entrust machines to make morally fraught decisions, a choice increasingly confronting humans in the age of artificial intelligence.",
keywords = "Faculty of Social Sciences",
author = "Thomas Markussen and Louis Putterman and Liangjun Wang",
year = "2023",
month = jan,
doi = "10.1111/ecca.12443",
language = "English",
volume = "90",
pages = "315--338",
journal = "Economica",
issn = "0013-0427",
publisher = "Wiley-Blackwell",
number = "357",

}

RIS

TY - JOUR

T1 - Algorithmic Leviathan or Individual Choice

T2 - Choosing Sanctioning Regimes in the Face of Observational Error

AU - Markussen, Thomas

AU - Putterman, Louis

AU - Wang, Liangjun

PY - 2023/1

Y1 - 2023/1

N2 - Laboratory experiments are a promising tool for studying how competing institutional arrangements perform and what determines preferences between them. Reliance on enforcement by peers versus formal authorities is a key example. That people incur costs to punish free riders is a well-documented departure from non-behavioural game-theoretic predictions, but how robust is peer punishment to informational problems? We report experimental evidence that reluctance to personally impose punishment when choices are reported unreliably may tip the scales towards rule-based and algorithmic formal enforcement even when observation by the centre is equally prone to error. We provide new and consonant evidence from treatments in which information quality differs for authority versus peers, and confirmatory patterns in both binary decision and quasi-continuous decision variants. Since the role of formal authority is assumed by a computer in our experiment, our findings are also relevant to the question of willingness to entrust machines to make morally fraught decisions, a choice increasingly confronting humans in the age of artificial intelligence.

AB - Laboratory experiments are a promising tool for studying how competing institutional arrangements perform and what determines preferences between them. Reliance on enforcement by peers versus formal authorities is a key example. That people incur costs to punish free riders is a well-documented departure from non-behavioural game-theoretic predictions, but how robust is peer punishment to informational problems? We report experimental evidence that reluctance to personally impose punishment when choices are reported unreliably may tip the scales towards rule-based and algorithmic formal enforcement even when observation by the centre is equally prone to error. We provide new and consonant evidence from treatments in which information quality differs for authority versus peers, and confirmatory patterns in both binary decision and quasi-continuous decision variants. Since the role of formal authority is assumed by a computer in our experiment, our findings are also relevant to the question of willingness to entrust machines to make morally fraught decisions, a choice increasingly confronting humans in the age of artificial intelligence.

KW - Faculty of Social Sciences

U2 - 10.1111/ecca.12443

DO - 10.1111/ecca.12443

M3 - Journal article

VL - 90

SP - 315

EP - 338

JO - Economica

JF - Economica

SN - 0013-0427

IS - 357

ER -
