Regulating for ‘normal AI accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Standard

Regulating for ‘normal AI accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment. / Maas, Matthijs Michiel.

Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans. Association for Computing Machinery (ACM), 2018. p. 223.


Harvard

Maas, MM 2018, Regulating for ‘normal AI accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment. in Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans. Association for Computing Machinery (ACM), p. 223. https://doi.org/10.1145/3278721.3278766

APA

Maas, M. M. (2018). Regulating for ‘normal AI accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment. In Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans (p. 223). Association for Computing Machinery (ACM). https://doi.org/10.1145/3278721.3278766

Vancouver

Maas MM. Regulating for ‘normal AI accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment. In Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans. Association for Computing Machinery (ACM). 2018. p. 223. https://doi.org/10.1145/3278721.3278766

Author

Maas, Matthijs Michiel. / Regulating for ‘normal AI accidents’: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment. Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans. Association for Computing Machinery (ACM), 2018. p. 223

Bibtex

@inproceedings{6e43005fd83f4847abf2a7cb681c2239,
title = "Regulating for {\textquoteleft}normal AI accidents{\textquoteright}: Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment",
abstract = "New technologies, particularly those which are deployed rapidly across sectors, or which have to operate in competitive conditions, can disrupt previously stable technology governance regimes. This leads to a precarious need to balance caution against performance while exploring the resulting {\textquoteleft}safe operating space{\textquoteright}. This paper will argue that Artificial Intelligence is one such critical technology, the responsible deployment of which is likely to prove especially complex, because even narrow AI applications often involve networked (tightly coupled, opaque) systems operating in complex or competitive environments. This ensures such systems are prone to {\textquoteleft}normal accident{\textquoteright}-type failures which can cascade rapidly, and are hard to contain or even detect in time. Legal and governance approaches to the deployment of AI will have to reckon with the specific causes and features of such {\textquoteleft}normal accidents{\textquoteright}. While this suggests that large-scale, cascading errors in AI systems are inevitable, an examination of the operational features that lead technologies to exhibit {\textquoteleft}normal accidents{\textquoteright} enables us to derive both tentative principles for precautionary policymaking, and practical recommendations for the safe(r) deployment of AI systems. This may help enhance the safety and security of these systems in the public sphere, both in the short- and in the long term.",
author = "Maas, {Matthijs Michiel}",
note = "Matthijs M. Maas. 2018. Regulating for {\textquoteleft}normal AI accidents{\textquoteright}: operational lessons for the responsible governance of artificial intelligence deployment. In Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES {\textquoteright}18), February 2-3, 2018, New Orleans, LA. https://doi.org/10.1145/3278721.3278766",
year = "2018",
doi = "10.1145/3278721.3278766",
language = "English",
pages = "223",
booktitle = "Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES {\textquoteright}18), February 2-3, 2018, New Orleans",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",
}

RIS

TY - GEN

T1 - Regulating for ‘normal AI accidents’

T2 - Operational Lessons for the Responsible Governance of Artificial Intelligence Deployment

AU - Maas, Matthijs Michiel

N1 - Matthijs M. Maas. 2018. Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of artificial intelligence deployment. In Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans, LA. https://doi.org/10.1145/3278721.3278766

PY - 2018

Y1 - 2018

N2 - New technologies, particularly those which are deployed rapidly across sectors, or which have to operate in competitive conditions, can disrupt previously stable technology governance regimes. This leads to a precarious need to balance caution against performance while exploring the resulting ‘safe operating space’. This paper will argue that Artificial Intelligence is one such critical technology, the responsible deployment of which is likely to prove especially complex, because even narrow AI applications often involve networked (tightly coupled, opaque) systems operating in complex or competitive environments. This ensures such systems are prone to ‘normal accident’-type failures which can cascade rapidly, and are hard to contain or even detect in time. Legal and governance approaches to the deployment of AI will have to reckon with the specific causes and features of such ‘normal accidents’. While this suggests that large-scale, cascading errors in AI systems are inevitable, an examination of the operational features that lead technologies to exhibit ‘normal accidents’ enables us to derive both tentative principles for precautionary policymaking, and practical recommendations for the safe(r) deployment of AI systems. This may help enhance the safety and security of these systems in the public sphere, both in the short- and in the long term.

AB - New technologies, particularly those which are deployed rapidly across sectors, or which have to operate in competitive conditions, can disrupt previously stable technology governance regimes. This leads to a precarious need to balance caution against performance while exploring the resulting ‘safe operating space’. This paper will argue that Artificial Intelligence is one such critical technology, the responsible deployment of which is likely to prove especially complex, because even narrow AI applications often involve networked (tightly coupled, opaque) systems operating in complex or competitive environments. This ensures such systems are prone to ‘normal accident’-type failures which can cascade rapidly, and are hard to contain or even detect in time. Legal and governance approaches to the deployment of AI will have to reckon with the specific causes and features of such ‘normal accidents’. While this suggests that large-scale, cascading errors in AI systems are inevitable, an examination of the operational features that lead technologies to exhibit ‘normal accidents’ enables us to derive both tentative principles for precautionary policymaking, and practical recommendations for the safe(r) deployment of AI systems. This may help enhance the safety and security of these systems in the public sphere, both in the short- and in the long term.

U2 - 10.1145/3278721.3278766

DO - 10.1145/3278721.3278766

M3 - Article in proceedings

SP - 223

BT - Proceedings of 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18), February 2-3, 2018, New Orleans

PB - Association for Computing Machinery (ACM)

ER -
