How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. / Maas, Matthijs M.

In: Contemporary Security Policy, Vol. 40, No. 3, 2019, pp. 285-311.

Harvard

Maas, MM 2019, 'How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons', Contemporary Security Policy, vol. 40, no. 3, pp. 285-311. https://doi.org/10.1080/13523260.2019.1576464

APA

Maas, M. M. (2019). How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemporary Security Policy, 40(3), 285-311. https://doi.org/10.1080/13523260.2019.1576464

Vancouver

Maas MM. How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. Contemporary Security Policy. 2019;40(3):285-311. https://doi.org/10.1080/13523260.2019.1576464

Author

Maas, Matthijs M. / How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. In: Contemporary Security Policy. 2019; Vol. 40, No. 3, pp. 285-311.

BibTeX

@article{108b4ba835c94d24b8786242ae905a5a,
title = "How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons",
abstract = "Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized “epistemic communities” of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.",
keywords = "Artificial intelligence, AI, nonproliferation, arms control, arms race, epistemic communities, governance, normal accidents",
author = "Maas, {Matthijs M.}",
year = "2019",
doi = "10.1080/13523260.2019.1576464",
language = "English",
volume = "40",
pages = "285--311",
journal = "Contemporary Security Policy",
issn = "1352-3260",
publisher = "Routledge",
number = "3",
}

RIS

TY - JOUR

T1 - How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons

AU - Maas, Matthijs M.

PY - 2019

Y1 - 2019

N2 - Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized “epistemic communities” of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.

AB - Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized “epistemic communities” of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.

KW - Artificial intelligence

KW - AI

KW - nonproliferation

KW - arms control

KW - arms race

KW - epistemic communities

KW - governance

KW - normal accidents

U2 - 10.1080/13523260.2019.1576464

DO - 10.1080/13523260.2019.1576464

M3 - Journal article

VL - 40

SP - 285

EP - 311

JO - Contemporary Security Policy

JF - Contemporary Security Policy

SN - 1352-3260

IS - 3

ER -
