How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons
Publication: Contribution to journal › Journal article › Research › peer-reviewed
Standard
How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons. / Maas, Matthijs M.
In: Contemporary Security Policy, Vol. 40, No. 3, 2019, pp. 285-311.
RIS
TY - JOUR
T1 - How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons
AU - Maas, Matthijs M.
PY - 2019
Y1 - 2019
N2 - Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized “epistemic communities” of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.
AB - Many observers anticipate “arms races” between states seeking to deploy artificial intelligence (AI) in diverse military applications, some of which raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. How viable are arms control regimes for military AI? This article draws a parallel with the experience in controlling nuclear weapons, to examine the opportunities and pitfalls of efforts to prevent, channel, or contain the militarization of AI. It applies three analytical lenses to argue that (1) norm institutionalization can counter or slow proliferation; (2) organized “epistemic communities” of experts can effectively catalyze arms control; (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate. I conclude that while there are key differences, understanding these lessons remains essential to those seeking to pursue or study the next chapter in global arms control.
KW - Artificial intelligence
KW - AI
KW - nonproliferation
KW - arms control
KW - arms race
KW - epistemic communities
KW - governance
KW - normal accidents
U2 - 10.1080/13523260.2019.1576464
DO - 10.1080/13523260.2019.1576464
M3 - Journal article
VL - 40
SP - 285
EP - 311
JO - Contemporary Security Policy
JF - Contemporary Security Policy
SN - 1352-3260
IS - 3
ER -