Maas publishes ‘How viable is arms control for military AI?’ in CSP
“TL;DR: there’s hope, but with caveats” – Matthijs Maas
Matthijs Maas, a PhD Fellow at the Centre for International Law, Conflict and Crisis (CILCC) and a member of the AI-LeD research group, has published “How viable is international arms control for military artificial intelligence? Three lessons from nuclear weapons” in the journal Contemporary Security Policy.
In the paper, Maas takes on current narratives about ongoing arms races between states such as the US and China, which seek to deploy artificial intelligence (AI) in diverse military applications. Some of these military uses raise concerns on ethical and legal grounds, or from the perspective of strategic stability or accident risk. But are arms control regimes for military AI actually possible?
To answer this question, Maas draws a parallel with the history of nuclear arms control. Through three case studies, he argues that (1) norm institutionalization can counter or slow proliferation, even of powerful and appealing technologies; (2) organized “epistemic communities” of experts can effectively catalyze arms control; and (3) many military AI applications will remain susceptible to “normal accidents,” such that assurances of “meaningful human control” are largely inadequate.
The paper is available (temporary open access) at: https://www.tandfonline.com/doi/full/10.1080/13523260.2019.1576464