The Real Challenges Posed by Artificial Intelligence
As the hype around artificial intelligence reverberates around the media, it is important to step back and take stock of what the real challenges actually are, and whether it is really artificial intelligence that lies at their root. What I mean is that the frames of the debate focus upon intelligence (and its associated attributes of capacity, adaptability, and ingenuity) and upon non-natural (read: non-human) origins. The connotations invoked by artificial intelligence thus lean towards a manufactured entity possessing human-like levels of intelligence, and segue naturally towards questions of how to accommodate sentient artificial beings in social, legal and political terms.
I would like to suggest that this framing is too narrow: many of the real challenges posed by artificial intelligence have nothing in particular to do with being created, nor with being intelligent. To see why this matters, imagine that we devise legal mechanisms and social norms capable of integrating such intelligent artificial beings into human society in a politically palatable manner. This would solve the issues implied by the framing of artificial intelligence. Yet the real systemic impacts of these technologies would not have been addressed, let alone mitigated.
Instead, we should examine the underlying technologies characterised as artificial intelligence through a different lens. Consider the looming technological end of work. While advanced algorithms may be able to accomplish many professional tasks consistently to a higher standard than the human beings currently employed to perform them, the societal shock lies not in the replacement of individual human professionals, but in the systemic obsolescence of entire professions or vocations. Thus, while an ethical conundrum raised by self-driving cars involves the familiar trolley-problem ethics that forces us to prescribe whether the old lady or the young child is to be run over in an unavoidable accident scenario, the policy and social conundrum raised by the prospect of self-driving cars is likely to involve the mass unemployment of professional drivers. The demographics affected by the large-scale substitution of professional drivers by self-driving cars suggest that some societal unrest might ensue. Too myopic a focus upon artificial intelligence would overlook familiar cycles of technological advance and societal upheaval, and the lessons learnt from past industrial revolutions.
Take another example: the much-castigated Social Credit System currently in operation in parts of China, with policy directed towards full integration by 2020. Such a system, which aggregates vast numbers of data points on an individual’s behaviour in order to compute a single score, then calibrates the treatment of that individual across all aspects of life according to that score, does not even qualify as artificial intelligence. All such a system does is connect data with consequences through a processing algorithm – no intelligence necessary. Yet while the Social Credit System raises many of the perils warned about in dystopian fiction, a focus on artificial intelligence would neither help in understanding the issues raised by such a system, nor hint towards the types of responses capable of neutralising such a threat. A large part of the oppressive nature of such a system resides not in any purported intelligence, but in its comprehensiveness and its consistency. In this sense, looking for intelligence sets us searching for an entity displaying competence at performing a task; instead we should be looking for integrated systems and digital networks, and considering the challenges that these characteristics introduce.
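The point that no intelligence is necessary can be made concrete. The sketch below is entirely hypothetical – the behaviours, weights and thresholds are invented for illustration and bear no relation to any real system – but it shows how a score-and-consequence pipeline reduces to ordinary bookkeeping: recorded behaviours map to a number, and the number maps to treatment across unrelated aspects of life.

```python
# Hypothetical sketch of a score-and-consequence pipeline.
# All behaviours, weights and thresholds are invented for illustration.

BEHAVIOUR_WEIGHTS = {
    "paid_bill_on_time": +5,
    "missed_loan_payment": -20,
    "jaywalking_citation": -10,
    "volunteer_work": +10,
}

def score(behaviours):
    """Aggregate recorded behaviours into a single number, starting from 100."""
    return 100 + sum(BEHAVIOUR_WEIGHTS.get(b, 0) for b in behaviours)

def treatment(s):
    """Map the score onto consequences in unrelated domains of life."""
    if s >= 110:
        return {"loans": "priority", "travel": "unrestricted"}
    if s >= 90:
        return {"loans": "standard", "travel": "unrestricted"}
    return {"loans": "denied", "travel": "restricted"}

citizen = ["paid_bill_on_time", "missed_loan_payment", "jaywalking_citation"]
print(score(citizen))             # 75
print(treatment(score(citizen)))  # {'loans': 'denied', 'travel': 'restricted'}
```

Nothing here learns, reasons or adapts; the oppressive potential comes entirely from how comprehensively behaviour is recorded and how consistently the consequences are applied.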
Much of the concern raised by AI revolves around the delegation of human decision-making to machines and, in the legal context at least, how to ensure that such decision-making processes remain amenable to human oversight and, where necessary, human intervention. Thus, in a recent case in the United States, where an algorithm was influential in sentencing a man to six years in prison, the appeal centred upon due process rights – upon the accessibility and transparency of the reasons that influenced the decision-making process. In rejecting that appeal, the Wisconsin Supreme Court held, among other things, that the algorithm’s determination was not the sole basis for the judge’s sentencing decision. Again, the question of the ‘intelligence’ of the algorithm is irrelevant to containing its challenge to the legal system. Instead, questions of systemic bias loom large in the datasets the algorithms utilise, and in the unpredictable conclusions that algorithms might infer from those datasets.
But the point is really that decision-making by machines is implied by discussions of artificial intelligence, and this is what we focus upon as a result. Take the criminal sentencing example above, however, and it becomes clear that the real problem lies not in the decision-making process itself, but in the consequences that flow from it. Imagine for a moment that the defendant’s objections are resolved – that due process rights are not a problem. What might the consequences be in such a scenario? Three come to mind. First, there is an entrenchment of contemporary values: what is right today will be preserved intact into the future. The scope for improvement, even mere variation, fades as algorithms churn out decisions based upon our current constellation of values. A related concern is practical: the flaws inherent in today’s society, such as gender, racial and other biases, cascade through decision-making frameworks. But perhaps the most perilous consequence is our inability to respond to these challenges if we do not begin to be vigilant now. Through our narrow framing of artificial intelligence, we run the risk of focusing upon the decision-making process and upon aligning it with existing legal obligations, and thus of overlooking the larger challenges that these technologies bring. In short, we may win the legal battle over transparent and accessible machine decision-making processes, but we stand to lose the war over the impact that such technologies will have across society as a whole.
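The cascade of historical bias can be illustrated with a toy example. The data below is fabricated and the "learning" is deliberately crude – a majority vote over past outcomes, not any real sentencing tool – but it captures the mechanism: a rule derived from biased historical decisions reproduces that bias for identical facts, entrenching it going forward.

```python
# Fabricated toy data: for identical facts, group "B" was historically
# treated more harshly than group "A". No real dataset is referenced.
from collections import Counter

history = [
    ("A", "minor_offence", "lenient"),
    ("A", "minor_offence", "lenient"),
    ("B", "minor_offence", "harsh"),
    ("B", "minor_offence", "harsh"),
]

def learn_rule(records):
    """'Learn' a decision rule: the majority past outcome per (group, offence)."""
    by_key = {}
    for group, offence, outcome in records:
        by_key.setdefault((group, offence), Counter())[outcome] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in by_key.items()}

rule = learn_rule(history)
# Identical facts, different groups: the learned rule replays the bias.
print(rule[("A", "minor_offence")])  # lenient
print(rule[("B", "minor_offence")])  # harsh
```

A transparent and contestable decision process does not cure this: the rule above is perfectly transparent, yet its consequences still freeze yesterday's prejudices into tomorrow's decisions.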
The Collingridge Dilemma describes a problem facing all societies at the moment: before a technology is extensively developed and widely used, there is an information problem about its impacts; but once a technology is in widespread circulation, there is a power problem that makes it difficult to control or change. To prevent society from sleepwalking into irreversible situations where advanced algorithms cage human liberty and capacity, we should place emphasis upon confronting the information problem first (if nothing else, it cannot be addressed in retrospect). To shield our eyes from the dazzle of emerging technologies, we should reorient our responses away from the characteristics and capabilities that their proponents claim, and instead look to the impact that integrating these technologies may have upon society more generally. In doing so, we can also seek to influence the spread and penetration of artificial intelligence applications, and allow their use only after considered debate and reflection.