Artificial Intelligence and Legal Disruption Research Group
The purpose of the Artificial Intelligence and Legal Disruption (AI LeD) Research Group is to explore and address the legal, regulatory, governance and policy questions arising from Artificial Intelligence (AI) as a technology, and from the continued deployment and application of these technologies across various domains of human activity. As such, there are obvious overlaps with the Faculty's emphasis upon digitalisation, but the aim of this research group is to go further and confront the next generation of transformational legal questions arising from AI and other emerging technologies.
There is simply a compelling amount of legal research to be done in this area. AI is plausibly an interruptive, and potentially a disruptive, force for the law. Legal scholars must consider not only how the technology must conform to legal requirements, but also which legal fundamentals might have to be reconsidered in light of what AI reveals, from the organisation of society through to individual rights. Thus, insofar as AI grants us an alternative vantage point for examining legal principles and processes, it provides a valuable opportunity not only to buttress the existing legal constellation, but also to rethink and improve extant legal systems. This opportunity for improvement is crucial: what AI systems flag as anomalies or produce as outcomes is often a reflection of human and societal bias. Rather than simply demanding that AI systems be corrected to prevent such outcomes in the future, legal scholars should exploit this rare insight to address the underlying point of friction or controversy.
There is a wide spectrum of effects at the interface between AI and the law. While we would accept projects broadly situated at this interface, ideally projects conducted within this Group would meet the threshold of 'legal disruption'. It is this potential for artificial intelligences to disrupt legal principles, processes and procedures that forms the focal point of evaluation and examination in this Group. Legal disruption is the filter through which the issues embraced by this Group percolate: artificial intelligences, or their manifestations, that are capable of fundamentally displacing legal presumptions or systemically distorting the functioning of the regulatory system will be the primary candidates for consideration. Thus, artificial intelligences and their manifestations must raise structural or systemic challenges to governance to be included as a project within this Group. This is a necessarily high threshold, but in order to test whether an artificial intelligence or its impact passes muster, we will of course also discuss issues which might ultimately fall short.
Given the emphasis upon legal disruption, this Group constantly aims at a moving target: as legal and policy responses to the challenges posed by artificial intelligences are settled, those issues lose their disruptive effect and fall outside the ambit of this Group (much as AI itself is a moving target, with problems such as chess, vision or translation, once considered 'benchmarks of intelligence', dismissed as 'mere computation' once conquered by computers). What loses controversy also loses interest for us. But the vantage point granted by legal disruption offers a mix of horizon-scanning for the next generation of challenges, and a measure of foresight into future issues for which we will be able to prepare law and policy responses. As such, the perspective of this Group celebrates the unknown and the incomplete as a way of formulating more robust and resilient regulatory models in response to these brilliant technologies.
Definition of Research Area
The intersection between AI and the law broadly comprises three clusters of issues:
- Governing AI: what legal, regulatory or governance strategies are appropriate and necessary to contain and direct the development of AI? In other words, why are legal approaches relevant and necessary to regulate the development of the technology as such, and what are the opportunities and pitfalls in formulating effective, legitimate and resilient governance strategies in a changing world?
- Regulating the impact of AI: as AI is deployed across sectors, old regulatory constellations become unbalanced. Where regulation was previously unnecessary because activities were pursued exclusively by humans, this presumption is now challenged by AI agents. For example, international humanitarian law rules need to be reconsidered where autonomous weapons systems are introduced to the battlefield, and different configurations of safeguards may be needed where AI assists in medical decision-making. There are also questions of structural biases that slant and coordinate the treatment of human beings in diffuse, opaque ways that were not possible previously.
- AI in legal practice and judicial decision-making: following on from the above, what concerns might algorithmic decision-making raise in the justice sector, and what opportunities might arise alongside the potential pitfalls?
Existing projects of the Research Group include:
- The impact of AI upon the legal profession, practice and principles. This includes questions of the relationship between AI and the judiciary, the erosion of legal professional competencies, and distortions relating to access to justice and the meaning of that justice.
- The anthropocentrism of the law and legal processes, which can be observed and examined through AI. The effects of human cognitive processes upon legal and regulatory processes are adaptive only insofar as human beings are both the subjects and objects of that system. The prospect of AI enables a competing perspective from which to assess and evaluate existing regulatory biases, and thereby to create a more robust and objective system capable of applying coherently to both humans and AI.
- The possible global catastrophic and existential risks posed by AI, and the significant but overlooked ways in which law and policy might mitigate these risks. While the focus in this area has traditionally been placed upon 'superintelligence' and the problem of control in relation to the value alignment problem, such risks may also emerge through more subtle or indirect vectors. Mere human intelligence has proven sufficient to provoke global catastrophic and existential risks; the deployment of 'narrow' AI systems may lower the threshold to other catastrophic risks (e.g. full-spectrum surveillance enabling entrenched totalitarian regimes, or autonomous undersea drones enabling the detection of missile submarines and thereby undercutting nuclear deterrence stability), and AI can furthermore taint or enable human behaviour in detrimental ways.
- The presumptions underlying intellectual property rights, and how these are shaken where AI is involved. The involvement of AI within the creative process calls into question the incentive structure and rationale underpinning the current system of intellectual property, threatening to further entrench growing inequalities.
- How the legal protections enjoyed by human beings may be threatened or eroded in subtle ways, because AI undermines the relevance and efficacy of human rights law at a fundamental level. Reliance upon human rights law as a shield against AI power thus provides only the illusion of security, and complementary protections need to be devised to ensure both the protection of the human in the negative sense and the empowerment of the human in the positive sense.
- The capacity of AI to grant autonomy to weapons and cyber systems strains the contemporary legal notions that curb recourse to armed conflict and regulate the conduct of hostilities. The availability of autonomous weapons systems is likely to seep into the realms of policing and security, thus introducing militarised concepts and capabilities into the civilian realm.
- Structural or systemic discrimination becomes possible where sets of preferences are given effect across a network of AIs, replacing what were previously isolated and averaged effects. Where the optimisation processes of AI are harnessed, certain approaches will be favoured over others, structuring the reallocation of benefits and burdens in a manner that precludes open debate or avoids democratic decision-making and accountability. Furthermore, because these are optimisation processes exerting gradual pressures, they will likely fall below the thresholds, and evade the defining parameters, necessary to demonstrate discrimination, while producing effects that look very much like traditional conceptions of discrimination (a minimal simulation sketch of this dynamic follows this list).
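The following is a minimal, purely hypothetical sketch of the dynamic described in the last item above: a single shared scoring rule, applied across many automated decision points, produces a marked aggregate disparity even though each individual decision's bias is too small to flag in isolation. All names, numbers and thresholds are illustrative assumptions, not findings from any of the Group's projects.

```python
import random

# Hypothetical illustration only: a single shared scoring rule applied
# across a network of automated decision points. Each decision is nudged
# by a margin far too small to flag in isolation, yet the aggregate
# effect is a marked disparity between groups A and B.

random.seed(42)

BIAS = 0.02           # per-decision nudge, below any plausible audit threshold
N_DECISIONS = 10_000  # the same rule, deployed across many systems

def score(group: str) -> float:
    """A nominally neutral score carrying a tiny group-correlated bias."""
    return random.gauss(0.5, 0.1) + (BIAS if group == "A" else 0.0)

approvals = {"A": 0, "B": 0}
for _ in range(N_DECISIONS):
    for group in ("A", "B"):
        if score(group) > 0.55:  # uniform approval cut-off for everyone
            approvals[group] += 1

# Group A accumulates noticeably more approvals than group B, even though
# no single decision deviates enough to demonstrate discrimination.
print(approvals)
```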
Centre for Advanced Studies in Biomedical Innovation Law (CeBIL)
The AI LeD Group also collaborates with CeBIL, given the close alignment of our research areas. CeBIL's emphasis is placed upon the legal and ethical questions raised by black-box precision medicine, as its director Timo Minssen describes:
At our Centre for Advanced Studies in Biomedical Innovation Law (CeBIL, www.cebil.dk), we work with a great variety of projects at the interface of AI and the Health & Life Sciences. The projects range from smart clinical trials data and healthcare with the internet of things (IoT), such as smart pills and health apps, to the uses of AI and blockchain technology in precision medicine and open innovation.
This is perhaps best illustrated by our collaborative project with the Petrie Flom Center at Harvard Law School on black box precision medicine (PMAIL). Black-box precision medicine is an exciting new frontier in health care diagnostics, harnessing the power of big data and AI. In black-box medicine, machine-learning algorithms and artificial intelligence examine newly available troves of health data, including genomic sequences, patient clinical care records, and the results of diagnostic tests to make predictions and recommendations about care. An algorithm may be “black-box” either because it is based on unknowable machine-learning techniques or because the relationships it draws are too complex for explicit understanding.
CeBIL’s and the Petrie Flom Center’s PMAIL project will provide a comparative analysis of the law and ethics of black-box precision medicine, explaining the shortcomings of the current innovation policy landscape in Europe and the US, and providing a comprehensive examination of various policy options to better harness the potential of black-box medicine. It will be a major initiative, spanning five years of study. For further information on the project, see: http://petrieflom.law.harvard.edu/resources/article/petrie-flom-center-launches-pmail-project.
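To make the 'black-box' character described above more concrete, here is a minimal sketch on synthetic data; it does not reflect the PMAIL project's actual models, data or methods. A standard ensemble classifier yields a prediction whose 'reasoning' is dispersed across thousands of split rules, so no single human-readable explanation is available.

```python
# Illustrative sketch only: a toy 'black-box' classifier on synthetic
# data, standing in for the kind of model described above. It does not
# reflect the PMAIL project's actual methods or data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for high-dimensional health data (e.g. genomic markers).
X, y = make_classification(n_samples=500, n_features=50,
                           n_informative=10, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The model produces a risk prediction for a new 'patient'...
print(model.predict_proba(X[:1]))

# ...but its reasoning is spread across 200 trees and thousands of split
# nodes: there is no single human-readable rule behind the prediction.
print(sum(tree.tree_.node_count for tree in model.estimators_), "decision nodes")
```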
Members

| Name | Title |
| --- | --- |
| Adamo, Silvia | Associate Professor |
| Alaminos, Barbara Diaz | PhD Student |
| Feldthusen, Rasmus Kristian | Professor |
| Gunnarsdóttir, Hrefna Dögg | PhD Fellow |
| Liu, Hin-Yan | Associate Professor |
| Maas, Matthijs Michiel | PhD Fellow |
| Mazibrada, Andrew | PhD Fellow |
| Minssen, Timo | Centre Director, Professor |
| Olsen, Henrik Palmer | Associate Dean for Research |
| Schäfke-Zell, Werner | Assistant Professor |
| Teo, Sue Anne | PhD Fellow |
| Trabucco, Lena | PhD Student |
| van der Donk, Berdien B E | PhD Student |
Faculty of Law
University of Copenhagen
South Campus, Building: 6A.4.16
Karen Blixens Plads 16
2300 Copenhagen S
Phone: +45 35 33 76 96