Artificial Intelligence and Legal Disruption Research Group
The purpose of the AI-LeD Research Group is to define and refine the legal, regulatory, and governance questions arising from Artificial Intelligence (AI) as a technology, and from its application to various domains of human activity.
The AI-LeD model is outlined in a forthcoming article in Law, Innovation, and Technology, and a part of the model is applied to questions of criminal justice in a forthcoming article in the UNICRI Special Collection on Artificial Intelligence. The Group necessarily adopts a problem-finding orientation that departs from orthodox problem-solving approaches to the perceived problems, because we do not see the technology as a ‘problem’ to be ‘solved’ in any direct sense, nor do we consider the technology itself a potential regulatory target. A sketch of this problem-finding approach and its applications can be found in a forthcoming article in Futures.
One strand of our research concerns the possible global catastrophic and existential risks posed by AI, and the significant but overlooked ways in which law and policy might mitigate these risks. While the focus in this area has traditionally been placed upon ‘superintelligence’ and the problem of control in relation to the value alignment problem, such risks may also emerge through more subtle or indirect vectors. Merely human levels of intelligence have proven sufficient to provoke global catastrophic and existential risks, and the deployment of ‘narrow’ AI systems may lower the threshold to other catastrophic risks (e.g. full-spectrum surveillance enabling entrenched totalitarian regimes, or autonomous undersea drones enabling the detection of missile submarines, undercutting the stability of nuclear deterrence). Furthermore, AI can taint or enable human behaviour in detrimental ways. Our overarching approach is set out in ‘Governing Boring Apocalypses’.
A second strand examines how the legal protections enjoyed by human beings may be threatened or eroded in subtle ways, because AI undermines the relevance and efficacy of human rights law at a fundamental level. Reliance upon human rights law as a check on AI power thus provides only the illusion of security, and complementary protections need to be devised to ensure both the protection of the human in the negative sense and the empowerment of the human in the positive sense. We have several papers in this area, including: ‘The Digital Disruption of Human Rights Foundations’; ‘The Power Structure of Artificial Intelligence’; and ‘A new human rights regime to address robotics and artificial intelligence’.
A third strand considers how the capacity of AI to grant autonomy to weapons and cyber systems strains contemporary norms that curb recourse to armed conflict and regulate the conduct of hostilities. The availability of autonomous weapons systems is likely to seep into the realms of policing and security, introducing militarised concepts and capabilities into the civilian realm. We have recently proposed a pivot away from anchoring these discussions in the concept of ‘autonomy’, in an attempt to find new questions and challenges posed by military applications of artificial intelligence. These proposals appear in a Special Issue of 10(1) Journal of International Humanitarian Legal Studies: specifically, the Editorial, ‘From the Autonomy Framework towards Networks and Systems Approaches for ‘Autonomous’ Weapons Systems’, and ‘Innovation-Proof Global Governance for Military Artificial Intelligence?’
A fourth strand concerns structural discrimination: preferences given effect across a network of AIs, replacing what were previously isolated and averaged effects. Where the optimisation processes of AI are harnessed, certain approaches will be favoured over others, structuring the reallocation of benefits and burdens in a manner that precludes open debate or avoids democratic decision-making and accountability. Furthermore, because these are optimisation processes exerting pressure, they will likely fall below the thresholds, and avoid the defining parameters, necessary to demonstrate discrimination, even while their effects resemble traditional conceptions of discrimination. Our papers in this area include ‘Three Types of Structural Discrimination Introduced by Autonomous Vehicles’ and ‘Irresponsibilities, inequalities and injustice for autonomous vehicles’.
Lunch meetings are held at 12:00-13:00 in room 8A.0.57.
- 16 September – Henrik and Jake
- 23 September – Sue Anne
- 28 October – Berdien
- 18 November – Catalin-Gabriel
- 9 December – Timo
Researchers
Name | Title
---|---
Feldthusen, Rasmus Kristian | Professor
Gunnarsdóttir, Hrefna Dögg | PhD Student
Kianzad, Behrang | Postdoc
Krunke, Helle | Head of Centre, Professor
Maas, Matthijs Michiel | Guest Researcher
Mazibrada, Andrew | PhD Fellow
Minssen, Timo | Head of Centre, Professor
Schwemer, Sebastian Felix | Head of Centre, Associate Professor
Shapiro, Amanda Lee | PhD Student
Slosser, Jacob Livingston | Assistant Professor
Teo, Sue Anne | PhD Student
Ó Cathaoir, Katharina | Associate Professor
Students
- Niels Michael Wee bxq372@alumni.ku.dk
- Emilie Trier Larsen spj718@alumni.ku.dk
- Max Kronfeld sbh614@alumni.ku.dk
- Linda Tiggemann sxf429@alumni.ku.dk
- Brid Kenny pqj297@alumni.ku.dk
- Jonas No Sjølund phf680@alumni.ku.dk
Contact
Associate professor
Hin-Yan Liu
Faculty of Law
University of Copenhagen
South Campus, Building: 6A.4.16
Karen Blixens Plads 16
DK-2300 Copenhagen S
Phone: +45 35 33 76 96
E-mail: hin-yan.liu@jur.ku.dk