Léonard Van Rompaey
JUR - CEPRI - Centre for Private Governance
Karen Blixens Plads 16, 2300 København S, Building 6B (Section 3), 6B-3-57
Private Governance for Responsible and Trustworthy Artificial Intelligence
This industrial postdoctoral project explores how trust in artificial intelligence (AI), as in other emerging technologies, is a key factor in public acceptance and uptake. Trust will be crucial to the commercial success of AI producers and to the development of the tech industry at both the national and European levels. For those reasons, it is also geopolitically vital as the AI technology race rages across continents. At the same time, AI disrupts our legal systems and generates loopholes in our liability regimes (including product liability). Unclear or unfair liability regimes risk severely undermining trust. The precepts of Trustworthy AI could be used to secure those liability regimes while simultaneously fostering trust at other levels. The EU Commission's proposed AI Act moves in that direction by encouraging and framing private governance, which will be more efficient at finding the specific implementation frameworks required to foster trustworthy AI, and thereby clearer liability, in each industrial sector.
PhD thesis: Discretionary Robots – Conceptual Legal Challenges for the Regulation of Machine Behaviour
Artificial intelligence is an exceptionally capable technology compared to previous human creations, and it is spreading into all fields of human activity. Insofar as robots can discretionarily choose what they learn and what they decide while accomplishing a task, they can be described as having intentionality and agency. This creates conceptual and systemic disruption for our legal systems, which have never had to deal with these kinds of qualities in objects before. The problem is that, at this stage, most research and regulatory efforts focus on topical and symptomatic legal problems instead of addressing the legal implications of those capacities for agency and discretion.
Discretion in decision-making tends to disqualify robots as objects, which in turn explains why we consider qualifying them as persons: our legal systems traditionally attach those levels of agency and discretion to persons. Yet while robots display some human attributes that disqualify them as objects, they also lack important qualities required to properly interact with and be affected by our legal systems. This in turn disqualifies them as persons.
Robots thus sit oddly between the categories of objects and persons: they are conceptually ambivalent, and this, the thesis argues, is the source of the conceptual legal disruption, which is in turn the source of the topical and symptomatic legal disruption. The thesis aims to properly identify the effects and sources of the legal disruption the technology creates, which will in turn help craft regulations that maximise the benefits and minimise the harms of AI.
- Robot and AI Law
- Industrial standards
- AI ethics
- Engineering design processes
- Theory of Law
Teaching and supervision areas
- International Public Law (BA)