Current Ethical and Legal Issues in Medical AI


Leading international experts discuss legal and ethical issues of AI applications in medicine.

Details

Time: 17 June 2024, 09:00-11:00

Place: Room 6B-0-6 (Moot Court room), Ground Floor, Njalsgade 76, DK-2300 Copenhagen S and Zoom

Organizer: Timo Minssen (Chair & Moderator) & CeBIL - Centre for Advanced Studies in Bioscience Innovation Law & Lifesciencelaw.dk

Agenda

09:00 - 09:05 Introduction
by CeBIL Director Timo Minssen
09:05 - 09:25 Must Medical AI be Explainable?
by Glenn Cohen
09:25 - 09:45 Will the EU AI Act help to eliminate data bias in medical AI?
by Emilia Niemiec
09:45 - 10:05 Governance Standards for Medical AI: The Role of Humans in the Loop
by Nicholson Price
10:05 - 10:25 Fit for Purpose? Analysing the current regulatory landscape for consumer wearables and potential future directions
by Hannah Louise Smith
10:25 - 10:50 Panel Discussion
10:50 - open end: Mingling with light refreshments

Registration for in-person participation - fully booked, no more seats available

Please register no later than 14 June 2024 at 14:00 using this registration form.

Registration for online participation

Please register no later than 17 June 2024 at 08:30 using this registration form.

Further information

Must Medical AI be Explainable?

Explainability in artificial intelligence and machine learning ("AI/ML") is emerging as a leading area of academic research and a topic of significant regulatory concern. Indeed, a near-consensus exists in favor of explainable AI/ML among academics, governments, and civil society groups. In this project, we challenge this prevailing trend. We discuss why explainable AI often cannot achieve what it promises. There is, however, an alternative, interpretable AI/ML, which we will distinguish from explainable AI/ML. Interpretable AI/ML can be useful where it is appropriate, but it involves real trade-offs in algorithmic performance: in some instances (in medicine and elsewhere), adopting an interpretable AI/ML system may mean adopting a less accurate one. We argue that it is better to face those trade-offs head on.

Glenn Cohen, JD, PhD, is Deputy Dean and James A. Attwood and Leslie Williams Professor of Law at Harvard Law School, as well as Faculty Director of the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics. His work focuses on how the law grapples with new medical technologies, including reproductive technologies, psychedelics, and artificial intelligence. He is co-PI of the Project on Precision Medicine, Artificial Intelligence, and the Law at the Petrie-Flom Center at Harvard Law School and a Core Partner at the University of Copenhagen’s International Collaborative Bioscience Innovation & Law (Inter-CeBIL) Programme.


Will the EU AI Act help to eliminate data bias in medical AI?

It is widely acknowledged that AI systems are only as good as the data they are trained on. Training an AI system on biased datasets (i.e., datasets skewed toward certain subgroups, defined by, for example, age or ethnicity) can lead to underperformance of the system on the subgroups underrepresented in the data. To address this issue, the EU AI Act includes provisions focusing on data quality and bias. The aim of this talk is to explore the implications of these provisions for medical AI systems.

Emilia Niemiec is a postdoctoral researcher at the Centre for Advanced Studies in Bioscience Innovation Law at the University of Copenhagen. She specialises in legal and ethical issues in biomedical research and technologies, in particular in medical AI and genomics. Emilia has a multidisciplinary background combining degrees and experience in Biotechnology (MSc, BEng, Warsaw University of Life Sciences) and Bioethics (MSc, KU Leuven). She also holds a PhD in Law, Science and Technology from the University of Bologna.


Governance Standards for Medical AI: The Role of Humans in the Loop

As medical AI begins to mature as a health-care tool, the task of governance grows increasingly important. Ensuring that medical AI works, works where it’s used, and works for the patient in the moment is a challenging, multifaceted task. Some of this governance can be centralized, in review by the FDA or by national accreditation labs, for instance. Some must be local, performed by the hospital or health system about to use the product in its own unique environment. But a large amount of governance is left to the individual provider in the room, the human in the loop who presumably knows the patient and the health system environment, and who can ensure that the AI system is being used in a safe and effective manner. Unfortunately, placing such a burden on the physician poorly reflects the reality of modern medical technology and practice, and law and policy must take that reality into account.

Nicholson Price is a Professor of Law at the University of Michigan. He studies how law shapes biomedical innovation, especially medical AI. Nicholson teaches patents, health law, innovation in the life sciences, AI and the law, and science fiction and the law. He holds a PhD in Biological Sciences and a JD, both from Columbia, and an AB from Harvard. He is also a research fellow at the University of Copenhagen’s International Collaborative Bioscience Innovation & Law (Inter-CeBIL) Programme.


Fit for Purpose? Analysing the current regulatory landscape for consumer wearables and potential future directions

As Google announces its plans to integrate a Personal Health Large Language Model into its consumer wearable devices and fitness applications, there is an urgent need to explore the current regulatory landscape for these technologies to ensure that users’ rights and needs are appropriately considered. This talk will highlight some of the gaps, grey areas, and challenges associated with the current regulatory approach, which fails to properly scrutinize 1) how companies may have obfuscated the sensitive nature of some of the data extracted from users under the guise of “wellness” data and 2) their disruption of traditional relationships and institutions in healthcare settings. These findings are presented as necessary preliminary work toward better understanding the ways in which aspects of medical AI may enter wider society, and the key actors and their respective roles in its rollout.

Hannah Louise Smith is a postdoc at Inter-CeBIL, where she draws on her socio-legal background to explore novel and responsive ways to regulate emerging technologies that promote positive societal outcomes. She holds a DPhil, MSt, BCL, and BA from the University of Oxford and previously worked at the University of Western Australia’s Tech & Policy Lab.

 


Timo Minssen

Chair & Moderator

Timo Minssen is Professor of Law at the University of Copenhagen (UCPH) and the Founding Director of UCPH's Centre for Advanced Studies in Bioscience Innovation Law (CeBIL). He is also an LML Research Affiliate at the University of Cambridge (UK) and an Inter-CeBIL Research Affiliate at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School (US). His research, supervision, teaching, and part-time advisory practice concentrate on intellectual property, competition, and regulatory law, as well as on the law and ethics of emerging health and life science technologies, such as genome editing, big data, artificial intelligence, and quantum technology.