AI@Care: Law, Ethics and Algorithmic Bias in Healthcare
AI@Care conceptualizes and computationally models how bias and discrimination arise in medical artificial intelligence, why they occur, and how the problem can best be addressed within enforceable legal frameworks and with reliable technological support, so as to advance the democratization of medicine.
The project gathers experts in digital health and law and “will consist of three interrelated subprojects, researching in depth both qualitative and quantitative aspects of the bias and discrimination”, says Prof. Katarzyna Wac, co-principal investigator at the Quality of Life Technologies Lab. The starting point of the project is an existing large-scale, longitudinal dataset representative of the Danish population, together with a set of algorithms for assessing individuals' long-term risk of chronic illness. According to Prof. Timo Minssen, co-principal investigator at CeBIL, “This combination will allow us to provide one of the first conceptual accounts of algorithmic/infrastructural, legal and ethical factors that are relevant to bias and discrimination scenarios in healthcare.”
Subproject 1 conceptualizes the notion of discrimination in human rights law and identifies its implications for AI, starting from its application to the TOF dataset in the Danish context. In addition, subproject 1 investigates how social and structural factors influence algorithms in Denmark and other EU countries. Finally, the subproject explores convergences or inconsistencies between medical AI and the promotion of human rights in healthcare, and offers recommendations for making public healthcare human rights-compliant and/or proposes legal reforms.
Subproject 2 classifies the types of biases and discrimination embedded within algorithmic decision-making in healthcare. Different scenarios and approaches will first be simulated within the TOF dataset.
The subproject also evaluates methods and tools for detecting biases and discrimination embedded within computational methods, focusing on algorithmic explainability, accountability and intelligibility (Abdu et al., 2018). This will help to (1) reveal legal inconsistencies and regulatory gaps with regard to new forms of (hidden) biases, (2) design ‘bias awareness checklists’ for algorithm development, and (3) provide frameworks for developing design blueprints for healthcare AI solutions.
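To give a flavour of the kind of quantitative check such detection methods rely on, the sketch below computes the demographic-parity difference: the gap in positive-prediction rates (e.g. being flagged as high chronic-illness risk) between demographic groups. The data, group labels and threshold are illustrative assumptions and are not drawn from the TOF dataset or from the project's actual tooling.

```python
# Illustrative sketch of one bias-detection check: the demographic-parity
# difference between groups. All data below is hypothetical.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates across groups.

    predictions: list of 0/1 model outputs (e.g. 'high chronic-illness risk')
    groups:      parallel list of group labels (e.g. 'A', 'B')
    """
    rates = {}
    for g in set(groups):
        members = [p for p, m in zip(predictions, groups) if m == g]
        rates[g] = sum(members) / len(members)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical example: group A is flagged 'high risk' far more often.
preds = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
grps = ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']
print(round(demographic_parity_difference(preds, grps), 2))  # 0.6 (0.8 vs 0.2)
```

A value near zero suggests the two groups are flagged at similar rates; a large gap is a signal, though not proof, that the model may treat the groups differently and warrants further legal and ethical scrutiny.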
Subproject 3 will integrate the results of subprojects 1 and 2 to develop ethical guidelines, legal reforms, ‘bias awareness checklists’ for algorithm development, and design blueprints for healthcare AI solutions.
AI@Care has received three-year funding from the DATA+ pool administered by the Rector of the University of Copenhagen.
Period: 2020–2023