No Quality without Equality: Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools

Research output: Contribution to conference › Paper › Communication

Denmark has positioned artificial intelligence as a pillar of maintaining a health care system that serves patients with ever-greater needs (e.g., an aging population and more patients with chronic conditions). On this view, AI will deliver higher-quality patient care while, most critically, lowering costs. Denmark’s ‘signature projects’ on artificial intelligence and health in 2021 ranged from a tool for early prediction of the risk that a patient with heart disease will develop anxiety or depression to a natural language processing tool that listens to midwives’ conversations with pregnant and laboring women to better identify critical conditions. For a host of reasons (including unrepresentative data, proxies that mask inequities, and ill-conceived algorithmic designs), AI tools, even in the medical field, have had documented problems of discrimination against women, people of color, and other marginalized communities.

Often, without regard for discriminatory implications, the highest benchmark for a successful AI tool is performance ‘accuracy’: the tool correctly answers the problem presented based on the given data (e.g., it correctly predicts which cardiac patients will develop anxiety or depression, as evidenced by doctors’ diagnoses). In standard AI terminology, protecting patients from discrimination often registers only through metrics of ‘fairness’ and ‘bias’, that is, whether the tool’s conclusions translate to the real world across diverse populations. Bias can include far more expansive ideas than legal bias (bias against protected groups), such as a radiology tool that is biased towards patients who were standing, as opposed to sitting, when their scans were taken. However, standard AI measurements pit accuracy and fairness/debiasing against each other: as an AI tool becomes fairer and less biased, its accuracy suffers. Instead of falling prey to the accuracy-fairness/bias dichotomy, this presentation suggests that a ‘quality’ AI tool should abandon notions of fairness/bias in favor of anti-discrimination. Pausing to assess the quality of an AI tool from the perspective of multiple populations and needs (that is, those of non-dominant groups) has the potential to improve the quality of health AI tools overall by exposing some of the flawed systemic inequities and assumptions on which they are based.
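To make the accuracy-versus-fairness framing concrete, the sketch below (not from the paper; it uses entirely synthetic data and assumes numpy is available) shows how ‘accuracy’ and one common group-fairness measure, the demographic-parity gap, are typically computed for a binary classifier, and why optimising the first can leave the second untouched.

```python
# Illustrative sketch only (not from the paper): computing 'accuracy' and a
# simple group-fairness gap for a binary classifier. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical protected attribute: 0 = majority group, 1 = minority group.
group = rng.integers(0, 2, size=n)
# Hypothetical ground truth (e.g., doctor-diagnosed anxiety/depression).
y_true = rng.integers(0, 2, size=n)
# Hypothetical model predictions that over-predict for the minority group.
y_pred = np.where(
    group == 1,
    (rng.random(n) < 0.7).astype(int),  # skewed predictions for group 1
    y_true,                             # near-perfect predictions for group 0
)

# 'Accuracy': fraction of predictions that match the ground truth.
accuracy = float(np.mean(y_pred == y_true))

# Demographic-parity gap: difference in positive-prediction rates by group.
rate_g0 = float(np.mean(y_pred[group == 0]))
rate_g1 = float(np.mean(y_pred[group == 1]))
parity_gap = abs(rate_g0 - rate_g1)

print(f"accuracy         = {accuracy:.3f}")
print(f"positive rate g0 = {rate_g0:.3f}, g1 = {rate_g1:.3f}")
print(f"parity gap       = {parity_gap:.3f}")
```

In this hypothetical setup the tool scores reasonably well on overall accuracy while producing noticeably different positive-prediction rates across the two groups; narrowing that gap requires changing predictions and typically lowers raw accuracy, which is the trade-off the abstract critiques.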
Original language: English
Publication date: 20 Apr 2022
Publication status: Published - 20 Apr 2022
Event: 8th European Conference on Health Law: Quality in healthcare: Can the law help to guarantee safe and reliable care for the patient? - Ghent, Belgium
Duration: 20 Apr 2022 - 22 Apr 2022
https://eahl.eu/eahl-2022-conference

Conference

Conference: 8th European Conference on Health Law: Quality in healthcare
Country: Belgium
City: Ghent
Period: 20/04/2022 - 22/04/2022
Internet address: https://eahl.eu/eahl-2022-conference
