No Quality without Equality: Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools

Publication: Conference contribution › Paper › Communication

Denmark has stated that artificial intelligence is a pillar of maintaining a health care system that serves patients with increasingly complex needs (e.g., an aging population with more chronic conditions). AI is expected to deliver higher-quality patient care while, most critically, lowering costs. Denmark's 'signature projects' on artificial intelligence and health in 2021 ranged from a tool for early prediction of the risk that a patient with heart disease will develop anxiety or depression to a natural language processing tool that listens to midwives' conversations with pregnant and laboring women to better identify critical conditions. For a host of reasons (including unrepresentative data, proxies that mask inequities, and ill-conceived algorithmic designs), AI tools, even in the medical field, have documented problems of discrimination against women, people of color, and other marginalized communities.

Often, without regard for its discriminatory implications, the highest benchmark for a successful AI tool is performance 'accuracy,' meaning that the tool accurately answers the problem presented based on given data (e.g., it correctly predicts which cardiac patients will develop anxiety or depression, as evidenced by doctors' diagnoses). In standard AI terminology, protecting patients from discrimination often only registers through metrics of 'fairness' and 'bias,' which ask whether the tool's conclusions translate to the real world across diverse populations. Bias can include much more expansive ideas than legal bias (bias against protected groups), such as a radiology tool that is biased towards patients who were standing, rather than sitting, when their scans were taken. However, standard AI measurements pit accuracy and fairness/debiasing against each other: as an AI tool becomes more fair and debiased, its accuracy suffers. Instead of falling prey to the accuracy-fairness/bias dichotomy, this presentation suggests that a 'quality' AI tool should abandon notions of fairness/bias in favor of anti-discrimination. Pausing to examine the quality of an AI tool from the perspective of multiple populations and needs (that is, those of non-dominant groups) has the potential to improve the quality of health AI tools overall by exposing some of the flawed systemic inequities and assumptions on which they are based.
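To make the contrast concrete, the sketch below (not from the paper) shows the two kinds of metrics the abstract discusses: overall predictive accuracy versus a simple group-level fairness measure (a demographic-parity gap). All names (y_true, y_pred, group) and the toy data are illustrative assumptions, not the authors' method.

```python
# Minimal sketch, assuming a binary classifier flagging cardiac patients as
# 'at risk of anxiety/depression'. Accuracy measures overall correctness;
# the demographic-parity gap measures how differently two patient groups are treated.
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of predictions that match the recorded outcome (e.g., a later diagnosis)."""
    return float(np.mean(y_true == y_pred))

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between group 0 and group 1."""
    rate_0 = float(np.mean(y_pred[group == 0]))
    rate_1 = float(np.mean(y_pred[group == 1]))
    return abs(rate_0 - rate_1)

# Hypothetical toy data: outcomes, model predictions, and group membership.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(f"accuracy: {accuracy(y_true, y_pred):.2f}")
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

Optimizing only the first number can leave the second large; constraining the second typically lowers the first, which is the accuracy-fairness trade-off the abstract argues should not be the end of the analysis.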
Original language: English
Publication date: 20 Apr 2022
Status: Published - 20 Apr 2022
Event: 8th European Conference on Health Law: Quality in healthcare: Can the law help to guarantee safe and reliable care for the patient? - Ghent, Belgium
Duration: 20 Apr 2022 - 22 Apr 2022
https://eahl.eu/eahl-2022-conference

