No Quality without Equality: Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools

Publication: Contribution to conference › Paper › Communication

Standard

No Quality without Equality : Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools. / Shapiro, Amanda Lee.

2022. 125-140. Paper presented at the 8th European Conference on Health Law: Quality in healthcare, Ghent, Belgium.

Harvard

Shapiro, AL 2022, 'No Quality without Equality: Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools', Paper presented at the 8th European Conference on Health Law: Quality in healthcare, Ghent, Belgium, 20/04/2022 - 22/04/2022 pp. 125-140.

APA

Shapiro, A. L. (2022). No Quality without Equality: Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools. 125-140. Paper presented at the 8th European Conference on Health Law: Quality in healthcare, Ghent, Belgium.

Vancouver

Shapiro AL. No Quality without Equality: Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools. 2022. Paper presented at the 8th European Conference on Health Law: Quality in healthcare, Ghent, Belgium.

Author

Shapiro, Amanda Lee. / No Quality without Equality : Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools. Paper presented at the 8th European Conference on Health Law: Quality in healthcare, Ghent, Belgium.

Bibtex

@conference{5ad549e582b64921868cc2ecf5beeae5,
title = "No Quality without Equality: Anti-discrimination as a Marker of Quality in Artificial Intelligence Health Tools",
abstract = "Denmark has stated that artificial intelligence is the pillar of maintaining a health care system serving patients with increasingly greater needs (e.g., an aging population and one with more chronic conditions). AI will deliver higher quality patient care, while at the same time—and most critically—lowering costs. Denmark{\textquoteright}s {\textquoteleft}signature projects{\textquoteright} on artificial intelligence and health in 2021 ranged from a tool to predict (early) the risk that a patient with heart disease develops anxiety or depression to a natural language processing tool that listens to midwives{\textquoteright} conversations with pregnant and laboring women to better identify critical conditions. For a host of reasons (including unrepresentative data, proxies that mask inequities, and ill-conceived algorithmic designs), AI tools even in the medical field have had documented problems of discrimination against women, people of color, and other marginalized communities. Often, without regard for its discriminatory implications, the highest benchmark for a successful AI tool is performance {\textquoteleft}accuracy,{\textquoteright} meaning that the tool accurately answers the problem presented based on given data (e.g., it correctly predicts which cardiac patients will develop anxiety or depression, as evidenced by doctors{\textquoteright} diagnoses). In standard AI terminology, protecting patients from discrimination often only registers from metrics of {\textquoteleft}fairness{\textquoteright} and {\textquoteleft}bias{\textquoteright}—that the tool{\textquoteright}s conclusions are translatable in the real world with diverse populations. Bias can include much more expansive ideas than legal bias (that against protected groups), such as a radiology tool that{\textquoteright}s biased towards patients who were standing, as opposed to sitting, when their scans were taken. However, standard AI measurements pit accuracy and fairness/debiasing against each other: if an AI tool becomes more fair and debiased, its accuracy suffers. Instead of falling prey to the accuracy-fairness/bias dichotomy, this presentation suggests that a {\textquoteleft}quality{\textquoteright} AI tool should abandon notions of fairness/bias in favor of anti-discrimination. Pausing to look at the quality of an AI tool from multiple populations and needs (that is, those of non-dominant groups) has the potential to improve the quality of health AI tools overall by exposing some of the flawed systemic inequities and assumptions on which they are based.",
author = "Shapiro, {Amanda Lee}",
year = "2022",
month = apr,
day = "20",
language = "English",
pages = "125--140",
note = "8th European Conference on Health Law: Quality in healthcare : Can the law help to guarantee safe and reliable care for the patient? ; Conference date: 20-04-2022 Through 22-04-2022",
url = "https://eahl.eu/eahl-2022-conference",

}
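
The abstract contrasts overall performance 'accuracy' with fairness and bias metrics and argues that the two are usually treated as a trade-off. As a purely illustrative Python sketch, not taken from the cited paper and using invented patient data and group labels, the example below shows how one strong overall accuracy figure can coexist with sharply uneven per-group accuracy, the kind of disparity the presentation suggests a quality assessment should surface.

# Illustrative sketch only: all data and group labels below are invented,
# not drawn from the cited paper. It shows how overall accuracy can mask
# uneven performance across patient groups.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cardiac patients: group 0 is a large majority, group 1 a small minority.
group = np.concatenate([np.zeros(900, dtype=int), np.ones(100, dtype=int)])
y_true = rng.integers(0, 2, size=group.size)  # "develops anxiety or depression"

# A toy model that is correct 90% of the time for group 0 but only 60% for group 1.
p_correct = np.where(group == 0, 0.9, 0.6)
hit = rng.random(group.size) < p_correct
y_pred = np.where(hit, y_true, 1 - y_true)

overall_accuracy = float((y_pred == y_true).mean())
per_group_accuracy = {g: float((y_pred[group == g] == y_true[group == g]).mean()) for g in (0, 1)}
accuracy_gap = abs(per_group_accuracy[0] - per_group_accuracy[1])

print(f"overall accuracy:   {overall_accuracy:.2f}")  # dominated by the majority group
print(f"per-group accuracy: {per_group_accuracy}")
print(f"gap between groups: {accuracy_gap:.2f}")      # the disparity that accuracy alone hides

In the abstract's terms, the gap reported by the last two lines is the signal an anti-discrimination check would treat as a quality problem, even when the headline accuracy looks acceptable.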

RIS

TY - CONF

T1 - No Quality without Equality

T2 - 8th European Conference on Health Law: Quality in healthcare

AU - Shapiro, Amanda Lee

PY - 2022/4/20

Y1 - 2022/4/20

N2 - Denmark has stated that artificial intelligence is the pillar of maintaining a health care system serving patients with increasingly greater needs (e.g., an aging population and one with more chronic conditions). AI will deliver higher quality patient care, while at the same time—and most critically—lowering costs. Denmark’s ‘signature projects’ on artificial intelligence and health in 2021 ranged from a tool to predict (early) the risk that a patient with heart disease develops anxiety or depression to a natural language processing tool that listens to midwives’ conversations with pregnant and laboring women to better identify critical conditions. For a host of reasons (including unrepresentative data, proxies that mask inequities, and ill-conceived algorithmic designs), AI tools even in the medical field have had documented problems of discrimination against women, people of color, and other marginalized communities. Often, without regard for its discriminatory implications, the highest benchmark for a successful AI tool is performance ‘accuracy,’ meaning that the tool accurately answers the problem presented based on given data (e.g., it correctly predicts which cardiac patients will develop anxiety or depression, as evidenced by doctors’ diagnoses). In standard AI terminology, protecting patients from discrimination often only registers from metrics of ‘fairness’ and ‘bias’—that the tool’s conclusions are translatable in the real world with diverse populations. Bias can include much more expansive ideas than legal bias (that against protected groups), such as a radiology tool that’s biased towards patients who were standing, as opposed to sitting, when their scans were taken. However, standard AI measurements pit accuracy and fairness/debiasing against each other: if an AI tool becomes more fair and debiased, its accuracy suffers. Instead of falling prey to the accuracy-fairness/bias dichotomy, this presentation suggests that a ‘quality’ AI tool should abandon notions of fairness/bias in favor of anti-discrimination. Pausing to look at the quality of an AI tool from multiple populations and needs (that is, those of non-dominant groups) has the potential to improve the quality of health AI tools overall by exposing some of the flawed systemic inequities and assumptions on which they are based.

AB - Denmark has stated that artificial intelligence is the pillar of maintaining a health care system serving patients with increasingly greater needs (e.g., an aging population and one with more chronic conditions). AI will deliver higher quality patient care, while at the same time—and most critically—lowering costs. Denmark’s ‘signature projects’ on artificial intelligence and health in 2021 ranged from a tool to predict (early) the risk that a patient with heart disease develops anxiety or depression to a natural language processing tool that listens to midwives’ conversations with pregnant and laboring women to better identify critical conditions. For a host of reasons (including unrepresentative data, proxies that mask inequities, and ill-conceived algorithmic designs), AI tools even in the medical field have had documented problems of discrimination against women, people of color, and other marginalized communities. Often, without regard for its discriminatory implications, the highest benchmark for a successful AI tool is performance ‘accuracy,’ meaning that the tool accurately answers the problem presented based on given data (e.g., it correctly predicts which cardiac patients will develop anxiety or depression, as evidenced by doctors’ diagnoses). In standard AI terminology, protecting patients from discrimination often only registers from metrics of ‘fairness’ and ‘bias’—that the tool’s conclusions are translatable in the real world with diverse populations. Bias can include much more expansive ideas than legal bias (that against protected groups), such as a radiology tool that’s biased towards patients who were standing, as opposed to sitting, when their scans were taken. However, standard AI measurements pit accuracy and fairness/debiasing against each other: if an AI tool becomes more fair and debiased, its accuracy suffers. Instead of falling prey to the accuracy-fairness/bias dichotomy, this presentation suggests that a ‘quality’ AI tool should abandon notions of fairness/bias in favor of anti-discrimination. Pausing to look at the quality of an AI tool from multiple populations and needs (that is, those of non-dominant groups) has the potential to improve the quality of health AI tools overall by exposing some of the flawed systemic inequities and assumptions on which they are based.

M3 - Paper

SP - 125

EP - 140

Y2 - 20 April 2022 through 22 April 2022

ER -

ID: 368900234