Discrimination and racial bias in AI technology: A case study for the WHO

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Discrimination and racial bias in AI technology: A case study for the WHO. / Corrales Compagnucci, Marcelo; Gerke, Sara; Minssen, Timo.

Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization, 2021. p. 54.


Harvard

Corrales Compagnucci, M, Gerke, S & Minssen, T 2021, Discrimination and racial bias in AI technology: A case study for the WHO. in Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. World Health Organization, Geneva, p. 54.

APA

Corrales Compagnucci, M., Gerke, S., & Minssen, T. (2021). Discrimination and racial bias in AI technology: A case study for the WHO. In Ethics and Governance of Artificial Intelligence for Health: WHO Guidance (p. 54). World Health Organization.

Vancouver

Corrales Compagnucci M, Gerke S, Minssen T. Discrimination and racial bias in AI technology: A case study for the WHO. In Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization. 2021. p. 54.

Author

Corrales Compagnucci, Marcelo; Gerke, Sara; Minssen, Timo. / Discrimination and racial bias in AI technology: A case study for the WHO. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization, 2021. p. 54.

Bibtex

@inproceedings{78813a8aac19471a9cbe00ea8949b188,
title = "Discrimination and racial bias in AI technology: A case study for the WHO",
abstract = "In a study published in Science in October 2019, researchers found significant racial bias in an algorithm used widely in the US health-care system to guide health decisions. The algorithm is based on cost (rather than illness) as a proxy for needs; however, the US health-care system spent less money on Black than on white patients with the same level of need. Thus, the algorithm incorrectly assumed that white patients were sicker than equally sick Black patients. The researchers estimated that the racial bias reduced the number of Black patients receiving extra care by more than half. This case highlights the importance of awareness of biases in AI and of mitigating them from the outset to prevent discrimination (based on, e.g., race, gender, age or disability). Biases may be present not only in the algorithm but also, for example, in the data used to train the algorithm. Many other types of bias, such as contextual bias, should be considered. Stakeholders, particularly AI programmers, should apply “ethics by design” and mitigate biases at the outset in developing a new AI technology for health.",
author = "{Corrales Compagnucci}, Marcelo and Sara Gerke and Timo Minssen",
year = "2021",
month = jun,
day = "29",
language = "English",
isbn = "978-92-4-002921-7",
pages = "54",
booktitle = "Ethics and Governance of Artificial Intelligence for Health",
publisher = "World Health Organization",
address = "Geneva, Switzerland",
}

RIS

TY - GEN

T1 - Discrimination and racial bias in AI technology: A case study for the WHO

AU - Corrales Compagnucci, Marcelo

AU - Gerke, Sara

AU - Minssen, Timo

PY - 2021/6/29

Y1 - 2021/6/29

N2 - In a study published in Science in October 2019, researchers found significant racial bias in an algorithm used widely in the US health-care system to guide health decisions. The algorithm is based on cost (rather than illness) as a proxy for needs; however, the US health-care system spent less money on Black than on white patients with the same level of need. Thus, the algorithm incorrectly assumed that white patients were sicker than equally sick Black patients. The researchers estimated that the racial bias reduced the number of Black patients receiving extra care by more than half. This case highlights the importance of awareness of biases in AI and of mitigating them from the outset to prevent discrimination (based on, e.g., race, gender, age or disability). Biases may be present not only in the algorithm but also, for example, in the data used to train the algorithm. Many other types of bias, such as contextual bias, should be considered. Stakeholders, particularly AI programmers, should apply “ethics by design” and mitigate biases at the outset in developing a new AI technology for health.

AB - In a study published in Science in October 2019, researchers found significant racial bias in an algorithm used widely in the US health-care system to guide health decisions. The algorithm is based on cost (rather than illness) as a proxy for needs; however, the US health-care system spent less money on Black than on white patients with the same level of need. Thus, the algorithm incorrectly assumed that white patients were sicker than equally sick Black patients. The researchers estimated that the racial bias reduced the number of Black patients receiving extra care by more than half. This case highlights the importance of awareness of biases in AI and of mitigating them from the outset to prevent discrimination (based on, e.g., race, gender, age or disability). Biases may be present not only in the algorithm but also, for example, in the data used to train the algorithm. Many other types of bias, such as contextual bias, should be considered. Stakeholders, particularly AI programmers, should apply “ethics by design” and mitigate biases at the outset in developing a new AI technology for health.

UR - https://www.who.int/publications/i/item/9789240029200

M3 - Article in proceedings

SN - 978-92-4-002921-7

SP - 54

BT - Ethics and Governance of Artificial Intelligence for Health

PB - World Health Organization

CY - Geneva

ER -

ID: 273291016