Discrimination and racial bias in AI technology: A case study for the WHO
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Discrimination and racial bias in AI technology: A case study for the WHO. / Corrales Compagnucci, Marcelo; Gerke, Sara; Minssen, Timo.
Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization, 2021. p. 54.
RIS
TY - GEN
T1 - Discrimination and racial bias in AI technology: A case study for the WHO
AU - Corrales Compagnucci, Marcelo
AU - Gerke, Sara
AU - Minssen, Timo
PY - 2021/6/29
Y1 - 2021/6/29
N2 - In a study published in Science in October 2019, researchers found significant racial bias in an algorithm used widely in the US health-care system to guide health decisions. The algorithm is based on cost (rather than illness) as a proxy for need; however, the US health-care system spent less money on Black than on white patients with the same level of need. Thus, the algorithm incorrectly assumed that white patients were sicker than equally sick Black patients. The researchers estimated that the racial bias reduced the number of Black patients receiving extra care by more than half. This case highlights the importance of being aware of biases in AI and mitigating them from the outset to prevent discrimination (based on, for example, race, gender, age or disability). Biases may be present not only in the algorithm but also, for example, in the data used to train the algorithm. Many other types of bias, such as contextual bias, should also be considered. Stakeholders, particularly AI programmers, should apply "ethics by design" and mitigate biases at the outset when developing a new AI technology for health.
UR - https://www.who.int/publications/i/item/9789240029200
M3 - Article in proceedings
SN - 978-92-4-002921-7
SP - 54
BT - Ethics and Governance of Artificial Intelligence for Health
PB - World Health Organization
CY - Geneva
ER -