Discrimination and racial bias in AI technology: A case study for the WHO

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

In a study published in Science in October 2019, researchers found significant racial bias in an algorithm used widely in the US health-care system to guide health decisions. The algorithm uses cost (rather than illness) as a proxy for health needs; however, the US health-care system spent less money on Black patients than on white patients with the same level of need. As a result, the algorithm incorrectly inferred that white patients were sicker than equally sick Black patients. The researchers estimated that this racial bias reduced the number of Black patients identified for extra care by more than half.
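The proxy effect described above can be illustrated with a small sketch. The numbers below are hypothetical (not the study's data): two groups of patients with identical illness severity, but with historically lower spending on group B. A risk score trained on cost as its label effectively ranks patients by cost, so it ranks a less sick group-A patient above an equally sick group-B patient.

```python
# Hypothetical toy data (not from the study): equally sick patients,
# but historically ~50% less is spent on group B than on group A.
patients = [
    # (group, illness_severity, historical_cost)
    ("A", 5, 5000), ("A", 3, 3000), ("A", 1, 1000),
    ("B", 5, 2500), ("B", 3, 1500), ("B", 1, 500),
]

# A risk score trained on cost as the label effectively ranks by cost...
ranked_by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
# ...whereas a score trained on illness would rank by actual need.
ranked_by_need = sorted(patients, key=lambda p: p[1], reverse=True)

# Suppose the top 2 highest-"risk" patients are enrolled in extra care.
enrolled = ranked_by_cost[:2]
print([p[0] for p in enrolled])  # both slots go to group A
# The sickest group-B patient (severity 5) is skipped, because the
# cost proxy makes them look healthier than a severity-3 group-A patient.
```

Under these assumed numbers, ranking by need would enroll both severity-5 patients, one from each group; ranking by cost enrolls two group-A patients and excludes the equally sick group-B patient.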

This case highlights the importance of recognizing biases in AI and mitigating them from the outset to prevent discrimination (based on, for example, race, gender, age or disability). Biases may be present not only in the algorithm itself but also, for example, in the data used to train it. Many other types of bias, such as contextual bias, should also be considered. Stakeholders, particularly AI programmers, should apply "ethics by design" and mitigate biases from the outset when developing a new AI technology for health.
Title: Ethics and Governance of Artificial Intelligence for Health: WHO Guidance
Number of pages: 1
Publisher: World Health Organization
Publication date: 29 Jun 2021
ISBN (Print): 978-92-4-002921-7
ISBN (Electronic): 978-92-4-002920-0
Status: Published - 29 Jun 2021
