Transparency of machine-learning in healthcare: The GDPR & European health law

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Transparency of machine-learning in healthcare: The GDPR & European health law. / Mourby, Miranda; Ó Cathaoir, Katharina; Collin, Catherine Bjerre.

In: Computer Law & Security Review, Vol. 43, 2021.


Harvard

Mourby, M, Ó Cathaoir, K & Collin, CB 2021, 'Transparency of machine-learning in healthcare: The GDPR & European health law', Computer Law & Security Review, vol. 43. https://doi.org/10.1016/j.clsr.2021.105611

APA

Mourby, M., Ó Cathaoir, K., & Collin, C. B. (2021). Transparency of machine-learning in healthcare: The GDPR & European health law. Computer Law & Security Review, 43. https://doi.org/10.1016/j.clsr.2021.105611

Vancouver

Mourby M, Ó Cathaoir K, Collin CB. Transparency of machine-learning in healthcare: The GDPR & European health law. Computer Law & Security Review. 2021;43. https://doi.org/10.1016/j.clsr.2021.105611

Author

Mourby, Miranda; Ó Cathaoir, Katharina; Collin, Catherine Bjerre. / Transparency of machine-learning in healthcare: The GDPR & European health law. In: Computer Law & Security Review. 2021; Vol. 43.

BibTeX

@article{fb62fbc44a4c417c96e95be720bd7f9c,
title = "Transparency of machine-learning in healthcare: The GDPR & European health law",
abstract = "Machine-learning ({\textquoteleft}ML{\textquoteright}) models are powerful tools which can support personalised clinical judgments, as well as patients{\textquoteright} choices about their healthcare. Concern has been raised, however, as to their {\textquoteleft}black box{\textquoteright} nature, in which calculations are so complex they are difficult to understand and independently verify. In considering the use of ML in healthcare, we divide the question of transparency into three different scenarios: 1) Solely automated decisions. We suggest these will be unusual in healthcare, as Article 22(4) of the General Data Protection Regulation presents a high bar. However, if solely automated decisions are made (e.g. for inpatient triage), data subjects will have a right to {\textquoteleft}meaningful information{\textquoteright} about the logic involved. 2) Clinical decisions. These are decisions made ultimately by clinicians—such as diagnosis—and the standard of transparency under the GDPR is lower due to this human mediation. 3) Patient decisions. Decisions about treatment are ultimately taken by the patient or their representative, albeit in dialogue with clinicians. Here, the patient will require a personalised level of medical information, depending on the severity of the risk, and how much they wish to know. In the final category of decisions made by patients, we suggest European healthcare law sets a more personalised standard of information requirement than the GDPR. Clinical information must be tailored to the individual patient according to their needs and priorities; there is no monolithic {\textquoteleft}explanation{\textquoteright} of risk under healthcare law. When giving advice based (even partly) on an ML model, clinicians must have a sufficient grasp of the medically relevant factors involved in the model output to offer patients this personalised level of medical information. We use the UK, Ireland, Denmark, Norway and Sweden as examples of European health law jurisdictions which require this personalised transparency to support patients{\textquoteright} rights to make informed choices. This adds to the argument for post-hoc, rationale explanations of ML to support healthcare decisions in all three scenarios.",
author = "Mourby, Miranda and {{\'O} Cathaoir}, Katharina and Collin, {Catherine Bjerre}",
year = "2021",
doi = "10.1016/j.clsr.2021.105611",
language = "English",
volume = "43",
journal = "Computer Law \& Security Review",
issn = "0267-3649",
publisher = "Elsevier Advanced Technology",
}
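
For readers working in LaTeX, the exported entry above can be cited directly; the sketch below shows a minimal document, assuming the entry has been saved to a file named references.bib (the citation key is the one generated in this export):

% Minimal LaTeX/BibTeX usage sketch.
% Assumes the @article entry above has been saved to references.bib.
\documentclass{article}
\begin{document}
Mourby et al.~\cite{fb62fbc44a4c417c96e95be720bd7f9c} distinguish three
transparency scenarios for machine learning in healthcare.
\bibliographystyle{plain}
\bibliography{references}
\end{document}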

RIS

TY - JOUR

T1 - Transparency of machine-learning in healthcare

T2 - The GDPR & European health law

AU - Mourby, Miranda

AU - Ó Cathaoir, Katharina

AU - Collin, Catherine Bjerre

PY - 2021

Y1 - 2021

N2 - Machine-learning (‘ML’) models are powerful tools which can support personalised clinical judgments, as well as patients’ choices about their healthcare. Concern has been raised, however, as to their ‘black box’ nature, in which calculations are so complex they are difficult to understand and independently verify. In considering the use of ML in healthcare, we divide the question of transparency into three different scenarios: 1) Solely automated decisions. We suggest these will be unusual in healthcare, as Article 22(4) of the General Data Protection Regulation presents a high bar. However, if solely automated decisions are made (e.g. for inpatient triage), data subjects will have a right to ‘meaningful information’ about the logic involved. 2) Clinical decisions. These are decisions made ultimately by clinicians—such as diagnosis—and the standard of transparency under the GDPR is lower due to this human mediation. 3) Patient decisions. Decisions about treatment are ultimately taken by the patient or their representative, albeit in dialogue with clinicians. Here, the patient will require a personalised level of medical information, depending on the severity of the risk, and how much they wish to know. In the final category of decisions made by patients, we suggest European healthcare law sets a more personalised standard of information requirement than the GDPR. Clinical information must be tailored to the individual patient according to their needs and priorities; there is no monolithic ‘explanation’ of risk under healthcare law. When giving advice based (even partly) on an ML model, clinicians must have a sufficient grasp of the medically relevant factors involved in the model output to offer patients this personalised level of medical information. We use the UK, Ireland, Denmark, Norway and Sweden as examples of European health law jurisdictions which require this personalised transparency to support patients’ rights to make informed choices. This adds to the argument for post-hoc, rationale explanations of ML to support healthcare decisions in all three scenarios.

AB - Machine-learning (‘ML’) models are powerful tools which can support personalised clinical judgments, as well as patients’ choices about their healthcare. Concern has been raised, however, as to their ‘black box’ nature, in which calculations are so complex they are difficult to understand and independently verify. In considering the use of ML in healthcare, we divide the question of transparency into three different scenarios: 1) Solely automated decisions. We suggest these will be unusual in healthcare, as Article 22(4) of the General Data Protection Regulation presents a high bar. However, if solely automated decisions are made (e.g. for inpatient triage), data subjects will have a right to ‘meaningful information’ about the logic involved. 2) Clinical decisions. These are decisions made ultimately by clinicians—such as diagnosis—and the standard of transparency under the GDPR is lower due to this human mediation. 3) Patient decisions. Decisions about treatment are ultimately taken by the patient or their representative, albeit in dialogue with clinicians. Here, the patient will require a personalised level of medical information, depending on the severity of the risk, and how much they wish to know. In the final category of decisions made by patients, we suggest European healthcare law sets a more personalised standard of information requirement than the GDPR. Clinical information must be tailored to the individual patient according to their needs and priorities; there is no monolithic ‘explanation’ of risk under healthcare law. When giving advice based (even partly) on an ML model, clinicians must have a sufficient grasp of the medically relevant factors involved in the model output to offer patients this personalised level of medical information. We use the UK, Ireland, Denmark, Norway and Sweden as examples of European health law jurisdictions which require this personalised transparency to support patients’ rights to make informed choices. This adds to the argument for post-hoc, rationale explanations of ML to support healthcare decisions in all three scenarios.

U2 - 10.1016/j.clsr.2021.105611

DO - 10.1016/j.clsr.2021.105611

M3 - Journal article

VL - 43

JO - Computer Law & Security Review

JF - Computer Law & Security Review

SN - 0267-3649

ER -
