Transparency of machine-learning in healthcare: The GDPR & European health law
Research output: Contribution to journal › Journal article › Research › peer-review
Standard
Transparency of machine-learning in healthcare: The GDPR & European health law. / Mourby, Miranda; Ó Cathaoir, Katharina; Collin, Catherine Bjerre.
In: Computer Law & Security Review, Vol. 43, 2021.
RIS
TY - JOUR
T1 - Transparency of machine-learning in healthcare
T2 - The GDPR & European health law
AU - Mourby, Miranda
AU - Ó Cathaoir, Katharina
AU - Collin, Catherine Bjerre
PY - 2021
Y1 - 2021
N2 - Machine-learning (‘ML’) models are powerful tools which can support personalised clinical judgments, as well as patients’ choices about their healthcare. Concern has been raised, however, as to their ‘black box’ nature, in which calculations are so complex they are difficult to understand and independently verify. In considering the use of ML in healthcare, we divide the question of transparency into three different scenarios:
1) Solely automated decisions. We suggest these will be unusual in healthcare, as Article 22(4) of the General Data Protection Regulation presents a high bar. However, if solely automatic decisions are made (e.g. for inpatient triage), data subjects will have a right to ‘meaningful information’ about the logic involved.
2) Clinical decisions. These are decisions made ultimately by clinicians—such as diagnosis—and the standard of transparency under the GDPR is lower due to this human mediation.
3) Patient decisions. Decisions about treatment are ultimately taken by the patient or their representative, albeit in dialogue with clinicians. Here, the patient will require a personalised level of medical information, depending on the severity of the risk, and how much they wish to know.
In the final category of decisions made by patients, we suggest European healthcare law sets a more personalised standard of information requirement than the GDPR. Clinical information must be tailored to the individual patient according to their needs and priorities; there is no monolithic ‘explanation’ of risk under healthcare law. When giving advice based (even partly) on a ML model, clinicians must have a sufficient grasp of the medically-relevant factors involved in the model output to offer patients this personalised level of medical information. We use the UK, Ireland, Denmark, Norway and Sweden as examples of European health law jurisdictions which require this personalised transparency to support patients’ rights to make informed choices.
This adds to the argument for post-hoc, rationale explanations of ML to support healthcare decisions in all three scenarios.
U2 - 10.1016/j.clsr.2021.105611
DO - 10.1016/j.clsr.2021.105611
M3 - Journal article
VL - 43
JO - Computer Law and Security Review
JF - Computer Law and Security Review
SN - 0267-3649
ER -