Explainable AI and Law: An Evidential Survey

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Explainable AI and Law : An Evidential Survey. / Richmond, Karen McGregor; Muddamsetty, Satya Mahesh; Gammeltoft-Hansen, Thomas; Olsen, Henrik Palmer; Moeslund, Thomas B.

In: Digital Society, Vol. 3, No. 1, 2024.

Harvard

Richmond, KM, Muddamsetty, SM, Gammeltoft-Hansen, T, Olsen, HP & Moeslund, TB 2024, 'Explainable AI and Law: An Evidential Survey', Digital Society, vol. 3, no. 1. https://doi.org/10.1007/s44206-023-00081-z

APA

Richmond, K. M., Muddamsetty, S. M., Gammeltoft-Hansen, T., Olsen, H. P., & Moeslund, T. B. (2024). Explainable AI and Law: An Evidential Survey. Digital Society, 3(1). https://doi.org/10.1007/s44206-023-00081-z

Vancouver

Richmond KM, Muddamsetty SM, Gammeltoft-Hansen T, Olsen HP, Moeslund TB. Explainable AI and Law: An Evidential Survey. Digital Society. 2024;3(1). https://doi.org/10.1007/s44206-023-00081-z

Author

Richmond, Karen McGregor ; Muddamsetty, Satya Mahesh ; Gammeltoft-Hansen, Thomas ; Olsen, Henrik Palmer ; Moeslund, Thomas B. / Explainable AI and Law : An Evidential Survey. In: Digital Society. 2024 ; Vol. 3, No. 1.

BibTeX

@article{05cd726bc1034aeb974914f8ada6348f,
title = "Explainable AI and Law: An Evidential Survey",
abstract = "Decisions made by legal adjudicators and administrative decision-makers often found upon a reservoir of stored experiences, from which is drawn a tacit body of expert knowledge. Such expertise may be implicit and opaque, even to the decision-makers themselves, and generates obstacles when implementing AI for automated decision-making tasks within the legal field, since, to the extent that AI-powered decision-making tools must found upon a stock of domain expertise, opacities may proliferate. This raises particular issues within the legal domain, which requires a high level of accountability, thus transparency. This requires enhanced explainability, which entails that a heterogeneous body of stakeholders understand the mechanism underlying the algorithm to the extent that an explanation can be furnished. However, the “black-box” nature of some AI variants, such as deep learning, remains unresolved, and many machine decisions therefore remain poorly understood. This survey paper, based upon a unique interdisciplinary collaboration between legal and AI experts, provides a review of the explainability spectrum, as informed by a systematic survey of relevant research papers, and categorises the results. The article establishes a novel taxonomy, linking the differing forms of legal inference at play within particular legal sub-domains to specific forms of algorithmic decision-making. The diverse categories demonstrate different dimensions in explainable AI (XAI) research. Thus, the survey departs from the preceding monolithic approach to legal reasoning and decision-making by incorporating heterogeneity in legal logics: a feature which requires elaboration, and should be accounted for when designing AI-driven decision-making systems for the legal field. It is thereby hoped that administrative decision-makers, court adjudicators, researchers, and practitioners can gain unique insights into explainability, and utilise the survey as the basis for further research within the field.",
author = "Richmond, {Karen McGregor} and Muddamsetty, {Satya Mahesh} and Thomas Gammeltoft-Hansen and Olsen, {Henrik Palmer} and Moeslund, {Thomas B.}",
note = "This research is funded by the Danish National Research Foundation Grant no. DNRF169 and conducted under the auspices of the Danish National Research Foundation{\textquoteright}s Centre of Excellence for Global Mobility Law.",
year = "2024",
doi = "10.1007/s44206-023-00081-z",
language = "English",
volume = "3",
journal = "Digital Society",
publisher = "Springer",
number = "1",
}

RIS

TY - JOUR

T1 - Explainable AI and Law

T2 - An Evidential Survey

AU - Richmond, Karen McGregor

AU - Muddamsetty, Satya Mahesh

AU - Gammeltoft-Hansen, Thomas

AU - Olsen, Henrik Palmer

AU - Moeslund, Thomas B.

N1 - This research is funded by the Danish National Research Foundation Grant no. DNRF169 and conducted under the auspices of the Danish National Research Foundation’s Centre of Excellence for Global Mobility Law.

PY - 2024

Y1 - 2024

N2 - Decisions made by legal adjudicators and administrative decision-makers often found upon a reservoir of stored experiences, from which is drawn a tacit body of expert knowledge. Such expertise may be implicit and opaque, even to the decision-makers themselves, and generates obstacles when implementing AI for automated decision-making tasks within the legal field, since, to the extent that AI-powered decision-making tools must found upon a stock of domain expertise, opacities may proliferate. This raises particular issues within the legal domain, which requires a high level of accountability, thus transparency. This requires enhanced explainability, which entails that a heterogeneous body of stakeholders understand the mechanism underlying the algorithm to the extent that an explanation can be furnished. However, the “black-box” nature of some AI variants, such as deep learning, remains unresolved, and many machine decisions therefore remain poorly understood. This survey paper, based upon a unique interdisciplinary collaboration between legal and AI experts, provides a review of the explainability spectrum, as informed by a systematic survey of relevant research papers, and categorises the results. The article establishes a novel taxonomy, linking the differing forms of legal inference at play within particular legal sub-domains to specific forms of algorithmic decision-making. The diverse categories demonstrate different dimensions in explainable AI (XAI) research. Thus, the survey departs from the preceding monolithic approach to legal reasoning and decision-making by incorporating heterogeneity in legal logics: a feature which requires elaboration, and should be accounted for when designing AI-driven decision-making systems for the legal field. It is thereby hoped that administrative decision-makers, court adjudicators, researchers, and practitioners can gain unique insights into explainability, and utilise the survey as the basis for further research within the field.

AB - Decisions made by legal adjudicators and administrative decision-makers often found upon a reservoir of stored experiences, from which is drawn a tacit body of expert knowledge. Such expertise may be implicit and opaque, even to the decision-makers themselves, and generates obstacles when implementing AI for automated decision-making tasks within the legal field, since, to the extent that AI-powered decision-making tools must found upon a stock of domain expertise, opacities may proliferate. This raises particular issues within the legal domain, which requires a high level of accountability, thus transparency. This requires enhanced explainability, which entails that a heterogeneous body of stakeholders understand the mechanism underlying the algorithm to the extent that an explanation can be furnished. However, the “black-box” nature of some AI variants, such as deep learning, remains unresolved, and many machine decisions therefore remain poorly understood. This survey paper, based upon a unique interdisciplinary collaboration between legal and AI experts, provides a review of the explainability spectrum, as informed by a systematic survey of relevant research papers, and categorises the results. The article establishes a novel taxonomy, linking the differing forms of legal inference at play within particular legal sub-domains to specific forms of algorithmic decision-making. The diverse categories demonstrate different dimensions in explainable AI (XAI) research. Thus, the survey departs from the preceding monolithic approach to legal reasoning and decision-making by incorporating heterogeneity in legal logics: a feature which requires elaboration, and should be accounted for when designing AI-driven decision-making systems for the legal field. It is thereby hoped that administrative decision-makers, court adjudicators, researchers, and practitioners can gain unique insights into explainability, and utilise the survey as the basis for further research within the field.

U2 - 10.1007/s44206-023-00081-z

DO - 10.1007/s44206-023-00081-z

M3 - Journal article

VL - 3

JO - Digital Society

JF - Digital Society

IS - 1

ER -