Explainability requirements for AI used in legal decision-making (LEXplain)

LEXplain investigates how AI technologies used to support legal decision-making can be adapted to meet existing legal standards of justificatory explainability, which lies at the core of the rule of law.


The project is structured into three interrelated Research Streams (RS):

RS1 Evolution and differentiation: investigates how legal explainability requirements and explainability culture have evolved through the second half of the 20th century and into the 21st, primarily through institutional interplay. This RS combines a historical, doctrinal-analytic component with an empirical component.

RS2 AI explainability support: investigates to what extent AI can support legal explainability. This RS will investigate and advance the use of hybrid-AI architectures for legal explainability.

RS3 Implementation and transformation: investigates how hybrid AI systems can be developed and implemented in ways that are compatible with the upcoming AI Act. This RS will also examine how hybrid AI systems may challenge and possibly transform how legal decision-making work is performed in public administrative practice.

See the project page at the University of Bergen for more information.

Funding
LEXplain is funded by the Research Council of Norway.

Project period: 2025-2028

PI: Henrik Palmer Olsen
Co-PI: Thomas Hildebrandt
Co-PI: Ragna Arli

Researchers

UCPH

Hildebrandt, Thomas Troels, Professor, +45 35 33 51 16
Kucuksu, Aysel Eybil, Assistant Professor, +45 35 32 82 89
Olsen, Henrik Palmer, Professor, +45 35 32 32 19

External

Arli, Ragna, Professor, University of Bergen
Mæhle, Synne Sæther, Professor, University of Bergen
Herje, Martin, PhD student, University of Bergen