Artificial Legal Intelligence

ARTIFICIAL LEGAL INTELLIGENCE consists of three subprojects, each based on its own research grant, that investigate the emergence of artificial intelligence focused on supporting legal decision-making.


The main overarching themes in these three interwoven subprojects are: 1) legal information retrieval and analysis, 2) XAI for legal decision-making, and 3) benchmarking machine-generated legal text.


The three grants are:

XAIcred: PIs Thomas Moeslund (Aalborg University) and Thomas Gammeltoft-Hansen (University of Copenhagen). Co-PI: Henrik Palmer Olsen. Read more about the grant.

LEXplain: PI Henrik Palmer Olsen. Co-PIs: Synne Sæther Mæhle and Ragna Aarli (both University of Bergen). Read more about the grant.

ALIKE: PI Henrik Palmer Olsen. More information about the grant has not yet been published on the website of the Independent Research Fund Denmark.


ALIKE and XAIcred.

In increasingly complex legal systems, legal information retrieval is an expensive and difficult task. Drawing on open-access databases from the European Court of Justice (Curia and EUR-LEX) and various domestic databases of administrative legal decisions, the project leverages Machine Learning, Natural Language Processing, Network Analysis and Graph Neural Networks to build and test various approaches to legal information retrieval, primarily with a focus on precedent finding. This project also seeks to use the same approach to find patterns in the way asylum seekers' testimonies are assessed as either trustworthy or untrustworthy. The aim of this latter part of subproject 1 is to be able to retrieve informal information signals in a set of case law (here, the Danish Refugee Council's decisions).
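
As an illustration of what such an approach might look like in practice, the sketch below combines citation-network centrality with simple text similarity to rank candidate precedents. The case identifiers, texts and weighting scheme are invented placeholders, not the project's actual data or method.

```python
# A minimal sketch (not the project's pipeline): rank candidate precedents by
# combining citation-network authority with text similarity to a query.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy citation graph: an edge A -> B means decision A cites decision B.
citations = [("C-1/20", "C-5/15"), ("C-1/20", "C-9/12"), ("C-5/15", "C-9/12")]
texts = {
    "C-5/15": "asylum seeker credibility assessment of testimony",
    "C-9/12": "subsidiary protection and risk of serious harm",
}

graph = nx.DiGraph(citations)
# PageRank over the citation network as a simple proxy for precedential weight.
authority = nx.pagerank(graph)

def rank_precedents(query: str, top_k: int = 2):
    """Score candidate precedents by text similarity weighted by network authority."""
    ids = list(texts)
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([query] + [texts[i] for i in ids])
    sims = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    scored = {case: sims[n] * authority.get(case, 0.0) for n, case in enumerate(ids)}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(rank_precedents("credibility of an asylum seeker's testimony"))
```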


LEXplain and XAIfair.

As AI is increasingly used to support legal decision-making, users demand information about how this technology operates. This call for explainability echoes the need for legal decisions to be grounded in justificatory reasoning. AI that is based on symbolic logic or knowledge graphs is inherently interpretable and integrates well with the need to produce legal justifications. Machine-learning-based AI, including Large Language Models, however, has a level of complexity that makes it opaque and therefore uninterpretable in the traditional sense of being linear and predictable. To make this kind of AI more transparent, recent years have seen attempts to develop models that identify salient features in the internal operations of otherwise opaque AI. This project researches the relationship between explainability in the computational sense and explainability in the legal justificatory sense. The project focuses both on technological solutions and on the legal requirements for transparency and explainability in public administration.
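
As a simple illustration of the idea of identifying salient features in an otherwise opaque model, the sketch below uses occlusion: it measures how much the predicted probability drops when each word is removed from the input. The toy classifier, texts and labels are invented placeholders, not the project's models or data.

```python
# A minimal occlusion-based saliency sketch for a black-box text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

train_texts = [
    "applicant's account was detailed and consistent",
    "testimony contained contradictions and vague dates",
    "statements corroborated by documentary evidence",
    "account changed between interviews",
]
train_labels = [1, 0, 1, 0]  # 1 = assessed credible, 0 = not credible (toy labels)

# The pipeline is treated as an opaque model: we only query its predictions.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0))
model.fit(train_texts, train_labels)

def occlusion_saliency(text: str):
    """Score each token by the drop in P(credible) when that token is removed."""
    tokens = text.split()
    base = model.predict_proba([text])[0][1]
    scores = []
    for i in range(len(tokens)):
        occluded = " ".join(tokens[:i] + tokens[i + 1:])
        scores.append((tokens[i], base - model.predict_proba([occluded])[0][1]))
    return sorted(scores, key=lambda kv: kv[1], reverse=True)

print(occlusion_saliency("testimony was detailed but contained contradictions"))
```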


ALIKE in collaboration with the IUROPA Project.

Large Language Models (LLMs) have been shown to perform well in generating text that is perceived as meaningful to humans. LLMs are already being used for many different tasks, including summarisation, translation, editing and idea generation. LLMs are also increasingly used in the legal service sector. Retrieval-Augmented Generation (RAG) systems are used to combine LLMs with knowledge bases (consisting of case law and statutory law datasets) to enable legal argument generation. However, most RAG systems are operated by private companies that do not provide much technical information about their products, and to date few, if any, benchmarks for legal argument generation exist. This project will build a working state-of-the-art RAG system based on an open-source LLM, using open-access legal information (Curia and EUR-LEX), to test the ability of machines to produce legal text of a quality that is indistinguishable from human-generated text. Based on the construction of original new benchmarks and empirical testing, the project will contribute to ways of measuring the quality of machine-generated legal text.
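
For illustration, the sketch below shows the core retrieve-then-prompt loop of a RAG system over a small corpus. The passages are invented placeholders, and generate() is a hypothetical stand-in for whatever locally hosted open-source LLM would actually be used.

```python
# A minimal RAG sketch: retrieve the most relevant passages and assemble a prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Article 4 of the Qualification Directive concerns assessment of facts.",
    "The Court held that the burden of substantiation lies with the applicant.",
    "Member States shall ensure an effective remedy before a court or tribunal.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def retrieve(question: str, top_k: int = 2):
    """Return the top_k passages most similar to the question."""
    sims = cosine_similarity(vectorizer.transform([question]), doc_matrix).ravel()
    return [corpus[i] for i in sims.argsort()[::-1][:top_k]]

def generate(prompt: str) -> str:
    # Placeholder: a real system would call a locally hosted open-source LLM here.
    return f"[model output for a prompt of {len(prompt)} characters]"

question = "Who bears the burden of substantiating an asylum claim?"
context = "\n".join(retrieve(question))
answer = generate(f"Answer using only the passages below.\n{context}\n\nQuestion: {question}")
print(answer)
```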


Researchers

Name: Henrik Palmer Olsen
Title: Professor

Funded by



Artificial Legal Intelligence is funded by the Independent Research Fund Denmark, the Innovation Fund Denmark, the Research Council of Norway, and the Villum Foundation.

Project: Artificial Legal Intelligence
Period: 2025 -

Contact

Principal Investigator
Henrik Palmer Olsen