Legal AI Lab
The Legal AI Lab aims to coordinate and support research, education and outreach activities in the field of law and artificial intelligence, broadly understood. Several researchers at the law faculty are already engaged in many aspects of artificial intelligence and law. The faculty offers research not only on the increasing EU regulation of AI, but also on AI as a new kind of phenomenon that needs to be understood in relation to pre-existing legislation and the legal system as a whole.
Moreover, several researchers at the faculty already collaborate with computer and data science researchers and industry partners in tackling the many challenges involved in making AI a trustworthy and reliable tool to support legal decision-making practice and legal research.
The Legal AI Lab seeks to provide a collaborative environment for developing knowledge about AI in a legal context. It serves as a hub to promote cross-disciplinary collaborations and engages both internal and external partners to advance research in the nexus between AI and law. The Lab will seek to make the Faculty’s research and education in this nexus more visible and will invite more collaboration with both research and industry partners.
Steering group
| Name | Title | Phone |
|---|---|---|
| Gammeltoft-Hansen, Thomas | Professor | +4550203400 |
| Olsen, Henrik Palmer | Professor, head of lab | +4535336212 |
| Pedersen, Anja Møller | Assistant Professor | +4535336697 |
| Udsen, Henrik | Professor | +4535323192 |
Activities
During 2026, the Lab will focus on identifying productive clusters of shared research interest and on establishing workable communication routines. We will initially try out three clusters:
AI is regulated in a number of ways. Not only is AI itself an object of regulation, for example in the EU’s AI Act, but other legislation (both EU and domestic) also affects AI. The GDPR is an obvious example, but as AI is deployed across more and more areas of social life, the question of how AI interacts with sector-specific regulation enacted before the technological rise of AI raises new and often difficult legal questions that need to be addressed in legal research.
AI, and particularly large language models, has opened up new ways of doing research. AI has long been used in physics and medical research but is now also gaining traction in law. AI can be used as a tool to explore large datasets and reveal patterns and trends that would otherwise be difficult to capture. Legal information retrieval and analysis is increasingly driven by AI in areas of law where the large amounts of data make it difficult to get a full understanding of the law and its impact on society.
In legal education, large language models can be used to interact with legal literature and complex case law to improve learning outcomes. Legal education also increasingly needs to address the new reality that various AI tools are becoming widespread in the job market for legal professionals. It is therefore critically important to discuss how legal education can prepare law students and provide the skills necessary to engage critically with this new technology.
Designing AI models that can perform legal tasks in a trustworthy and reliable manner for deployment in legal decision-making practice is a challenge that involves legal as well as technological and often also psycho-social knowledge. Most people have heard of so-called “IT scandals”, where public investments in new computer systems intended to automate tasks in public administration have failed because the systems did not comply with the law or were too inaccurate or rigid to function in a manner compatible with the basic values underlying public administration in Denmark. Still, the Danish government has the ambition to be world-leading in the use of AI in the public sector (Strategisk Indsats for Kunstig Intelligens, 2024), and the EU has set an ambitious course for a digital transformation of the justice sector in all member states (DigitalJustice@2030). Fulfilling these ambitions requires intensive interdisciplinary research and development.
Networking, sharing and outreach
To gain momentum, the lab will seek to unite the faculty’s research on AI and make it visible to the outside world. By sharing network contacts and building on existing collaborations and shared interests, the lab will set up a mail service function that allows lab users to communicate with a larger group of recipients by directing them to updates on the lab’s homepage. The aim is to create greater visibility for open research seminars by inviting more broadly through the mail service, to disseminate information about new research publications, to share information about new research projects and invite collaboration, and to provide information about relevant events outside the faculty.
More broadly, the lab will function as a common virtual portal and coordination unit at the faculty. By bringing AI activities together under a single umbrella and creating an easier entry point for new external partners and stakeholders to the faculty's research and learning opportunities, the faculty can better contribute to fulfilling the University of Copenhagen's overall initiative in this important area. The lab will also be able to function as a single point of entry for other AI research environments, such as Pioneer Center, Caisa, etc., that seek legally oriented collaboration partners.
Collaborating on external funding
New interdisciplinary research in law and AI requires funding. The lab will seek to facilitate collaboration and provide advice for researchers who wish to apply for funding in the field.