Three humanoid robots are looking at framed binary code numbers on a wall; two are standing and one is sitting. (Drawing in gray/black).

Algorithmic Profiling and Automated Decision-Making in Criminal Justice

Max Planck Fellow Group

The research group “Algorithmic Profiling and Automated Decision-Making in Criminal Justice” is dedicated to legal issues that arise from the use of artificial intelligence (AI) in the detection, prosecution, and sentencing of criminal offenses. Its aim is to examine whether traditional criminal law doctrine and existing criminal law practice provide convincing answers to the questions posed by the use of AI systems; where this is not the case, innovative solutions will be developed. The various projects make use of legal methodology, comparative legal analysis, and computer science.

Graph: © Mamak


Research Topics

The research group is open to research projects on all issues that arise from the use of AI in criminal law and criminal procedure. This includes investigations into criminal responsibility where AI appears as a new actor, questions about the potential and limits of the automated application of law in criminal justice, such as algorithmic sentencing, and new regulatory and supervisory models for the use of AI in criminal justice systems.


Projects

A robot and a boy in soccer clothes are standing in a meadow. Next to the boy is a soccer ball, and behind him on the right is a window with a crack in it. The robot is pointing at the boy. (Drawing in gray/black).

Head of project: Lea Bachmann
With the advent of artificial intelligence (AI) systems, new players are entering the corporate arena. While they offer many advantages, they also pose new risks. Using the example of AI systems for anti-money laundering, this doctoral project will analyze the potential… more

A graphic representation with diagrams, devices, etc. There is a copyright symbol in the top left corner and a lock symbol in the top right corner. At the very bottom, an elongated element indicates the progress of an automatic process; underneath it says “80% complited” [sic]. (Drawing in gray/black).

Head of project: Colin Carter
Large language models are advanced, deep learning algorithms designed to understand, summarize, translate, predict, and generate text. They are trained on large datasets, which enables them to mimic human-like language abilities. Recently, these models have gained… more

A humanoid robot with a rectangular head sits on a rock with its arms crossed, appearing to be lost in thought. (Drawing in gray/black).

Head of project: Laura D’Amico
As the century proceeds, artificial intelligence (AI) will become increasingly present, not only in our daily lives but also in courtrooms. Using both theoretical and practical approaches, the aims of this research project are, on the one hand, to analyze who might… more

Close-up of a circuit board with electronic components.

Head of project: Linus Ensel
The focus of this research project is on the potential advantages of a partial rationalization of the sentencing process. This kind of intervention in the existing system would lead to a reduction of judicial discretion and would raise the question of the role human… more

A person sits at a desk, concentrating and typing on the keyboard; an error message (ERROR) is displayed on the computer screen; a poison symbol is shown in the bottom right-hand corner. (Drawing in gray/black).

Head of project: Sabine Gless
Are programmers the new lawmakers, as Joseph Weizenbaum insinuated back in the last century? And will they remake the criminal justice universe? Certainly, AI systems will continue to replace human decision-making at various points within the criminal justice system… more

A robot with a winding mechanism on the side stands with its arms raised. Two large fingers can be seen on the left (for operating the winding mechanism?). (Drawing in gray/black).

Head of project: Elina Nerantzi
Can a harmful artificial intelligence (AI) agent be held directly criminally responsible? This project seeks a new way to address this recurring question. Instead of trying to ascertain whether AI agents could ever be our moral duplicates, responsive as we are to… more
