Please register by email at conference@csl.mpg.de if you are interested in attending; 10 places are available for external participants.
The use of AI in criminal proceedings raises concerns about the right to a fair trial as we know it today. In particular, automated decision-making and profiling based on expert systems, machine learning, natural language processing, and deep learning, deployed to assess evidence, monitor risks, predict recidivism, and even assist in judgments, pose serious challenges to fundamental legal principles such as due process, the right to confront incriminating evidence, defense rights, transparency, and non-discrimination. The aim of this conference is to address these problems in panels dedicated to explaining the technology and its significance at the different stages of a criminal trial, enhanced by a comparative perspective with special emphasis on the inquisitorial and adversarial traditions. The presentations will highlight EU initiatives as well as US approaches, including the particularities of criminal justice systems heavily shaped by plea bargaining. This event will bring together leading scholars and early career researchers from Europe and the US to explore the evolving role of AI in criminal justice systems.
Early Career Scholars’ Day on May 7
Kicking off the conference, early career scholars will present their projects during the first day. Panels will address key topics such as algorithmic profiling, automated penal orders or money laundering alerts, and the rationalization of sentencing through AI.
Book launch on May 7, 5–6 pm
The book launch will highlight the emerging law of human-robot interaction and celebrate the publication of Human-Robot Interaction in Law and Its Narratives, which explores the legal challenges posed by robots in society. The volume examines substantive and procedural law, addressing issues such as criminal liability and evidentiary reliability, and discusses how legal narratives shape our understanding of human-robot interactions.
Gless, S., & Whalen-Bridge, H. (Eds.). (2024). Human-robot interaction in law and its narratives: Legal blame, procedure, and criminal law. Cambridge: Cambridge University Press. doi:10.1017/9781009431453
Main Conference, May 8–9
During the main conference, expert panels will analyze the possible impact of AI on our current concept of a fair trial. The presentations will address, for instance, how automated decision-making could determine legal outcomes and why a lack of transparency could make decisions difficult to challenge effectively, thereby limiting the right to appeal and due process. A special focus will be on AI systems used to obtain or assess evidence in criminal trials and the expected ramifications for defense rights in inquisitorial and adversarial proceedings.

Another panel will evaluate AI-based profiling and the discriminatory risks arising from the collection and analysis of vast amounts of data and the classification of individuals by perceived risk level. If AI models are trained on biased historical data, they risk perpetuating systemic discrimination. In the US, for instance, algorithms that assess recidivism rates have been criticized for disproportionately labeling defendants from minority backgrounds as high-risk, thereby affecting sentencing severity and parole decisions. This not only undermines the presumption of innocence but also erodes the principle of equality before the law.

The overall aim is to identify novel ways to safeguard a fair trial in the digital era. What could robust safeguards look like? General demands for transparency (requiring AI decision-making processes to be explainable and open to scrutiny) or for judicial authorities to retain meaningful human oversight to prevent overreliance on automated decisions may fall short of ensuring that the parties to a criminal trial can effectively contest algorithmic assessments and that the public can trust that a fair trial is granted. Balancing innovation with fundamental rights is essential to guarantee that AI serves justice rather than undermining it.