LAWTOMATION

THE JEAN MONNET CENTRE OF EXCELLENCE FOR LAW AND AUTOMATION

Latest news

About the Centre

The Jean Monnet Centre of Excellence for Law and Automation (Lawtomation) is a focal point of competence and knowledge on the impact of automation on law that promotes excellence in teaching and research.

Lawtomation develops synergies between legal experts and data scientists, in an open dialogue with policy makers, civil servants, practitioners, and the civil society at large.

The Centre is co-financed by the European Commission, under the Erasmus+ programme.

About Our Research

The Centre gathers the expertise of scholars from three schools of IE University: IE Law School, IE Business School and IE School of Science and Technology.

It fosters interdisciplinary research beyond knowledge silos to address the multiple challenges that automation represents for law.

Find out more about LEGROB

About our Teaching

The Centre aims to promote excellence in teaching by designing and delivering teaching and training activities on law and automation. The research conducted at the Centre fuels this teaching. Centre participants design and deliver both the content of individual teaching sessions and the syllabi of entire courses.

Teaching is embedded in regular courses and Advanced Seminars that are open to all (Law and other) programs at IE University. Some teaching sessions are open to civil society.

The Centre is in charge of teaching commitments under LEGROB, the Jean Monnet Module on Liability of Robots.

The research conducted at the Jean Monnet Centre of Excellence for Law and Automation fuels teaching

Latest Publications

Check out our research publications

Latest Events

Do not miss out on our upcoming events

The impact of AI on law is already transformative for many sectors of our lives, and it will become even more so.

Purpose of our Research

The intersection of law and automation has become visible in several areas. Artificial Intelligence has enhanced the automation of certain legal services and administrative proceedings, and AI is used to enforce employment conditions and regulatory duties.

With the incessant development of AI, automation is set to play a larger role in e-justice, in e-government, and in the performance of obligations between private individuals.

The Centre aims to conduct interdisciplinary research on the impact of automation in private legal relationships and in those arising between citizens and the administration.

It also aims to generate knowledge and insights that can support policymaking in these fields.

Our Research Focus

The Centre aims to conduct interdisciplinary research on the impact of automation in private relations and on automated judicial and administrative decision making.

Our Research Topics

E-Justice, Legal Tech and Private Law

This line of research is organized in two sections:

The Observatory

The e-Justice Observatory investigates the nature and scope of ICT reforms around courts in the EU. More specifically, at the e-Justice Observatory we study e-Justice policies and legislation across Europe, monitor ongoing modernization plans coordinated at the EU level, and map the use of ICT tools and services by European courts while comparing the digital transformation processes of justice systems in the different EU Member States.

The Observatory promotes the direct involvement of students in qualitative empirical research conducted across the various levels of the European justice systems, including interviews with members of the judiciary and participatory observation.

This line of research aspires to contribute to contemporary and urgent debates—in both policy and scholarship—about the digital transformation of justice institutions.   

The Lab

The Lab conducts experiments aimed at assessing the suitability of law for automation, against the backdrop of disputes. Lawyers and data scientists jointly select legal instruments whose application is then tested with the use of AI. Court decisions on EU law serve as the data for these experiments.

A main outcome of the Lab is the creation of machine learning models that predict the outcomes of disputes. Datasets are not chosen at random: the selection considers characteristics of legal rules that, ex ante, signal potential for automated enforcement. Identifying those features in legal rules is another key outcome of the Lab's activity.
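To illustrate the kind of model described above, here is a minimal sketch of predicting a dispute's outcome from the text of a court decision. The "decisions", labels, and vocabulary are invented toy data for illustration only, not the Lab's actual datasets, features, or results; a simple bag-of-words perceptron stands in for whatever models the Lab actually builds.

```python
# Toy sketch: predict whether a claim is upheld (1) or dismissed (0)
# from the text of a decision, using bag-of-words features and a
# perceptron-style linear classifier. All data below is invented.
from collections import Counter

def tokenize(text):
    """Lowercased bag-of-words feature counts."""
    return Counter(text.lower().split())

def train(examples, epochs=20, lr=0.5):
    """Train on (text, label) pairs; label is 1 (upheld) or 0 (dismissed)."""
    weights, bias = {}, 0.0
    for _ in range(epochs):
        for text, label in examples:
            feats = tokenize(text)
            score = bias + sum(weights.get(w, 0.0) * c for w, c in feats.items())
            pred = 1 if score > 0 else 0
            if pred != label:  # perceptron update on mistakes only
                for w, c in feats.items():
                    weights[w] = weights.get(w, 0.0) + lr * (label - pred) * c
                bias += lr * (label - pred)
    return weights, bias

def predict(model, text):
    weights, bias = model
    feats = tokenize(text)
    score = bias + sum(weights.get(w, 0.0) * c for w, c in feats.items())
    return 1 if score > 0 else 0

# Invented toy "decisions" for illustration only.
training = [
    ("the contractual term is unfair and void", 1),
    ("the clause infringes the directive and is unenforceable", 1),
    ("the term was individually negotiated and is valid", 0),
    ("the claim is unfounded and the clause stands", 0),
]
model = train(training)
print(predict(model, "the term is unfair and void under the directive"))  # → 1
```

In practice such work would involve far richer features and real corpora of EU court decisions; the point of the sketch is only the shape of the pipeline: text in, outcome label out.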

The Lab aims to contribute to the debate on automated decision making in dispute resolution and, more broadly, to the debate on coding law. It is open to exploring pilot testing with courts and ADR institutions.

Automated State

This line of the Lawtomation project seeks to explore the possibilities and challenges raised by the automation of law in the constitutional and administrative context.

Undoubtedly, the automation of administrative procedures can bring about important benefits, minimizing administrative errors while speeding up procedures and reducing their costs. From this perspective, the automation of administrative decision-making can serve one of the fundamental goals of administrative law, which is to ensure that public authorities can effectively fulfil their functions.

At the same time, the automation of administrative procedures raises important questions from the perspective of the values served by administrative law: due process, citizen participation, transparency, and accountability. The question that arises is how to reconcile the automation of the administrative state with the rule of law and with the values at the core of the administrative law enterprise. In addition, new models and applications based on automation challenge a number of well-established procedural guarantees and fundamental rights, such as due process in decision-making, freedom from bias, and the protection of privacy.

Automated decision systems, on the other hand, do not necessarily lead to fair outcomes, and they are equally problematic from a procedural point of view. The administration feeds the algorithm with data and the algorithm produces decisions, but how algorithms operate is opaque. Because of this lack of transparency, automated systems allow for less scrutiny than human decision-making. Such opacity means that citizens cannot access or understand how decisions are reached, which limits their ability to hold the administration accountable for its actions and decisions.

Furthermore, data collection and processing in the area of law enforcement raise concerns from the perspective of privacy and data protection. New means of surveillance and data collection create new ways to profile citizens, giving rise to the concept of ‘dataveillance’.

Finally, the automation of law also raises concerns about the overarching value of human dignity and its relation to AI. In particular, it touches upon the relationship between AI and democratic and European values such as fairness, transparency, integrity, and non-discrimination, which are core to the rule of law.

The key question, then, is whether EU and national legal ecosystems are well equipped to face such challenges: how can automation be embraced while the values enshrined in European human rights law are safeguarded?

Algorithmic Bosses

Game-changing technologies can streamline production processes and replace humans in dangerous, repetitive, and tedious tasks. They are also exerting considerable pressure on job content, value, and availability.

This line of the Lawtomation project aims to disentangle the main trajectories of the digital automation of work from a cross-disciplinary perspective. This analysis concentrates on three main vectors of digital transformation: machines, algorithms, and platforms.

These transformative forces affect the entire cycle of options available to employers: potential dislocation of tasks, opportunities for outsourcing, digitization of human resources management (HRM) practices, intensification of command-and-control roles, and effects on job quality and task discretion.

In today’s workplaces, information and bargaining asymmetries are unprecedentedly tilted towards data holders and away from data subjects. The data collected and processed by ubiquitous devices allows managers to target job adverts, recruit new staff, set remuneration, award promotions, assess productivity, and even fire workers. Such changes have been accompanied by the growth of automated decision-making systems (ADMSs), which are in charge of the management of private administrative processes. The ongoing transformation calls into question the rules and limits that regulate the exercise of employer powers, which were designed during times that predate the advent of automation.

The emergence of algorithmic management, namely the delegation of HRM functions to AI and algorithms, is placing a strain on existing regulatory frameworks. This expansion in the spatial and temporal ambits of powers raises issues about the adequacy of the statutory and collectively agreed legal framework at both the EU and domestic levels. 

Is the current legal framework suitable for algorithmic bosses? Are data protection, non-discrimination, health and safety, and collective rights fit for the digital age? How can social partners, businesses, and lawmakers foster the co-determination of technology-enabled practices at work? Labour regulation could play a key role in addressing these manifold challenges. The volume, variety, and scope of the shift towards the datafication and wiring of the workplace require lawyers to reassess the concept of employer powers, the importance of controlling factors, and their wide-ranging implications.

Building on the Boss Ex Machina project, this track of the Lawtomation Jean Monnet Centre of Excellence will offer a dynamic space for reflection and action, through empirical and analytical research, teaching, and policy analysis.

Meet our People

Antonio Aloisi

Antonios Kouroutakis

Argyri Panezi

Bart Wauters

Charlotte Leskinen

Fernando Pastor-Merchante

Francisco de Elizalde

François Delerue

Giulio Allevato

María Guadalupe Martínez Alles

Johanna Jacobsson

Kiron Ravindran

Konstantina Valogianni

Leon Anidjar

Marina Aksenova

Rafael Ballester-Ripoll

Sara Sánchez

Sonsoles Arias Guedón


Sergio Verdugo

Advisory Board

Martin Ebers

Valerio De Stefano

Michèle Finck

Sofia Ranchordás

Roland Vogl
