From AI regulation to fundamental rights, data protection and algorithmic trust, the 4th edition of LawtomationDays turned IE Law School into the epicentre of global debate on law and technology. This year, for the first time, thanks to a fruitful partnership with “DigiCon” and the work of our chief editor, Francesca Palmiotto, we are proud to publish a selection of the works presented in Madrid.
👉 https://digi-con.org/technology-and-distrust/
In this volume, trust emerges as a fragile and structurally mediated condition. Each contribution exposes a similar paradox: digital systems promise efficiency, safety and fairness, yet simultaneously erode the very foundations of trust that institutions rely upon. The authors collectively argue that real trust cannot be manufactured through rhetoric or compliance checklists. It requires institutional scepticism (Chiara Toscano), enforceable structural transparency (Maria Diory Rabajante), democratised governance (Mehmet Bilal Unver) and human-AI relationships built on meaningful dialogue and testing rather than formal oversight (Artur Bogucki, Mohammed Raiz Shaffique & Eduard Fosch-Villaronga). It also demands creativity capable of adapting old concepts, such as copyright or working time, to new algorithmic realities without distorting their protective purpose (Berhan Sarılar & Philip Meinel, Tal Niv, Nikolett Hős). In the sphere of democracy, attempts to legislate truth itself risk collapsing pluralism into a coercive epistemology (Federica Fedorczyk & Filippo Venturi).
More specifically, Chiara Toscano argues that AI destabilises long-standing boundaries between human agency and technical systems, especially in the workplace. The EU, she contends, must therefore ground governance not in trust but in “institutionalised distrust” as a constitutional bulwark. Social trust emerges only when legal scepticism is structurally embedded and operationalised through transparency and oversight. The key unresolved issue is whether this European model is universal or culturally (and politically, one may say) contingent.
Sarılar & Meinel show that the AI Act’s commitment to “trustworthy AI” does not translate well into copyright governance. Article 53(1)(c) demands compliance policies for GPAI training data, but enforcement is technically weak and largely unverifiable. Transparency summaries cannot meaningfully reveal whether rights holders’ opt-outs were respected. Trust in this context risks becoming merely discursive, prompting the authors to call for remuneration mechanisms rather than illusory compliance.
According to Shaffique & Fosch-Villaronga, wearable robots promise major benefits, but trust collapses when user expectations exceed system capabilities and performance. Notably, EU product safety law focuses heavily on instructions and information duties, which the authors deem insufficient in this context. Real trust requires user-centred design that calibrates how users perceive risk, reliability and limitations. Without such measures, deployment at scale will be undermined by anxiety and perceived unpredictability.
Deepfakes damage public trust and democratic discourse, yet Fedorczyk & Venturi argue that criminalisation would be both ineffectual and dangerous. Drawing on Arendt and Foucault, they show that lies are structurally embedded in politics and that truth-policing inevitably strengthens state power. Criminal law would turn governments into arbiters of truth, threatening pluralism and enabling authoritarian misuse. Democracy is better defended through transparency, contestation, and civic resilience rather than punitive truth enforcement.
Maria Diory F. Rabajante introduces the concept of “lex digitalis sermonis” to describe the quasi-legal governance of online speech, which appears principled but obscures the true locus of power: platform algorithms. Governance frameworks examine user content but systematically ignore the context that determines amplification and harm. Even sophisticated quasi-courts such as Meta’s Oversight Board fail to access or interrogate algorithmic mechanisms. This creates an illusion of the rule of law that disciplines users while shielding platforms from accountability.
Mehmet Unver argues that the AI Act operationalises trust as technocratic “trustworthiness,” turning compliance into a proxy for legitimacy. This engineered trust displaces human, relational trust and creates a widening “trust gap” between what systems are designed to be and how they are socially perceived. Without participatory governance, trust becomes a function of auditing procedures rather than democratic judgment. The remedy is to democratise AI oversight, embedding deliberation and participation into institutional structures.
Drawing on a case study of AI-supported credit lending, Artur Bogucki shows that trust does not arise from transparency alone but from interactive dialogue between human officers and AI systems. Explanation, contestation and negotiation form the core of trustworthy decision-making. Fragmented regulation and unclear liability rules, however, weaken confidence in both the technology and its institutional framework. Trust becomes sustainable only when oversight evolves into genuine collaboration rather than a token human presence.
Tal Niv criticises the simplistic view that accuracy alone guarantees trust. Instead, she proposes an “Honesty Dial” with context-specific modes of communication and an auditable “Contextual Honesty Profile” for each deployment. Calibrated honesty mirrors how humans communicate responsibly in different settings. Through legal, market, normative and code-based mechanisms, honesty becomes a verifiable duty rather than a branding slogan.
Finally, Nikolett Hős contends that traditional concepts of working time and wages strain under the realities of algorithmic platform work. Empirical studies of California’s AB5 and Spain’s Riders Law reveal that reclassification may affect work opportunities, earnings and overall trust in institutions. The author argues for a functional approach focused on algorithmic decision-making, transparency and income dynamics instead of rigid status categories. Regulation should rebuild trust by making algorithmic labour markets more intelligible, fairer and more accountable.
Taken together, these contributions describe a world in which trust is not restored by perfecting technology but by redesigning the human, legal and institutional environments in which technology operates. Trust, in this sense, becomes a constitutional (meta)project: a continuous negotiation between power and rights, transparency and opacity, automation and human autonomy.
We hope you will enjoy reading this book!
The co-editors
