DILEMA Workshop on Agent-Based Simulations of Disruptive Autonomous Devices: Relations, Norms, and Trust

Published 2 September 2024

On 15 October 2024, the DILEMA project is organising a workshop on ‘Agent-Based Simulations of Disruptive Autonomous Devices: Relations, Norms, and Trust’ at the University of Amsterdam (Science Park).

Interested participants can contact the convenor of the workshop, Dr Tomasz Zurek, DILEMA Project team member, Associate Research Fellow at the Asser Institute, and Post-Doctoral Researcher at the University of Amsterdam Faculty of Science (Complex Cyber Infrastructure group, TRUST RPA project).

Introduction

The rapid development of AI is resulting in increasingly autonomous artificial agents. Devices such as (partly) autonomous cars and aircraft have become a normal element of our lives, and more sophisticated ones, such as teaming robots or drones, are just around the corner. The AI used to control such devices raises a number of questions concerning the legal and ethical context of the decisions these devices make. These questions are especially important when those decisions can cause a serious risk of harm, as with military autonomous devices. The more autonomous AI tools are, the more important the ethical and legal issues related to their behaviour become, as does the role of human control. Moreover, autonomous devices are becoming elements of socio-technical systems, within which they have individual goals, communicate, cooperate, trust (or distrust) each other, follow the regulations of the community, and so on. Such relations create a complex environment in which various dynamics can be observed.

Topic

During the workshop we will discuss how to define and implement mechanisms for dealing with norms and trust relations in the agents of multi-agent systems, in order to support simulations for studying the dynamics of relations between autonomous agents. Topics to be addressed include, firstly, the problem of reasoning with open concepts in a multi-agent environment, where every agent may interpret these concepts differently. This influences both the way agents deal with regulations and the dynamics of the trust relations amongst them: since agents can differ in interpreting some concepts, they can also differ in interpreting what is true and which regulations they are, and should be, compliant with. Secondly, these differences also influence the agents’ ‘inter-personal’ relations such as trust, in which differences between the actors of the system can change the evaluation of other agents’ trustworthiness. For instance, some agents may prove competent in a specific area, while others demonstrate a willingness to cooperate; both can be the basis for shaping trust relations and allow for analysing the trust dynamics between agents. Thirdly, there is an important and still ongoing discussion of the role of humans in the loop of autonomous systems. We will discuss how simulations can help in defining the role of humans in shaping the mechanisms for dealing with norms and trust relations.
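By way of illustration, the minimal sketch below (in Python; all names, numbers, and update rules are hypothetical, not taken from the workshop materials) shows how two agents that interpret the same open-textured norm, a ‘safe distance’, differently can disagree about compliance, and how that disagreement feeds back into their mutual trust:

from dataclasses import dataclass
from typing import Dict

@dataclass
class Agent:
    name: str
    safe_distance_m: float      # this agent's interpretation of "safe"
    trust: Dict[str, float]     # trust scores this agent holds about others

    def is_compliant(self, observed_gap_m: float) -> bool:
        # Compliance is judged against this agent's own interpretation.
        return observed_gap_m >= self.safe_distance_m

    def update_trust(self, other: "Agent", observed_gap_m: float) -> None:
        # Trust in the other agent rises or falls depending on whether
        # its behaviour looks compliant *to this agent*.
        current = self.trust.get(other.name, 0.5)
        delta = 0.1 if self.is_compliant(observed_gap_m) else -0.1
        self.trust[other.name] = min(1.0, max(0.0, current + delta))

a = Agent("A", safe_distance_m=2.0, trust={})
b = Agent("B", safe_distance_m=1.0, trust={})

# B keeps a 1.5 m gap: compliant under B's interpretation, not under A's,
# so A's trust in B drops even though B believes it is following the norm.
a.update_trust(b, observed_gap_m=1.5)
b.update_trust(a, observed_gap_m=1.5)
print(a.trust, b.trust)   # {'B': 0.4} {'A': 0.6}

The point of the example is that neither agent is ‘wrong’: each is compliant under its own interpretation, yet trust still erodes, which is exactly the kind of dynamic such simulations are meant to expose.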

Goal

The aim of the workshop is to provide a space for the exchange of methods, models, and ideas concerning research on agent-based simulations of disruptive autonomous devices, with a focus on legal and social phenomena such as adherence to rules and principles, trust, and cooperation between agents.

Speakers and Presentations

  • Trustworthy AI for High-Risk Systems – Leon Kester
  • Mind the Gaps: Logical English, Prolog, and Multi-Agent Systems for Autonomous Vehicles – Galileo Sartor, Adam Wyner
  • A Computational Model of Trustworthiness: Trust-Based Interactions Between Agents in Multi-Agent Systems – Tomasz Zurek, Tom van Engers, Basten Leefting, Britta Abbink Spaink
  • Are Human Judgement and Control of Autonomous Weapons Systems Requirements of International Humanitarian Law? – Jeroen van den Boogaard

 

Trustworthy AI for High-Risk Systems

Leon Kester

Abstract: The rapid developments in AI have also caused concern about the safety of AI technologies, particularly AI implementations that could cause substantial harm. AI safety concerns can be categorized into four main risk domains: 1. value alignment; 2. AI and cyber security; 3. systemic risks, including deepfakes and epistemic risk due to disinformation about AI and deepfake science; and 4. rogue AGI/ASI becoming uncontrollable and wanting to take over. The presentation will discuss the responsible design of high-risk AI systems and AI risk mitigation strategies.

Biography: Dr. Leon Kester is a senior research scientist at TNO Netherlands. He is a transdisciplinary scientist in responsible AI, working on the safety, security, meta-ethics, and governance of high-risk AI systems and XR technologies. His transdisciplinary research integrates perspectives from human-centered AI systems engineering, cognitive science and neuroscience, moral psychology, law, moral programming, and philosophy of science. In recent years, he has published many papers and book chapters (most of them together with his partner Dr. Nadisha-Marie Aliman, postdoctoral visiting scholar at Utrecht University) on the aforementioned topics. In the EU standardization committee CEN/CENELEC JTC21, he is an editor of the European Norm for Risk Management and introduced “Augmented AI system goal specification” for advanced high-risk AI applications.

 

Mind the Gaps: Logical English, Prolog, and Multi-Agent Systems for Autonomous Vehicles

Galileo Sartor, Adam Wyner

Abstract: In this paper, we present a modular system for representing and reasoning with legal aspects of traffic rules for autonomous vehicles. We focus on a subset of the United Kingdom’s Highway Code (HC) related to junctions. As human drivers and automated vehicles (AVs) will interact on the roads, especially in urban environments, we claim that an accessible, unitary, high-level computational model should exist and be applicable to both kinds of road user. Autonomous vehicles introduce a shift in liability that should not bring disadvantages or an increased burden on human drivers. We develop an “in silico” implementation of the model. The proposed system is built from three main components: a natural language interface, using Logical English, which encodes the rules; an internal representation of the rules in Prolog; and a multi-agent simulation environment, built in NetLogo. The three components interact: Logical English is translated into and out of Prolog (along with some support code), and Prolog and NetLogo interface via predicates. Such a modular approach enables the different components to carry different “burdens” in the overall system; it also allows modules to be swapped. Given NetLogo, we can visualize the effect of the modeled rules as well as validate the system with a simple dynamic running scenario. Designated agents monitor the behaviour of the vehicles for compliance and record potential violations where they occur. The information on potential violations is then utilized by Validators to determine whether the violation is punishable, differentiating between exceptions and cases.
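The system itself couples Logical English, Prolog, and NetLogo; those components are not reproduced here. As a rough illustration of the monitor/validator division of labour described in the abstract, the following hypothetical Python sketch stands in a single hard-coded rule for the Highway Code encoding:

from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleState:
    vehicle_id: str
    at_junction: bool
    stopped: bool
    emergency: bool    # e.g. making way for an ambulance

def monitor(state: VehicleState) -> Optional[dict]:
    # Monitoring agent: record a *potential* violation of the rule
    # 'stop at the junction', without judging it.
    if state.at_junction and not state.stopped:
        return {"vehicle": state.vehicle_id, "rule": "stop_at_junction"}
    return None

def validate(violation: dict, state: VehicleState) -> str:
    # Validator: decide whether a recorded violation is punishable
    # or covered by an exception.
    return "excepted" if state.emergency else "punishable"

s = VehicleState("car-1", at_junction=True, stopped=False, emergency=True)
v = monitor(s)
if v is not None:
    print(v["vehicle"], validate(v, s))   # car-1 excepted

The design choice the sketch mirrors is the separation of concerns: monitors only record, while validators apply the legal judgement, so either side can be swapped out independently.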

Biographies: Adam Wyner is Associate Professor of Computer Science in the Department of Computer Science, Swansea University. He has PhDs in Linguistics (Cornell University, 1994) and Computer Science (King's College London, 2008). For some 20 years, he has lectured and conducted research on how the concepts, methods, and implementations of Artificial Intelligence apply to legal information and processes. He has published on natural language processing (rule-based and machine learning), information extraction, ontologies, argumentation, controlled languages, case-based reasoning, reasoning in juries, policy consultations, the semantic web, and a machine-readable standard language for legal rules. His current areas of interest are value-based reasoning and reasoning with open-textured concepts.

Galileo Sartor is a PhD candidate in the Department of Computer Science, Swansea University. Prior to starting the PhD, Galileo worked on several projects in Artificial Intelligence and Law, including: INTERLEX – Advisory and Training System for Internet-related Private International Law (2018-2022); CROSSJUSTICE – Knowledge, Advisory and Capacity Building Information Tool for Criminal Procedural Rights in Judicial Cooperation (2019-2022); ADELE – Analysis of DEcision in LEgal cases (2022-2023); and FACILEX – Facilitating Mutual Recognition: Analytics and Capacity Building Information Legal Explainable Tool to Strengthen Cooperation in the Criminal Matter (2023-).

 

A Computational Model of Trustworthiness: Trust-Based Interactions Between Agents in Multi-Agent Systems

Tomasz Zurek, Tom van Engers, Basten Leefting, Britta Abbink Spaink

Abstract: In our research group working on normative systems, particularly in socio-technical contexts, we develop (normative) agent-based models for evaluating policies and as a basis for building distributed (normative) control components that can be used for controlling infrastructures and collaborative devices, amongst others. Whether and how interactions between actors (represented by agents) take place is heavily impacted by the (dis)trust between those actors. A number of computational trust models have been introduced in the literature. In previous work, we identified three major components of those computational trust models: competence, benevolence, and integrity. In this paper, we discuss further refinements of our previous work on trust. The model presented in the paper is illustrated by a number of small simulation experiments.
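As a hint of what such a model might look like, the sketch below aggregates the three components named in the abstract into a single trustworthiness score; the weights and the moving-average update rule are illustrative assumptions, not the authors’ model:

from dataclasses import dataclass

@dataclass
class TrustEstimate:
    competence: float = 0.5
    benevolence: float = 0.5
    integrity: float = 0.5
    weights: tuple = (0.4, 0.3, 0.3)   # illustrative; must sum to 1

    def score(self) -> float:
        # Overall trustworthiness as a weighted sum of the components.
        w_c, w_b, w_i = self.weights
        return w_c * self.competence + w_b * self.benevolence + w_i * self.integrity

    def observe(self, task_success: bool, helped: bool, kept_promise: bool,
                rate: float = 0.2) -> None:
        # Move each component toward the latest observation
        # (an exponential moving average).
        self.competence += rate * (float(task_success) - self.competence)
        self.benevolence += rate * (float(helped) - self.benevolence)
        self.integrity += rate * (float(kept_promise) - self.integrity)

t = TrustEstimate()
t.observe(task_success=True, helped=True, kept_promise=False)
print(round(t.score(), 3))   # 0.54: competent and benevolent, but broke a promise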

Biography: Tomasz Zurek works as a post-doctoral researcher in the Complex Cyber Infrastructure group at the Informatics Institute, University of Amsterdam, where he is part of the TRUST RPA project. He is also an Associate Fellow at the Asser Institute in The Hague and an Assistant Professor at the Institute of Computer Science, Maria Curie-Skłodowska University in Lublin, Poland (currently on sabbatical). His current scientific interests focus on the representation of legal knowledge and the modeling of legal reasoning and argumentation, especially the modeling of informal ways of reasoning.

 

Are Human Judgement and Control of Autonomous Weapons Systems Requirements of International Humanitarian Law?

Jeroen van den Boogaard 

Abstract: In discussions about the required level of human involvement in the development and use of autonomous weapons systems, some States argue that international law, in particular international humanitarian law (IHL), does not contain a separate requirement of human control. However, it has also been argued that human judgement is essential in order to comply with the IHL requirements of distinction, precautions, and proportionality. This presentation will focus on the question of whether human judgement and control over autonomous weapons systems are required by the rules of international humanitarian law.

Biography: Jeroen van den Boogaard is legal counsel at the International Law Department of the Ministry of Foreign Affairs of the Netherlands. He is also a lecturer in international humanitarian law at the University of Amsterdam. Previously, he worked as a military lawyer in the Royal Netherlands Army, as Head of the international humanitarian law department of the Netherlands Red Cross, and as an assistant professor in military law at the Netherlands Defence Academy. Jeroen holds an LL.M from Erasmus University Rotterdam and a PhD from the University of Amsterdam. His monograph, ‘Proportionality in International Humanitarian Law’, was published by Cambridge University Press in 2023.

 

Programme

9:30 – 10:00 Welcome coffee
10:00 – 10:15 Opening
10:15 – 12:15 Session I

10:15 – 10:45 Trustworthy AI for High-Risk Systems – Leon Kester
10:45 – 11:15 Mind the Gaps: Logical English, Prolog, and Multi-Agent Systems for Autonomous Vehicles – Galileo Sartor, Adam Wyner
11:15 – 11:45 A Computational Model of Trustworthiness: Trust-Based Interactions Between Agents in Multi-Agent Systems – Tomasz Zurek, Tom van Engers, Basten Leefting, Britta Abbink Spaink
11:45 – 12:15 Are Human Judgement and Control of Autonomous Weapons Systems Requirements of International Humanitarian Law? – Jeroen van den Boogaard

12:15 – 13:00 Lunch Break 
13:00 – 14:30 Session II 

Panel Discussion: Leon Kester, Adam Wyner, Jeroen van den Boogaard, Tomasz Zurek. Moderator: Tom van Engers

14:30 – 14:45 Closing remarks