[Public panel] How to design and regulate for effective human-machine interaction in military AI?
Published 14 February 2023
How to design and regulate for effective human-machine interaction in the context of military decision-support systems? That is the topic of a public panel organised by the Asser Institute’s trailblazing DILEMA research project on responsible military AI during the REAIM summit at the World Forum in The Hague. This first international summit on responsible AI in the military domain, held on 15-16 February, is organised by the Netherlands.
On Wednesday 15 February, the DILEMA panel discussion, moderated by senior researcher Bérénice Boutin, will focus on decision-support systems (DSS) and human-machine interaction (HMI). The topic builds on the DILEMA team’s ongoing research and interim findings. Register now for the summit and attend the session.
Background
International policy discussions around military applications of AI have heavily focused on autonomous weapon systems (AWS), while other existing and potential uses of AI in the military domain have received less attention. In particular, the increasing deployment of decision-support systems (DSS) raises a number of critical legal, ethical, and technical challenges.
DSS are AI-based tools aimed at supporting and facilitating military decision-making based on the collection and analysis of large amounts of data. Due to their characteristics and functions, as well as the circumstances and context in which they are deployed in military operations, DSS raise important questions regarding human-machine interaction (HMI).
As highlighted in the GGE LAWS Guiding Principles: “Human-machine interaction, which may take various forms and be implemented at various stages [...] should ensure that the potential use of [...] systems based on emerging technologies [...] is in compliance with applicable international law, in particular IHL”.
Key question
The DILEMA public panel during the REAIM summit on Wednesday 15 February seeks to develop insights into how to implement human-machine interaction in the context of decision-support systems. The key question guiding the discussion is: how to ensure an adequate “quality and extent of human-machine interaction” in relation to DSS, in particular with a view to compliance with international law?
In order to answer this question, a multi-disciplinary group of experts will discuss how the use of DSS can erode the exercise of human control and reconfigure the role and place of human decision-making and human agency. The session will then explore the implications of this discussion for the design and regulation of such systems. Indeed, a solid understanding of the relationship between humans and technology is necessary to set requirements for the engineering of DSS and to develop regulatory frameworks that effectively address the challenges of AI.
Speakers:
- Dr Bérénice Boutin (moderator), Senior Researcher in International Law (Asser Institute), DILEMA Project Leader
- Neil Davison, Senior Scientific & Policy Adviser, Arms & Conduct of Hostilities Unit, Legal Division, International Committee of the Red Cross (ICRC)
- Dr Jurriaan van Diggelen, Senior Researcher in Military Human-Machine Teaming (TNO), ELSA Lab Defence Project Leader
- Klaudia Klonowska, Researcher in International Law (Asser Institute), PhD Candidate (University of Amsterdam)
- Dr Gregor Pavlin, Senior Scientist & Program Manager (Thales Nederland B.V.)
- Taylor Woodcock (rapporteur), Researcher in International Law (Asser Institute), PhD Candidate (University of Amsterdam)
Register now to attend the session.
[Spring academy] Artificial intelligence and international law | 27-31 March 2023
The Asser Institute’s spring academy ‘Artificial intelligence and international law’ is an annual interdisciplinary programme offering in-depth perspectives on AI and international law. It addresses fundamental issues at the intersection of theory and practice. The programme covers the technical aspects of AI, the philosophy and ethics of AI, human rights in relation to AI, AI in international humanitarian law, AI and international responsibility, and international governance. The spring academy provides an ideal venue to help you understand these aspects of AI through a short interactive course, with plenty of room for discussion with your fellow multidisciplinary participants. Read more.
Designing International Law and Ethics into Military Artificial Intelligence (DILEMA)
The DILEMA project explores interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical, and technical approaches to safeguarding human agency over military AI. It analyses in particular the subtle ways in which AI can affect or reduce human agency, and seeks to ensure compliance with international law and accountability by design. Read more.
Read more
State responsibility in relation to military applications of artificial intelligence
Asser Institute senior researcher Dr Bérénice Boutin explores the conditions and modalities under which a state can incur responsibility in relation to violations of international law involving military applications of artificial intelligence (AI) technologies.
Retaining human responsibility in the development and use of autonomous weapon systems: On accountability for violations of international humanitarian law involving AWS
In a report for the Stockholm International Peace Research Institute (SIPRI), Asser Institute researcher Marta Bo (together with Laura Bruun and Vincent Boulanin) tackles how humans can be held responsible for violations of international humanitarian law involving autonomous weapons systems.
In a new episode of the On Air podcast, Asser Institute researcher Taylor Woodcock discusses today’s ‘overshadowing focus on autonomous weapon systems (AWS) in warfare’ and the consequent lack of attention to other military applications of artificial intelligence, such as the use of data-driven algorithms to assist with target recognition, decision-making aids, military tasking, and support for intelligence, surveillance and reconnaissance. According to Woodcock, we need to fully understand the effects of these technologies on human decision-making processes before such applications are deployed.