DILEMA-ELSA Lab Workshop on Rules of Engagement for Military AI
Published 19 September 2024

On 6-7 November 2024, the DILEMA Project and the ELSA Lab Defence Project are organising an Expert Workshop on the topic of ‘Rules of Engagement for Military AI Systems’. The workshop is taking place at the Asser Institute, and attendance is by invitation only.
Background
Rules of engagement (ROE) are one of the instruments with which military authorities ensure military capabilities are properly utilised on the battlefield. Given the flexible nature of ROE, they are also a prime instrument for ensuring lawful application of novel technologies and capabilities, alongside accompanying military instructions and directives. ROE are “[d]irectives to military forces, including individuals, that define the circumstances, conditions, degree, and manner in which force, or actions which might be construed as provocative, may be applied” (NATO Glossary of Terms and Definitions, AAP-06(2021)), and may include guidance regarding when force may be used and limitations on taking certain actions. ROE are a tool military authorities use to translate higher-level legal and policy concerns into simple and clear instructions for military forces.
ROE can be a particularly attractive tool for ensuring the lawful use of AI systems because of their straightforward and condensed nature. Modern AI systems are technically complex, and not all operators or commanders down the chain of command have the background needed to fully understand the associated risks, weaknesses and vulnerabilities. For such users, short and concise instructions can in practice be more helpful than lengthy technical documentation that has not been adapted to the operational context. ROE also offer the issuing authority greater control over circumstances in which it wants to rule out the use of such systems altogether, e.g. because the system is untested in those conditions and might therefore function indiscriminately (a legal consideration), because it is foreseeable that automation bias will set in under operational pressures if use of the tool is left unconstrained (an operational consideration), or because overreliance on the tool would negatively affect public opinion (a policy consideration). ROE are thereby one means of curtailing the variability and unpredictability in the behaviour of both AI systems and their users.
Aims and Questions
This event aims to explore the viability of ROE for military AI systems, and the considerations that could guide their drafting. The organisers will invite 15 experts in domains related to military AI to a closed workshop, where they will work collaboratively toward a hypothetical AI-ROE framework. This process will draw on the background and perspective of each invited participant to identify, amongst other things:
- Which aspects of military AI pose challenges to existing ROE frameworks and would justify the adoption of supplementary instruments;
- What elements, instructions or restrictions to include in an AI-ROE;
- What formulation of such elements would be most effective;
- Specificities relating to the process of drafting and agreeing on AI-ROE;
- What concerns and risks related to military AI cannot be mitigated through AI-ROE, and must therefore be addressed through other, complementary means.
Organisers
The workshop is convened by:
- Dr Bérénice Boutin (DILEMA Project, ELSA Lab Defence Project, Asser Institute)
- Dr Jonathan Kwik (ELSA Lab Defence Project, Asser Institute)
- Professor Marten Zwanenburg (DILEMA Project, University of Amsterdam, Netherlands Defence Academy)