[Public panel] Unpacking explainability in military artificial intelligence

Published 6 September 2024

On 9 September 2024, the Asser Institute’s DILEMA Project is presenting at the 2nd REAIM Summit in Seoul. In a breakout session titled ‘Beyond the black box: Unpacking explainability in military AI,’ our researchers will look at explainability from legal, ethical, technical, and operational perspectives.   

Context

The surge in military applications of AI has sparked many debates, especially regarding their regulation. Legal and ethical ambiguity arises when it is unclear how or why an AI system reaches a particular decision. Amid these debates, the concept of explainability has emerged, generally referring to the ability to describe a system’s internal mechanisms in a manner understandable to humans. While some argue that explainability is desirable for military AI, others are sceptical of its value.

This session will facilitate expert discussion of the role of explainability in military AI, with the aim of clarifying and operationalising the concept. Taking legal, ethical, technical, and operational perspectives, it will address the key issues surrounding explainability: what it entails, the challenges it raises, and its relevance for the governance of military AI.  

Breakout session   

Our panel will examine whether explainability is required for military applications of AI and, if so, how it can be operationalised throughout the lifecycle of AI systems, from design to deployment and after use. The panel will address the challenges of ensuring that military AI systems are not only technically explainable but also comprehensible and usable by military personnel during and after military operations. By unpacking the nuances of explainability, this session will contribute to a deeper understanding of its role in shaping global norms and regulatory frameworks for responsible military AI.  

The discussion will be moderated by Asser senior researcher Dr Marta Bo and will feature fellow Asser researcher Taylor Woodcock. Other speakers include:   

  • Dr Zena Assaad (Senior Lecturer, School of Engineering/Australian Army Research Centre (AARC) Fellow)   
  • LtCol Tom Hong (Deputy Staff Judge Advocate, HQ Eighth Army, USAG Humphreys, Korea)   
  • Dr Nathan Wood (Postdoctoral Fellow, Institute of Philosophy of the Czech Academy of Sciences)     

About REAIM

REAIM is a platform for global discussion among all stakeholders on ways to ensure the responsible military application of AI. The inaugural REAIM Summit was co-hosted by the Netherlands and the Republic of Korea (ROK) in The Hague in 2023. See the programme for this year’s summit. 

Marta will also be speaking at another session during the summit, titled ‘Bias in military AI and compliance with international humanitarian law’. She and several other speakers will explore why – and how – biases in AI should be addressed to ensure compliance with IHL. Find more information about the session here.   

About the DILEMA Project 

The DILEMA project explores interdisciplinary perspectives on military applications of artificial intelligence (AI), with a focus on legal, ethical, and technical approaches to safeguarding human agency over military AI. In particular, it analyses the subtle ways in which AI can affect or reduce human agency, and seeks to ensure compliance with international law and accountability by design. Read more  

Read more  

[Blog post] The ‘need’ for speed – The cost of unregulated AI decision-support systems to civilians  

In a blog post for Opinio Juris, Asser researcher Marta Bo criticises the lack of regulation of military uses of AI-enabled decision-support systems (AI-DSS). These systems are being adopted by militaries at an alarming speed and scale, prompting her to call for greater scrutiny of their use. Read more.  

[New publication] The influence of AI decision-support systems on legal assessments in military targeting  

In an article for Global Society’s special issue on algorithmic warfare, Taylor Woodcock explores how algorithmic decision-support systems (DSS) in warfare affect legal reasoning in military targeting. She concludes that these systems currently lack the capacity to make the contextual, qualitative and value-based legal judgments required by the proportionality standard under international humanitarian law (IHL). Read more.   


Dr Marta Bo LL.M.
Taylor Kate Woodcock LL.M.