[Blog post] ICC warrants for Netanyahu and Gallant could be a watershed moment for AI accountability in warfare
Published 6 December 2024

In a new blog post for OpinioJuris, legal expert Marta Bo (Asser Institute) argues that the recent ICC arrest warrants for Israeli officials Netanyahu and Gallant could offer a unique opportunity to set a precedent for addressing the complex ethical and legal implications of using artificial intelligence in warfare.
In recent years, accountability for uses of artificial intelligence (AI) in warfare, especially under international criminal law, has become a critical issue in governance initiatives and in scholarly and civil society debates. Initially, the debate centred predominantly on preventative measures and compliance with international humanitarian law, and more recently on responsible AI development and use, framing the issue as a future concern.
However, with AI technologies such as Lavender and Gospel reportedly being widely deployed by Israel in Gaza, it was only a matter of time before their use in targeting and the associated criminal responsibility came under the scrutiny of the International Criminal Court (ICC), writes senior researcher Marta Bo in the new blog post 'Netanyahu and Gallant ICC Arrest Warrants: Tackling Modern Warfare and Criminal Responsibility for AI-enabled War Crimes'.
According to the author, the recent arrest warrants issued by the ICC for Israeli officials Benjamin Netanyahu and Yoav Gallant, allegedly involved in war crimes committed in the occupied Palestinian territory, present a unique opportunity for the ICC to engage with the complexities of modern warfare, particularly the role of AI in targeting systems.
The principle of distinction
In the blog post, Marta Bo unpacks the potential implications of the ICC's arrest warrants for Netanyahu and Gallant on international humanitarian law (IHL) and international criminal law (ICL). The piece examines how the use of AI-enabled targeting systems, such as Lavender and Gospel, might contribute to violations of the principle of distinction and how this could lead to individual criminal responsibility.
The blog also delves into the challenges of attributing responsibility for crimes committed using AI decision-support systems (AI-DSS), considering factors like accuracy rates, human-machine interaction, and the potential for automation bias.
According to the legal expert, the ICC could leverage this case to clarify how responsibility should be attributed in war crimes involving AI, address the technical challenges posed by AI, and potentially shape the future trajectory of AI development in warfare. Marta Bo: 'While the ICC may not be the primary regulator of AI, its ex-post scrutiny could provide valuable insights into permissible and impermissible uses of AI in warfare, influencing the decisions of industry, policymakers, and society as a whole.'
Read the full blog post.
About Marta Bo
Dr Marta Bo is a senior researcher at the Asser Institute, an associate senior researcher at the Stockholm International Peace Research Institute (SIPRI) and a research fellow at the Graduate Institute for International and Development Studies (Geneva). Her research focuses on emerging military technologies, autonomous weapons systems and their compliance with international humanitarian law, criminal responsibility for war crimes committed with autonomous weapon systems, AI and criminal responsibility, automation biases and mens rea for crimes committed with autonomous or automated systems, and disarmament and criminalisation.
Read more
Bo, M. and V. Boulanin, 'Three lessons on the regulation of autonomous weapons systems to ensure accountability for violations of IHL', ICRC Humanitarian Law and Policy Blog, 2 March 2023.

Bo, M., 'Three Individual Criminal Responsibility Gaps with Autonomous Weapon Systems', OpinioJuris, 29 November 2022, available online at http://opiniojuris.org/2022/11/29/three-individual-criminal-responsibility-gaps-with-autonomous-weapon-systems/