[Blog post] AI-based targeting in Gaza: Asser researcher Klonowska refines the debate on military AI

Published 22 July 2024

Palestinians inspect the ruins of a building destroyed in Israeli airstrikes in Khan Younis, in the southern Gaza Strip, on October 8, 2023. | By Palestinian News & Information Agency (Wafa) in contract with APAimages, CC BY-SA 3.0

As the death toll in Gaza rises, the Israel Defense Forces' controversial use of artificial intelligence-enabled decision-support systems is sparking heavy debate among experts. In her recent blog post for the Lieber Institute, Klaudia Klonowska calls on experts to shift away from abstract discussions of emerging AI and to concentrate instead on existing applications and how they are already reshaping the realities of war.

Artificial intelligence-enabled decision-support systems (AI-DSS), such as the Gospel and Lavender currently used by the Israel Defense Forces (IDF), are technically distinct from autonomous weapons systems (AWS). AI-DSS process and filter information that is then presented to a human operator, who makes the final decision to authorise a target. Although this setup suggests a greater level of human involvement in decision-making than with AWS, Asser Institute researcher Klaudia Klonowska points to several expert opinions raising concerns about the shortened timeframe for deliberation and the cognitive limits of human operators using AI-DSS. High-pressure situations and biases can lead to overreliance on decisions made by AI, increasing the risk of targeting civilians.

Klonowska’s review, recently published as a blog post for the Lieber Institute, finds that two types of interventions have emerged in response to the IDF's reported use of AI-DSS: one focusing on human conduct and responsibilities under international humanitarian law (IHL) and international criminal law (ICL), and the other examining the technology itself and its impact on warfare. The latter group questions whether AI-DSS affect compliance with legal norms and contribute to civilian harm.  

The invisibility of engineers 
However, the role of engineers and developers in the design and production of AI-DSS has been largely neglected in these expert interventions. In the blog, Klonowska criticises the invisibility of engineers in these debates, as these individuals make crucial decisions that shape a system's performance and influence targeting decisions. For example, those designing AI-DSS determine who can be considered a 'Hamas operative' and which datasets are used to train the AI. Recognising their role is essential to understanding how AI technologies shape warfare.

Klonowska calls for grounding military studies in empirical research on the current impacts of AI on the battlefield, including a more critical look at the role of engineers and developers. She emphasises that military AI applications are no longer merely emerging; they are already a reality, reshaping contemporary warfare.

Read the full blog post here. 

Klaudia Klonowska 
Klaudia Klonowska has been a Ph.D. candidate in international law at the Asser Institute and the University of Amsterdam since September 2021. She studies the interactions of humans and AI-enabled decision-support systems in the military decision-making process, and the consequences for the exercise of (human) judgment under international humanitarian and human rights law. She is a member of the research project Designing International Law and Ethics into Military Artificial Intelligence (DILEMA).

Klaudia is part of the research strand 'Regulation in the public interest: Disruptive technologies in peace and security'. This research strand addresses regulations to safeguard and promote public interests. It focuses, in particular, on the development of the international regulatory framework for military applications of disruptive technologies and on the arms race in conventional and non-conventional weapons. Read more.

Interested in this topic? Read more: 

[New podcast] Gaza and the international legal community (?): South Africa v Israel at the ICJ 
In a new episode of JurisDictions, the Asser Institute's international law podcast hosted by researcher Dr Carl Lewis, four guests discuss the ICJ's South Africa v Israel case. The International Court of Justice (ICJ) has issued two orders of provisional measures in the Application of the Convention on the Prevention and Punishment of the Crime of Genocide in the Gaza Strip (South Africa v. Israel) case, following the further deterioration of the humanitarian situation in Gaza since 26 January 2024. But what are provisional measures? What does it mean to invoke a breach of an obligation owed to the 'international community'? What implications follow from these proceedings beyond the Peace Palace? And in what sense could it be argued that the ICJ may be denying reality? Listen now.

[Blog post] The ‘need’ for speed – The cost of unregulated AI-Decision Support Systems to civilians 
In their recently published blog piece for Opinio Juris, Marta Bo (Asser Institute) and Jessica Dorsey (Utrecht University) criticise the lack of regulation for military use of AI-enabled decision-support systems (AI-DSS). Read more. 

[New publication] Balancing military and humanitarian interests: Scaling the scope of autonomous weapon attacks 
In a new publication, researcher Jonathan Kwik proposes a scaling methodology to help characterise attacks by autonomous weapon systems (AWS). This could provide greater clarity on the legality of such attacks under international law, benefiting both civilians and belligerents. Read more.

[Analysis] Asser Institute researcher León Castellanos-Jankiewicz on Mexico’s request to intervene in the case against Israel for alleged genocide in Gaza
Spanish newspaper El País asked Asser Institute researcher León Castellanos-Jankiewicz to comment on Mexico’s request to intervene in the case against Israel at the International Court of Justice (ICJ) for alleged genocide in Gaza. Read more. 

