[New publication] Rhetoric and regulation: The (limits of) human/AI comparison in legal debates on military artificial intelligence

The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver ‘better’ speed, accuracy, and decision making in the conduct of war? As AI becomes increasingly embedded in military targeting processes, legal and ethical debates often ask who performs better: humans or machines. In a new publication, researchers Klaudia Klonowska and Taylor Kate Woodcock argue for the urgent need to critically examine the assumptions behind the human/AI comparison and its usefulness for the legal analysis of lawful targeting.

These days, legal and policy debates about military artificial intelligence are full of comparisons between AI and human performance. Proponents claim that AI's superior speed, accuracy, and certainty improve adherence to international humanitarian law (IHL). This supposed AI superiority feeds hopes of overcoming human flaws in warfare through progress and rationalisation. The narrative also helps states justify major investments in military AI.

In their new chapter, titled “Rhetoric and regulation: The (limits of) human/AI comparison in legal debates on military artificial intelligence”, researchers Klaudia Klonowska and Taylor Kate Woodcock (Asser Institute) unpack and critique the prevalence of comparisons between humans and AI systems, including in analyses of the fulfilment of legal obligations under IHL.

Binary framing

The authors challenge this often binary framing by highlighting misleading assumptions that neglect how the use of AI results in complex human-machine interactions that transform targeting practices. The researchers unpack what is meant by ‘better performance’, demonstrating how prevailing metrics for speed and accuracy can create misleading expectations around the use of AI, given the realities of warfare. They conclude that holistic yet granular attention must be paid to the landscape of human-machine interactions to understand how the use of AI affects compliance with targeting obligations grounded in IHL.

You can read the full chapter here.

 

About Klaudia Klonowska

Klaudia Klonowska is a PhD researcher at the Asser Institute and the University of Amsterdam. She is also Managing Director of the U.S. West Point Military Academy’s Manual on Artificial Intelligence (AI). Her research focuses on the techno-legal decisions that shape the design, development, and use of AI decision-support systems by militaries, and their implications under international humanitarian law.

 

About Taylor Kate Woodcock

Taylor Kate Woodcock is a PhD researcher at the Asser Institute and the University of Amsterdam. Her research examines the implications of the development and use of military applications of artificial intelligence (AI) for international law, with a specific emphasis on international humanitarian law. She takes a practice-oriented approach to exploring the relationship between international law and human agency, considering how interactions between data-driven algorithms and military practitioners shape this relationship.

Read more


[New publication] “Digital yes-men: How to deal with sycophantic military AI?”

In a new publication, researcher Jonathan Kwik (Asser Institute) examines sycophantic military AI assistants. He explores the reasons behind ‘bootlicking’ behaviour in AI systems, highlights the significant battlefield dangers it presents, and proposes a two-part strategy comprising improved design and enhanced training to mitigate these risks for military forces. Read more.

 

[Op-ed] Scholars warn of silent revolution in warfare driven by AI-powered decision systems

Researchers Marta Bo (Asser Institute) and Jessica Dorsey (Utrecht University) have published a critical op-ed in the Dutch newspaper NRC Handelsblad, shedding light on a silent revolution in modern warfare. Their piece, titled "Er is een stille revolutie gaande in de manier waarop strijdkrachten beslissingen nemen in oorlogstijd" ("There is a silent revolution underway in how armed forces make decisions in wartime"), highlights the rapidly increasing use of AI-based decision support systems (AI-DSS) in military operations. Read more.

 

[Interview] Google skips due diligence for cloud services to Israel

A new story published in The Intercept reveals that tech company Google had serious concerns about providing state-of-the-art cloud and machine-learning services to Israel. The piece quotes Asser Institute researcher León Castellanos-Jankiewicz weighing in on Google’s contractual inability to conduct proper risk assessments. Read more.