As AI becomes increasingly integrated into military operations, from autonomous weapons to decision-support systems, armed forces face heightened uncertainty, unpredictable risks, and system vulnerabilities.
AI systems are often used in complex situations on the battlefield, where many risks and vulnerabilities only become apparent after deployment. Even AI operators acting in good faith may face situations in which unforeseeable civilian harm occurs, despite rigorous review and careful deployment.
Unintended civilian casualties are often dismissed as unavoidable ‘accidents of war’. Kwik's new chapter, ‘Iterative assessment for military artificial intelligence (AI) systems’, challenges this assumption. While acknowledging that some AI failures may be initially unforeseeable, he argues that their recurrence can be significantly reduced through systematic learning and adaptation.
Structured approach
In his chapter, Kwik introduces an ‘Iterative Assessment’ framework comprising two key mechanisms: Iterative Review and Iterative Assessment in Deployment. This structured approach enables military decision-makers to systematically capture insights from real-world AI performance, identify previously unknown risks, and update their operational procedures accordingly.
The framework is grounded in the principles of international humanitarian law, recognising that while initial failures may be unavoidable, military forces still have a duty to prevent repeat harm once risks become known and foreseeable.
Rather than treating repeated incidents as inevitable, militaries could use this iterative approach to turn post-deployment evaluation into a best practice for managing AI-induced uncertainty and minimising civilian harm from the use of military AI.
Read the full chapter
About Jonathan Kwik
Dr Jonathan Kwik is a researcher in international law at the Asser Institute. He specialises in techno-legal research on the military use of artificial intelligence (AI) related to weapons, the conduct of hostilities, and operational decision-making. He obtained his doctorate (cum laude) from the University of Amsterdam on the lawful use of AI-embedded weapons at the operational level. He recently published the book Lawfully Using Autonomous Weapon Technologies.

Jonathan is part of the research strand ‘Regulation in the public interest: Disruptive technologies in peace and security’, which addresses regulation to safeguard and promote public interests. The strand focuses on the development of the international regulatory framework for the military applications of disruptive technologies and on the arms race in conventional and non-conventional weapons, with the public interest of peace and security as its prime conceptual framework.