[New publication] Failures to stop autonomous weapon systems as a war crime?
Published 17 October 2023
In a new article, researcher Marta Bo explores the issue of holding people accountable for the actions of autonomous weapon systems during wartime. Could individuals be held responsible if they fail to prevent autonomous weapon systems from carrying out illegal attacks in a conflict?
In the article “Criminal responsibility by omission for failures to stop autonomous weapon systems” for the Journal of International Criminal Justice, researcher Marta Bo asks whether it is a war crime to not stop an autonomous weapon system (AWS) that is launching an unlawful attack.
Autonomous weapon systems are designed to operate autonomously, meaning that they can, based on pre-programmed target profiles and partly triggered by their environment of use, make their own determinations about what, when and where to attack. Now imagine that an autonomous weapon system was pre-programmed to attack military objectives (say, a tank) but ends up targeting a bus full of civilians instead. If humans cannot control the AWS, then no one could be held accountable for the attack, which would lead to impunity for war crimes.
So, could AWS users, including operators and military commanders, be held criminally responsible for violating the prohibition on indiscriminate attacks because they did not intervene in and suspend the AWS-driven attack? And if so, under which conditions would a failure to stop an AWS-driven attack amount to a war crime?
“Commission by omission”
The author tackles this question by establishing how the doctrine of “commission by omission”, a legal construction originating from national criminal jurisdictions, can be applied on the basis of the grave breaches regime in the First Additional Protocol to the Geneva Conventions and the Rome Statute of the International Criminal Court.
The doctrine of commission by omission means that you can be held responsible for a crime even if you did not directly commit it, but you had a duty to stop it and failed to do so. For example, if you see someone drowning and could have easily saved them but chose not to, you could be held responsible for the person’s death.
In deconstructing the status of commission by omission under the legal frameworks of the Geneva Conventions and the Rome Statute, Bo analyses whether the substantive conditions of commission by omission, namely, the legal duty to act and the capacity to act, are met. Are AWS users under individual legal duties to act under international humanitarian law? Under which conditions could the dereliction of such obligations lead to their criminal responsibility?
Human Control
According to Bo, the fundamental justification for assigning criminal responsibility by omission lies in the concept of ‘control’. Criminal responsibility for failures to stop AWS resulting in war crimes is based on the assumption that AWS users had sufficient control over the systems to suspend their determinations.
Consequently, human control, in the form of the ability to supervise, intervene in the operation of, and halt an AWS, becomes a necessary precondition for ensuring accountability in certain foreseeable situations of unlawful attacks with AWS.
Examining how commission by omission applies to war crimes of unlawful attacks provides critical insights into the debate on ‘human control’. The author concludes by advocating for the adoption of an additional treaty obligation to enforce human control over AWS, aiming to uphold accountability in the face of potential unlawful attacks.
Read the full article.
Read more
Three individual criminal responsibility gaps with autonomous weapon systems
To hold an individual criminally responsible for committing an unlawful attack, it must be established that they launched the attack with some form of intent or knowledge. However, what if an attack targeting civilian objects and military objectives indiscriminately involved an autonomous weapon system (AWS)? Read more.
[Research paper] In or out of control? Criminal responsibility of programmers of autonomous vehicles and autonomous weapon systems
In a new paper, Asser Institute researcher Marta Bo examines when programmers may be held criminally responsible for harm caused by self-driving cars and autonomous weapons. Read more.
About Marta Bo
Dr. Marta Bo is a senior researcher at the Asser Institute, associate senior researcher at the Stockholm International Peace Research Institute (SIPRI) and research fellow at the Graduate Institute for International and Development Studies (Geneva). Marta is part of the Asser Institute research strands ‘Regulation in the public interest: Disruptive technologies in peace and security’ and ‘In the public interest: Accountability of the state and the prosecution of crimes’.