Experts urge immediate safeguards on military use of AI decision-support systems

Published 28 April 2025


A group of leading legal scholars has called on the international community to urgently regulate the use of artificial intelligence-based decision-support systems (AI-DSS) in military operations, warning that current frameworks fail to address the profound risks these systems pose to international humanitarian law and human judgement. Unlike autonomous weapon systems, which have been the focus of UN discussions for over a decade, AI-DSS remain largely unregulated and under-scrutinised.

AI-DSS can be described as tools that use artificial intelligence techniques to collect and analyse data, provide insights into the operational environment, and generate recommendations for military personnel, particularly in the context of targeting operations. For example, during the development and prioritisation of targets, AI-DSS can help evaluate strike options, assess proportionality, and offer suggestions on mission execution. These systems are now increasingly integrated across multiple phases of the joint targeting cycle, a core military process used to identify, evaluate, and engage targets in line with legal and operational standards.

In a formal submission to the United Nations, experts Marta Bo (Asser Institute), Ingvild Bode (University of Southern Denmark), Jessica Dorsey (Utrecht University), and Elke Schwarz (Queen Mary University of London) recommend three immediate steps:

  1. Reassert human legal and cognitive reasoning in military decision-making;
  2. Recognise the subtle but significant ways AI-DSS influence human judgement; and
  3. Bolster global discussions on AI-DSS through forums such as the UN General Assembly’s First Committee and the Global Commission on the Responsible Use of AI in the Military Domain.

Their intervention, made in response to a request from the UN Secretary-General under General Assembly resolution 79/239, highlights how these systems—though distinct from autonomous weapon systems—are quietly transforming how decisions in armed conflict are made.

“Framing AI-DSS as mere tools obscures the depth of their influence,” said Marta Bo, senior researcher at the Asser Institute and one of the signatories. “AI-DSS, which are currently being used in the Gaza conflict, for instance, don’t just assist human decisions; they shape them, often in ways that limit oversight and accountability.”

The scholars argue that such systems can constrain human oversight and increase reliance on algorithmic outputs, raising concerns about whether users can continue to fulfil legal obligations to protect civilians and comply with the principles of distinction and proportionality.

The authors stress that without a dedicated regulatory focus, the growing reliance on AI-DSS risks outpacing ethical and legal safeguards. To preserve meaningful human judgement in war, the world must treat AI-DSS not as neutral aids, but as technologies that demand urgent attention, transparency, and governance.

Read the full text.

Interested in AI-driven military technologies?

Join our short course (22-23 May 2025) to develop a deep understanding of military applications of AI. The course aims to contribute to governance efforts and other initiatives around AI-driven military technologies, and features lectures from experts and academics in the field. Read the full programme.

