[Interview] Klaudia Klonowska: “The calls for ‘human judgement’ in AI warfare are not matched by developments in military defence.”
Published 22 April 2025 | By Irem Cakmak

@Picryl - US military defense operator.
As artificial intelligence (AI) continues to shape the future of military operations, Asser Institute PhD researcher Klaudia Klonowska discusses how machine learning is rapidly transforming military decision-making, particularly through AI-enabled decision-support systems (AI-DSS). “Despite AI’s portrayal in the media as a highly sophisticated technology, machine learning is still evolving - and so is our understanding of how to manage and govern it. Legal scholars need to engage with developers to make sure these systems reflect legal principles, especially in high-stakes conflict environments.” An interview.
What is the main research project you are working on right now?
“I’m currently finalising my PhD on how AI-enabled systems affect military decision-making - especially AI-DSS. These are tools that help human operators make sense of large volumes of data during military operations. Recently, we have seen AI-DSS deployed on the battlefield, with the Israel Defense Forces (IDF) using systems like Lavender and Gospel, and Palantir’s AIP platform used by Ukrainian forces.
A key focus of my research is how these systems change the way decisions are made. What kind of information do they prioritise? How is that information framed? And how does this influence the choices commanders make? It’s not just about speed or accuracy - AI systems filter and structure knowledge in a way that shapes human judgement.
When I started on my thesis, I was looking into autonomous weapons systems (AWS), but I shifted toward AI-DSS after joining the Asser Institute’s DILEMA project. Unlike autonomous weapons systems, AI-DSS don’t act independently; they support human decision-makers. But that doesn’t automatically mean there’s more accountability. These systems can still heavily influence outcomes - especially in high-pressure situations where people are overloaded and rely more on machine input. The line between AWS and AI-DSS is also getting blurry. Some tools are human-operated but can switch to autonomous modes if communications are jammed. So we need to look at the whole spectrum of AI applications in military decision-making, not just the extremes.”
How does your legal background shape your perspective on AI in defence?
“I see my research as a bridge between the legal and the technical fields. That’s why this research is exciting: legal and technical communities often use the same terms - transparency, trust, explainability - but they mean very different things.
Take the word ‘confidence,’ for example. In international law, confidence might mean near-certainty that a target is lawful. In machine learning, a ‘confidence score’ is a statistical estimate that an object is, say, a tank or a building. But this doesn’t always reflect how reliable the system actually is. Legal actors might interpret these scores as clear-cut when in fact they’re not. We need to be aware of these disconnects. Despite AI’s portrayal in the media as a highly sophisticated technology, machine learning is still evolving—and so is our understanding of how to manage and govern it. Legal scholars need to engage with developers to make sure these systems reflect democratic values and legal principles, especially in sensitive areas like defence.”
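To illustrate the disconnect Klonowska describes, here is a minimal, hypothetical sketch in Python of how a classifier’s ‘confidence score’ is typically computed with a softmax function. The class labels and logit values are invented for illustration and are not drawn from any system mentioned in this interview; the point is only that a high score reflects a comparison among the classes the model already knows, not a legal-style near-certainty.

```python
import numpy as np

def softmax(logits):
    """Convert raw model outputs (logits) into 'confidence scores' that sum to 1."""
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

labels = ["tank", "building", "truck"]

# Hypothetical logits for an object the model has effectively never seen
# (say, an ambulance). The model has no 'ambulance' class, so it must
# distribute its probability mass over the classes it does know.
logits_unfamiliar_object = np.array([4.1, 0.3, 0.2])

for label, score in zip(labels, softmax(logits_unfamiliar_object)):
    print(f"{label}: {score:.2f}")

# 'tank' receives a confidence score of roughly 0.96, even though the input
# is not a tank at all. The score expresses the model's internal comparison
# of known classes, not how reliable the classification actually is.
```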
What motivated you to specialise in international law and military AI regulations?
“My fascination with technology has driven my career. I completed a master’s in international law and technology at Vrije Universiteit Amsterdam, where I studied everything from blockchain to gene editing. I then worked at Amnesty International, where I researched the export of surveillance technologies from Europe to China. I’m talking about systems like intrusive malware that allows states to breach your phone’s security and read all your messages. Our work on this project resulted in the integration of due diligence obligations for exporting states at the European level, which was very influential.
What draws me to technology is that it is rarely simply good or bad, harmful or helpful, lawful or unlawful. Technologies often present both opportunities and challenges, and AI, in particular, reshapes how we create knowledge. AI offers new possibilities, but it also raises difficult questions about the nature of knowledge and how we legitimise it. In the military, for instance, AI can be transformative in targeting and decision-making. But it also introduces risks, such as the potential for errors with catastrophic consequences. Similarly, in everyday life, social media is a powerful tool for mobilisation and collective action, but it can also cause harm by isolating individuals or spreading misinformation. So, technology is neither inherently good nor bad; it’s both, and understanding this duality is key.”
How is AI shaping modern military operations? What are the key legal and ethical challenges it presents?
“The impact of AI on modern military operations is still emerging, because these systems are not yet fully deployed, but there are some impacts we can anticipate. The increased speed of operations is one of them. This is precisely the hope behind AI systems for the military - that AI will provide unprecedented speed in reacting to adversarial actions. This could be achieved either by integrating AI into weapons, for example into anti-missile defences, or by automating and speeding up the analysis of information. We now talk of ‘machine speed’ in terms of the compressed time in which information is analysed and acted upon.
This is where the international calls for ‘human judgement’ in the use of AI-enabled systems are not matched by current developments in the military defence sector. If a State invests money into AI systems that are meant to speed up operations, it is then unlikely to implement barriers or procedures that will slow down that process. Rather, we see that States are working on ‘trust calibration’ projects that are meant to minimise user rejection of systems and ensure that AI systems are in fact adopted by decision-makers.
The purpose is to increase the readiness of militaries to respond to fast-paced threats. Hence, greater speed – and, relatedly, greater casualties – is likely to be one of the impacts of AI technologies. But it will be important to observe how the initial stages of AI implementation are handled. We know that technology is often not as easy to implement as expected. It fails and breaks, and requires much more attention and human effort to fix than is anticipated. In those early years of adaptation, AI is likely to cause more delays and mistakes, which can be costly both in financial terms and in terms of human lives.
However, most impacts of AI systems are going to be invisible. What is important is how targeting decisions are reached, which is not easily captured by photographs from conflict zones. For example, the problem of algorithmic opacity is one that requires attention. One may say that it is not necessary to know how an AI system provides a recommendation, as long as the overall system has been checked and is reliable. That sounds simple, but such an approach overlooks that AI systems are ‘learning’, which means that they evolve and are trained on new datasets – and that the environment in which they are used is also changing, with new adversarial tactics emerging or new weapons being introduced. Not to mention that adversaries can also intentionally fool or interfere with the AI’s performance. AI systems are not always able to reliably capture these changes and require humans to update them in order to adjust their predictive accuracy. So, even if an AI system has performed well in the past, that does not mean it poses no risks in the future. This is why understanding how AI systems reach predictions is an important element of being able to foresee when they might fail and to react appropriately.”
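As a toy illustration of the kind of change described above, the following Python sketch (using scikit-learn and entirely synthetic data) trains a simple classifier on one data distribution and then evaluates it after that distribution shifts. The scenario and numbers are hypothetical; they only show how good past performance can quietly stop being a guarantee once the environment changes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: a single feature (e.g. a sensor reading)
# separates the two classes cleanly at the time the model is built.
X_train = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(4.0, 1.0, 500)]).reshape(-1, 1)
y_train = np.concatenate([np.zeros(500), np.ones(500)])

model = LogisticRegression().fit(X_train, y_train)
print("Accuracy at deployment time:", model.score(X_train, y_train))

# Later, the environment changes (new tactics, new equipment, deliberate
# deception): class 1 now produces readings that overlap with class 0.
X_shifted = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.0, 1.0, 500)]).reshape(-1, 1)
y_shifted = np.concatenate([np.zeros(500), np.ones(500)])
print("Accuracy after the environment shifts:", model.score(X_shifted, y_shifted))

# The model itself is unchanged and its past performance record is intact,
# yet its predictions are now far less reliable until a human retrains or
# recalibrates it on data that reflects the new environment.
```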
Why is uncertainty in AI a problem, particularly in the military context? And is human uncertainty also a concern?
“There are two main reasons why AI uncertainty is a problem in military contexts. First, AI operates on a large scale. Unlike human errors, which are more isolated and easier to identify, a mistake in one algorithm can propagate through networks of connected systems. These systems can make rapid decisions that affect multiple other systems and individuals, amplifying the scale and impact of errors. We call this the ‘cascading effect’. Identifying these errors is challenging and often only possible after a mistake (sometimes with large-scale consequences) has been made.
Secondly, AI systems are often mistaken for being objective. While humans are aware of their biases and errors, AI outputs are not always questioned, which creates a false sense of reliability. This is compounded by ‘automation bias’, where humans trust AI outputs without scrutinising them. In the military context, this lack of questioning is particularly concerning, as it can lead to grave consequences if mistakes go unnoticed.
However, looking at AI uncertainty as if it were separate from human uncertainty is inadequate. Developers’ uncertainty about how to program systems, which models to choose, which data to include or exclude in training, and how best to design a user interface all influence the type of uncertainties that are then attributed to the technology. Users of AI systems may also experience or introduce uncertainties. For example, a user may be unsure whether the system’s recommendation is correct and nevertheless flag it as flawed. This would lead to human-induced errors that then propagate through the system and may lead to biases. So, in other words, AI systems are not uncertain because they need “fixing” to provide more accurate answers. They are inherently uncertain because the way we as humans produce knowledge and understand our world is uncertain, messy, and incomplete. The two are intertwined in ways that can’t be separated.
Ultimately, the uncertainty experienced when using AI systems can have legal implications. The difficulty lies in tracing the origins of mistakes, or even in foreseeing whether they will occur again in the future, and under what circumstances. Given that AI systems require substantial human input, it is challenging to pinpoint where errors originate and who should be held accountable.”
What are you hoping to achieve with your research? How has being at the Asser Institute influenced your PhD?
“Through my PhD thesis, I aim to provide a more nuanced understanding of AI technology, particularly in the military context. There are numerous promises being made about AI, especially in military and corporate sectors, and I hope my research helps to demystify some of these claims. While I may not provide definitive solutions, I hope to offer important considerations for techno-legal decisions regarding the design, development, and use of AI systems. I use the term techno-legal decisions to highlight that the technical choices in the design and development of AI systems have legal consequences. For example, the choice of the appropriate AI model to develop a certain AI application has implications for the level of interpretability of the model (i.e., the extent to which the predictive logic of the system is understood by a human) and the expected level of accuracy. For certain applications, the law may be more concerned with accuracy whereas in other contexts it may be more important to ensure that the system is interpretable and traceable in order to ensure accountability. This is just a very brief and general overview, but there are numerous technical choices in the design and development of AI systems that later on have critical legal implications and may either impede or foster legal compliance.”
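As a simplified, hypothetical illustration of such a techno-legal choice, the Python sketch below (using scikit-learn and a stand-in public dataset) contrasts a shallow decision tree, whose full decision logic can be printed and audited, with a large ensemble model that often scores higher but offers no single human-readable rule for an individual prediction. The models, dataset, and parameters are illustrative only and are not taken from any military system.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

# A stand-in dataset; any tabular classification data would do.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Choice 1: a shallow decision tree. Its entire predictive logic can be
# printed and reviewed line by line, supporting traceability and accountability.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print("Decision tree accuracy:", round(tree.score(X_test, y_test), 3))
print(export_text(tree))

# Choice 2: an ensemble of hundreds of trees. It will often score somewhat
# higher, but no single human-readable rule explains any one prediction.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("Random forest accuracy:", round(forest.score(X_test, y_test), 3))
```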
“The Asser Institute has been an excellent environment for conducting my PhD. The research community here is dynamic, and the opportunities for collaboration are immense. For example, I have organised PhD methodology workshops, writing days, and conferences with experts, which allowed me to bridge the academic and professional worlds. The Institute’s connections with practitioners and researchers have provided invaluable insights and have contributed significantly to my work.”
What has been the most rewarding or challenging moment in your career?
“One of the most rewarding moments in my career was being invited recently to present my research to the senior leadership of the Colombian Navy, which was an opportunity to directly influence the thinking of military officials shaping AI policy. The experience was immensely valuable, not just in sharing my findings, but also in hearing about the challenges the Navy faces in integrating AI into their operations. They are still in the early stages of considering how to go about it and are responsibly weighing their options. Should we rely on private companies to develop AI, yes or no? What regulations restrict our use of data to train AI systems? How can we train our personnel to use AI responsibly? What are the risks we should be aware of? This dialogue was very rewarding.
I have also recently joined a project that is developing the “Manual on International Law Applicable to Artificial Intelligence in Warfare”. As managing director, I will be facilitating the drafting process and overseeing the timeline of the various project objectives. This is a great opportunity for me to use the knowledge gained during the PhD trajectory and to develop more concrete recommendations for the lawful use of AI in military operations. At the same time, developing a Manual on this topic is an immense challenge. Not only because the technology is changing so fast - we hear about “AI revolutions” every few days - but also because of the wide variety of uses of AI in the military context. While some uses are more concerning than others, a balanced approach that considers all of them is necessary. The Manual is meant to address, in a more nuanced manner, the legal concerns that stem from existing legal frameworks and to illuminate where the application of the law to AI systems remains unclear.”
What advice would you give to young legal scholars interested in AI and international law?
“It is both an exciting and challenging time to be a legal scholar. Academic research is increasingly specialised and narrow, but my advice would be to ‘wear many hats’. In this particular field, it is essential to understand not just the legal aspects but also the technology. Engaging with scholars from different disciplines is crucial for gaining a more holistic understanding of the issues at play. During my PhD, I have had to learn about AI systems, psychology, and international relations. So, embrace the complexity of problems and seek solutions from diverse fields to navigate this ever-evolving space.”
Interested in AI-driven military technologies?
Join our short course (22 – 23 May 2025) to develop a deep understanding of military applications of AI. This short course aims to contribute to the progress of governance efforts and other initiatives around AI-driven military technologies. The programme features lectures from experts and academics in the field. Read the full programme.
About Klaudia Klonowska
Klaudia Klonowska is a PhD researcher at the Asser Institute and the University of Amsterdam, focusing on AI decision-support systems in the military and their implications under international humanitarian law. Her research, part of the DILEMA project, includes empirical studies with military personnel, lawyers, and engineers.
Read more
[New Article] The Techno-Legal Co-Production of Terrorist Suspects
Lists once used to track terrorist suspects are now powered by algorithms that promise speed and precision. This article explores how these evolving tools shape legal definitions and enable interventions across domestic and international contexts. The research urges policy-makers and civil society to examine how digital surveillance may be shifting power and accountability. Read more.
[Blog post] AI-based targeting in Gaza: Asser researcher Klonowska refines the debate on military AI
As the death toll in Gaza rises, the Israel Defense Forces’ controversial use of artificial intelligence-enabled decision-support systems sparks heavy debate amongst experts. In her blog post, Klaudia Klonowska calls on experts to shift away from abstract discussions of emerging AI and instead concentrate on existing applications and how they are currently reshaping the realities of war. Read more.
[Policy brief] Can Rules of Engagement and military directives effectively control military AI?
The growing integration of AI in military operations has raised pressing governance challenges. This brief provides insights from a recent expert workshop, led by researcher Jonathan Kwik and colleagues, which underscore the need for strategic, flexible, and context-specific AI guidelines. Read more.
