[Interview] PhD researcher Taylor Woodcock: ‘Does reliance on AI in the military alter how practitioners exercise human agency?’
Published 27 February 2023 | By Naomi Lamaury
PhD researcher Taylor Woodcock conducts research in the context of the DILEMA project on designing international law and ethics into military artificial intelligence (AI). She focuses on the legal frameworks (international humanitarian law and international human rights law) that govern armed conflicts. During the Asser Institute’s Spring academy on artificial intelligence and international law, Taylor will lecture on AI and international humanitarian law (IHL). ‘Accepting that IHL presupposes the exercise of human agency, key questions are whether reliance on AI systems alters human decision-making and whether this places military personnel in a position to fulfil their obligations.’ An interview.
What is the focus of your research?
“In my research project, I begin from an understanding of international law as a practice, rather than merely as a body of rules. Using practice theory to conceptualise international law reveals how it involves interpretation, argumentation and legal decision-making, which also occur outside of the courtroom. International law is not just something that legal advocates do, but also something that happens in its everyday use. From this understanding of the law, I also focus on the role of people and how they exercise agency in the practice of law. It is from there that I begin to assess what sort of impact emerging technologies in the military sphere can have on international law as a practice, and the effects this has on how humans exercise agency. In addition to considering the nature of international law, I also look at how the content of specific duties in the laws of war requires deliberative decision-making processes. My overarching objective is to gain an understanding of whether we can design technologies in a way that accounts for international law and for how people exercise agency in legal decision-making in the military sphere.”
What are some of the most commonly used AI-dependent technologies that we see in military operations today?
“This is a really good question, because I think one of the objectives of the DILEMA project that I work on as a PhD researcher is to shift the discussion from autonomous weapons towards software-based AI. To date, there have been extensive discussions on autonomous weapon systems, but there should also be a greater focus on software-based applications of AI, which can have significant implications in the military sphere.
There are a number of potential applications for these software-based AI technologies. I focus on machine learning algorithms, for instance, where we see many advances today. One example is computer vision, the automated analysis of visual data, which can be used to recognise certain objects in video imagery. This has potential for intelligence, surveillance and target acquisition. Natural language processing is also making huge leaps. This technology is currently all over the news with ChatGPT, the AI chatbot that is capable of understanding natural language and producing in-depth writing that is easily understood by humans.
The potential applications of artificial intelligence in the military domain are extremely broad, ranging from decision-support systems and decision-making systems to tasking systems. The interesting thing about recent developments is that it is not just single, discrete applications of AI that can be implemented, but also broader systems in which various applications are integrated. So it is not just one sort of military application that we are looking at; we see networks where different algorithms are used in different ways and feed into one another.”
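As a rough illustration of the kind of computer vision application described above, the sketch below runs an off-the-shelf object detector from the torchvision library over a single video frame and prints labels with confidence scores. The file name, threshold and model choice are assumptions made for this example; it is not drawn from the DILEMA project or from any military system.

```python
# Illustrative sketch only: an off-the-shelf object detector applied to one
# video frame, returning class labels with statistical confidence scores.
# "frame.jpg" and the 0.5 threshold are hypothetical choices for this example.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a pretrained Faster R-CNN detector (trained on everyday COCO objects,
# not military imagery - this is purely to show the shape of the output).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("frame.jpg"))        # one frame of video imagery
with torch.no_grad():
    detections = model([frame])[0]                # dict with boxes, labels, scores

# The system does not "recognise" objects in a human sense; it assigns each
# candidate region a probability-like score, which a human must still interpret.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.5:
        print(f"class id {label.item()} detected with confidence {score:.2f}")
```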
What are the challenges arising from these emerging technologies in military operations?
1) Overlooking software-based AI in debates
“There has been significant debate within the Group of Governmental Experts in Geneva about the introduction of a new international instrument, but as it relates only to autonomous weapons, the debate is too limited in scope. Though the prohibition of autonomous weapons is an important issue for States to debate, these discussions have tended to distract from other issues that may in the end be more urgent. There are significantly fewer obstacles to software-based AI applications being integrated into military systems compared to autonomous weapons, so militaries can develop and adopt these systems with far less scrutiny than is warranted. Shifting our focus and broadening the debate to the use of these software-based AI applications is one way to overcome the limitations of the discussions around international regulation so far.”
2) Human agency and qualitative decision-making
“I try to look at the various ways in which certain international humanitarian and human rights law norms are predicated on the exercise of human agency. IHL standards entail deliberative and reflective reasoning, meaning that they relate not to the effects of attacks, but rather to the decision-making processes that lead to attacks. And there is a necessary human agency that occurs within these decision-making processes. Moreover, most IHL principles have a qualitative character in that they require evaluative reasoning and decision-making. They cannot be based on pre-given calculations; they are intersubjective within reasonable bounds - in this case, the so-called ‘reasonable commander standard’. So, what was reasonable in the circumstances, with the information that the commander had at the time? If we accept that IHL presupposes the exercise of human agency, key questions are whether reliance on AI systems alters human decision-making processes and, crucially, whether this places military personnel in a position to fulfil legal obligations. Can machine learning algorithms really support human qualitative decision-making? Is this really the same as contextually driven decision-making? I think this is unclear and worth exploring.”
3) Comparison of humans and AI systems
“In the military sphere, there is a lot of discussion about whether AI algorithms can perform as well as humans. However, this framing is seriously flawed. It is humans who bear obligations under IHL, and commanders will always retain responsibility for ensuring they meet the legal requirements in the conduct of hostilities. Framing the issue around whether AI performs as well as humans raises multiple problems. One is that AI systems generate output in a substantively different way than humans make decisions: machine learning generates probabilities of likely outcomes based on statistical analysis; it is not doing what humans do. Another is that the law regulates not only the effects of force in armed conflict, but also decision-making processes. This relates to the reasonable commander standard applied to certain IHL standards that I mentioned before. So trying to make the comparison between humans and AI systems is, in my view, asking the wrong questions.”
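To make the contrast concrete, here is a minimal sketch, using synthetic data and the scikit-learn library, of what ‘generating probabilities of likely outcomes based on statistical analysis’ looks like in code: the model returns a probability and a pre-set threshold converts it into an output. Every name and number in it is invented for illustration; nothing in the pipeline resembles the deliberative, context-sensitive reasoning that the reasonable commander standard presupposes.

```python
# Minimal illustration with synthetic data: a statistical model outputs a
# probability, and a fixed threshold converts it into an output. No step in
# this pipeline resembles deliberative, context-sensitive legal reasoning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # 200 synthetic "observations"
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic labels

model = LogisticRegression().fit(X, y)

new_case = rng.normal(size=(1, 3))
prob = model.predict_proba(new_case)[0, 1]       # probability of the "positive" class
decision = prob > 0.9                            # an arbitrary, pre-set threshold

print(f"estimated probability: {prob:.2f}, threshold output: {decision}")
```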
4) Cognitive bias
“There are also challenges in how humans rely on these systems, with the potential for cognitive biases to arise. Research into how humans make decisions when relying on technology shows that there is a risk that technology is seen as an objective source of information and may be over-trusted. Consequently, people may not be willing or able to identify things that challenge what the technology is telling them. So you can see the challenges that arise at different levels, and this reflects why it is important to understand the dynamic relationship between humans and algorithmic systems in military practices, where the risks are great.”
5) Black box
“Finally, there is also the issue of the lack of transparency around these systems. The nature of machine learning is that it is a black box, meaning that even experts are not able to fully understand why a system generates the particular output that it does. This can also challenge human decision-making: when humans rely on these systems but are unable to explain the output the system has reached, they cannot fully justify their own decision-making.”
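A small, hypothetical sketch can illustrate the black-box point: even when every learned parameter of a model is available for inspection, the raw numbers do not amount to reasons that a human decision-maker could cite. The example below, which uses synthetic data and an assumed small neural network from scikit-learn, prints both an output probability and the model’s weight matrices.

```python
# Illustration only: a small neural network trained on synthetic data.
# All of its learned parameters can be printed, yet they offer no
# human-readable justification for any individual output.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 0] * X[:, 3] > 0).astype(int)          # synthetic, arbitrary rule

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

sample = rng.normal(size=(1, 5))
print("output probability:", net.predict_proba(sample)[0, 1])

# The "explanation" available from the model itself is just matrices of numbers.
for i, w in enumerate(net.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
```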
Do you think that more law is needed to address the challenges created by the emergence of AI in warfare?
“For me, it is really important to first ask and clarify how international law - as it exists today - actually regulates these technologies, before we can ask whether new international law is needed. It is a question of whether we understand how legal frameworks like IHL, which are already complex, apply in new contexts. In my view, military applications of AI do present novel challenges, so it makes sense to first understand how to interpret the law in this context and in light of the specific issues that military AI raises.
Apart from drafting new international treaties, I think there are other options to address the challenges of the use of AI in warfare. Take, for instance, the potential of interpretive guidance on military applications of AI, which we have discussed as part of the DILEMA project. An example would be the Tallinn Manual, which gives interpretive guidance on international law in the specific context of cyber operations. Documents like these can help us to understand how existing law applies in novel circumstances, and tools like this could also be used in the context of military AI. So it is not necessarily that we need new law to address the use of artificial intelligence in warfare as such, but we need novel ways of understanding the law.”
In light of these emerging issues, why do you think that it is important to stay aware of the developments in the use of AI-related technologies?
“I think the first reason is to develop a deep understanding of the technology. One of the flaws of current debates is that these technologies are often discussed in a really superficial way. Engaging with these technologies in a more comprehensive and granular way will help to enrich these discussions and generate more expertise about both the challenges and opportunities of specific use cases, particularly in the military domain. This goes for the law as well: you can have a great technical understanding of a certain technology, but you also need a deep understanding of the law and how it functions in this specific context.
The singer-songwriter Nick Cave recently said that the ‘scary thing about AI is that it will always be in its infancy, as it will always have further to go, and the direction is always forward, always faster because there are always so many developments’. While there are certainly risks in thinking about technology as always being in its infancy, the reality is that these sorts of systems can be implemented and adopted in military practices today. Trying to keep pace with those developments and gaining this expertise is a really important way forward. Training such as the Spring academy on artificial intelligence and international law, where I will lecture on military applications of AI and international humanitarian law, is a great way to build this expertise in both of these complex areas.
I also think we are at risk of remaining stuck in our own silos, within our own particular areas of expertise. Going beyond that area to gain genuinely new perspectives is important and enriching. This is why our DILEMA project is interdisciplinary, with international legal scholars, computer scientists and philosophers all working together. Though it certainly has its challenges, trying to break down those barriers and gain different perspectives is key in an area like military AI. This interdisciplinarity is also key to our Spring academy, with not only legal scholars and practitioners teaching and participating in the programme, but people with all sorts of different viewpoints.”
Do you have any advice for young academics or legal scholars who might want to enter the field you are in now?
“I think one of the big things is to find your niche - the thing that really interests and excites you. Apart from that, try to find writing opportunities, as this is also a way to demonstrate interest and get your name out there. There are many blogs and university journals that you could contribute to. Also, go out and talk to people, because most people are pretty nice and will be happy to help you and have a coffee if you are interested in their work. So do not hesitate to make genuine connections with people on the basis of shared research interests.”
About Taylor Woodcock
Taylor Woodcock is an international law researcher at the Asser Institute, working in the research strand ‘Regulation in the public interest: Disruptive technologies in peace and security’. This research strand addresses regulation to safeguard and promote public interests. It focuses, in particular, on the development of the international regulatory framework for military applications of disruptive technologies and the arms race in conventional and non-conventional weapons. The public interest of peace and security serves as the prime conceptual framework in this strand.
[Spring academy] Artificial intelligence and international law
The Asser Institute’s Spring academy on artificial intelligence and international law is an annual interdisciplinary programme offering in-depth perspectives on AI and international law. It addresses fundamental issues at the intersection of theory and practice. The programme covers the technical aspects of AI, the philosophy and ethics of AI, human rights in relation to AI, AI in international humanitarian law, AI and international responsibility, and international governance. The Spring academy provides an ideal venue to help you understand these aspects of AI through a short interactive course with plenty of room for discussion with your fellow multidisciplinary participants.
Read more
[International Humanitarian Law] Taylor Woodcock: ‘We should focus on the effects of decision-making aids, tasking, intelligence, surveillance and reconnaissance technology in warfare’
In a new episode of the podcast On Air, Asser Institute researcher Taylor Woodcock discusses today’s ‘overshadowing focus on autonomous weapon systems (AWS) in warfare’, and the consequent lack of attention to other military applications of artificial intelligence, such as the use of data-driven algorithms for target recognition, as decision-making aids, for military tasking, and to support intelligence, surveillance and reconnaissance.
[Blog] Artificial intelligence: Is the answer more law?
Artificial intelligence (AI) intersects with everyday life in myriad ways and is increasingly integrated into human decision-making processes in various domains. Yet, the AI revolution with which we are faced poses a number of social, political and legal challenges. So is the answer to this more law?