‘Killer Algorithms: How to Keep Military AI under Human Control’
Published 14 June 2021

In an interview with the University of Amsterdam, project leader Dr Berenice Boutin discussed some of the challenges associated with military AI and how the DILEMA research project seeks to address them.
The full interview is available here in English or here in Dutch, and is reproduced below.
***
Although types of automated weapons have existed for hundreds of years – with anti-personnel mines being the earliest example – the development of such systems controlled by artificial intelligence (AI) has brought with it a whole raft of new issues and concerns. And as such systems have proliferated over recent years, so too has disquiet about their use. Berenice Boutin of the Amsterdam Law School and the TMC Asser Institute is head of the DILEMA project - Designing International Law and Ethics into Military Artificial Intelligence - which is examining legal, ethical, and technical approaches to safeguarding human agency over military AI.
What is the DILEMA project about?
‘We’re looking at military AI in its broadest sense. Lots of the coverage of this issue focuses on ‘killer robots’ (i.e. autonomous weapon systems), and that is certainly an important aspect of our research, but our project seeks to look at the implications of the whole sweep of military uses of AI. Our goal is to ensure that military AI technology is in line with ethics and international law by design. We don’t want to wait for the technology to be developed and only then start checking whether it respects our norms and values. And, from a more legal point of view, we don’t want to wait for harm to occur before we think about the legal implications.
‘Another important area of the project is looking at how AI technology can, sometimes in very subtle ways, affect human agency and human judgment. For example, we look at the use of algorithms at all levels in the chain of command. These algorithms are supposed to be there to provide support to human decision-making. But, in practice, if a machine suggests a decision based on a million data points within 5 seconds, a human does not have the cognitive capacity to second-guess that decision or decide that it is wrong. So, formally, these sorts of big, life-and-death decisions are not delegated to machines, but in reality it happens, and that needs examining urgently.’
Do new laws need to be written?
‘There are laws around AI – it’s not a legal vacuum, especially with regard to military use, where there are established norms of international humanitarian law which apply no matter what technology you are using. As far as use by private companies is concerned, there are often some limitations imposed by the states in which companies are based, but possibly not enough. And there is the problem of technology being developed by private companies which is then co-opted for military use – for instance, an image-recognition system developed by a private company which is then implemented in a military setting. Such a system was designed with one purpose and is now being used for a very different one, so questions of, for example, the distinction between civilians and combatants weren’t taken into account when it was made, and that could be a serious problem. So there needs to be some way of checking such things when a transfer between uses is made – it may not be that new laws are needed, but we need to apply more rigidly the laws that already exist.’
How widespread is the current use of military AI?
‘There is a clear interest in military AI, and we see it increasingly used in a number of states. At the moment, and for the near future, there are no ‘killer robots’ in use. But in practice there are what could be called ‘killer algorithms’. For example, there are already a number of algorithms in use which are designed to identify possible military targets. They will scan images of hundreds of kilometers of terrain, combine that with other types of data, such as metadata from phones, and put it all together to say, somewhat simplistically, ‘There is a terrorist camp here’. When human judgement and agency are affected by such information, issues of possible violations of law or ethical principles already arise, as well as issues of accountability and responsibility. Because if we end up in a situation where a decision-support algorithm is being used in a way where the human operator doesn’t really have control and it results in a war crime, we may not be able to assign criminal liability to the individual person, because that person did not have sufficient willful intent or hasn’t reached the high threshold of criminal responsibility.
‘That is why now is the moment to act on these concerns. Technology does not emerge from a vacuum. What emerges is a result of our choices. AI is developing faster than ever, and we need to make sure that our laws and ethics are ingrained in these technologies right from the start.’