[New publication] Many hands in the black box: Artificial intelligence and the responsibility of international organisations
Published 27 July 2023
In a new book chapter, researcher Magda Pacholska (Asser Institute) analyses the possible legal reverberations of international organisations’ rapidly growing interest in AI-enabled technologies in the military and security domains.
Over the past decade, a growing number of international organisations have begun to use artificial intelligence (AI), with initiatives ranging from monitoring AI trends to procuring AI-enabled solutions. The legal implications of these initiatives, however, have received little discussion.
In the new book chapter ‘Many Hands in the Black Box: Artificial Intelligence and the Responsibility of International Organizations’, researcher Magda Pacholska examines the legal implications of AI use by international organisations, using the procurement and deployment of AI surveillance technologies by NATO and FRONTEX as case studies.
The author focuses on AI-enabled technologies whose use could give rise to legal liability for international organisations. The Articles on the Responsibility of International Organizations (ARIO), adopted by the International Law Commission in 2011, set out when an international organisation can be held responsible for its actions.
According to ARIO, international organisations (IOs) are primarily responsible for the conduct of their agents or organs. When AI is involved, however, it can be difficult to determine who is responsible if something goes wrong. This so-called ‘black box dilemma’ arises because AI systems are often complex and opaque, making it difficult to understand how they reach their decisions and how they might cause harm.
The ‘problem of many hands’
Pacholska argues that this difficulty is compounded for international organisations that use AI-enabled technologies, a situation known as the ‘problem of many hands’. Because IOs typically comprise many different people, agents and organs, each of whom may play a role in the development and use of AI systems, it becomes even harder to establish who is responsible for any harm caused by AI-enabled technologies.
In analysing the responsibility of IOs for harm resulting from the use of AI, Pacholska advances an interpretation of the concept of ‘an agent of the international organisation’ that includes items of equipment owned by the organisation. On this reading, if an AI system owned by an international organisation contributes to a wrongful act, the organisation can be held responsible. Such an interpretation, Pacholska argues, would prevent IOs from ‘outsourcing’ their responsibilities: international organisations could enjoy the benefits of AI technology while remaining accountable for their actions.
Privacy concerns
Pacholska concludes that, in the near future, IOs are unlikely to use AI-enabled technologies that violate the conduct-of-hostilities rules of international humanitarian law. She warns, however, that some of the AI-enabled surveillance drones that IOs are considering procuring may raise privacy concerns, and argues that such drones should be subject to a legal review that takes privacy into account. She supports this argument by citing the 2021 NATO AI Strategy, which sets out principles for the responsible use of AI, and the arguably customary status of Article 36 of Additional Protocol I to the Geneva Conventions, which read together could require IOs to conduct legal reviews of new weapons and technologies.
Magda Pacholska, ‘Many Hands in the Black Box: Artificial Intelligence and the Responsibility of International Organizations’, Asser Research Paper (SSRN), forthcoming in R. Deplano, R. Collins and A. Berkes (eds.), Reassessing the Articles on the Responsibility of International Organizations: From Theory to Practice (Edward Elgar).
About Magda Pacholska
Dr Magda Pacholska is a Marie Skłodowska-Curie Postdoctoral Fellow working on the project ‘Implementing International Responsibility for AI in Military Practice’ within the DILEMA project on ‘Designing International Law and Ethics into Military Artificial Intelligence’. Pacholska also leads the EU-funded research project I2 RAMP, which examines how to conceptualise and implement international responsibility for violations of human rights and international humanitarian law resulting from the use of military AI.
Magda is part of the research strand ‘Regulation in the public interest: Disruptive technologies in peace and security’, which addresses regulation that safeguards and promotes public interests. The strand focuses on the development of the international regulatory framework for military applications of disruptive technologies and on the arms race in conventional and non-conventional weapons, with the public interest of peace and security as its prime conceptual framework.
Read more
Magdalena Pacholska, Complicity and the Law of International Organizations: Responsibility for Human Rights and Humanitarian Law Violations in UN Peace Operations (Edward Elgar International Law Series, 2020).
Magdalena Pacholska, ‘Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective’, Israel Law Review (2022).
Magdalena Pacholska, ‘Autonomous Weapons’, forthcoming in B. Brożek, O. Kanevskaia and P. Pałka (eds.), Research Handbook on Law and Technology (Edward Elgar, 2023).