[Blog] Artificial intelligence: Is the answer more law?
Published 25 January 2021 | By Taylor Woodcock
Artificial intelligence (AI) intersects with everyday life in myriad ways and is increasingly integrated into human decision-making processes across domains. Whether in hospitals, factories or courtrooms; on our roads, smartphones or social media; or in the hands of governments, companies or academia, reliance on AI-enabled technologies is undeniably becoming a pervasive global phenomenon. Yet the AI revolution we face poses a number of social, political and legal challenges. So, is the answer more law?
In his Sixth Annual T.M.C. Asser Lecture on ‘Almost Human: Law and Human Agency in the Time of Artificial Intelligence’ and in a prior interview, Prof. Andrew Murray called for the development of a legal framework to regulate AI. He argued that an international institution tasked with setting new standards for the development and use of AI is urgently needed to tackle the disruptive changes brought by emergent technologies. The assumption here is that existing legal frameworks are insufficient to deal with these challenges; that is to say, yes, we do indeed need more law.
The challenges of AI to human agency
According to Prof. Murray, one of the key problems raised by reliance on AI is its impact on human agency. AI impedes the agency of users by dictating what information is (and isn’t) presented to them, interfering with their ability to freely make decisions. It also strips agency from individuals who are the object of an algorithm’s calculations, as the process of datafication reduces them to mere datapoints. As such, Prof. Murray argues that, at the very least, a conversation needs to be had about what we are willing to give up, or give over, to algorithms.
The trouble is, the inherently opaque and complex character of machine learning (ML) algorithms makes it exceedingly difficult for humans to understand how and why these systems reach particular outcomes. The fact that these algorithms are trained on vast bodies of big data, have the capacity to evolve, and employ reasoning processes that differ substantially from those of a human being in both substance and scale makes these systems incomprehensible even to the trained eye, let alone to end-users without such expertise (see here). This is only compounded by the lack of transparency and the secrecy surrounding the development of these technologies, which is particularly acute in the military domain. These features make ML algorithms ‘unknowable’, as Prof. Murray puts it, and are the basis of calls in recent years for the development of ‘explainable AI’ (see here, here). The difficulty is that if we cannot comprehend how an AI-enabled system reaches a particular outcome, it becomes difficult – if not impossible – to perceive exactly what we would be giving up to algorithms and to make an informed decision on that basis.
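To make the opacity point concrete, consider a minimal sketch in Python using the scikit-learn library. The data and parameters here are purely illustrative assumptions, not drawn from the lecture; the point is that even a modest model encodes thousands of branching rules, so any ‘explanation’ of its output is necessarily a post-hoc approximation:

```python
# A minimal sketch of why ML outputs resist human inspection, and
# what post-hoc 'explainable AI' techniques can offer. Assumes
# scikit-learn; the synthetic data is purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 'big data': 20 features with no
# human-readable meaning.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 200 trees, each with many branching rules: there is no single
# rationale a human could read off for any individual prediction.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# 'Explainable AI' here means approximating, after the fact, which
# inputs mattered -- not recovering the model's actual reasoning.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

The numbers themselves are not the point; the structure is. The model’s decision process can only ever be summarised from the outside, which is precisely the gap that proposals for explainable AI attempt to narrow.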
The ramifications of algorithmically supported decision-making are significant: it creates a mediated view of the world, threatens democracy and the rule of law, and dehumanises individuals, with potential life-or-death consequences. These challenges lead Prof. Murray to call for greater international cooperation and the development of international legal standards, as well as domestic regulation and enforcement regimes, to regulate AI.
Human agency and the law
Before we leapfrog from the challenges posed by emergent AI technologies to the need for new legal standards, an important middle step seems to be missing. We must explore how human agency relates to the concept of law and to existing international law norms, and identify what bearing this may have on the development and use of AI, as well as on its regulation.
Prof. Murray relies on a definition of agency expounded by the philosopher Joseph Raz (see here), for whom the concept requires capacity, the availability of choices and – importantly for our purposes – freedom from interference or coercion. A survey of the many philosophical definitions of human agency is beyond the scope of this blog post, but the ability to freely make decisions and take action without interference emerges as a common theme. How, then, does this concept relate to law? According to Prof. Murray, human agency here refers to the view that law is the product of collective human agency and autonomy, with the rule of law reflecting the operationalisation of this principle.
A mediated human experience
The crux of Prof. Murray’s message of caution is this: if the human experience is mediated by ML algorithms in ways we cannot understand – if we cannot see how these algorithms select what to present and what to hide from us – this erodes our ability to freely make informed decisions without interference, and thus our agency. It also shapes our view of the world around us, with much context and nuance lost when real-life factors are reduced to datapoints. The question we must ask ourselves is: if our view of the world is shaped and reshaped by AI to the point where our ability to freely make decisions is diminished at a societal scale, what does this mean for the law that emerges from this mediated human experience?
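What ‘reduction to datapoints’ means in practice can be shown in a few lines. The sketch below is hypothetical – the record, fields and feature list are invented for illustration – but the pattern is the one most ML pipelines follow: only the fields a model is built to ingest survive.

```python
# Hypothetical illustration of datafication: the fields below are
# invented; the point is the pattern, not the specific data.
person = {
    "age": 34,
    "income": 41000,
    "postcode": "2595",
    "missed_payments": 2,
    # Context a human decision-maker might weigh, but which the
    # feature pipeline below never sees:
    "circumstances": "missed payments coincided with a hospital stay",
}

# The only attributes the model is built to ingest.
FEATURES = ["age", "income", "postcode", "missed_payments"]

def datafy(record: dict) -> list:
    """Reduce a person to the datapoints a model consumes.

    Everything outside FEATURES, including any explanation or
    mitigating context, is silently discarded.
    """
    return [record[f] for f in FEATURES]

print(datafy(person))  # [34, 41000, '2595', 2] -- the 'person' the algorithm sees
```

Nothing in such a pipeline need be malicious; the loss of nuance is simply a by-product of the representation.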
Looking at existing international legal frameworks, it seems relevant to ask to what extent existing norms presume some level of human agency. Exploring this question is an initial step towards determining whether existing international law is sufficient to regulate AI. After all, it is only if existing norms are found wanting that new regulation should be developed. Even if we do accept Prof. Murray’s argument that new legal regulation is required, this question still holds relevance for shedding light on how law should interact with the concept of human agency.
Human agency as a benchmark for new international standards
The question of whether new international standards should be set for military applications of AI is a mainstay of the debates of the Group of Governmental Experts on Lethal Autonomous Weapon Systems (LAWS). Though these debates demonstrate a lack of consensus on the definition of key terms related to LAWS, they generally revolve around technologies that exhibit ‘full autonomy’ in critical targeting functions, that is to say, weapon systems that identify, select and engage targets independently of human intervention (see here and here).
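The distinction at stake – a human ‘in the loop’ versus full autonomy in critical functions – is, at bottom, a question of control flow, as a deliberately abstract sketch can show. Every name here is a hypothetical placeholder; this models no actual system.

```python
# A deliberately abstract sketch of the 'identify, select, engage'
# loop at the centre of the LAWS debate. All functions are
# hypothetical placeholders, not a model of any real system.
from typing import Callable, Iterable, Optional

def critical_functions(
    candidates: Iterable,
    identify: Callable,   # classify what is observed
    select: Callable,     # choose among identified objects
    engage: Callable,     # act on the selection
    human_approval: Optional[Callable] = None,
):
    identified = [c for c in candidates if identify(c)]
    if not identified:
        return None
    chosen = select(identified)
    # With human_approval supplied, a person remains 'in the loop'
    # for the critical functions; with it set to None, the same
    # loop runs with the 'full autonomy' the GGE debates centre on.
    if human_approval is not None and not human_approval(chosen):
        return None  # the human vetoes or defers
    return engage(chosen)
```

Much of the legal debate, in other words, turns on whether that single conditional is present, and on what meaningful human control over it would require.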
In this context there is also significant disagreement over whether the development and use of LAWS should be banned or subjected to specific regulation, or whether existing norms of international humanitarian law (IHL) are sufficient. However, as Prof. Murray rightly highlights, these questions divert attention from systems with lesser degrees of autonomy that are currently under development and that may have important consequences for international law. More emphasis should therefore be placed on the integration of algorithmic decision-making aids in militaries.
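How such a decision aid can mediate choice even when a human formally decides is easy to sketch. Again, everything below is a hypothetical illustration: the aid ranks candidate courses of action and surfaces a shortlist, and whatever it filters out is never before the decision-maker at all.

```python
# Hypothetical sketch of an algorithmic decision aid: the human
# still 'decides', but only over a machine-curated shortlist.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Option:
    label: str
    score: float = 0.0

def decision_aid(options: List[Option],
                 score: Callable[[Option], float],
                 top_k: int = 3) -> List[Option]:
    """Score and rank candidate courses of action, returning the top few.

    Options scored poorly are never shown to the human
    decision-maker, which is where the mediation of agency occurs.
    """
    for opt in options:
        opt.score = score(opt)
    return sorted(options, key=lambda o: o.score, reverse=True)[:top_k]

# The 'free' human choice is exercised over this pre-filtered menu:
shortlist = decision_aid(
    [Option("course of action A"), Option("course of action B"),
     Option("course of action C"), Option("course of action D")],
    score=lambda o: hash(o.label) % 100,  # stand-in for an opaque model
)
print([o.label for o in shortlist])
```

The operator’s agency is exercised only within the space the scoring function leaves open, which is precisely the concern raised above about mediated decision-making.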
When we consider these technologies, a number of questions arise at the intersection of international law and agency. Is it possible to reduce the requirements of lawful warfare to datapoints such that algorithms can mimic human reasoning in order to fulfil conduct-of-hostilities obligations? Is replication of human reasoning processes even required, or is the end result what matters? And if the ability of commanders and soldiers to freely make informed decisions is eroded when algorithms mediate their choices, does this constitute a breach of IHL, or does it fail to meet some standard of agency that is presumed for the fulfilment of IHL obligations?
These questions illustrate lines of inquiry that are not yet sufficiently addressed in current debates, but which merit more rigorous exploration. Though considerations of agency and international law with respect to AI are by no means limited to the military domain, this context reflects an existing debate on the international regulation of AI that has not yet fully grappled with these questions, and it could provide a useful forum in which to do so.
The need for further debate on AI
This post raises more questions than it sets out to answer. In doing so, it highlights the need for further debate on the interrelated issues of AI, agency and international law. As Prof. Murray cautions, the emergence of AI infringes upon our human agency, with potentially dire consequences. If we seek to preserve human agency, or at the very least to start a conversation about what we are and aren’t willing to give up to algorithms, then we must consider how exactly (international) law presupposes human agency. Only by interrogating this question can we truly know whether new international legal standards are in fact so urgently needed. Exploring these questions further is one of the goals of the DILEMA research project on Designing International Law and Ethics into Military Artificial Intelligence at the Asser Institute.
Taylor Woodcock is a PhD researcher in public international law at the Asser Institute. She conducts research as part of the research project Designing International Law and Ethics into Military Artificial Intelligence (DILEMA), funded by the NWO–MVI Programme on Responsible Innovation (2020–2024). Her work relates to applications of AI in the military domain, reflecting on how these emergent technologies impact compliance with international humanitarian law and international human rights law.
Winter academy: Artificial intelligence and international law
Are you interested in all things AI? From 10 to 19 February 2021, the Asser Institute will host its third annual winter academy on ‘Artificial intelligence and international law’. This interdisciplinary online training programme offers an in-depth and comprehensive overview of AI and international law. It addresses both the technical and legal aspects of AI, so whether you are a lawyer or a programmer, the academy will offer you the skills and knowledge to advance in your professional or academic career. Learn more and register here.