Blog: Does Artificial Intelligence exacerbate injustices?
Published 14 January 2020
By Rebecca Mignot-Mahdavi
The digital revolution has made our lives more convenient, expanded our range of choices and opened new learning possibilities. Simultaneously, however, modern states and private companies have acquired an unprecedented capacity to collect data about everyone in a timely, cost-effective and almost invisible manner writes Asser researcher Rebecca Mignot-Mahdavi.
We find a place to dine on TripAdvisor and pay for it via contactless card, and then we take an Uber home, where we buy some groceries and clothes online and connect with friends and acquaintances through our virtual profile on Facebook. We follow the latest news on Twitter, and search for a job on LinkedIn. These ordinary activities convey a tremendous amount of information.
In 2012, the British intelligence agency GCHQ ran a mass surveillance programme called Optic Nerve, intercepting the webcam video communications of about 1.8 million internet users of video chat platforms such as Yahoo's. In 2016, the Obama administration set up the Global Engagement Center to prevent the radicalisation of vulnerable youth: based on data collected by Google, Netflix and Amazon, an algorithm is supposed to identify individuals deemed susceptible to radicalisation and send them dissuasive third-party content. These are just two examples of artificial intelligence (AI) systems that scrutinise behaviours and establish presumptions of normality and deviance.
Against this background, and going beyond privacy concerns, a pressing question arises: does artificial intelligence exacerbate injustices?
Defining Artificial Intelligence and Injustices
To explore this question, some definitional elements are useful. Although its exact definition varies, artificial intelligence usually refers to algorithmic decision-making processes. Big data, another useful term, can be defined as the practice of accumulating extremely large amounts of information from a variety of sources, and processing that information using algorithms and statistical analysis.
Finally, injustices can derive from discrimination, understood as differential treatment based on arbitrary grounds such as race, colour, sex, language, religion, political or other opinion, social origin, birth or other status. At the macro level, injustices also include the consequences of Western dominance and global inequalities.
Hypotheses on the Topic
Ahead of the public event 'Roundtable on AI and diversity', co-organised with the Embassy of Switzerland and taking place on 21 January 2020, I would like to share three starting hypotheses on this question.
- AI systems are likely not only to replicate but, more importantly, to exacerbate societal biases and injustices. If an AI system is biased, the injustices it generates will be more systematic and numerous than if a human with the same biases were in charge of the decision-making process. While human capacities are biologically limited, AI systems can store, process and analyse an enormous quantity and variety of data and, when biased, produce the biased outcome automatically and constantly. The biased human, meanwhile, has omitted some data, and has had to sleep, eat, brush their teeth, take a shower or go for a walk. Overall, they have had far less time and capacity to make problematic decisions.
- AI systems are also likely not only to replicate but, more importantly, to exacerbate an unrefined belief in universalism. And if they do, regional particularities are neglected in a more systematic manner. Various UN organs have been focusing on the ethics of AI and have established lists of values that AI systems must embody. Similarly, in 2018, the European Commission launched a European Strategy on Artificial Intelligence, which led to the report 'Ethics Guidelines for Trustworthy AI'. In 2019, the Netherlands also launched a national AI strategy, the 'Strategisch Actieplan voor Artificiële Intelligentie' (Strategic Action Plan for Artificial Intelligence).
Institutional discussions on the ethics of AI lead to the creation of static checklists that are generally not informed by diversity; they often exclude reflections on gender, ethnicity and cultural differences. However well-intentioned, these institutional discussions can foster value alignment and prevent the constructive disagreement that would promote inclusion and diversity.
- Finally, AI systems are likely to crystallise and reinforce what is considered 'normal' and 'deviant' in our societies. In the context of counter-terrorism, for instance, security practices (both domestically and abroad) are driven by the objective of anticipating threats. Big data is a very attractive tool for identifying suspicious behaviours and dangerous individuals ahead of the commission of offences. This process not only implies sacrificing the core criminal law principles of certainty of punishment and proportionality of the sanction to an act (in order, instead, to punish the presumed dangerousness of an individual). It also requires establishing fixed checklists of signs of 'deviant' and 'normal' behaviour, in a way that may very well be simplistic and discriminatory, as the sketch below illustrates.
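To make the first and third hypotheses concrete, here is a deliberately naive sketch in Python. All behavioural signals, thresholds and population figures are invented for illustration; nothing here describes an actual system. The point is structural: once a fixed 'deviance checklist' is encoded, it runs identically and tirelessly over an entire population, so whatever bias it carries is reproduced at a scale no human decision-maker could match.

```python
# A deliberately naive sketch of an automated 'deviance checklist'.
# All behavioural signals and thresholds are hypothetical, chosen only
# to illustrate the mechanism described in the blog post.

from dataclasses import dataclass
import random


@dataclass
class Profile:
    late_night_activity: float      # hypothetical signal, in [0, 1)
    foreign_contacts: int           # hypothetical signal
    uses_encrypted_messaging: bool  # hypothetical signal


def flag_as_deviant(p: Profile) -> bool:
    """A fixed, hand-written checklist of 'deviant behaviour' signs.

    The rule never tires, never doubts, and is applied identically to
    every profile, so any bias it encodes is reproduced every time.
    """
    return (p.late_night_activity > 0.7
            and p.foreign_contacts > 5
            and p.uses_encrypted_messaging)


# Simulate a large population: the same simplistic rule runs over a
# million profiles at negligible cost, something no human reviewer
# applying the same biased criteria could ever match.
random.seed(0)
population = [
    Profile(random.random(), random.randint(0, 20), random.random() < 0.3)
    for _ in range(1_000_000)
]
flagged = sum(flag_as_deviant(p) for p in population)
print(f"Profiles flagged by the fixed checklist: {flagged}")
```

Whether such a rule flags thousands or millions depends entirely on the thresholds chosen, but the mechanism is the same: the checklist never sleeps, never doubts, and never encounters a case that prompts it to revise its own criteria.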
Many other related issues and perspectives deserve our attention with these fast-evolving technologies: Can the risks posed by AI systems really be solved by multiplying AI ethics checklists? How can we have a meaningful and performative discussion about the ethics of AI? Finally, what functions does the law have in this evolving techno-social context? Is the law enabling or constraining injustices? Or both?
Rebecca Mignot-Mahdavi is part of the Asser research strand Human dignity and human security in international and European law, which adopts as its normative framework a human rights approach to contemporary global challenges in the fields of counter-terrorism, international and transnational crimes, challenges at sea, new technologies and artificial intelligence, non-discrimination and historical memory. It examines what it means to safeguard human dignity - also in relation to human security - in these areas.
Sign up for the roundtable on biases in artificial intelligence: Does AI have a diversity problem?
As part of our winter academy on AI and International Law, the T.M.C. Asser Instituut and the Embassy of Switzerland to the Netherlands are co-organising the free roundtable 'Biases in artificial intelligence: Does AI have a diversity problem?'. The conference, held on 21 January, brings together academics, policymakers and the private sector to discuss the social impact of artificial intelligence, diversity in the tech sector, and whether it is possible to build fair algorithms.
Speakers: Professor Solange Ghernaouti, Director of the Swiss Cybersecurity Advisory & Research Group (SCARG) and a leading international expert on cybersecurity, will deliver a keynote lecture, followed by a roundtable discussion. Panellists are Irakli Beridze (UNICRI Centre for Artificial Intelligence & Robotics), Dr Haroon Sheikh (WRR; VU University), Dr Tanya Krupiy (Tilburg University) and Eve Logunova (Women in AI Ambassador for the Netherlands). Moderator: Rebecca Mignot-Mahdavi (T.M.C. Asser Instituut).
Register here if you would like to join.
Want to know more about AI? Sign up now for a unique winter academy on Artificial Intelligence and International Law
2020 will be a critical year in setting the tone for the next decade of innovations in Artificial Intelligence (AI), one of the most complex technologies to monitor or regulate. Stay ahead of the curve by signing up for our winter academy on Artificial Intelligence and International Law (20-24 January 2020). Top speakers will give you the latest insights into the current and future issues raised by AI from the perspective of international law. Are you a policymaker, industry professional, or academic researcher working on issues related to AI and international law? Have a look at the programme here. For more information, or to register for the training programme, click here.