Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. In April 2021 he joined the Asser Institute as a research intern for the Doing Business Right project.
I am not convinced that inherently evil technology exists; rather, bad business models perpetuate and accentuate existing problems. AI is no exception to this phenomenon, and diligent
discussion is required to ensure that the negative impacts of artificial
intelligence are meticulously scrutinised. In the end, transparency, responsibility
and accountability must be ensured around technology that has the power to be
an important tool for Human Rights and to provide support for development across
every sector of society. Given that this very same
technology, if used irresponsibly, has the power to compound and accelerate the
very issues we would like it to help solve, it is the intention of this blog to
raise further questions and continue to provide discussion surrounding AI and
responsibility. In the first part of this publication, I discussed how AI has
the potential to contribute to HRDD by being technologically integrated into
the process. However, before AI is even considered as a possible tool to aid
in the HRDD process, it will play a large part in making businesses more
profitable. It will also be used by civil society, States and State-backed
institutions in the pursuit of their respective goals.
AI and its many variants are, and will continue to be, deployed in a number of sectors, including marketing, healthcare, social media, recruitment, armed conflicts and many more. Thus, given that AI has the potential to contribute negatively to Human Rights and
the environment, it is important to discuss the risks and potential legal
challenges surrounding AI and responsibility. Identifying these is crucial to
the goal of taming AI in an attempt to mitigate some of the potential negative
impacts it may have on Human Rights. The pervasive nature of this technology
along with the particular place AI developers hold in supply chains warrants
some attention. As such, this section aims to analyse the HRDD obligations of AI-developing businesses. To do so, we will illustrate some of the Human Rights
(and environmental) risks linked to the creation of these AI agents before
looking at the manner through which ex ante responsibility through HRDD can be
applied to AI-developing businesses in the creation and commercialisation of AI
algorithms.
AI and Human Rights risks
In principle, it seems that the
effects of AI agents are felt very far (be it in the spatial or temporal sense)
from the point of creation of these same agents. This is problematic in terms
of delineating the responsibility of AI developers who are far removed from the
negative impacts they have a hand in instigating. The literature on the Human Rights and
environmental risks surrounding AI is quite extensive. This sub-section aims to present some of the risks linked to the use of AI in transnational business
to illustrate the capacity for AI to negatively impact Human Rights.
Perhaps the most common risk evoked
regarding AI and Human Rights is the problem of algorithmic bias. This refers to the manner through which AI may unintentionally perpetuate, and subsequently deepen, inherent human and societal prejudices by producing discriminatory
results. These biases are transmitted via training models and data sets that
are “fed” to AI agents. In the end, these biased results are reproduced and
reinforced through a continuous feedback loop. The seemingly ever-present nature
of algorithmic biases poses some real problems in terms of responsibility.
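To make this mechanism more concrete, the following minimal sketch (my own illustration, not drawn from any real system; all numbers, thresholds and variable names are hypothetical) shows how a simple decision rule fitted to historically biased outcomes reproduces the underlying disparity, and how feeding its own decisions back in as new "ground truth" carries that disparity forward:

```python
# Illustrative only: a toy simulation of algorithmic bias and its feedback loop.
# All data is synthetic and the "model" is just a score threshold.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # 0 = majority group, 1 = minority group
skill = rng.normal(0.0, 1.0, n)        # true, group-independent merit

# Historical decisions: the minority group was held to a stricter cut-off,
# so the "labels" the model learns from already encode a societal bias.
historical = (skill > np.where(group == 1, 0.5, 0.0)).astype(int)

# The model only sees a proxy score (e.g. a feature correlated with group
# membership, such as postcode) rather than true skill.
proxy = skill + rng.normal(0.0, 0.3, n) - 0.4 * group

def fit_threshold(scores, labels):
    """Pick the cut-off that best reproduces the training labels."""
    candidates = np.quantile(scores, np.linspace(0.05, 0.95, 19))
    return max(candidates, key=lambda t: ((scores > t) == labels).mean())

labels = historical
for step in range(3):                  # retrain on the model's own output
    cut = fit_threshold(proxy, labels)
    decisions = (proxy > cut).astype(int)
    gap = decisions[group == 0].mean() - decisions[group == 1].mean()
    print(f"round {step}: selection-rate gap between groups = {gap:.1%}")
    labels = decisions                 # model output becomes tomorrow's "ground truth"
```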
The examples are numerous and vary in nature, such as the Syri case, which caused an uproar in the Netherlands.
This big data analysis system was designed to be deployed in neighbourhoods
with the objective of identifying potential risk-profiles in relation to
fraudulent social welfare claims. Its use targeted disadvantaged neighbourhoods
on the basis of a list of possible suspects drawn up by Syri. Its “trawling
method” meant that once deployed, it would comb through data connected to every
resident in that area in order to flag inconsistencies between social welfare
claims and actual living situations, without notifying the residents that were
subjected to it. On 5 February 2020, the District Court of The Hague rendered a potentially far-reaching ruling, which provided (amongst other things) that such technology contravenes the right to respect for private and family life (Article 8 of the ECHR), citing a “special responsibility” for signatory states in the application of new technologies. The potential for identification of “fraudsters” (none of whom were actually found using Syri)
could not counterbalance the infringements of convention rights that the use of
this algorithm would lead to. The strategic choice to bring the case on the
basis of Article 8 of the ECHR should not detract from the discriminatory
nature of Syri, which could potentially have been challenged on the basis of Article 14 (prohibition of discrimination). Philip Alston's amicus curiae brief touches on the manner through
which the violations of the right to private and family life are compounded by
the discriminatory targeting of areas with “higher concentrations of poorer and
vulnerable groups”. Other examples of algorithmic bias leading to discriminatory
outcomes are numerous. They include the discriminatory facial recognition
algorithms developed by Amazon to help law enforcement, the
use of AI in recruiting or its application in healthcare. As seen in the Syri case above, AI also carries some well-documented risks in terms of privacy.
The acquisition and use of AI agents
for the purposes of mass surveillance may be an illustration of AI developers pandering to the market to the detriment of Human Rights. The issue of pandering is linked to a near-sighted short-termism solely designed to increase profits. If developers pander to these short-term goals without a view to the long-term impact of AI, the path we cut for AI, and later for responsibility, can only be reactive. Here we may consider, for example, the recent reports citing EU-based companies selling surveillance tools, such as facial recognition technology, to key players in the Chinese mass surveillance mechanism. Despite being aware of the potential violations that this technology could lead to, and in spite of the potential Human Rights abuses that its use could facilitate, these companies elected to proceed. The
subsequent Human Rights consequences of the use of these technologies for mass
emotional analysis to aid law enforcement or network cameras to survey the
Xinjiang Uyghur Autonomous Region (XUAR) are well known. Less well known is the responsibility of AI developers in facilitating these violations.
It must be borne in mind, however,
that the distance (be it spatial or temporal) between the creation of a new AI
algorithm and its contribution to Human Rights violations or environmental
damages can at times be quite large indeed. These algorithms are created and
then subsequently modified, sold and used in a number of ways that further blur
and diffuse any hope for a simple solution in terms of responsibility.
In short, the risks that are carried by AI, or facilitated by its use, are considerable.
In a report to the General Assembly, the UN Working Group on Business and Human Rights
clarified that due diligence requirements are “commensurate to the severity and
likelihood of the adverse impact. When the likelihood and severity of an
adverse impact is high, then due diligence should be more extensive”. Despite
this, the risks that were identified in this section, and indeed by many long before
this article, have not yet been met with heightened HRDD obligations. The next
section aims to provide some elements to clarify the ex-ante responsibility
of AI developers to conduct HRDD.
Subjecting AI
to HRDD: Ex-ante Responsibility
The Human Rights risks related to
the development of AI can be put into two categories. The first relates to
internal risks that are inherent to the way AI functions following the creation stage; these include algorithmic bias, privacy issues or the environmental costs of training and computation, to name
a few. The second relates to external risks that AI developers are exposed to
at the stage of commercialisation. Here the issue of pandering is
salient since it leads to the development and sale of AI agents to actors which
could, reasonably foreseeably, use the technology in a manner that is adverse
to Human Rights. The ex-ante responsibility of AI developers through HRDD will be looked at through these two lenses: HRDD at the point of origin (the creation stage) and HRDD at the point of arrival (commercialisation/sale).
HRDD at the creation stage of AI:
Several inherent risks have been identified with regard to AI agents. Given the knowledge of these inherent pitfalls of the technology, HRDD must be conducted at the point of origin to identify and address them.
Whilst we can acknowledge that AI presents some new issues that must be solved, we may recognise that the issue of AI's human rights impact is by no means a radically new one. In fact, the UNGPs offer a framework for apprehending
these issues. UNGP 13b calls on businesses to “[s]eek to
prevent or mitigate adverse human rights impacts that are directly linked to
their operations, products or services by their business relationships, even if
they have not contributed to those impacts”. As BSR’s paper series Artificial Intelligence: A
Rights-Based Blueprint for Business remarks: “This means that data-sets,
algorithms, insights, intelligence, and applications should be subject to
proactive human rights due diligence”. It also means that the HRDD process is
not solely reserved for AI engineers. The process would have to be undertaken by all relevant functions within AI-developing businesses that contribute to the elaboration of an AI agent. These include management, the marketing department or data brokers, to name a few. From this point, the question of proximity between AI-developing businesses and the adverse human rights impacts that are
subsequently felt far down the line may begin to be apprehended. HRDD
obligations requiring undertakings to identify, assess, prevent, cease,
mitigate, monitor, communicate, account for, address and remediate potential
and/or actual adverse impacts on human rights and the environment can reduce
the space of corporate irresponsibility. A contraction of this space between AI-developing businesses and adverse Human Rights and environmental impacts downstream
would help hold the former accountable for the latter. This is especially true
if accompanied by a robust liability regime that holds these entities legally
responsible for the impacts of their creations.
AI developers
can best assess the viability of their algorithms in search of a given result. The
main driver here is often whether or not this AI agent solves a given problem
with sufficient accuracy. To this effect, commercial interests are, naturally, at the wheel. However, the turn towards integrating ethics into AI, along with increasing attention to assessing Human Rights impacts, is becoming an important parameter in this sector. This may be in part thanks to the increasing acceptance
of HRDD as a method to regulate business activities. The additional threat
carried by the potential introduction of a robust liability mechanism (perhaps in the form of upcoming EU mHRDD legislation) could strengthen this dynamic further. The reasoning is that if sanctions are imposed for products
presenting avoidable systemic biases, or any other inherent defects
leading to adverse impacts for which corporate groups will be subsequently
liable, then more attention will be focused on preventing such harms. Indeed,
if businesses operate as rational actors in a system where Human Rights or
environmental impacts incur a real cost, then this seems like a natural consequence.
As such, ideas like introducing an obligation for AI developers to produce a bias impact statement, or to include environmental impact assessments as part of AI due diligence, would be an interesting place to begin. This process would benefit from the inclusion of different potentially affected stakeholders, as well as potentially vulnerable populations, in the testing and creation of AI agents. The resulting AI impact statement, setting out the weaknesses and attendant risks of a given algorithm, could be published in order to increase transparency, or the buyer of an AI algorithm could be required to acknowledge it.
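As a purely illustrative sketch of what the quantitative core of such a bias impact statement might contain (the metrics and data below are hypothetical and not drawn from any existing standard), one could report per-group selection rates and a disparate-impact ratio for an algorithm's decisions on an evaluation set:

```python
# Illustrative only: simple fairness figures that a bias impact statement
# could report, computed on hypothetical evaluation data.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive decisions (1 = selected) per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions of an AI agent on an evaluation set.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                                              # {'A': 0.6, 'B': 0.2}
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")  # 0.33
```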
HRDD at the stage of commercialisation of AI:
The manner in which AI is deployed
hugely affects its capacity to impact Human Rights. For instance, the use of
computer vision and language processing to identify and remove content aimed at
promoting terrorism or racism certainly has its positive applications. The same
technology may also have the potential to lead to serious violations of freedom of expression. Whilst these violations can arise as a consequence of AI agents being insufficiently accurate or error-prone, they may also arise intentionally through their use by ill-intentioned actors. As a consequence, it is of vital
importance that AI producers consider the point of arrival of their technology
as a key source of human rights risks as part of their HRDD process.
AI producers
find themselves in an intriguing position in this regard. Given the current talent gap
and the highly technical nature of their field, producers are in a strong bargaining position, unlike, say, garment producers. This means that AI developers, as suppliers of relatively rare and sophisticated technology, can exert leverage over, or at the very least influence, where their AI agents will be put to use. This might not be the case in the long term, as the supply of AI specialists will likely increase to catch up with current demand at some point.
However, the fact that AI developers are currently in a position of relative
strength is of vital relevance to the current content of their obligation to
conduct HRDD in the process of selling their product. Thus, the HRDD process of
AI developers must concern itself with the sale of AI agents to ensure that
their algorithms are not being put in the hands of actors that could, reasonably foreseeably, generate adverse Human Rights impacts.
A parallel can be drawn between the sale of AI and the sale of weapons to demonstrate the importance of HRDD at the point of arrival. The connection, mentioned earlier, between a high capacity to negatively impact Human Rights and a heightened need for responsibility is intuitive, though not currently implemented in the case of AI. In that conceptual vein, the Arms Trade Treaty (ATT), which aims to regulate the international trade in conventional arms, places several restrictions on the export of weapons on the basis of an export assessment. One of these conditions concerns the case in which the seller is informed that the weapons would be used to “commit or facilitate a serious violation of international human rights law”. Setting aside the actual impact of the ATT in regulating the arms trade, the notion of Buyer Due Diligence it proposes for weapon-selling states may have an analogous application for AI developers. As with weaponry, this (fairly obviously) does not mean that AI does not have a whole set of legally justified uses. It does, however, mean that
the HRDD process of AI developers should be more directly focused on assessing
buyers than, for example, the HRDD process introduced by garment manufacturers.
Conclusion
This contribution aims to highlight the manner in which HRDD and AI will likely interact with each other in the near future. If AI is as pervasive as it is expected to be and presents itself as a general-purpose technology which will permeate all aspects of our society, then it must be watched very closely.
We know some of the pitfalls it carries internally in terms of bias, opacity or privacy, to name a few. External pressure will further compound these. The UNGPs and the HRDD
process enshrined therein provide an authoritative vantage point to apprehend
the responsibility of AI developers. As I have argued, the due diligence
process should be focused particularly at the point of origin (creation of an
AI agent) and the point of arrival (buyer due diligence) of the AI in question.
As the EU continues to press forward
with general mHRDD legislation, the idea of introducing a sector-specific set of hard HRDD requirements for AI, similar to what we see with the EU Conflict Minerals Regulation or the EU Timber Regulation, whilst interesting to consider, seems unlikely. As such, in light of the unique
inherent issues that are linked to the development and sale of AI, the work of the OECD in the elaboration of
sector-specific due diligence guidance could be extremely valuable. Taking AI's huge reach, its polymorphous nature and its incredible speed of development into consideration, the flexibility and potential reactivity of soft law present themselves as a good match to further clarify the HRDD process of AI
developers. Coupling non-binding guidance from legitimate institutions like the OECD with hard legislative measures in the form of EU mHRDD legislation may provide AI developers with the tools required to navigate the complex, shifting terrain of responsibility before them. Additionally, attaching a comprehensive liability regime for failures in HRDD would, in my view, be vital to ensuring the efficacy of HRDD.
That said, the considerable distance between the development of AI, its sale and the occurrence of damage as a result of its use by the end user will likely give rise to a multitude of complex legal challenges. Questions in terms of establishing causality
or providing elements of proof (especially if the burden of proof remains on
the claimants) are particularly salient. It is precisely these types of complex
questions that must be answered in order to implement a functioning system of human rights responsibility for AI developers. Whether or not this happens remains to be seen, as developments at the EU level on mHRDD are keenly
awaited.
The
potential contribution of AI to the HRDD process seems clear as posited in the
first part of this blog. Indeed, if HRDD is non-static, continuous and preventive, then it seems entirely possible that AI would be called upon in an
attempt to enhance this process at some point. This is especially true if you
consider AI’s prowess in terms of risk assessment, which is a key aspect of
HRDD. Conversely, the boundaries set by HRDD along with the possibility of
developing civil liability mechanisms will also affect the shape of AI in the
future. In light of AI’s potential societal impact, it seems reasonable to
expect those who develop it to be held to a high threshold of responsibility
for its negative impacts.