Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. In April 2021 he joined the Asser Institute as a research intern for the Doing Business Right project.
I am not convinced that inherently
evil technology exists; rather, bad business models perpetuate and accentuate
existing problems. AI is no exception to this phenomenon, and diligent
discussion is required to ensure that the negative impacts of artificial
intelligence are meticulously scrutinised. Ultimately, transparency, responsibility
and accountability must be ensured around a technology that has the power to be
an important tool for Human Rights and to support development across
every sector of society. Given that this very same
technology, if used irresponsibly, has the power to compound and accelerate the
very issues we would like it to help solve, this blog intends to
raise further questions and continue the discussion surrounding AI and
responsibility. In the first part of this publication, I discussed how AI has
the potential to contribute to HRDD by being technologically integrated into
the process. However, before AI is even considered as a possible tool to aid
the HRDD process, it will play a large part in making businesses more
profitable. It will also be used by civil society, States and State-backed
institutions in pursuit of their respective goals.
AI and its various applications are, and
will continue to be, deployed in a number of sectors, including marketing,
healthcare, social media, recruitment, armed conflicts and many more. Thus,
given that AI has the potential to contribute negatively to Human Rights and
the environment, it is important to discuss the risks and potential legal
challenges surrounding AI and responsibility. Identifying these is crucial to
the goal of taming AI in an attempt to mitigate some of the potential negative
impacts it may have on Human Rights. The pervasive nature of this technology,
along with the particular place AI developers hold in supply chains, warrants
some attention. As such, this section aims to analyse the HRDD obligations of
AI developing businesses. To do so, we will illustrate some of the Human Rights
(and environmental) risks linked to the creation of these AI agents, before
looking at the manner in which ex ante responsibility through HRDD can be
applied to AI developing businesses in the creation and commercialisation of AI
algorithms.