Artificial Intelligence and Human Rights Due Diligence - Part 2: Subjecting AI to the HRDD Process - By Samuel Brobby

Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. Since April 2021, he has been a research intern at the Asser Institute for the Doing Business Right project.

I am not convinced that inherently evil technology exists; rather, bad business models perpetuate and accentuate existing problems. AI is no exception to this phenomenon, and diligent discussion is required to ensure that the negative impacts of artificial intelligence are meticulously scrutinised. Ultimately, transparency, responsibility and accountability must be ensured around a technology that has the power to be an important tool for Human Rights and to support development across every sector of society. Given that this very same technology, if used irresponsibly, has the power to compound and accelerate the very issues we would like it to help solve, this blog intends to raise further questions and continue the discussion surrounding AI and responsibility. In the first part of this publication, I discussed how AI has the potential to contribute to HRDD by being technologically integrated into the process. However, before AI is even considered as a possible tool to aid the HRDD process, it will play a large part in making businesses more profitable. It will also be used by civil society, States and State-backed institutions in the pursuit of their respective goals.

AI and its declinations are, and will continue to be, deployed in a number of sectors, including marketing, healthcare, social media, recruitment, armed conflicts and many more. Thus, given that AI has the potential to contribute negatively to Human Rights and the environment, it is important to discuss the risks and potential legal challenges surrounding AI and responsibility. Identifying these is crucial to the goal of taming AI in an attempt to mitigate some of the potential negative impacts it may have on Human Rights. The pervasive nature of this technology, along with the particular place AI developers hold in supply chains, warrants some attention. As such, this section aims to analyse the HRDD obligations of AI-developing businesses. To do so, we will illustrate some of the Human Rights (and environmental) risks linked to the creation of these AI agents before looking at the manner in which ex ante responsibility through HRDD can be applied to AI-developing businesses in the creation and commercialisation of AI algorithms.

Artificial Intelligence and Human Rights Due Diligence – Part 1: Integrating AI into the HRDD process - By Samuel Brobby

Editor's note: Samuel Brobby graduated from Maastricht University's Globalisation and Law LLM, specialising in Human Rights, in September 2020. A special interest in HRDD carries his research through various topics, such as the intersection between AI and HRDD, the French Devoir de Vigilance, and mHRDD at the EU level. Since April 2021, he has been a research intern at the Asser Institute for the Doing Business Right project.


The recent surge in developments and debate surrounding Artificial Intelligence (AI) has been business-centric, naturally so. The conversation has long centred on the possible gains “digitally conscious” companies can recoup from their sizeable investments in the various forms this technology can take. The ink continues to flow as numerous articles are released daily, debating between the ultimate power of artificial intelligence (and topical subsets like machine learning) on the one hand, and the comparatively more philistinish views regarding what these technologies can offer on the other. Our objective here is not to pick a side in the AI debate. Rather, we would like to explore the Business & Human Rights implications of the development of AI and, in particular, its intersection with the human rights due diligence (HRDD) processes enshrined in the UN Guiding Principles on Business and Human Rights and subsequent declinations. How compatible is AI with HRDD obligations? Where does AI fit into the HRDD process? Can AI be used as a tool to further HRDD obligations? Can the HRDD process, in return, have an effect on the elaboration and progress of AI and its use in transnational business? And to what extent will the roll-out of AI be affected by HRDD obligations? These are all questions we hope to tackle in this blog.

In short, two distinct shifts seem to be occurring, rather opportunely, within close time frames. The impending mass adoption of AI in transnational business will have strong consequences for the state of Human Rights. This adoption is substantiated not only by an uptick in the use of AI in business, but also by policy documents produced or endorsed by leading institutions such as the ILO and the OECD. Conversely, we must consider that HRDD obligations elaborated by the BHR community will also have strong implications for the development and roll-out of AI. These two transformations will interact increasingly as their positions are consolidated. It is these interactions that we wish to analyse in the two parts of this article: namely, the emergence of Artificial Intelligence as a tool to shape and further HRDD obligations (1) and the emergence of HRDD as a process to shape the development of AI (2).