[Blog] Beyond AI ethics: International law and human rights for AI accountability
Published 16 November 2020 | By Berenice Boutin
Asser senior researcher Berenice Boutin recently published a short piece in which she warned of the negative implications of artificial intelligence (AI) and discussed the role of international law as a regulatory mechanism for this technology. ‘While the global efforts to reflect on AI ethics are laudable and necessary, it is time to move beyond AI ethics and towards binding legal frameworks and enforceable regulation of AI.’
As AI is progressively being deployed in various public domains such as healthcare, energy, welfare, border security, criminal justice, law enforcement, or defence, we must ensure that the development and use of AI technologies are guided by core democratic values and subject to legal mechanisms of accountability. To this end, established norms and processes of international law, in particular international human rights law, have an important role to play.
Steering the development of AI
In recent years, the sharp advances of AI capabilities have been accompanied by a growing recognition of the need to proactively reflect on their societal implications, so as to shape the development and applications of technology in line with ethical values. Public and private institutions alike have called for fundamental questioning of the potential impacts of AI, in order to steer AI research and policy towards beneficial outcomes, and ultimately to maintain agency over the technologies we decide to adopt.
The unfettered deployment of data-driven policy-making and algorithmic decision-making in the public sector can indeed have serious negative consequences for non-discrimination, privacy, due process, transparency, and accountability. For instance, the use of risk-assessment algorithms in the judicial system has led to blatant discrimination in the United States, and automated detection of welfare fraud is being litigated in the Netherlands in the SyRI case. Potentially promising and seemingly less controversial applications of AI, for example to improve healthcare or energy management, should likewise be the subject of close reflection and scrutiny, as they are not exempt from risks and concerns.
International legal standards for AI
In this context, sets of guiding principles for ethical AI and informal codes of conduct for self-regulation have proliferated. While the global efforts to reflect on AI ethics are laudable and necessary, it is time to move beyond AI ethics and towards binding legal frameworks and enforceable regulation of AI. This is not to say that new laws are needed: on the contrary, policy and regulatory efforts should primarily seek to interpret and implement existing legal frameworks.
In order to advance AI accountability, international law has a two-fold role to play. First, international law provides established, globally agreed, actionable and enforceable standards, in particular within the human rights framework, which embodies values such as fairness, equality, dignity, and individual autonomy. Second, international institutions and processes are an ideal forum in which to debate and engage with possible grey areas and unsettled questions. The international legal dimension does not replace ethical and technical approaches to AI accountability, but complements them. The ethical, legal, technical, and policy aspects must be addressed together in order to achieve accountability in relation to AI.
*This contribution was first published as part of the I4ADA Accountability Paper Vision 2020: Taking Stock & Looking Forward (October 2020)
Dr Berenice Boutin is a senior researcher at the T.M.C. Asser Instituut and part of the research strand ‘Human dignity and human security in international and European law’. Her work focuses on international responsibility, security, and new technologies. Berenice is project leader of the research project Designing International Law and Ethics into Military Artificial Intelligence (DILEMA). She also coordinates the winter academy on artificial intelligence and international law.
Winter academy: Artificial intelligence and international law
In February 2021, the Asser Institute will host its third annual winter academy on ‘Artificial intelligence and international law.’ This state-of-the-art training programme will be hosted in an online format and will take place over six days, split across two weeks: 10-12 February and 17-19 February. Learn more and register here.
T.M.C. Asser Sixth Annual Lecture
In a thought-provoking Sixth Annual T.M.C. Asser Lecture, Prof. Andrew Murray, a leading thinker on information technology and regulation, will discuss the challenges that Artificial Intelligence and Big Data pose for human agency and the Rule of Law. Prof. Murray’s lecture is entitled ‘Almost human: Law and human agency in the Time of Artificial Intelligence’.
The Sixth Annual T.M.C. Asser Lecture, a unique online event, will take place on Thursday, 26 November 2020 at 16.00 hours CET. Registrations are open now and you can sign up here.