Societal Relevance and Impact
Research produced as part of this project will contribute to shaping public debate and policy development on the critical societal issues raised by the use of AI technologies in the military. The question of how to ensure, in concrete terms, that human soldiers remain in control of military technologies is of great importance and concern to the armed forces, national and international governing institutions, and the general public. This project will provide a platform for discussion and exchange amongst stakeholders and deliver concrete input for policy and regulation.
The valorisation and dissemination activities and products in this project include:
- Policy Guidelines on Maintaining Human Control over AI Technologies in the Military
The research team will produce background papers and reports highlighting the main policy implications of the research results, and on this basis will formulate detailed recommendations for the responsible use of AI in the military. The project will specifically develop policy guidelines on where and how to maintain human control in order to ensure compliance with, and accountability for violations of, international law in the design, acquisition, and deployment of military technologies involving AI. The policy guidelines will be developed with the involvement of relevant stakeholders from the military and industry, and in dialogue with policy makers at the national, European, and international levels, as well as non-governmental organisations.
- Technical Standards and Protocols for Testing and Certification of Compliance
This policy guidance will be further operationalised through the development of technical tools to test and certify whether AI systems whose development, acquisition, or deployment is envisaged meet ethical and legal thresholds. Together with industry partners and policy stakeholders, the research team will develop standards and protocols aimed at safeguarding human agency and ensuring international legal compliance by design. This technical guidance will be of direct use to both industry and governmental partners.
- The Hague Testing Lab on AI Compliance with International Law
The policy and technical guidelines will be further concretised and applied by setting up a testing lab in The Hague on AI compliance with international law. Initially, the lab will focus on military applications of AI and their compliance with the law of armed conflict and the law of military operations. Later, the lab could extend to other applications of AI and test compliance with international law more generally. This unique initiative has the potential to become an internationally leading testing lab for AI and international law, in the ideal location of The Hague.
- Rules of Engagement for Deploying Military AI in Compliance with International Law
In collaboration with stakeholders such as the Ministry of Defence and NATO, the research team will seek to assess, adjust, and supplement existing Rules of Engagement, and to draft model Rules of Engagement that incorporate and implement thresholds and modalities of human control over military AI.
- Professional Training
Tailored training courses will be offered to members of the armed forces and policy makers. Together with partners such as the Netherlands Defence Academy, the project team will develop a flexible curriculum with modules on ethical, legal, technical, and policy aspects. The courses will involve an active coaching approach, as well as serious gaming, simulations, and interactive exercises based on concrete scenarios of human-machine partnership.
- Innovative Outreach Activities
Public outreach activities aimed at the broad dissemination of, and engagement with, the research findings will be rolled out throughout the project. They will include events and lectures in The Hague, online non-scientific publications, multimedia products such as podcasts and videos, and innovative activities such as knowledge cafés, as well as a survey and a hackathon on public values for military AI.