[Interview] Google skips due diligence for cloud services to Israel
Published 2 June 2025
Media on Shutterstock
A new story published in The Intercept reveals that tech company Google had serious concerns about providing state-of-the-art cloud and machine-learning services to Israel. The piece quotes Asser Institute researcher León Castellanos-Jankiewicz weighing in on Google’s contractual inability to conduct proper risk assessments.
Tech giant Google understood the risk of providing cloud and machine-learning services through its so-called Project Nimbus to Israel, a nation “long accused of systemic human rights violations and wartime atrocities”, according to The Intercept. The story reveals that Google would be “unable to fully monitor or prevent Israel from using its software to harm Palestinians”.
The story also reveals that Google is under contractual obligation to “stonewall criminal investigations by other nations” into Israel’s use of its technology. Moreover, the contract requires close collaboration with the Israeli security establishment, including intelligence sharing, which was “unprecedented” in Google’s agreements with other countries.
Inability to prevent misuse
Google internal documents reviewed by Asser Institute researcher León Castellanos-Jankiewicz show that the tech company would be unable to monitor and oversee Israel’s usage of its tools. Castellanos-Jankiewicz told The Intercept that Google’s awareness of the risks and foreknowledge that it could not conduct standard due diligence may pose legal liability for the company under domestic legal rules. According to the news story, portions of these documents were first reported by The New York Times, but “Google’s acknowledged inability to oversee Israel’s usage of its tools has not been previously disclosed.”
The contract with Israel keeps Google in the dark about what the customer is doing with its technology, and Google would be given “very limited visibility” into how its software is deployed, according to The Intercept. “They’re aware of the risk that their products might be used for rights violations,” Castellanos-Jankiewicz is quoted as saying. “At the same time, they will have limited ability to identify and ultimately mitigate these risks.”
The story includes passages of an internal Google report confirming the company’s concerns: “Google Cloud Services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank,” which could result in “reputation harm”. Despite this, Google has limited ability to monitor, mitigate or interrupt services to Israel, according to portions of the agreement reviewed by Castellanos-Jankiewicz.
Increased scrutiny of the Israeli government’s activities
The revelations come amid increased scrutiny of Israeli activities in the Occupied Palestinian Territories. In October 2024, the International Court of Justice (ICJ) in The Hague issued a non-binding Advisory Opinion urging countries to “take all reasonable measures” to prevent corporations from aiding or assisting the illegal occupation of the West Bank. In separate ICJ proceedings, South Africa is accusing Israel of alleged violations of the Genocide Convention.
Conducting due diligence
The Project Nimbus deal was concluded before Hamas attacked Israel on October 7, 2023. However, international best practices outline the duty to conduct periodic risk assessments, especially when circumstances on the ground have changed. Continuously assessing “actual and potential human rights impacts” is increasingly expected in corporate behaviour, according to the UN Guiding Principles on Business and Human Rights, a non-binding document laying down human rights due diligence standards for multinational corporations.
“The UN Guiding Principles, plain and simple, require companies to conduct due diligence,” Castellanos-Jankiewicz said. “Google acknowledging that it will not be able to conduct these screenings periodically flies against the whole idea of due diligence. It sounds like Google is giving the Israeli military a blank check to basically use their technology for whatever they want.”
According to The Intercept, Google declined to answer any of a list of detailed questions about the company’s visibility into Israel’s use of its services or what control it has over Project Nimbus.
Read more
Experts urge immediate safeguards on military use of AI decision-support systems
A group of leading legal scholars has called on the international community to urgently regulate the use of artificial intelligence-based decision-support systems (AI-DSS) in military operations, warning that current frameworks fail to address the profound risks these systems pose to international humanitarian law and human judgement. Unlike autonomous weapon systems, which have been the focus of UN discussions for over a decade, AI-DSS remain largely unregulated and under-scrutinised. Read more.
[Interview] Klaudia Klonowska: “The calls for ‘human judgement’ in AI warfare are not matched by developments in military defence”
As artificial intelligence (AI) continues to shape the future of military operations, Asser Institute PhD researcher Klaudia Klonowska discusses how machine learning is rapidly transforming military decision-making, particularly in terms of AI-enabled decision-support systems (AI-DSS). “Despite AI’s portrayal in the media as a highly sophisticated technology, machine learning is still evolving, and so is our understanding of how to manage and govern it. Legal scholars need to engage with developers to make sure these systems reflect legal principles, especially in high-stakes conflict environments.” An interview.
