Google's Relationship with the Israel Defense Forces and Military AI Technology
Hey Everyone,
Google has historically had a complicated relationship with military tech. Most recently, on May 16th, 2024, nearly 200 workers inside Google DeepMind, the company's AI division, signed a letter calling on the tech giant to drop its contracts with military organizations, according to Time.
Google hasn't been very transparent with the general public, let alone its own employees, about the range of its military contracts over the past decade or what exactly they involve. We do know a fair bit about Palantir's involvement, and the Israeli army (IDF) has reportedly been using Amazon's cloud services and artificial intelligence (AI) tools from Microsoft and Google for military purposes, amid a growing volume of data on Palestinians and Gaza, according to reports by +972 Magazine and the Hebrew-language news site Local Call. These claims have, of course, been disputed.
Still, that people at DeepMind have signed such a letter, even after all the employees Google has fired over this issue, is eyebrow-raising. Project Nimbus is a $1.2 billion cloud computing and AI agreement between the Israeli government and the tech companies Google and Amazon. Under the contract, Google is barred from preventing specific arms of the Israeli state from using its technology. The project has been controversial due to concerns about its potential military applications.
The letter is a sign of a growing dispute within Google, pitting at least some workers in its AI division, which has pledged never to work on military technology, against its cloud business, which sells AI services to militaries. If the reporting is accurate, this would appear to violate Google's own rules. The letter also points to reports that the Israeli military uses AI for mass surveillance and to select targets in its bombing campaign in Gaza, and that Israeli weapons firms are mandated by the government to purchase cloud services from Google and Amazon.
“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter adds.
The signatures represent some 5% of DeepMind's overall headcount. Meanwhile, the Israeli military has reportedly been using an AI-powered system called Lavender to identify potential targets in Gaza. This system has marked tens of thousands of individuals as potential targets based on their suspected links to Hamas or Palestinian Islamic Jihad (PIJ). Lavender processes vast amounts of data to rapidly identify these targets, which has raised legal and moral questions about the use of AI in warfare.
The letter calls on DeepMind's leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. It does not seem like Google has learned its lesson since Project Maven in 2018. Six years later, Google is still firing protesters internally.
AI tools used by the IDF are implicated in the deaths of tens of thousands of people in Gaza. In a New Yorker article published on April 12, +972 reporter Yuval Abraham explained the rationale behind Lavender's deployment: “When [high ranking Israeli officials] decided to expand the list to so many people, it became impossible, and that's why they decided to rely on all of these sophisticated automation and artificial-intelligence tools.” Most of the victims were, of course, defenseless civilians.
From what we know, as of July 10, 2024, the total Palestinian death toll had reached 38,243, according to Gaza health officials. It's entirely possible that Palantir and American tech are at the heart of these AI systems. Apparently the elite division of the Israel Defense Forces (IDF) that helped create the tool claims it has 90 percent accuracy in identifying people. Only 90%? At the scale Lavender operates, that error rate adds up, as the sketch below shows.
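To make the "only 90%" point concrete, here is a minimal back-of-the-envelope sketch in Python. It simply takes the accuracy claim at face value; the figure of roughly 37,000 flagged individuals is drawn from +972 Magazine's Lavender reporting and is an assumption for illustration, not a verified count:

```python
# Back-of-the-envelope: what a claimed "90% accuracy" implies at scale.
# ASSUMPTION: ~37,000 people flagged, a figure from +972 Magazine's
# Lavender reporting; illustrative only, not a verified count.
flagged_individuals = 37_000
claimed_accuracy = 0.90  # the IDF division's claimed identification accuracy

# Reading "90% accurate" as "1 in 10 flags is wrong", the implied
# number of misidentified people is:
misidentified = flagged_individuals * (1 - claimed_accuracy)
print(f"Implied misidentifications: ~{misidentified:,.0f} people")
# Output: Implied misidentifications: ~3,700 people
```

In other words, even taking the claim at face value, thousands of people would be wrongly marked as targets.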
While it's a charged topic, since the murderous attack by the Hamas terror organization on October 7, the IDF has been operating more or less under its own authority, with little restraint imposed by the international community. Israel has essentially killed people in Gaza at a ratio of roughly 40 to 1 while going after Hamas. The problem is that if you work at Google Cloud or Google DeepMind, you are certainly complicit in some way.
But the same can be said for AWS and Microsoft. The Israeli army has reportedly been using cloud services from major tech firms like Amazon, Microsoft, and Google to enhance operational efficiency in Gaza. These services offer effectively unlimited storage and advanced AI capabilities, which have been crucial for data processing and operational effectiveness during military operations.
It's impossible to independently verify +972 Magazine's sources, but the accounts, if they are factual, are difficult to read. To this day the Lavender exposé remains their most-read piece. BigTech does not seem to care about the opinions of its employees, however. The May 16th memo hints at a culture clash between Google and DeepMind, which Google acquired in 2014 and whose tech Google pledged in 2018 would never be used for military or surveillance purposes. It's highly likely that Google's participation in military AI systems is accountable not to the public, but only to an elite within the company and those involved in National Defense.
It's entirely possible Gaza and Ukraine have been a sort of training ground for American BigTech's increasing involvement in National Defense and military AI programs, ahead of even bigger geopolitical and global conflicts that could occur over Taiwan, Iran, North Korea, and other hot spots.
Google's relationship with its employees and its internal conflicts of interest have been a significant point of stress for at least the past 5-10 years. The letter highlights tensions within Google between its AI division and its cloud business, which sells AI services to militaries. As AI becomes more embedded in military systems, drone technology, and cybersecurity, this risks escalating, given the probability that some see for a global conflict with China and its allies.