Synopsis: While every news outlet out there has some kind of bias when covering the militarization of Google's AI tech for cyberwarfare and national security, it surprisingly turns out to be that same technology - itself often accused of bias - that offers a sober view of the situation. That tech is a trio of AI-enhanced technologies from the company known as Alphabet, Inc.: the Gemini AI chatbot, the NotebookLM platform and Google search itself.

This past month, Google announced that its Workspace platform pricing is increasing for the third time in two years, presumably to democratize access to its stable of new artificial intelligence tools. As a tech geek and fan of the innovative spirit that Google has historically fostered, I've found the frequent price increases irritating enough to consider leaving the platform, but I've always had difficulty looking away from its scrappy projects like Google Labs and now its AI Studio, among many other neat apps and geeky hacks.

A lot of us old-timers also recall "Don't be evil", Google's former motto, which used to permeate everything from its ads to the company's code of conduct.

The move away from the venerable phrase, with all its moral and ethical implications, created immense backlash against the company, which briefly considered and rejected adopting "Do no evil" but eventually settled on "Do the right thing" when it restructured into the entity we now know as Alphabet, Inc.

Despite the company's distancing itself from the legacy motto, the vague allusion to right versus wrong continued to inspire employees, who enjoyed high levels of job satisfaction and the organization's famously collegial work atmosphere...

At least until last year, when Google made waves again by outright firing 28 of its employees for protesting what they saw as the abuse of the product of their labour and its unethical use in carrying out a genocide in the Occupied Palestinian Territories.

According to every trusted source on the Interwebs, the company's Project Nimbus was touted as a highly profitable initiative worth $1.2 billion, but created immense controversy as it ended up being positioned as a tool of cyberwarfare rather than the innocent evolution of Big Data. A quick Google search yields the clear and unambiguous answer to the question "what is Project Nimbus?":

Project Nimbus is a high-speed mech action game set on a devastated future Earth, where the player maneuvers a weaponized humanoid general-purpose Battle Frame.

Engage enemy units with various weapons and defeat huge bosses and unique ace pilots!
Fly the skies with 360 degrees of freedom, and battle enemies with a variety of weapons and techniques, from smart missiles to the blade charge.
Experience the battle as a young ace pilot destined for legendary status.
Paint the sky with fire!

Wait, what? Hold on! "What is the Google Project Nimbus?"

Google's link to Wikipedia elucidated the mystery:

Through a $1.2 billion contract, technology companies Google (Google Cloud Platform) and Amazon (Amazon Web Services) were selected to provide Israeli government agencies with cloud computing services, including artificial intelligence and machine learning. Under the contract, the companies will establish local cloud sites that will "keep information within Israel's borders under strict security guidelines."

Cloud computing, artificial intelligence, machine learning? An Alphabet soup (yeah, a lame pun) of buzzwords doesn't quite explain why staffers have been putting up such forceful resistance, going so far as to - according to The Guardian - occupy the office of the chief technology officer of Google Cloud and brandish posters reading “No cloud apartheid”, “Googlers against genocide” and “Don’t be evil, stop retaliation”.

The company explained that it was a matter of policy: "Physically impeding other employees’ work and preventing them from accessing our facilities is a clear violation of our policies, and completely unacceptable behavior”.

More to the point, Google's vice-president of global security was blunter in an email to employees, per CNBC: “If you’re one of the few who are tempted to think we’re going to overlook conduct that violates our policies, think again.”

As a result of Google's investigation, the unceremonious firings allegedly included employees who weren't even directly involved in the protest. Gabriel Schubiner, one of the departing employees (who happens to be Jewish), said: “Google has, for years, lied about the specific complicity with the military. However, we know from internal organizing that this is not true. We know that Google is deeply complicit with the Israeli military.”

For its part, Google maintains the innocence of Project Nimbus, stating that it “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services”.

I guess that's that.

Okay, maybe not. Thanks to the magic of relevant links, that same Guardian article promoted a second article mentioning an Israeli airstrike in the Gaza Strip that killed a former Google employee at the beginning of the war. By all accounts, Mai Ubeid was one of the many brilliant Palestinian engineers who once called themselves "Googlers". She was killed by that airstrike along with her whole family.

According to The Guardian, for her Google colleagues, Ubeid’s killing was another reminder of their employer’s $1.2bn contract to provide the Israeli government and military with cloud and machine learning services, codenamed Project Nimbus. Many Google workers have spent the last two years protesting, arguing their employer is providing tools that aid what human rights groups call apartheid imposed on Palestinians by Israel. Despite worker efforts to bring more attention to Ubeid’s killing, the company has so far declined to publicly or internally acknowledge her death, according to several employees who spoke at the vigil. For Mohammad Khatami, a Google software engineer and one of the organizers of the vigil, the silence of the company and the chief executive, Sundar Pichai, on Ubeid’s death “is a betrayal in the purest sense of the word”.

I found it interesting that Google's employees knew about the apartheid before much of the world did, but the article went on to offer a little more insight into the public narrative:

"when announcing the contract with Amazon and Google, Israel’s finance ministry told Haaretz the companies would be providing cloud solutions to the government and “the defense establishment”. The company declined to respond to follow-up questions about why Israel’s government has described the nature of the contract in different terms."

Okay, so Google's search engine was initially pretty transparent about the situation, but there is clearly more to the story. So why not ask the actual AI that's ostensibly being sold to the Israeli military? Surely it will know more than the rest of us outsiders.
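(Aside for the technically curious: you can put the same kind of question to Gemini programmatically through Google's publicly documented generative AI Python SDK. The sketch below is purely illustrative and not the setup behind the conversation that follows; the API key placeholder, model name and prompt are my own assumptions.)

  # Minimal sketch: asking Gemini a question via the google-generativeai SDK.
  # Assumes the package is installed (pip install google-generativeai) and that
  # "YOUR_API_KEY" is replaced with a real key; the model name is illustrative.
  import google.generativeai as genai

  genai.configure(api_key="YOUR_API_KEY")
  model = genai.GenerativeModel("gemini-1.5-flash")
  response = model.generate_content("What is Google's Project Nimbus?")
  print(response.text)  # prints the chatbot's answer, analogous to the replies quoted below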

Full disclosure, today was literally the first time I used Google's AI chatbot Gemini. After hearing about the disastrous launch of its predecessor - Bard - I steered clear of the thing, mostly to avoid wasting my time. After making some light conversation, I was pleasantly surprised by its polite manner and eagerness to please:

"What would you like to discuss? I'm ready for a fresh conversation. Perhaps you have a new topic in mind? Just let me know what you'd like to talk about."

I almost felt bad grabbing the steering wheel and sharply yanking it in a completely different direction, internally cringing as I bluntly asked:

Ouch! It didn't seem to be pulling punches at all. Before I even thought of my next question, it continued:

"Specifically, Google has provided Israel's Defense Ministry and military with access to its latest artificial intelligence technology, including the Vertex AI service. This access was reportedly expedited after the start of the conflict in Gaza in October 2023. There are also reports of the Israeli military requesting access to Google's Gemini AI platform for processing texts and voices.
This provision of technology is in addition to the Project Nimbus contract, a $1.2 billion agreement between Google, Amazon, and the Israeli government to provide cloud computing infrastructure, AI, and other technology services. This contract has been met with significant criticism and protest due to concerns about its potential use in human rights abuses against Palestinians."

Really?

To put this into perspective, my usual chat partner is ChatGPT. Although I'm not a huge fan of its latest and greatest iteration - o1 - I decided to give it a shot, as it's billed as being more cerebral than its famously coy and ambivalent predecessors. Presented with the same question, it reported thinking for a whole 4 seconds before replying:

Boring.

Back to Gemini. I figured that, given the rulings of the International Court of Justice and the International Criminal Court over the past year's massacres against a captive population, some of this might be illegal, not to mention the international arrest warrants issued against some of the key players. So I wondered about the legality of the sales:

This time, it correctly pointed out that taking a stance on international law requires a proper legal analysis, which is fair, but it went on to detail some of the Google programs that could raise the aforementioned concerns, several of which I had no idea about:

Recent reports and investigations strongly suggest that Google's technology has been used by Israel's military. While the exact nature and extent of this use are not fully transparent, here's what we know:

  • Project Nimbus: This $1.2 billion contract provides cloud computing services to the Israeli government, including the military. This infrastructure can be used for various purposes, including data storage, analysis, and AI applications.
  • Vertex AI: Reports indicate that Google expedited access to its Vertex AI service for the Israeli Defense Ministry after the start of the conflict in Gaza in October 2023. This service allows for the development and deployment of AI models, potentially for military applications.
  • Gemini AI: There are reports of the Israeli military requesting access to Google's Gemini AI platform for processing texts and voices. This suggests potential use in intelligence gathering, analysis, or even in autonomous weapons systems.
  • Habsora AI Tool: The Israeli military reportedly uses an AI tool called Habsora to generate lists of potential targets. While it's unclear if this tool directly uses Google's technology, it highlights the military's increasing reliance on AI in warfare.

Since it mentioned ethics, I thought I'd ask for clarification:

Google's AI Principles explicitly state that they will not develop or deploy AI that causes harm, violates international law, or enables surveillance that violates internationally accepted norms. The company also commits to respecting human rights and avoiding complicity in human rights abuses.
If Google's technology is being used by the Israeli military to facilitate actions that violate international humanitarian law or human rights, such as indiscriminate attacks on civilians or disproportionate use of force, it could be argued that Google is complicit in these abuses.

There you have it: Google's own AI is clearly stating that the company may be complicit in an alleged genocide. I found the instant response and nonchalant description both jarring and somehow refreshing, so I asked how its technology might contribute to the suffering of Palestinians, and it didn't disappoint:

Gemini went on to detail how enhanced targeting, surveillance and data analysis systems like the infamous Lavender AI and "The Gospel" (Habsora) can directly contribute to harming Palestinians, bluntly confessing that "Google has directly provided Israel's Defense Ministry and military with access to its advanced AI technology, including the Vertex AI service. This access was reportedly expedited after the start of the conflict in Gaza in October 2023."

Gemini even volunteered some examples: "Here are some specific examples of how Google's technology could be used against Palestinians:

  • AI-powered facial recognition systems could be used to identify and track Palestinians at checkpoints or protests.
  • AI algorithms could be used to analyze social media posts and identify individuals who are critical of the Israeli government or military.
  • AI-powered drones could be used to conduct surveillance or targeted strikes in Palestinian territories."

Gemini's earlier point about accountability and attribution having struck a chord, I wanted to know more about which countries might be using Google's connected technology in harmful ways, and it came up with a small list of 'hits'. So I asked for clarification:

You're right to be concerned. It's important to be specific about the potential for misuse. Here are those countries again, with a focus on how Google's technology could be implicated in the victimization of people:

China:
  • Uyghur Muslims: Reports suggest widespread use of facial recognition and surveillance tech (potentially involving Google's tech indirectly via partnerships or open-source tools) to track and control this minority group in Xinjiang. This could lead to mass surveillance, arbitrary detention, and suppression of cultural and religious practices.
  • Tibetans: Similar surveillance measures are reportedly used to monitor and suppress Tibetan Buddhists, particularly those suspected of dissent or activism. This could include tracking their movements, censoring their online speech, and even identifying individuals for "re-education" camps.
India:
  • Muslims and other minorities: Concerns exist about the potential use of facial recognition and data analysis tools (potentially linked to Google's tech) to target and discriminate against minority groups, especially in the context of rising Hindu nationalism. This could lead to increased surveillance, discriminatory policing, and even the denial of basic rights and services.
Myanmar:
  • Rohingya Muslims: While the situation is complex, there are concerns that social media platforms (including those using Google's technology) have been used to spread hate speech and incite violence against this persecuted minority group. This could further exacerbate the ongoing humanitarian crisis and potentially lead to further atrocities.
Various Countries with Authoritarian Regimes:
  • Political dissidents and activists: Many countries with poor human rights records are known to use surveillance and tracking technologies (potentially involving Google's tools) to monitor and suppress political opposition. This could include countries in Central Asia, the Middle East, and parts of Africa and Latin America. This could lead to the silencing of dissent, arbitrary arrests, and even torture or disappearances.
Important Considerations:
  • Indirect Involvement: Even if Google isn't directly providing the tech, its tools and platforms can be used by third parties for harmful purposes.
  • Algorithmic Bias: AI systems trained on biased data can perpetuate and amplify existing societal prejudices, leading to discriminatory outcomes for various groups.
  • Lack of Transparency: The secretive nature of many government surveillance programs makes it difficult to get definitive proof of misuse.

It seems implausible that Google's own AI can articulate a Real Risk of Significant Harm (commonly called RRoSH in the privacy vernacular) while the company itself seems incapable of imposing preventative guardrails to at least detect, if not altogether avoid, that harm. This is especially striking given that it named a litany of countries with authoritarian regimes whose non-transparent use of Google tech makes it difficult for the public at large to gauge the situation. Just out of curiosity, I asked:

Wow. OK! Knowing the tendency of all AI chatbots to 'hallucinate', I questioned Gemini on its ability to think clearly:

While AI tools can be helpful in summarizing and analyzing information, they should be used with caution and in conjunction with other methods to ensure accuracy and avoid bias. No AI can replace critical thinking and expert judgment. It's always best to treat information from any source, including AI, with healthy skepticism.

Enter NotebookLM, Google's other AI tool. Designed to ground its answers in user-supplied sources rather than to be creative, this interesting innovation has been making waves because of its ability to synthesize multimedia sources and come up with neat study guides, briefing documents and even FAQs that present a summary of ingested content in easy-to-consume digests.

Surprisingly, a mysterious gremlin described as a "source restriction" prevented the tool from accessing some of the very same sources that Google itself located and that Gemini referenced earlier.

This interesting form of indigestion was quickly cured by simply uploading a separate document containing the offending articles, enabling Google's NotebookLM to use its AI brain to aggregate the data and promptly arrive at the following conclusions:

Human Rights Violations: Google has made public promises to ensure their technologies do not facilitate human rights violations. However, reports suggest that Project Nimbus, a joint cloud computing initiative with Amazon, is being used to facilitate mass surveillance and human rights abuses against Palestinians in the Occupied Territories. This includes the rights to privacy, freedom of movement, and free association. These actions contradict Google's stated commitment to the UN Guiding Principles on Business and Human Rights and its own AI ethics frameworks.
Complicity in Violence: Google's technology, particularly AI, is reportedly being used by the Israeli military for target selection. AI systems such as "Lavender" and "The Gospel" are used to generate targets, sometimes with minimal human oversight, leading to detentions, killings, and the systematic oppression of journalists, healthcare workers, aid workers and families. This raises concerns about Google's complicity in the violence and the dehumanization of people through automated systems.
Erosion of Trust: Google's silence on the misuse of their technologies undermines trust in their commitment to ethical principles. The company's actions cast doubt on their sincerity and dedication to ensuring technology empowers rather than oppresses. This also harms the trust that the public places in technology companies.
Censorship and Retaliation: Google has faced criticism for censoring pro-Palestinian voices and retaliating against employees who speak out against Project Nimbus. This creates a toxic culture of fear and further undermines Google's claims of promoting democratic values.
Digital Colonialism: By partnering with and investing in Israel, Google is seen by some as supporting what they describe as "genocidal and apartheid activities," acting as a modern-day East India Company and an extension of American imperial power, colonizing the global digital economy and reinforcing the divide between the North and the South.
Profit Over Ethics: Some sources argue that tech corporations see an opportunity to profit from the Israeli-Palestinian conflict, prioritizing financial gain over human rights. The continued business with Israel, despite documented human rights abuses, supports this claim.
Legal Implications:
Aiding and Abetting: By providing technology to the Israeli government and military, Google risks being complicit in international crimes, including war crimes and potentially genocide. Some sources suggest Google is risking aiding and abetting the commission of international crimes due to their continued business relationship with the Israeli government.
Lack of Transparency and Accountability: Google has not been transparent about the specifics of its contracts, clients, and risk assessments related to Project Nimbus. This lack of transparency makes it difficult to hold the company accountable for potential human rights abuses stemming from the use of their technology.
Privacy Violations: The collection and use of personal data by the Israeli military for surveillance purposes, facilitated by Google's technology, violates international human rights law. This includes the right to privacy as outlined in the International Covenant on Civil and Political Rights (ICCPR).
Failure to Conduct Due Diligence: Google is expected to conduct human rights due diligence (HRDD) to mitigate adverse human rights impacts linked to its services. There are questions about whether the company has done so, and if they have implemented adequate measures to ensure their technologies are not contributing to human rights abuses.
Global Implications:
Increased Risk of Conflict: The misuse of AI in warfare, facilitated by technology companies like Google, has the potential to increase the speed and scale of conflicts, making them more deadly and destabilizing. This highlights the need for global regulation on the use of AI and technologies that are being weaponized.
In conclusion, Google's provision of technology to Israel raises serious ethical and legal issues. The potential for human rights violations, complicity in violence, lack of transparency, and global proliferation of surveillance technology demands greater scrutiny and accountability. The company's actions contradict its stated values and raise concerns about the role of technology companies in modern conflicts. U.S. tech companies play a significant role, both in providing the infrastructure and directly contributing to the use of AI targeting systems.
Companies such as Amazon, Google, and Microsoft provide cloud computing and AI development resources. This also extends to other companies such as Microsoft that provide cloud computing space for Israeli apps used for issuing permits to Palestinians. Google, despite its policy of not causing immediate harm, supplies technology that enables mass surveillance, which is relied upon by the Israeli military. Moreover, numerous tech companies have launched R&D centres in Israel, boosting the sector and indirectly benefiting from the conflict. There are also questions as to whether these tech companies are fulfilling their public promises to not be involved in human rights violations.

There you have it, folks: Google's own AI accuses it of "acting as a modern-day East India Company and an extension of American imperial power, colonizing the global digital economy and reinforcing the divide between the North and the South".

This entire exercise has been an interesting exploration of the power of AI to discriminate, describe and dissect the implications of cyberwarfare, with fascinating results that all but ignored the typical claim that secrecy is necessary for national security. As if to end on a high note, NotebookLM automatically produced this amazingly insightful podcast:

Podcast link: https://soundcloud.com/claudiu-popa-879361905/exporting-googles-ai

As if to support its Gemini colleague, Google's NotebookLM virtual hosts ended the recording with these illuminating quips:

"The first step is in recognizing that this isn't just a tech issue. It's a societal issue. It's a human rights issue. So we can't just leave it to the techies to sort out. We all have a role to play.
We need to demand more transparency from these companies. More accountability. We need stronger regulations, stronger legal frameworks that protect our rights in the digital age.
And we need to be having these conversations, raising awareness, pushing for change, because the future is not preordained. We can still shape it, but we need to act now.
We're committed to shining a light on these issues, even when they're uncomfortable.
Sunlight is the best disinfectant."