Interview. 'The creation of systems designed to automate murder and mass destruction in wartime poses a serious danger to global peace and security. Whatever technology Israel employs in Gaza will not stay there.'

Marwa Fatafta: Israel uses Gaza as a test lab for dystopian technologies

In the war in Gaza, technology, from social media to artificial intelligence, has played a role unprecedented in past conflicts. That role is putting the West under the spotlight: from Europe, with its attempts to impose rules on the tech giants, to the United States, where Silicon Valley is now a de facto countervailing power and a source of unsustainable pressure on institutions.

We spoke about these developments with Marwa Fatafta, leader of policy and advocacy work on digital rights in the Middle East and North Africa (MENA) region for the NGO Access Now.

The Gospel, Lavender, and Where’s Daddy are the main AI systems employed in the war in Gaza, systems whose existence was revealed by the Israeli magazine +972. Are we witnessing a war in which artificial intelligence is being used more systematically than ever before?

Israel’s use of AI systems to conduct warfare is not new. In May 2021, during the 11-day bombardment of the Gaza Strip, the Israeli military made unprecedented use of these technologies, dubbing that campaign “the first AI war” at the time. For decades, Tel Aviv has used Gaza as an open-air laboratory to test these dystopian technologies, outside the confines of any regulation or ethics.

What will the consequences be, given that such systems are subject to far fewer regulations than conventional weapons? And beyond the obvious destruction in Gaza, does this systematic use of AI-based weapons carry a new threat to us all?

These systems are the epitome of all that is wrong with artificial intelligence. They are “biased,” inaccurate and unreliable, and they are employed to authorize decisions with fatal consequences for people’s lives and dignity. The creation of systems designed to automate murder and mass destruction in wartime poses a serious danger to global peace and security. Whatever technology Israel employs in Gaza will not stay there. As happens after every war in the Strip, these technologies are then exported to the rest of the world, carrying the endorsement of having already been deployed on the battlefield. Notably, this is how the “éminence grise” behind Lavender’s creation, the former head of Unit 8200, Israel’s signals intelligence unit, described his vision for the future: to make these systems mainstream in the conduct of war. They should be banned.

In an article on surveillance in Gaza, the New York Times revealed that Google Photos is being employed as a surveillance tool and database by the Israeli army’s intelligence units. By their own admission, it is the most useful application of them all, despite Google’s policy against using its technologies to harm people. What does this say about Silicon Valley’s role?

Big Tech is running the very serious risk of making itself complicit in genocide. In times like these, companies should take special care to assess and mitigate the danger that their technologies and policies may contribute to, or collude in, human rights abuses and heinous crimes. Instead, companies like Google and Amazon are profiting from death by continuing to provide AI and cloud services to the Israeli government, including the Ministry of Defense. Google, moreover, has carried out mass firings of employees who protested the company’s involvement in genocide.

You wrote the Access Now report on the censorship by Meta (the parent company of Facebook and Instagram) of Palestinian and pro-Palestinian voices. You found that the algorithm was changed to lower the confidence threshold for hiding Palestinian content to just 25 percent. This trend was already in effect prior to October 7, as revealed in an investigation by The Intercept into Facebook’s Dangerous Organizations and Individuals (DOI) policy.

Meta relies heavily on algorithms to detect and remove content that violates its policies. Since October 7, the company has been observed “tinkering” with its systems, lowering the level of certainty its classifiers need before flagging and hiding “hate speech” in comments originating from Palestine. In practice, this means the algorithms become more zealous and end up removing completely innocuous content.
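To make the mechanics concrete, here is a minimal, hypothetical sketch of how lowering a classifier’s confidence threshold sweeps in more content. The comments, scores, and thresholds below are invented for illustration and do not represent Meta’s actual classifiers or data:

```python
# Hypothetical illustration only; the texts, scores, and thresholds are
# invented and do not reflect Meta's real moderation systems.

# Each comment comes with the classifier's confidence that it is "hostile".
comments = [
    ("Explicit call to violence", 0.95),  # clear violation, high confidence
    ("Free Palestine",            0.30),  # innocuous, low confidence
    ("Praying for Gaza",          0.28),  # innocuous, low confidence
]

def hidden_comments(scored_comments, threshold):
    """Return the comments a moderation system would hide at this threshold."""
    return [text for text, score in scored_comments if score >= threshold]

# At a strict 80% threshold, only the clear violation is hidden.
print(hidden_comments(comments, threshold=0.80))
# ['Explicit call to violence']

# Lowered to 25%, the same system also hides the innocuous posts.
print(hidden_comments(comments, threshold=0.25))
# ['Explicit call to violence', 'Free Palestine', 'Praying for Gaza']
```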

At the same time, social media is teeming with anti-Semitic and Islamophobic content. How is this possible?

This is the heart of the problem. While social media platforms systematically censor Palestinian voices, they allow the spread of war propaganda, hate speech and genocidal rhetoric. This failing is not limited to the Palestinian issue; it is the problem of content moderation, pure and simple. Platforms’ choices are driven by profit, not by protecting the safety and freedom of speech of oppressed and marginalized groups.
