Warfare in Gaza: Israel’s Use of an AI Targeting System Raises Alarming Ethical Concerns

Tel Aviv, Israel – The use of AI in warfare has long worried experts, who have warned of a range of risks in its deployment. While much of that concern has centered on fears of autonomous weapons drawn from popular culture, such as the movie “Terminator,” recent reports from Israel have shed light on a different dystopian aspect of AI in the military.

According to Israeli publications +972 and Local Call, the Israel Defense Forces (IDF) have been using an AI-based system named Lavender to identify targets for assassination in their conflict with Hamas in Gaza. The system reportedly draws on a wide range of data sources, including photos, cellular information, communication patterns, and social media connections, to identify individuals linked to Hamas and Palestinian Islamic Jihad. The dataset even includes information on Palestinian civil defense workers, raising concerns about the system’s accuracy and potential for collateral damage.
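The published reports do not describe Lavender’s internals, but the general pattern they point to, fusing several behavioral signals into a single affiliation score and flagging anyone above a threshold, can be sketched in a few lines. Every feature name, weight, and threshold below is hypothetical and for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Hypothetical per-person feature vector; field names are illustrative only."""
    shared_contacts: float   # fraction of contacts who are also flagged (0-1)
    comms_pattern: float     # similarity of communication patterns to known members (0-1)
    social_links: float      # strength of social-media ties to flagged accounts (0-1)
    device_movement: float   # overlap of cell-location traces with flagged sites (0-1)

# Illustrative weights; a real system would learn these from labeled data.
WEIGHTS = {
    "shared_contacts": 0.35,
    "comms_pattern": 0.30,
    "social_links": 0.20,
    "device_movement": 0.15,
}

def affiliation_score(p: Profile) -> float:
    """Weighted sum of the signals, yielding a 0-1 'match' score."""
    return (WEIGHTS["shared_contacts"] * p.shared_contacts
            + WEIGHTS["comms_pattern"] * p.comms_pattern
            + WEIGHTS["social_links"] * p.social_links
            + WEIGHTS["device_movement"] * p.device_movement)

# Anyone above an arbitrary threshold is flagged.
THRESHOLD = 0.7
print(affiliation_score(Profile(0.9, 0.8, 0.6, 0.5)) >= THRESHOLD)  # True
```

The obvious hazard of any scheme like this is that people who merely resemble the target pattern, such as civil defense workers whose jobs involve constant communication and movement, can score above the threshold.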

Sources within Israeli intelligence have revealed that Lavender assigns each individual in Gaza a score reflecting how closely their profile matches the characteristics of known militants. Although the system was reportedly only about 90% accurate in identifying militants, it was used with minimal human oversight, producing a vast list of potential assassination targets, reportedly tens of thousands of names, including thousands of individuals deemed to be of low importance. Even a 10% error rate, applied across a machine-generated list of that size, implies thousands of misidentified people, and the use of Lavender may therefore have contributed to the high number of civilian casualties in the conflict.
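To make that arithmetic concrete, the short calculation below uses hypothetical round numbers rather than the reported figures; it simply shows how a “90% accurate” classifier, run across a list of tens of thousands of people, still yields thousands of errors.

```python
# Back-of-the-envelope calculation with hypothetical numbers, for illustration only.
flagged_people = 30_000   # assumed size of a machine-generated target list
error_rate = 0.10         # "90% accurate" implies roughly 1 in 10 flags is wrong

misidentified = flagged_people * error_rate
print(f"Expected misidentifications: {misidentified:,.0f}")  # prints 3,000

# With minimal human review, each of those errors can translate directly into
# a strike on someone with no militant affiliation, plus anyone near them.
```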

The IDF has denied using AI systems to identify terrorists or to predict a person’s involvement in such activities, describing Lavender instead as a database used for cross-referencing intelligence sources. However, the reported impact of Lavender on the civilian death toll raises serious questions about the ethical use of AI in warfare. The scale of civilian casualties and the manner in which targets were selected and engaged by the IDF highlight the need for greater oversight and accountability in deploying such technologies.

The situation in Gaza underscores the importance of international agreements on the responsible military use of AI and autonomy. While more than 50 countries have endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, notable absentees include Israel, Russia, and China. The need for a principled approach to the military use of AI, one that weighs risks, benefits, unintended bias, and the potential for accidents, is becoming increasingly apparent as the technology plays an ever larger role in conflicts worldwide.

The revelations about Lavender’s role in the Gaza conflict may serve as a catalyst for further discussion and negotiation on regulating the use of AI in warfare. As the world grapples with the implications of autonomous systems in military operations, the need for ethical guidelines and international cooperation in this field becomes ever more pressing. The events in Gaza are a stark reminder that human decisions lie behind every deployment of AI in life-or-death situations, and of the importance of upholding international humanitarian law and minimizing civilian harm in conflict zones.