AI System Lavender, Reportedly Used by Israeli Defense Forces in Gaza Conflict, Marked Up to 37,000 People as Potential Assassination Targets

Gaza City, Gaza Strip – The use of artificial intelligence in warfare has long been a topic of concern among experts. While previous discussions have often centered around the potential dangers of autonomous weapons reminiscent of the “Terminator” movies, a recent development in Israel’s conflict with Hamas in Gaza has highlighted another troubling aspect of AI deployment on the battlefield.

Israeli news outlets +972 and Local Call recently published a detailed account of an AI-based system known as Lavender, which the Israeli Defense Forces reportedly used to identify targets for assassination. Unlike earlier protocols, under which senior Hamas operatives were individually and thoroughly vetted before an assassination was authorized, Lavender reportedly enabled a far less discriminate program of killings in the campaign that followed Hamas's October 7, 2023 attacks on Israel.

According to sources within Israeli intelligence cited by the publications, Lavender was trained on a variety of data sources, including photos, cellular information, communication patterns, and social media connections, to identify characteristics of known Hamas and Palestinian Islamic Jihad operatives. The system then assigned each individual in Gaza a score based on how many of those characteristics they matched, and individuals with high scores could be marked as potential assassination targets. At one point, the list reportedly contained up to 37,000 names.
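The reporting describes this scoring only in general terms and gives no implementation details. Purely as an illustration of the general idea of characteristic-count scoring against a reference set of features, the following sketch uses entirely hypothetical records, feature names, and threshold; it is not a reconstruction of Lavender or any real system.

# Illustrative sketch only: generic characteristic-count scoring against a
# reference feature set. All record IDs, features, and the threshold below
# are hypothetical; the published reporting gives no implementation details.
from dataclasses import dataclass, field


@dataclass
class Record:
    """A hypothetical bundle of observed characteristics for one record."""
    record_id: str
    features: set[str] = field(default_factory=set)


# Hypothetical reference characteristics derived from a labeled training set.
REFERENCE_FEATURES = {"feature_a", "feature_b", "feature_c", "feature_d"}

# Hypothetical cutoff: records matching at least this many characteristics are flagged.
SCORE_THRESHOLD = 3


def score(record: Record) -> int:
    """Count how many reference characteristics the record matches."""
    return len(record.features & REFERENCE_FEATURES)


def flag(records: list[Record]) -> list[tuple[str, int]]:
    """Return (record_id, score) pairs for records at or above the threshold."""
    return [(r.record_id, score(r)) for r in records if score(r) >= SCORE_THRESHOLD]


if __name__ == "__main__":
    sample = [
        Record("r1", {"feature_a", "feature_b"}),
        Record("r2", {"feature_a", "feature_b", "feature_d"}),
    ]
    print(flag(sample))  # only r2 meets the hypothetical threshold: [('r2', 3)]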

Sources said that although the system was known to be only about 90% accurate in identifying militants, its output received little human review. With a lengthy target list and minimal verification, often no more than a quick check that a target was male, strikes frequently proceeded against targets' homes. This approach may have contributed to the heavy civilian death toll early in the conflict, during which nearly 15,000 Palestinians, including many women and children, were killed in the first six weeks.
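If a 10% error rate held across a list of roughly 37,000 names, it would imply on the order of 3,700 people wrongly flagged as militants; that is a rough back-of-the-envelope estimate based on the reported figures, not a number cited by the publications.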

The Israeli Defense Forces issued a statement denying that AI is used to identify terrorists and emphasizing its adherence to the rules of proportionality and precautions in attacks. The reported outcomes of Lavender's use nonetheless raise questions about collateral damage and about the military's decision-making in conflict zones.

The revelations about AI's role in the Gaza conflict underscore broader questions about the ethical and responsible use of technology in warfare. As countries grapple with AI's place in military operations, concerns about unintended bias, excessive collateral damage, and accountable decision-making are coming to the forefront. Efforts to regulate and establish guidelines for the military use of AI are gaining traction globally, prompting discussions about the need for international agreements to govern the development and deployment of such technologies on the battlefield.