The Israeli army used a new artificial intelligence system to generate lists of tens of thousands of human targets for potential airstrikes in Gaza, according to a report published last week. The report comes from the nonprofit outlet +972 Magazine, which is run by Israeli and Palestinian journalists.
Military AI in Gaza is not new

The Israel Defense Forces denies many of the claims in this reporting. In a statement to the Guardian, it said it “does not use an artificial intelligence system that identifies terrorist operatives”. It said Lavender, the system named in the report, is not an AI system but “simply a database whose purpose is to cross-reference intelligence sources”.
Proponents of military AI argue it will enable faster decision-making, greater accuracy and reduced casualties in warfare. Yet the report claims one Israeli intelligence officer said that, because of the Where’s Daddy? system, targets would be bombed in their homes “without hesitation, as a first option”, leading to civilian casualties. The Israeli army says it “outright rejects the claim regarding any policy to kill tens of thousands of people in their homes”.
Facing the ‘unknown’

Some Israeli startups that make AI-enabled products are reportedly making a selling point of their use in Gaza. Yet reporting on the use of AI systems in Gaza suggests that AI falls far short of the dream of precision warfare, and instead creates serious humanitarian harms.