The IDF has been using artificial intelligence (AI) as part of its ongoing assault on Gaza, significantly contributing to the scale and intensity of the war. AI tools such as the "Hapsora" system enable rapid data analysis, identifying hundreds of targets with alarming speed. The technology draws on satellite imagery, communications data, and social media to inform military decisions, and is often used to select both military and civilian targets.
Experts such as Saed Hassouna argue that while AI is presented as a tool for precision, it often leads to the indiscriminate targeting of civilians, causing large-scale suffering and destruction. In this context, AI is seen not as a military necessity but as a way for Israel to demonstrate technological superiority while deflecting international criticism.
According to experts, the deployment of AI in warfare raises serious ethical concerns and could trigger a dangerous arms race. By reducing human involvement in military decisions, it risks making conflicts more violent and less accountable. It also increases the likelihood of erroneous or unethical decisions, which could escalate human suffering, especially for the Palestinian population in Gaza.
Human rights organizations such as Human Rights Watch have criticized the use of these technologies, arguing that the data Israel relies on for life-or-death decisions often produces flawed assessments and may violate international humanitarian law. According to their reports, these tools, rather than minimizing harm to civilians, may actually contribute to unlawful killings and injuries.