# Targeting
The truths on the ground of data-driven warfighting have become abundantly clear with the reported application of ML to targeting by the IDF. Journalist Yuval Abraham (2023, 2024) has given us detailed accounts of two of the IDF’s current systems, named respectively Habsora, or ‘the Gospel’, and ‘Lavender.’[1] The first report, published in November 2023, draws on sources within the Israeli intelligence community who confirm that IDF operations in the Gaza Strip combine more permissive authorization for the bombing of non-military targets with a loosening of constraints regarding expected civilian casualties. This policy enables the bombing of built structures in densely populated civilian areas, including high-rise residential and public buildings designated as so-called ‘power targets’. Official legal guidelines require that selected buildings house a legitimate military target and be empty at the time of their destruction; the latter requirement has resulted in the IDF’s issuance of a constantly changing succession of unfeasible evacuation orders to those trapped in ever-smaller areas of Gaza. A direct corollary of this operational strategy is the need for an unbroken stream of candidate targets. To meet this requirement, the Habsora software is designed to accelerate the generation of targets from surveillance data, creating what one former intelligence officer (quoted in the story’s headline) describes as a ‘mass assassination factory.’
Before I begin to further unpack this relationship, a brief explanation of the systems in question is in order. Autonomous weapon systems are systems that can perform the so-called ‘critical functions’ of identifying, tracking and attacking a target without human intervention in the kill chain. Autonomy may take different forms within a given system: it might, for example, be a drone able to execute the identification and targeting function on the ‘last mile’ without human guidance or communication. Or it might be as rudimentary as a rifle mounted on a mobile robotic platform, programmed to identify a specific target through facial recognition and discharge its munition accordingly. But it may also materialise as an AI-enabled system of systems, in which an AI decision system not merely recognises but discovers, or “acquires”, and nominates targets, identifies a viable connected weapons platform for attacking those targets, and then executes the kill decision autonomously, without a human intervening in this action chain. This latter type of autonomous weapon system is not yet in operation, but the components of such a configuration of autonomous violence are viably in place, and the call for expanded uses of AI targeting systems is swelling in military and defence industry circles. The allure of such systems – whether fully autonomous or with some nominal human decision process in the loop – is the increased speed and scale of targeting. A 2024 report issued by the Center for Security and Emerging Technology (CSET) states that AI decision support systems are hoped “to meet a new vision of firing units to make one thousand high-quality decisions – choosing and dismissing targets – on the battlefield in one hour” (Probasco 2024). That is nearly 17 targeting decisions per minute, roughly one every 3.6 seconds, on each of which a human, or team of humans, would need to make an informed decision.
It is easy to see that human agency in such a configuration is necessarily marginalised, with potentially dire consequences.
To recover those lives requires us to turn to the question of truths on the ground. Close analyses of US operations by investigative journalists make clear that the problem of civilian death in military operations is not a matter of inadequate sensor networks or access to information, but of the frames of war into which militarism’s subjects and objects are incorporated. Claims for the precision of US operations against the Islamic State of Iraq and Syria, or ISIS, were definitively challenged in a series of articles in the New York Times by investigative journalist Azmat Khan in 2021. Khan and her team analysed over 1,300 “credibility assessments” by the US Defense Department regarding reports of civilian deaths from airstrikes that took place between September 2014 and January 2018. She confirms that, rather than a series of tragic errors, the reports documented failures to detect civilians, to investigate on the ground, to identify causes and articulate lessons learned, or to hold anyone accountable in ways that would prevent the problems from recurring. It was, she writes, “a system that seemed to function almost by design to not only mask the true toll of American airstrikes but also legitimize their expanded use” (Khan 2021).
Systematic killing – as a signifier – is associated with some of the darkest historical periods of mass violence and widespread death and destruction. It is a term laden with moral abhorrence, and to raise it within the contemporary discussion of lethal autonomous weapon systems and AI targeting may well seem an overreach. However, just as Anders does by raising the term ‘the monstrous’, we think it prudent to examine the foundations of systematic killing in order to better understand the logic that underpins this mode of violence, and its effects, inside and outside the battlefield.