Source: https://carrier-bag.net/path/generous-paradigm-parser
Date: 31 Aug 2025 03:40

#targeting

The truths on the ground of data-driven warfighting have become abundantly clear with the reported application of ML to targeting by the IDF. Journalist Yuval Abraham (2023, 2024) has given us detailed accounts of two of the IDF’s current systems, named respectively Habsora or ‘the Gospel’ and ‘Lavender.’[1] The first report, published in November of 2023, draws on sources within the Israeli intelligence community who confirm that IDF operations in the Gaza Strip combine more permissive authorization for the bombing of non-military targets with a loosening of constraints regarding expected civilian casualties. This policy enables the bombing of built structures in densely populated civilian areas, including high-rise residential and public buildings designated as so-called ‘power targets’. Official legal guidelines require that selected buildings must house a legitimate military target and be empty at the time of their destruction; the latter has resulted in the IDF’s issuance of a constantly changing succession of unfeasible evacuation orders to those trapped in ever-diminishing areas of Gaza. A direct corollary of this operational strategy is the need for an unbroken stream of candidate targets. To meet this requirement, the Habsora software is designed to accelerate the generation of targets from surveillance data, creating what one former intelligence officer (quoted in the story’s headline) describes as a ‘mass assassination factory.’

Forensic Architecture has conducted an extensive investigation of ten exhibits introduced as evidence by the Israeli defense team at the International Court of Justice, providing systematic counterevidence of misrepresentation in each; in this case, the claim that Hamas uses ‘human shields’ by basing its operations in civilian infrastructure. Forensic Architecture’s analysis of this exhibit showed that the building labeled ‘Hospital’ is in fact a residential building outside the grounds of the hospital complex that was subsequently bombed. We could of course also question the designation ‘terrorist,’ part of the wider erasure of any reciprocity regarding who has a right to self-defense, as well as the question of proportionality in Israel’s massive and ongoing attacks on residential buildings, hospitals, and schools in Gaza, all of which are now also refugee camps and so further prohibited as targets under the laws of war.

Before I begin to further unpack this relationship, a brief explanation of the systems in question is in order. Autonomous weapon systems are systems that can perform so-called ‘critical functions’ – identifying, tracking and taking out a target – without human intervention in this kill chain. Autonomy may take different forms within a given system: it might, for example, be a drone able to execute the identification and targeting function on the ‘last mile’ without human guidance or communication. Or it might be as rudimentary as a rifle mounted on a mobile robotic platform that is programmed to identify a specific target through facial recognition and discharge its munition accordingly. But it may also materialise as an AI-enabled system of systems, in which an AI decision system not merely recognises but discovers, or “acquires,” and nominates targets, identifies a viable connected weapons platform for attacking those targets and then executes the kill decision autonomously, without a human intervening in this action chain. This latter type of autonomous weapon system is not yet in operation, but the components of such a configuration of autonomous violence are already in place, and the call for expanded uses of AI targeting systems is swelling in military and defence industry circles. The allure of such systems – whether fully autonomous or with some nominal human decision process in the loop – is the promise of increased speed and scale in targeting. A 2024 report issued by the Center for Security and Emerging Technology (CSET) states that AI decision support systems are hoped “to meet a new vision of firing units to make one thousand high-quality decisions – choosing and dismissing targets – on the battlefield in one hour” (Probasco 2024). That is more than 16 targeting decisions per minute, each of which a human, or team of humans, would need to assess in an informed way. It is easy to see that human agency in such a configuration is necessarily marginalised, with potentially dire consequences.

To recover those lives requires us to turn to the question of truths on the ground. Close analyses of US operations by investigative journalists make clear that the problem of civilian death in military operations is not a matter of inadequate sensor networks, or access to information, but of the frames of war into which militarism’s subjects and objects are incorporated. Claims for the precision of US operations against the Islamic State of Iraq and Syria, or ISIS, were definitively challenged in a series of articles in the New York Times by investigative journalist Azmat Khan in 2021. Khan and her team analysed over 1,300 “credibility assessments” by the US Defense Department regarding reports of civilian deaths from airstrikes that took place between September 2014 and January 2018. She confirms that, rather than a series of tragic errors, the reports documented failures to detect civilians, to investigate on the ground, to identify causes and articulate lessons learned, or to hold anyone accountable in ways that would prevent the problems from recurring. It was, she writes, “a system that seemed to function almost by design to not only mask the true toll of American airstrikes but also legitimize their expanded use” (Khan 2021).

Systematic killing – as a signifier – is associated with some of the darkest historical periods of mass violence and widespread death and destruction. It is a term laden with moral abhorrence, and to raise it within the contemporary discussion of lethal autonomous weapon systems and AI targeting may well seem an overreach. However, as Anders does by raising the term ‘the monstrous’, we think it prudent to examine the foundations of systematic killing in order to better understand the logic that underpins this mode of violence, and its effects, inside and outside the battlefield.

The conditions of authorisation, routinisation and dehumanisation are latent within the AI-targeting environment. The technology acts as authority, the environment abounds with distributed routinised tasks as part of the wider targeting process and, as detailed above, dehumanisation through objectification is always implicit in such AI targeting contexts. Rather than facilitate a more discriminate or ‘humane’ use of lethal force, as advocates of autonomous weapons are often tempted to suggest, the AI targeting configuration has the potential to expand violence, perhaps even to foster mass violence. The reports that reach us from Gaza, in which AI targeting systems seem to have played a crucial role in accelerating and expanding the application of violence, may well confirm what Anders, Kelman and others indicated in their prescient analyses.

This aspiration becomes technically realisable with the technologies of concern for this essay. It is a vision that implicitly categorises humans as objects of suspicion, and one that seems all too plausible in the present moment: “real-time actionable intelligence” at speed and scale – such is the aspiration for AI decision support systems today (see for example Shultz/Clarke 2020). While the technological limitations of earlier totalitarian contexts reduced the scope for this objectification, the technological substrate available through algorithmic tools fosters it.

Most notably, the Israeli bombardment of Gaza has shifted the argument for AI-enabled targeting from claims to greater precision and accuracy to the objective of accelerating the rate of destruction. IDF spokesperson Rear Admiral Daniel Hagari has confirmed that in the bombing of Gaza ‘the emphasis is on damage and not on accuracy’ (Abraham 2023). For those who have been advancing precision and accuracy as the moral high ground of data-driven targeting, this admission must surely be disruptive. It shifts the narrative from a technology in service of adherence to International Humanitarian Law (IHL) and the Geneva Conventions, to automation in the name of industrial-scale productivity in target generation, enabling greater speed and efficiency in killing.

The media reports surrounding Project Maven overwhelmingly raise the question of the criteria by which objects are identified as imminent threats. We are told that 38 categories were used by those who hand-labeled 150,000 images to form the initial training data (Allen 2017), including the object ‘ISIS pickup truck’ (Peniston 2017). Project Maven raises the question of the correspondence between systems for the categorization of images of objects, in this case a truck, and the ways objects are incorporated into complex and changing relations and associated practices, which brings us back to the problem with which I began. As Grégoire Chamayou observed a decade ago in his book A Theory of the Drone (2014), the claims for precision that justify new investments in automated targeting systems rest on a systematic conflation of the relation between a weapon and its designated target on the one hand, and the identification of what constitutes a (legitimate) target on the other. No amount of improvement in the precision of targeting in the first sense can address the growing uncertainties of target identification. This conflation is part of a campaign to deny the increasing reliance of these systems on ever more questionable forms of stereotypic categorisation of what and who constitutes a legitimate target, and the expanding temporal and spatial boundaries of what comprises an imminent threat.

The leading justification for these systems has always been the promise of precision and accuracy in targeting, in the name of adherence to International Humanitarian Law and the Geneva Conventions. In a public plenary in February of 2021, Robert Work, then co-chair with Eric Schmidt of the National Security Commission on AI, offered this demonstration of moral reasoning: “The biggest contributor to inadvertent engagements is target misidentification … Humans make mistakes all the time in battle. And the hypothesis is, to be proven, that artificial intelligence will improve target identification, which should improve, and reduce the number of collateral damages, reduce the number of fratricides. So, it is a moral imperative to at least pursue this hypothesis” (Work 2021).
