To say that ours is a darkening world is to acknowledge the present – the year is 2025 – as characterised by multiple modes of oppression, of right-wing political cruelty, and of steadily rising insecurity. It is a context in which nuclear Armageddon and the necessity of autonomous weapon systems are frequently conjured up, often in the same breath. It is a time in which hard-fought-for international law, designed to protect the innocent, is eroded, and norms for the common good are wilfully violated. It is a present in which the currency named humanity depreciates steadily while the valuations of technology companies soar. It is, as Günther Anders would say, a time in which the primacy of products and technical objects determines the shape and value of all else in the world (Anders 2010). It is also a more violent world. The past two years, which have seen a higher conflict death toll than any other year in over a decade, are testament to this (Sabbagh 2024).
In 1964, Anders wrote an open letter to Klaus Eichmann, son of Adolf Eichmann, the Nazi administrator whose apparent moral apathy toward his violent deeds is well documented and much discussed, encapsulated in the phrase Hannah Arendt coined: "the banality of evil" (Arendt 1998). The letter is Anders' medium to examine the roots of what he termed "the monstrous" (das Monströse): the fact that it is possible to exterminate millions of humans on an industrial scale and with factory-like processes; the fact that other humans become leaders, henchmen and administrators of this process – the many "stubborn, dishonourable, greedy, cowardly 'eichmen'"; and the fact that millions of other humans remain ignorant of this because they prefer to remain so – the "passive 'Eichmen'" (Anders 1988, 19-20). Anders offers 'the monstrous' up for examination because without doing so we remain blind to the actual roots that make its existence possible. These roots have not ceased to exist after the collapse of Nazi terror – on the contrary. They are not only political; they are deeply woven into the fabric of the modern technologised world we have crafted. In fact, one of the conditions that makes the monstrous possible, Anders diagnoses, is that "we have become creatures of our technical world" (ibid. 14), in which we are fashioning our lives, and our selves, in the image of the technological products we create.
For Anders, writing in the context of the horrors of WWII and the latent possibility of mass annihilation, the technological world had become so overwhelmingly complex that it created a gap between what we are able to produce (Herstellung) and our capacity to imagine the effects of what we have produced (Vorstellung). The result is a world in which attention rests on technical processes, not on the effects of technologically mediated violence. Or, to say it with another mid-century thinker, Norbert Wiener, it is a world in which an obsession with know-how serves as a stand-in for knowing what for (Wiener 1989, 183). For Anders, it is this erosion of foundations that makes a repetition of the monstrous not only possible, but also highly likely. His warning is that we should remain in critical correspondence with our technologies, however unsettling, so as not to lose sight of the degree to which their logic shapes our actions and perspectives, and, importantly, our ability to take moral responsibility for our actions.
Far from illuminating our thinking, or our actions, a technologically mediated environment obscures human beings' relations with the world and with each other. It fosters a condition in which the minutiae of the process eclipse the ability to imagine the magnitude of the effects as a whole. Complexity is a feature here, not a bug. It is perhaps also an ontological condition, whereby the human becomes so deeply enmeshed within the technological ecology, so fully drawn into the gravitational pull of machine logics, that they appear only as an assembly of data and processes, and the world itself becomes fashioned as a world of functional objects void of subjects, configured in a system, or indeed a system of systems, in which humanity is present, but illegible. Again, Norbert Wiener made a similar observation in the 1950s when he warned that "when human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine" (ibid. 185, emphasis in the original).
What happens to ethical and political thinking in such an environment? And what kinds of priorities and preferences does this socio-cultural mode give rise to, especially in the current circumstances of warfare and violence? Or, to put it differently, what happens when violence becomes 'systematic'? In an article I co-authored with Neil Renic, we examine this trajectory toward 'killing-by-system' implicit in the violence inflicted through autonomous weapon systems (Renic/Schwarz 2023). In the article, we explore how the logic of AI-enabled weapon systems inevitably produces a technological mode of ordering for the administration of violence. With this argument, our aim was to push back against analyses that explore the ethics of AI-informed violence in narrow technical terms. Such approaches often frame the use of lethal force with AI targeting systems as more precise or somehow more humane (whatever that might mean in the context of killing), dealing largely with hypothetical assumptions about the relationship between this technology, war and violence, and rarely ever considering that the use of AI technologies might lead to more, not less, violence.
Before I begin to further unpack this relationship, a brief explanation of the systems in question is in order. Autonomous weapon systems are systems which can perform so-called 'critical functions' – identifying, tracking and attacking a target – without intervention in this kill chain by humans. This may take different forms of autonomy within a given system: it might, for example, be a drone which is able to execute the identification and targeting function on the 'last mile' without human guidance or communication. Or it might be as rudimentary as a rifle mounted on a mobile robotic platform that is programmed to identify a specific target through image recognition and discharge its munition accordingly. But it may also materialise as an AI-enabled system of systems, by which an AI decision system not merely recognises, but discovers, or "acquires", and nominates targets, identifies a viable connected weapons platform for attacking these targets, and executes the kill decision autonomously, without a human decision-maker in this action chain. This latter type of autonomous weapon system is not yet in operation, but the components for such a configuration of autonomous violence are available, and the appetite for expanded uses of AI targeting systems is swelling in military and defence industry circles. The rationale for such systems, whether fully autonomous or with a nominal human in the decision process, is to increase the speed and scale of targeting. A 2024 report issued by the Center for Security and Emerging Technology states that AI decision support systems are hoped to meet a new vision of firepower: to make one thousand high-quality decisions – choosing and dismissing targets – on the battlefield within one hour (Probasco 2024). That is roughly 16 decisions per minute on which a human, or team of humans, would have to make a clear, informed decision.
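The tempo implied by that figure can be checked with simple arithmetic (the numbers below follow the report's stated aspiration; the variable names are illustrative only):

```python
# Back-of-the-envelope check on the targeting tempo described above:
# one thousand accept/dismiss decisions per hour.
decisions_per_hour = 1000

decisions_per_minute = decisions_per_hour / 60    # ~16.7 decisions every minute
seconds_per_decision = 3600 / decisions_per_hour  # 3.6 seconds per decision

print(f"{decisions_per_minute:.1f} decisions per minute")
print(f"{seconds_per_decision:.1f} seconds per decision")
```

At 3.6 seconds per decision, sustained for an hour, meaningful human deliberation over each individual target is arithmetically foreclosed.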
It is not hard to see that human thinking in such a configuration is necessarily marginalised, with potentially dire consequences.
Already today, we are acutely aware that militaries have been using AI decision support systems for a more effective production of targets. AI tools such as Israel's 'Lavender', for example, assist in discovering targets based on a set of data parameters which have been selected to constitute a suspected Hamas operative (Abraham 2024). These parameters can be narrow (a specific, named individual, or a clearly identified vehicle) or exceedingly broad (a person with a particular movement pattern, network ties or mobile phone activities). The identified, or discovered, targets are then suggested to a team of operators who are tasked with vetting the viability of these targets within a matter of minutes, if not seconds, and then actioning the target nominations accordingly. In the case of 'Lavender', operators would "devote only about '20 seconds' to each target before authorizing a bombing – just to make sure the target was male" (ibid.), turning the targeting process into a crude, accelerated quasi-autonomous workflow for the act of killing. In such an environment, the human becomes marginalised as a moral agent; they become enmeshed within the technological ecology as a functional element in an infrastructure of optimisation processes for actioning a statistically calculated target production. In other words, in systems where AI plays a significant role in nominating what, or who, is marked for lethal action, and where the mandate is then to take that action at an accelerated pace, the kill chain becomes akin to a workflow management system, driven by a form of supply chain logic, with which action sequences become fully automated – if not autonomous – processes in all but name: a systems-oriented approach to death which bears close kinship with the notorious practice of 'systematic killing' and the grammar of systematic violence.
The argument Neil Renic and I put forward is that such a systematised mode of killing facilitates a moral devaluation of humans in two ways. The first is the rendering of those targeted as mere objects, often appearing on one's computational interface only as red or orange squares – this is the digitised de-humanisation many critics of autonomous weapons highlight as implicit in such systems (Renic/Schwarz 2023). The second mode of moral devaluation resides in the erosion of moral agency, not only of those targeted but also of those who are involved in administering the violence, whose ability to perceive and register moral abhorrence in a given use of force becomes undermined by the systemic nature of the process. This double moral bind risks a greater, potentially unjust and indeed brutal application of violence.
Systematic killing, as a concept, is associated with some of the darkest episodes of violence, death and destruction in human history. It is a term freighted with moral abhorrence, and to raise such a term within the contemporary discussion of lethal autonomous weapon systems and AI targeting may well seem like an overreach. However, as Anders does by raising 'the monstrous', we think it prudent to examine the foundations of systematic killing in order to grasp the logic that underpins this mode of violence, and its effects, on and beyond the battlefield.
Systematisation incentivises, or imposes, the classification and categorisation of data to fit within a pre-established typology of what constitutes an enemy, a hostile, a suspect – in short, a possible target. The targets – those on the receiving end of violence – are thereby always de-individualised: treated as analysable objects; their data collected, charted, disaggregated and reaggregated to fit within the wider brackets of characteristics that make up the category 'target', or 'enemy'. Hannah Arendt describes this desire to file away those who fit within a broadly established category of "objective enemy" in the context of totalitarianism, where the secret police devise an extensive filing system of suspects. This filing system depicted the suspect (marked in a red circle) on a large card. Depicted also were the suspect's known political affiliates (also marked red), their wider non-political associates (marked in green circles), friends of friends of the suspect (brown circles), and so on. These were then connected by lines to establish cross-relationships between all the circles, extending the connections of association almost infinitely (Arendt 2004, 558-559). The logical extension of this aspiration is to have data on everything and everyone, turning entire populations into a network of possible suspects.
Those aspects that constitute the human as an individual, with one's own experiences and attributes, are eroded in this process. The differences that always exist in a plurality of human subjects, and their relations, are fitted to the available categories, and that includes flattening the differences that might inform any considered judgement as to whether a targeting decision is indeed just or unjust. Within the process of classification and categorisation resides always a de-subjectification and objectification. If we look at history's more egregious instances of mass violence, we find that the dynamic is almost always the same: the more systematic the killing, the more the targets were classified not according to their humanity, but according to a pre-specified set of parameters – associations, movement patterns, demographic attributes – that were cast in terms of danger, risk or enmity, and, in turn, the greater the possibility for dispassionately applied violence.
The desire to process surveillance data into actionable intelligence in order to eliminate all possible risk – real or perceived – has, as Arendt notes, always been the "utopian goal of the totalitarian secret police". The dream is that "one look at the gigantic map on the office wall should suffice at any given moment to establish who is related to whom and in what degree of intimacy; and, theoretically, this dream is not unrealizable, although its technical execution is bound to be somewhat difficult. If this map really did exist, not even memory would stand in the way of the totalitarian claim to domination; such a map might make it possible to obliterate people without any traces, as if they had never existed at all" (Arendt 2004, 560).
This aspiration becomes technically realisable with the technologies discussed in this essay. It is a vision which implicitly categorises humans as units of suspicion. A vision which seems all too plausible in the present moment: "real-time actionable intelligence" at the speed and scale of manhunting is the aspiration for AI decision support systems today (see for example To 2020). While technological limitations curtailed the reach of this objectification in earlier totalitarian contexts, the contemporary substrate of algorithmic tools actively fosters it.
When we ask how AI-enabled targeting systems function – whether they constitute an element in a fully autonomous system or act in a decision support role – it is important to keep their intrinsic computational operating principles in mind. Artificial intelligence is first and foremost a pattern identification and analysis instrument. It works always on the basis of classification and categorisation, typically within a probabilistic processing logic. For a computer vision system, the objects that constitute our human world – plants, cars, chairs, cats, women, men, children, tanks – only exist as datapoints. An AI targeting system quite literally must objectify: for the target identification to work, an incoming set of data is matched against a training data set in order to find familiar shapes and patterns. To the system, the human individual is a set of features, lines, pixels and parameters that fit a model of a specified object. When an AI system identifies a human as a target-object, that human is necessarily objectified. Such systems render the world as a set of objects and related patterns from which behaviour can be predicted and calculated. A target comes to be known through statistical inference, wherein "seemingly discrete, unconnected phenomena are conjoined and evaluated" (Cheney-Lippold 2019, 523). Within this process, data – behavioural, contextual, visual, demographic, and so on – is collected, disaggregated and reaggregated to conform to specific modes of classification. Drawing on this data, the system produces inferences as to who or what fits a model of normalcy (benign) or abnormality (potential threat), always with a view to eliminating a risk or threat.
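The objectifying logic described here can be made concrete with a deliberately minimal sketch of pattern matching: every entity, human or otherwise, enters the system only as a feature vector, and classification is nothing but a distance comparison against stored patterns. The feature values and labels below are invented for illustration; real systems use learned models, not this toy nearest-neighbour rule.

```python
import math

# To such a system, each entity in a scene exists only as a feature vector.
# The "training" examples and labels below are entirely made up.
training_data = [
    ((0.9, 0.1, 0.8), "vehicle"),
    ((0.2, 0.9, 0.1), "person"),
    ((0.8, 0.2, 0.9), "vehicle"),
    ((0.1, 0.8, 0.2), "person"),
]

def classify(features):
    """Match an incoming feature vector against the stored patterns and
    return the label of the nearest one. Nothing about the entity survives
    the process except its distance to pre-established categories."""
    nearest = min(training_data, key=lambda item: math.dist(item[0], features))
    return nearest[1]

print(classify((0.15, 0.85, 0.15)))  # -> person
```

Whatever is unique about the individual behind the vector plays no role in the outcome; only the fit to the reference classes does.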
As John Cheney-Lippold explains, "to be made subject to a statistical model is effectively to be transcoded into a framework of objectification" (ibid. 524). In this process, any individual becomes defined and cross-calculated as a computationally ascertained, actionable object. The resulting target object is reworked as a "discrete, modular" entity which fits within a smooth technological functionality, but which opens the space for friction and harm as it butts against the plural reality of social life and experience. The process creates a human-as-object "who cannot rely on anything unique to them", because their solidity as a subject is determined wholly outside one's self, according to whatever gets included within the reference class the algorithmic rationale decides upon (ibid.). It is a comprehensive denial of agency to individuals, or groups of individuals, caught in the cross-hairs of algorithmic war.
In war and conflict, objectification's most likely companion is de-humanisation, and the relationship between de-humanisation and violence is well studied and documented. Indeed, as David Livingstone Smith powerfully details in his 2020 book On Inhumanity, de-humanisation is a key feature in almost all mass violence. Livingstone Smith's account of the relationship between de-humanisation and violence pays attention to its psychological, social and political aspects. De-humanisation is, for Livingstone Smith, not so much the violent deed itself (the deed is the manifestation of de-humanisation), but rather "a kind of attitude – a way of thinking about others" (Livingstone Smith 2020, 17). In other words, it is a 'mode of thought' about others, a mindset, that is in place 'before' the violent act occurs. A way of thinking about others as less-than-human (as some-thing) has to take hold first in order for violent actions to be justified. This self-justification mechanism is crucial as a mode of breaching "the chinks in the psychological armour" (ibid.) against the infliction of mass violence: it allows humans to see other humans as having less value, as being sub-human.
As humans, we tend to have a psychological barrier against killing (some exceptions notwithstanding), and particularly against mass violence. In order to facilitate acts of violence, and killing in particular, certain conditions and mechanisms need to be in place to override these defences. Psychoanalysis suggests that for humans to engage in acts that do not sit well with one's own moral agency, a separation between cognition and emotion must take place. In other words, rational thought and emotional states are isolated from one another, with the latter being kept in check by the former. Such a schism is the foundation for the emergence of Anders' active and passive Eichmen. Analysts of the atrocities of World War I and World War II recognised this isolation of cognition from emotion as a psychopathology of modernity, one that has some part to play in the erosion of restraint (Nandy 1997). And here the AI targeting system comes most effectively into play, in the form of a second strand of eroding moral restraint against killing: the objectification of the dispenser of violence, who becomes a functional element within the wider technological ecology – an element of a machine, to echo Wiener's words; a de-humanisation of the human as moral agent. The same process-logic that degrades the moral status of those caught in the cross-hairs of the targeting process, then, also de-humanises the perpetrator.
In our article Crimes of Dispassion, we draw on the work of the sociologist and psychologist Herbert C. Kelman, who studied the phenomenon of mass violence in the 1970s. In this work he draws out the structural and psychological foundations necessary for the erosion of moral agency in mass violence (Kelman 1970). In his discussion, Kelman makes clear that the conditions for mass violence are often historically rooted, situationally conditioned and frequently rendered legible through racialised and ideological enmity. These foundations are important. However, Kelman also identifies three structural elements that facilitate the erosion of moral restraint against inflicting mass violence: authorisation, routinisation and dehumanisation.
The first of these elements is authorisation. Authorisation takes place when a person is so embedded in an authority structure that they become "involved in an action without considering the implications of that action and without really making a decision" (ibid. 38). In other words, these are configurations in which the human cedes the decision to a (higher) authority. This is typical in environments in which deferral to other authorities is enabled – ever higher up, shifting the locus of moral responsibility ever further away from the violent act. Through structured authorisation, control is surrendered to agents which are assumed to be exempt from "the rules of standard morality" (ibid. 44). For those tasked with actioning the violent acts, agency becomes diffused and distributed elsewhere, and a space for moral deniability of morally egregious acts is opened.
The second process Kelman highlights in the path toward an erosion of moral restraint for mass violence is 'routinisation'. Where authorisation overrides otherwise existing moral concerns, processes of routinisation limit the procedural points at which such moral concerns can, and will, emerge. It is a mode in which humans are embedded in distributed networks of routines and repetition, which delimit the space for anything that falls outside of the system. Routinisation is effective in two ways: first, the structured repetition of tasks reduces the necessity of decision-making, therefore minimising occasions in which moral questions might arise; and second, such a process-focused environment makes it easier to avoid seeing or understanding the implications of one's action, since those involved are tasked with focusing on details rather than on the meaning of the task at hand. Here, Kelman echoes Anders, who expressed it in his letter to Klaus Eichmann as follows: "When we are employed to carry out one of the countless individual tasks that make up the entirety of the production process, we not only lose interest in the mechanism as a whole and in its final effects, but we are also deprived of the ability to form a picture of it" (Anders 1988, 25). This moral diffusion in routinised processes has the potential to normalise otherwise morally repugnant acts, and the points at which moral objections could be raised become fewer and less effective.
The third dimension Kelman highlights is the one we are already familiar with: 'dehumanisation'. Its effect stretches, for Kelman, in both directions: "to the extent that the victims are dehumanized, principles of morality no longer apply to them and moral restraints against killing are more readily overcome" (Kelman 1970, 48). But it also affects the perpetrator of violence, who is drawn into a system of routine processes and authorisation structures in the administration of death, in which their own humanity is always already mediated and curtailed in significant ways.
All three conditions – authorisation, routinisation and dehumanisation – are legible within the AI-targeting environment. The technology acts as authority; the environment abounds with distributed routinised tasks that form part of a wider targeting process; and, as detailed above, de-humanisation through objectification is implicit in the AI tool itself. Rather than facilitating a more discriminatory or 'humane' use of lethal force, as some proponents of autonomous weapons are often tempted to suggest, the AI targeting configuration has the potential to expand violence, perhaps even to foster mass violence. The reports that reach us from Gaza, in which AI targeting appears to have already played a crucial role in accelerating and expanding the application of violence, may well confirm what Anders, Kelman and others have indicated in their prescient analyses.
Certain technological logics facilitate certain perspectives and practices. Autonomous, or semi-autonomous, lethal weapon systems prioritise killing as a process. And in today's military-technology-industry landscape, this fact is scarcely hidden. 'Lethality' is the stated aim: technologically mediated violence at speed and scale. Producing large volumes of targets is not treated as a moral issue; it becomes the essence of an infrastructural approach. The mandate is to hit more targets faster; the aim is to not run out of targets, and, in the context of the Russia-Ukraine war, more targets reportedly means a need for more analytics and more drones. It is a production-process ethos that can effortlessly be transferred from manufacturing auto parts to producing dead bodies at ever greater speed. The technological substrate is, quite literally, the same.
No peaceful future can be built on the systematic production and elimination of targets as a primary tactic of warfare. If this ethos is not challenged more widely, we risk becoming the direct heirs of Eichmann's legacy. Indeed, if this trajectory continues to be celebrated, and AI targeting finds its way into ever more military operations – as seems to be the case in our present times – then, as Anders warned us, a repetition of 'the monstrous' is not only possible; it may already be on the near horizon.