Source: https://carrier-bag.net/path/angry-temperature-chart
Date: 31 Aug 2025 17:02

#violence

Moreover, the evacuation of wounded soldiers from the battlefield became impossible due to immediate drone attacks, at times leaving them to suffer for days or even weeks. Perhaps autonomous lethal weapons disturb us most from an ethical point of view. Current discussions hold that autonomous weapons are unacceptable for two main reasons: they remove the human oversight that ties a decision-maker to the consequences of their choices, and they deny the human dignity of the victim. The ethics of seeing from a drone, the so-called god's-eye view that enables war to be conducted from a distance, is controversial for the dehumanising effect of killing as a practice of manhunting. The execution of violence is outsourced to remotely controlled semi-robots as an efficient way to eliminate the enemy while minimising the death toll among the military powers' own personnel.

While we need to pay attention in the current moment to the enormous expansion of signal-generating infrastructures, we also, I am arguing, need to attend to that which escapes capture by datafication. The aim is to destabilise the premises through which technomilitarism perpetuates its logics of rational and controllable state violence, while obscuring its senseless and unaccountable injuries. Rather than further accelerate the speed of warfighting, we need to challenge the premise of an inevitable AI arms race and redirect our tax dollars to innovations in diplomacy and social justice that might truly de-escalate the current threats to our collective and planetary security. The United States is the overwhelming military hegemon in the world, with military spending equal to that of the next most heavily armed countries combined, including China and Russia. The investments in maintaining that position extend beyond national security to an ever-expanding military-industrial-commercial-academic complex, held in place by the unexamined assumptions that underpin belief in the inevitability of war and marginalise movements that would shift resources away from profits for some toward just and sustainable futures for all.

At the same time, generating ISIS images on Stable Diffusion does not lead to images explicitly depicting acts of violence or bloodshed, even though such images can be found in the training data, according to Have I Been Trained. A prompt containing ‘ISIS execution’ or ‘ISIS beheading’ may lead to a hooded militant lying on the ground, but with no overtly disturbing elements. My first attempts to generate ISIS-related images on Stable Diffusion took place in 2023, and they produced a distinctly different set of images than the slick and sanitized results one year later. The earlier attempts led to images that had certain photorealist elements – grain, texture – that elicited a feeling of genuine disturbance, similar to what I felt when looking at actual ISIS videos. In this way, those results felt more ‘authentic’ than what can be produced today. But this feeling of ‘authenticity’ was confusing, because the figures in these images were deformed, some lacking limbs or caught in awkward postures, clearly not real. Rationally, I knew that these images did not represent actual people but were statistical renderings derived from a large sample of image data. They disturbed me nonetheless, because their appearances bore a trace of indexical reference to real images: images of real people, real lives, real deaths.

AI-assisted targeting systems may have allowed the military to target and kill at an unprecedented scale, as international media outlets have reported. However, attention to developments on the ground shows that all this violence was the result of concerted decisions: a prime minister ordering a campaign of destruction and a military echelon eager to heed his demands.

AI has played a pivotal role in Israel’s war in the Gaza strip since October 7th 2023, when Hamas militants massacred 1,200 Israeli civilians and soldiers and took more than 250 hostage (CNN 2025). Reporting by Israeli and international news outlets has detailed how large language models and predictive analytics are helping to determine when and where bombs fall from the sky and troops shoot on the ground (Abraham 2024). Developed by Israeli military units, yet bolstered by computing infrastructure and technologies provided by private civilian firms, these systems mark, according to US military officials and AI experts, the first time automated systems have been used in warfare at such a large scale (Biesicker et al. 2025).

No peaceful future can be built on the systematic production and elimination of targets as a primary tactic of warfare. If this ethos is not challenged more widely, we risk becoming the direct inheritors of Eichmann’s legacy. Indeed, if this ethos is instead celebrated and more AI targeting finds its way into more conflicts – as seems to be the case in our darkening times – then, as Anders warned us, it is not only possible that “the monstrous” may be repeated. It may already be on the near horizon.

The Russo-Ukrainian war is the largest drone war in history. With Ukraine losing approximately 10,000 drones a month, the war is a real-time laboratory for the use of commercial UAVs in warfighting, on an unprecedented level, by both sides (Palathra 2024). While these military innovations have significantly aided Ukrainian defence, we must consider the ethical implications of such technological progress. Overall, war dominated by robots that cannot negotiate or show kindness, but can only follow and kill, is probably not the way to resolve human conflicts. In the absence of established rules of engagement and international laws, consumer drones are employed akin to the futuristic claws envisioned by Philip K. Dick, pursuing any movement indiscriminately. These drones target both military personnel and civilians within seconds, leaving little time for analysis or informed decision-making. In the chaotic environment of combat, the likelihood of errors escalates significantly, and, tragically, there exists no reset button to safeguard human lives.

As we enter the age of killer robots, there is an urgent need for humanitarians, governments, media, policymakers, and drone-makers to address the specific issues around consumer drones in conflict. Total terror from above threatens to wipe out the possibility of humanity, especially when human observers lack a clear vision of their targets. Technology and information sharing are not the ultimate tools for overcoming invisibility. In the field of international relations, Kyle Grayson and Jocelyn Mawdsley argue that “there is always the imperceptible, that over which we cannot be certain, in everything we see” (Grayson & Mawdsley 2018, 2). There is always more to every situation than is visible, and the presence of the invisible should be accepted into the visual field of knowledge.

One day I looked at my editing timeline and wondered how much it resembled the timeline of the person who had created this propaganda video. Through my analytical process, I had unwittingly performed myself into becoming a kind of mirror for the original creator. How did my attempts to create critical distance somehow lead me to this uneasy proximity to the work? I became sensitive to the ways in which technologies for analytical insight merely disguise the problematic implications of engaging disturbing content.

These changes marked a shift in the Israel Defense Forces’ (IDF) policies. Since the early 2000s, military leaders had been promising that a progressively technological occupation would make Israeli military rule easier to sustain. Digital and then automated technologies – reconnaissance drones, CCTV cameras, biometric cameras, remote sensing systems – had promised to effectively manage Israel’s military rule. Wartime innovations were said to reduce the number of soldiers deployed to combat, prevent acts of terrorism, and minimize the intrusiveness of military rule for Palestinians. But by the late 2010s, violence in the West Bank was rising. At the same time, right-wing Israeli politicians were gaining unprecedented political power. They and their supporters were sick of promises of humane military strategies that could pave the way for gradual peace plans, even if the plans were sure never to materialize. They demanded more brutal military tactics, more violent displays of force, more lethal outcomes.

How do we understand the harm experienced on one side of an image in relation to that on another? When the lab coat-wearing researcher reported his psychological trauma to his colleagues, he was chastised for portraying himself as a victim, when the real victims were those suffering violence in the videos he was cataloguing. Not far from his office I met another German researcher, Nava Zarabian, who searched the internet for harmful content as part of a youth protection unit. As the only worker in her office with a non-white, Muslim background, she not only absorbed the trauma of the images, but the anti-Muslim remarks of her colleagues. She became not just a witness to the content, but a kind of proxy for it—conflated with the very subjects of the violence she was working against. Distance from violence is a privilege that certain systemic configurations of race and representation grant to some more than to others and is determined by how one is framed in relation to the violence. The technological distancing we seek through analytical tools or AI delegation cannot resolve these underlying social dynamics, but instead often obscures and perpetuates them.

For Anders, writing within the context of the horrors of WWII and the latent possibility of nuclear annihilation, the technological world has become so overwhelmingly expansive that it has created a chasm between that which we are able to produce (Herstellung) and our ability to imagine the effects these products have (Vorstellung). The result is a focus on technical processes and techniques, not on the effects of technologically mediated acts on our human world. Or, to say it with another mid-century thinker, Norbert Wiener, it is a world in which a strange obsession with know-how serves as a stand-in for knowing-what-for (Wiener 1989, 183). For Anders, it is the persistence of these foundations that makes a repetition of the monstrous not only possible but highly likely. His warning is that we should remain in critical correspondence with the technologies we make, so as not to lose sight of the degree to which their logic shapes our actions and perspectives, and, importantly, our capacity for moral responsibility for our actions.

My experience was not unique. I met another German media analyst whose extended work with ISIS videos had left him similarly troubled. “I really felt myself dehumanized watching these videos,” he explained. “I was just a robot taking aesthetic notes while witnessing horrible footage” (Anon. 2020). He devised a simple automated procedure, using a media player to generate screenshots from videos at regular intervals, one image per second. This produced a dataset of still images, disrupted from their temporal flow, which he could analyze calmly at his own pace. The one-second intervals imposed a rigidly mathematical temporal structure (what Sergei Eisenstein labeled ‘metric montage’) upon the material, which helped him resist the editing rhythms that had drawn me deeper into the video.

“Deloylo deloylo deloylo, oh my slender Sheikh.
It is spring, and I climbed to the heights of Sulaymaniyah, to a battlefield.
The beautiful sky has sounds of airplanes, machine guns,
of bombs and automobiles.
Oh brothers, call the guardsmen.”

“Deloylo deloylo deloylo, Şêxê zirav.
Bihare bi serê Silêmanî yê daketim mihelê şeran û wêla,
li asmana xweş tê girênê teyara, şirqênê metirliyoza,
ringênê bomba, wirşênê tirumbêla
bira gaziye gaziyê nobedara.”
