Source: https://carrier-bag.net/path/gifted-tensor-regression
Date: 31 Aug 2025 04:23

#automation

In 1964, Anders penned an open letter to Klaus Eichmann, son of Adolf Eichmann, the Nazi official who organized the mass deportation of European Jews to the extermination camps, and whose apparent moral apathy toward his violent deeds is well documented and widely discussed, encapsulated in the phrase Hannah Arendt coined: “the banality of evil” (Arendt 1998). The letter is Anders’ medium for examining the roots of what he names “the monstrous” (Das Monströse): the fact that it is possible to exterminate millions of humans at an industrial scale and with factory-like processes; the fact that other humans become leaders, henchmen and handmaidens of this process – many “stubborn, dishonourable, greedy, cowardly ‘Eichmen’”; and the fact that millions of other humans remain ignorant of this great horror, because it was possible to remain so – “passive ‘Eichmen’”, so to speak (Anders 1988, 19-20). Anders offers ‘the monstrous’ up for close examination because without doing so we are blind to the actual roots that make its existence possible. These roots did not cease to exist with the collapse of Nazi terror, quite the contrary. They are not only political; they are deeply woven into all aspects of the modern technologized world we have crafted. In fact, one of the roots that makes the monstrous possible, Anders diagnoses, is that “we have become creatures of a technical world” (ibid. 14), in which we fashion our lives, and our selves, in the image of the technological products we create.

The suspicion of cheating continues to haunt all automated systems produced since the Duck and the Turk. In 1997, during the rematch between Deep Blue and Garry Kasparov, a controversy arose over the second game. Kasparov resigned and accused IBM of cheating, alleging that a human grandmaster had intervened to play a particular move (36.axb5! axb5 37.Be4!). Kasparov requested the machine’s logs, but IBM refused to provide them and later dismantled the machine.[43]

While these are its latest instantiations, the data-driven targeting imaginary has a longer history dating back at least to US operations in Southeast Asia, articulated in 1969 by General William Westmoreland, then Chief of Staff of the US Army, who offered these thoughts over lunch: “On the battlefield of the future, enemy forces will be located, tracked, and targeted almost instantaneously through the use of data links, computer assisted intelligence evaluation, and automated fire control. … I see battlefields on which we can destroy anything we locate through instant communications and the almost instantaneous application of highly lethal firepower. … With cooperative effort, no more than 10 years should separate us from the automated battlefield” (Westmoreland 1969). While the US effort to implement this fantasy on the Ho Chi Minh trail in Vietnam, Operation Igloo White, was an infamous failure, the holy grail of data-driven omniscience and weapon systems automation lives on. Some five decades after General Westmoreland’s vision, the US Department of Defense has built out its infrastructures of surveillance beyond its capacity to render the data generated as usable information.

Let’s summarize. We have seen that computing originated in the Division of Labor, that factories developed human metrics for optimization purposes, and that they now rely on algorithmic management based on instant data feedback, in which humans are used as cogs in the machine, whether or not they are paid for it. We have seen that automata were historically indistinguishable from illusionism, and that behind the curtains of some of the latest high-tech systems, workers are exploited to maintain the illusion of automation. And that the whole language surrounding Laborious Computing is designed to preserve this narrative. In the meantime, beneath the surface of the Tech Newspeak, numerous press articles every day point out the immense flaws of Laborious Computing as it is still forced upon us, in all directions at once.

No peaceful future can be built on the systematic production and elimination of targets as a primary tactic of warfare. If this ethos is not challenged more widely, we risk becoming the direct inheritors of Eichmann’s legacy. Indeed, if this ethos is instead celebrated and more AI targeting finds its way into more conflicts – as seems to be the case in our darkening times – then, as Anders warned us, it is not only possible that “the monstrous” may be repeated. It may already be on the near horizon.

This ‘hypothesis’ enacts a familiar move, as failures of humans become the justification for further automation. Combining magical thinking with a leap of faith (named here a ‘hypothesis’), the proposition is that automated systems will somehow transcend the limits of the very knowledge practices that must necessarily inform them. But we now have ample evidence that the automation of data analytics works to reproduce and amplify the classificatory schemas that inform the data set. And nowhere is this more treacherous than when the data set is made from proxy signals, profiles and so-called patterns of life.

The idea that human judgement is flawed (or corrupt) and that markets could neither be regulated nor fully predicted and planned has long been central to the automation and computerization of financial exchanges. Throughout the middle of the twentieth century, increased trading volumes forced clerks to fall behind on transaction tapes and often omit or fail to enter specific prices and transactions at particular times. Human error and slowness came to be understood as untenable and “non-transparent,” or arbitrary in assigning price (Kennedy 2017).
