#automation
In 1964, Anders penned an open letter to Klaus Eichmann, son of Adolf Eichmann, the Nazi official who administered the logistics of mass deportation to the extermination camps, and whose apparent moral apathy toward his deeds is well documented and widely discussed, encapsulated in the phrase Hannah Arendt coined: “the banality of evil” (Arendt 1998). The letter is Anders’ medium to examine the roots of what he names “the monstrous” (Das Monströse): the fact that it is possible to exterminate millions of humans at an industrial scale and with factory-like processes; the fact that other humans become leaders, henchmen and handmaidens of this process – many “stubborn, dishonourable, greedy, cowardly ‘Eichmen’”; and the fact that millions of other humans remain ignorant of this great horror, because it was possible to remain so – “passive ‘Eichmen’”, so to speak (Anders 1988, 19-20). Anders offers ‘the monstrous’ up for close examination because without such scrutiny we remain blind to the actual roots that make its existence possible. These roots did not cease to exist with the collapse of Nazi terror; quite the contrary. They are not only political; they are deeply woven into all aspects of the modern technologized world we have crafted. In fact, one of the roots that makes the monstrous possible, Anders diagnoses, is that “we have become creatures of a technical world” (ibid. 14), in which we fashion our lives, and our selves, in the image of the technological products we create.
The suspicion of cheating continues to haunt all automated systems produced since the Duck and the Turk. In 1997, during the rematch between Deep Blue and Kasparov, controversy arose over the second game. Kasparov resigned and accused IBM of cheating, alleging that a human grandmaster had intervened to play certain moves (36.axb5! axb5 37.Be4!). Kasparov requested the machine’s logs, but IBM refused to provide them and later dismantled the machine.
While these are its latest instantiations, the data-driven targeting imaginary has a longer history, dating back at least to US operations in Southeast Asia and articulated in 1969 by General William Westmoreland, then Chief of Staff of the US Army, who offered these thoughts over lunch: “On the battlefield of the future, enemy forces will be located, tracked, and targeted almost instantaneously through the use of data links, computer assisted intelligence evaluation, and automated fire control. … I see battlefields on which we can destroy anything we locate through instant communications and the almost instantaneous application of highly lethal firepower. … With cooperative effort, no more than 10 years should separate us from the automated battlefield” (Westmoreland 1969). While the US effort to implement this fantasy on the Ho Chi Minh trail in Vietnam, Operation Igloo White, was an infamous failure, the holy grail of data-driven omniscience and weapon systems automation lives on. Some five decades after General Westmoreland’s vision, the US Department of Defense has built out its infrastructures of surveillance well beyond its capacity to render the data they generate into usable information.
No peaceful future can be built on the systematic production and elimination of targets as a primary tactic of warfare. If this ethos is not challenged more widely, we risk becoming the direct inheritors of Eichmann’s legacy. Indeed, if this ethos is instead celebrated and more AI targeting finds its way into more conflicts – as seems to be the case in our darkening times – then, as Anders warned us, it is not only possible that “the monstrous” may be repeated. It may already be on the near horizon.