Source: https://carrier-bag.net/path/motionless-microchip-robot
Date: 31 Aug 2025 11:14

#infrastructure

On the level of networked computing infrastructure, the fact that generative AI continues a long-standing trend of centralization and concentration of power is best illustrated by the five big tech companies that came to dominate the second phase of this development: Alphabet, Amazon, Apple, Meta, and Microsoft. The cloud is the link that provides continuity between the two phases. By owning the data centers, both as the capacity to run the largest-scale computing and as the place to gather, store, and process massive data sets, these companies are strategically placed to control the hardware layer below and the software layers above. As Nick Srnicek showed, many of them are branching into hardware development, in particular AI-optimized chips (Google began developing its own Tensor Processing Units (TPUs) in 2015). In addition, most of the foundation models, both proprietary and open source, are controlled by these companies, allowing them to take advantage of their massive data hoards for training and giving them strategic influence over all the applications built on top of them.

This short commentary offers a simple corrective. I chronicle how the embrace of automated warfare goes hand in hand with the steady rise of hard-core political conservatism and nationalism, in Israel and Palestine and worldwide. Orienting towards these political conditions expands understandings of AI as sociotechnical systems, enunciated through people and technologies as well as the ideological and material infrastructures that give them form (Seaver 2018).

In April 2017, the DoD announced plans for its flagship AI project, the Algorithmic Warfare Cross-Functional Team, code-named Project Maven. The announcement of Project Maven by then Deputy Secretary of Defense Robert Work asserted the urgent need to incorporate artificial intelligence and machine learning across DoD operations. More specifically, Project Maven's aim, as Work states in his memorandum, is "to turn the enormous volume of data available to [the] DoD in the form of full-motion video into actionable intelligence and insights at speed" (Work 2017). The plan as Work sets it out includes an initial project focused on the task of labelling data within full-motion video images generated by US drone surveillance operations, as a first step toward establishing the algorithms and computational infrastructures needed to automate object detection and classification in support of military operations.

If we look at the material basis of generative (and analytical) AI, we see a transcontinental, industrial infrastructure for networked computing, anchored in ever larger, now even 'hyper-scale' data centers, with thousands of physical servers and millions of virtual machines. Its dynamics entail strong elements of centralization because the underlying economies of scale – regarding data, models and infrastructure – create a positive feedback loop favoring the largest players, making new entries into the field progressively more difficult. To bring this into view, it's useful to understand AI not as a set of specific, stand-alone applications, say image generation, self-driving cars, or pattern recognition for detecting anomalies or making predictions, but as an integrated, multi-modal, multipurpose infrastructure, much like the Internet itself. In less than four decades, the Internet, as we all know, has become a core layer of the operating system even for processes that remain predominantly analog, such as train systems or flood levees. AI is now being aggressively pushed into all aspects of computing, that is, into the entire infrastructure at large. Once it is embedded, it will be impossible to remove, as processes will have adapted to the possibilities of the new technology and society will have come to accept its downsides (much as we seem to have resigned ourselves to near-daily occurrences of large-scale security breaches in conventional networked systems, or the toxic onslaught of daily communication through email and messenger apps). On the consumer end, this can be seen in the often unwanted AI features being added to existing programs. On the institutional side, it is most visible in Elon Musk's hacking of the American federal bureaucracy: firing people en masse, extracting public data, and remolding the administration through the pervasive use of AI.
While the bleeding out and reconstitution of the federal bureaucracy in the US is spectacular, similar processes are occurring in Europe as well, through thousands of small cuts.

If we understand AI as the next phase in the build-out of the networked computing infrastructure, we can divide this history into, broadly speaking, three phases. The first phase started in the early 1980s, when the first group of major Internet standards consolidated, turning networked computing from an experimental playground into an infrastructure people and institutions came to rely on. During that time, the infrastructure was largely decentralized, both in terms of technologies and communication patterns. While not all layers and aspects of the early Internet were decentralized, its defining technologies, such as email, IRC (Internet Relay Chat), and the World Wide Web, were. They were based on open protocols that enabled independent machines to exchange data across institutional and technical boundaries. People could communicate directly with other users, no matter whose machines they were using. Every mail server could, and still can, exchange messages with every other mail server, no matter who owned it or whether it was part of a public, private, for-profit or non-profit infrastructure. This focus on open exchange and collaboration reflected the non-commercial and research ethos that shaped much of the culture of this period.

One of the substantial attempts to map the infrastructures of AI is the work Anatomy of an AI System by Kate Crawford and Vladan Joler.

The visual analysis of the Amazon Echo infrastructure traces the device's life cycle from the extraction of raw materials (birth), through its operation across physical and cognitive layers (life), to its eventual disposal (death). It all begins and ends with geology: from the mines, which extract minerals that were created over millions of years, to the e-waste dumps where pollution stays in the ground for hundreds if not thousands of years. All for a brief lifetime of planned obsolescence. What becomes visible are the concrete steps through which the exploitation of labor and nature in mines, factories, outsourced office cubicles, and society at large takes place. What is impressive about this work is not just the detailed research that went into it, which is also presented in Crawford's book Atlas of AI (2021). What is unique is the aesthetics: the ability to produce an account of the entire system that balances the need for detail with the need to see the system in its entirety. Of course, a holistic analysis cannot show everything. So the focus on exploitation in service of convenience for the few is key to taming complexity in favor of narration. This is highlighted by including, on the right-hand side, a scale of the hourly wages of the different professions involved. It would take a person working in an improvised mine in the Democratic Republic of Congo 7,000 years of non-stop, dangerous, back-breaking work to earn as much as Jeff Bezos, sitting at the top of the pyramid, earns in a single day. While the black cloud is the form of obfuscation that the new regime takes on, the map presents an aesthetic that allows us to find places to mount challenges and perhaps even see their interconnections. If the shared experience on the factory floor provided ground for solidarity against the first form of capitalist exploitation, then the aesthetics of systems, and the ability to locate oneself within them, will play a role in finding new sources of solidarity.
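The scale of the wage comparison can be made concrete with a back-of-the-envelope check. The hourly wage below is an assumption chosen for illustration, not a figure taken from the original graphic; the sketch simply computes what daily earnings the "7,000 years" claim would imply under that assumption.

```python
# Sanity check of the wage-gap arithmetic in Anatomy of an AI System.
# Assumption (not from the source): an artisanal miner earns ~$1/hour.

HOURS_PER_YEAR = 24 * 365            # "non-stop" work, as the claim states
assumed_wage_per_hour = 1.0          # assumed illustrative wage in USD
years_of_work = 7000

# Daily earnings at the top that the claim implies under this assumption
implied_daily_earnings = years_of_work * HOURS_PER_YEAR * assumed_wage_per_hour
print(f"${implied_daily_earnings:,.0f} per day")  # prints "$61,320,000 per day"
```

Varying the assumed wage scales the implied figure linearly, but the point of the graphic survives any plausible choice: the gap spans several orders of magnitude.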
