Source: https://carrier-bag.net/path/intelligent-blueprint-knowledge
Date: 30 Aug 2025 23:31

#infrastructure

On the level of networked computing infrastructure, the fact that generative AI continues a long-standing trend toward centralization and concentration of power is best illustrated by the dominance of the five big tech companies over the second phase of this development: Alphabet, Amazon, Apple, Meta, and Microsoft. The cloud is the link that provides continuity between the two phases. By owning the data centers, both as the capacity to run computing at the largest scale and as the place to gather, store, and process massive data sets, these companies are strategically placed to control the hardware layer below and the software layers above. As Nick Srnicek showed, many of them are branching into hardware development, in particular AI-optimized chips (Google began developing its own Tensor Processing Units (TPUs) in 2015). In addition, most of the foundation models, both proprietary and open source, are controlled by these companies, which lets them exploit their massive data hoards for training and gives them strategic influence over all the applications built on top.

In April 2017, the DoD announced plans for its flagship AI project, the Algorithmic Warfare Cross-Functional Team, code-named Project Maven. The announcement of Project Maven by then Deputy Secretary of Defense Robert Work asserted the urgent need to incorporate artificial intelligence and machine learning across DoD operations. Project Maven's aim, more specifically, as Work states in his memorandum, is "to turn the enormous volume of data available to [the] DoD in the form of full-motion video into actionable intelligence and insights at speed" (Work 2017). The plan as Work sets it out includes an initial project focused on labelling data within full-motion video imagery generated by US drone surveillance operations, as a first step toward establishing the algorithms and computational infrastructures needed to automate object detection and classification in support of military operations.

If we look at the material basis of generative (and analytical) AI, we see a transcontinental, industrial infrastructure for networked computing, anchored in ever larger, now even 'hyper-scale' data centers with thousands of physical servers and millions of virtual machines. Its dynamics entail strong elements of centralization, because the underlying economies of scale – regarding data, models, and infrastructure – create a positive feedback loop favoring the largest players and making new entries into the field progressively more difficult. To bring this into view, it is useful to understand AI not as a set of specific, stand-alone applications – say, image generation, self-driving cars, or pattern recognition for detecting anomalies or making predictions – but as an integrated, multi-modal, multipurpose infrastructure, much like the Internet itself. In less than four decades, the Internet has become a core layer of the operating system even for processes that remain predominantly analog, such as train systems or flood levees. AI is now being aggressively pushed into all aspects of computing, that is, into the entire infrastructure at large. Once it is embedded, it will be impossible to remove, as processes will have adapted to the possibilities of the new technology and society will have come to accept its downsides (much as we seem to have resigned ourselves to near-daily large-scale security breaches in conventional networked systems, or to the toxic onslaught of daily communication through email and messenger apps). On the consumer end, this can be seen in the often unwanted AI applications being added to existing programs. On the institutional side, it is most visible in Elon Musk's hacking of the American federal bureaucracy: firing people en masse, extracting public data, and remolding the administration through the pervasive use of AI.
While the bleeding out and reconstitution of the federal bureaucracy in the US is spectacular, similar processes are occurring in Europe as well, through thousands of small cuts.

If we understand AI as the next phase in the build-out of the networked computing infrastructure, we can divide this history into, broadly speaking, three phases. The first phase started in the early 1980s, when the first group of major Internet standards consolidated, turning networked computing from an experimental playground into an infrastructure people and institutions came to rely on. During that time, the infrastructure was largely decentralized, both in terms of technologies and communication patterns. While not all layers and aspects of the early Internet were decentralized, its defining technologies, such as e-mail, IRC (Internet Relay Chat), and the World Wide Web, were. They were based on open protocols that enabled independent machines to exchange data across institutional and technical boundaries. People could communicate directly with other users, no matter whose machines they were using. Every mail server could, and still can, exchange messages with every other mail server, no matter who owned it or whether it was part of a public, private, for-profit, or non-profit infrastructure. This focus on open exchange and collaboration reflected the non-commercial and research ethos that shaped much of the culture of this period.

One of the substantial attempts to map the infrastructures of AI is the work Anatomy of an AI System by Kate Crawford and Vladan Joler.

There are powerful synergies between the kinds of premonition produced by generated content, the centralization of infrastructure, and the unaccountability of power in the black cloud. Together, they create a contemporary version of reactionary modernism that has given itself many different names, such as techno-optimism or effective accelerationism. Infrastructural and political logics are closely aligned, and the race for ever larger data centers, ever more data, more energy, and more minerals further entrenches these mutually reinforcing dynamics. But there is no technological determinism at work here, as much as the merchants of doom and hype want us to believe. Other worlds, compatible with a renewed sense of democracy and common possibility, can be premonitioned. The black cloud can be rendered legible. And it is not even necessary to achieve full transparency before we can begin to act. The identification of already existing conflicts embedded in the constitution of the infrastructure opens up spaces of agency. New aesthetics, new ways of seeing the world, are key to this endeavor.

There are, of course, limits to what alternative image generation approaches can do, not simply because of their scale, but because the generative aspects of generative AI are not limited to the screen. Indeed, they extend far beyond the screen. To approach this other dimension of generativity, it is useful to do what Geoffrey Bowker called "infrastructural inversion", that is, to shift the focus from the figure (the image on the screen) to the ground (the vast infrastructure that produces it).
