Source: https://carrier-bag.net/video/speculative-nets
Date: 21 Mar 2026 09:42

Speculative Nets – Artificial Intelligence, Finance, and Reactionary Politics

Orit Halpern
Cite as
Halpern, Orit: "Speculative Nets – Artificial Intelligence, Finance, and Reactionary Politics". Carrier Bag, 5 November 2024. https://carrier-bag.net/video/speculative-nets/.

The talk will examine the relationship between psychology, neo-liberal economic thought, and technology since the 1970’s. I will discuss how ideas of democracy, freedom, agency, and decision making were reconfigured in terms of self-organizing systems, communication, and non-consciousness. I argue this change continues to inform contemporary politics, and shape how we understand institutions, ‘the human’ and technology.

Lecture at AdBK Munich / Emergent Digital Media class, Nov 5, 2024


Full transcript (generated by Whisper):

I'm a historian of science, so this is not going to be about my art practice, but about history. That would be good, too. So I'm just going to give a lecture. So this is some stuff that is grounded in a recent book I did with Rob Mitchell, but it's also some new things I'm thinking about. And I changed the title a little bit because I like mandates and doctrines, and there seems to be so much force with AI, and how did we come to believe in all these things that we believe in? So today I'm going to talk about AI, finance, and reactionary politics. I don't know if I'm quite going to get to it all, as Hito mentioned. And I do want to say thank you, Hito, Francis, Anja, all the students and people who brought me here. I'm really grateful. Thank you. Actually, I'm really honored that you wanted to include me in this event that's thinking about art and AI, and also just how we can be imaginative about AI, which hopefully history might contribute to, or maybe not. So anyway, today I'm going to take you through this territory of links. I'm going to be discussing the links between Friedrich Hayek, the famous neoliberal, Austrian-American economist,

and the emergence of the neural network, kind of both as a technology but also as an ideology, and its links to psychology and maybe attention. So that's what I'm going to talk about today. So I'm going to take you through basically this relationship between neoliberal economic theory, psychology, Hebbian networks, and artificial intelligence, or perceptrons and neural networks, which you heard about most recently from RBYN and everyone else. And I'm going to do this to tell a story about how automated decision making became a virtue, a moral virtue, and a mandate. You have to do this. And this is, of course, quite pressing today. So I want us to be thinking about everything from our attention economies to the relationship to alt-right and post-truth politics, and particularly the perhaps not so mysterious connection between the alt-right and the latest tech people, such as Peter Thiel and his super support of JD Vance, which hopefully did not succeed, but we don't know yet. Anyway, I'm an American, so there's a lot of anxiety today. Anyway, to start this story out, one that's going to link AI, or brains, and alt-right politics and money, I'm going to open with a famous quote with which the economist Friedrich Hayek began his battle for neoliberalism in 1945.

So in 1945, the economist wrote this famous essay and began his battle, as I mentioned, to rethink knowledge itself. In an essay that looms large over the history of contemporary conservative and libertarian economic thought, Hayek inaugurated a new concept of the market: the peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form. The economic problem of society, Hayek argued, is a problem set by these data. It is a problem of the utilization of knowledge which is not given to anyone in its totality. And it's really hard to animate the rather non-humorous, non-aesthetic words of these economists. I'm going to read a few of them. I'm a bit nervous, but I'll try. While this may look quite whatever, this was no small claim. When situated within the broader context of Hayek's engagements with the science and technologies of the time, this seemingly theoretical statement gestures to a grand aspiration, a fervent dream for a new world governed by data. At the heart of Hayek's conception of the market was the idea that no single subject, mind, or central authority has complete knowledge of the world. This emerging neoliberal imaginary, of course, did not operate in isolation.

As historians of science have noted, Cold War rationality did not conform to the dictates of enlightenment reason. The specter of technologically induced planetary destruction through nuclear war and the memory of global wars, you can see these radar operators, created a critique of human decision making. This critique fostered the production of a formal, repeatable, and algorithmic model of decision making, one that perhaps mirrored the emerging new computer technologies of the time, and perhaps best embodied in the sort of cold, calculating expert at the RAND Corporation, kind of linked to the US military. But if this rational decision maker was still an expert in area studies or science, the intelligence Hayek proposed, I argue, was somewhat different. The rational technocrat was capable of objectivity, planning, and predicting the future. The subjective and ignorant figure Hayek provides us with was not. And I want to mention that ideas of networked intelligence didn't just start with Hayek. In fact, just as Hayek was saying these things about economics, we see a similar discourse emerging in psychology. So, becoming neural: how did everything become networked? Through neurons. In 1949, Donald Olding Hebb, a Canadian psychologist, whose prominent claim to fame now is through Naomi Klein's Shock Doctrine, introduced the idea that neurons that fire together wire together.

And any of you in computer science or anything like that might have heard of these Hebbian networks, yeah. So working with individuals injured by war, he noticed that the brain developed new capacities to deal with injuries, such as loss of hearing or vision or a limb. Brains could be, in short, programmable, changeable, and networked. His concept, so fittingly, I'm just going to demonstrate with a nice, cute AI-generated image, which is what all this firing ends up leading to: neurons firing together, wiring together. Anyway, let's take the example of a baby. Babies don't need to know what cats are. The cat isn't represented to the baby, let's say. Rather, in this theory, the baby sees cats. The more cats the baby sees, the more a certain set of stimuli repeat and set off particular neurons. The more times the neurons are fired, the more likely they are to fire again together. They are being trained. So the idea was that our brains don't have in them an infinite memory repository, which is an interesting conversation with the previous talk, but rather only store these patterns. Certain sets of stimuli re-trigger my brain, the neurons fire together, and they basically regenerate the image.

It's not that I have a storehouse of images. I don't remember every face. They re-trigger and it comes back. So the brain doesn't store every image of every cat. Rather, it stores patterns of nets that certain stimuli activate to fire together and generate the idea of cat. So nets are stochastic. So there's a kind of temporality and probability here. The more they are stimulated, the more likely they are to fire again. So this is a probabilistic and networked brain. And also a programmable one. If nets can be trained to fire together, they can also be re-trained. So this is a new idea of neuroplasticity. And if all of this sounds really familiar, that's because Hebbian networks, which came out of psychology in 1949, then became the template for contemporary learning in neural networks. And Hebb's now famous sensory deprivation study is illustrative of this new neuroplastic, programmable, reprogrammable world. While this research has gained infamy as the progenitor of soft torture in the CIA in the 1960s and 70s, its initial goal was far more banal. It was to examine the monotony of contemporary work environments and their impacts on attention.
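The Hebbian rule described here, repeated co-activation strengthening connections until a partial cue can regenerate the whole pattern, can be sketched in a few lines of modern code. This is a minimal illustration with invented numbers, not a historical implementation:

```python
import numpy as np

# Minimal Hebbian learning sketch: weights between co-active units grow.
# Update rule: delta w_ij = eta * x_i * x_j
# ("neurons that fire together wire together")

n_units = 8
weights = np.zeros((n_units, n_units))
eta = 0.1  # learning rate

# A recurring "cat" stimulus activates the same subset of units each time.
cat_pattern = np.array([1, 1, 0, 0, 1, 0, 0, 1], dtype=float)

for _ in range(50):  # repeated exposure strengthens the same connections
    weights += eta * np.outer(cat_pattern, cat_pattern)
np.fill_diagonal(weights, 0.0)  # no self-connections

# A partial cue (only part of the stimulus) re-activates the stored pattern:
cue = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
recalled = (weights @ cue > 0).astype(float)
print(recalled)  # the full cat pattern fires together again
```

The point of the sketch is the last step: the net stores no image of the cat, only connection strengths, yet a partial stimulus re-triggers the whole pattern.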

So it was well known that radar operators and other people working in the newly electronic world were known to suffer extreme boredom and lapses of attention. To test the monotony of the modern work environment, the US military sponsored this study with Donald Hebb to induce perceptual isolation, except for a couple of subliminal messages. So in this study, too much data and no data are kind of equivalent. Or should we say too much data, too little experience, I guess. So implicitly, boredom and information overload are assumed to be related, which is to say too much data given in certain environments might be the same as none at all. So everybody got really excited about this study. They signed up all these male psychology students who were going to get paid $5 a day, and everyone imagined they were going to lie there for three months and just be super rich and go on vacation forever. No one lasted more than five, four days actually. And they were streaming some subliminal messages, and all these students came out believing in ghosts, the supernatural.

This is stuff that they told this magazine that they saw while they were there: people in bathtubs, spaceships, squirrels, Canada. Anyway, these are the kinds of things people were imagining or hallucinating in this study. So the study appeared to demonstrate a way to actually impact people's thinking without ever touching their bodies. So you're changing people's minds. You're having them hallucinate. You didn't have to touch them at all, supposedly. I mean, they're obviously pretty bound up here, but anyway. When adjoined to theories of networked cognition and neuroplasticity, it appeared that brains could be remotely programmed from afar through suggestion and the environmental manipulation of data. Hebb himself, as a scientist, decided to go against this. He called it torture and said, you know, we ethically have to stop this. But that didn't stop anyone, and they went on to work with this, with the CIA's support, throughout the Cold War. Nonetheless this got super popular, and Hebb's book became the most important textbook in psychology, or is considered one of them, in the 20th century.

So it was not just psychologists, however, who were interested in networked minds. Along with our neural brains we now also have neural markets. In a famous 1952 book titled The Sensory Order, one that's now getting a lot more attention, Hayek cites Hebb and also people like McCulloch and Pitts, and introduces his own model of a networked mind to parallel the idea of a networked market. And we know that throughout history there's been a long history of mirror visions between the economic agent and decision maker and how we're understanding technology and, obviously, psychology. So what is this sensory order? The phenomena, he argued, with which we are concerned here are commonly discussed in psychology under the heading of discrimination. This term is somewhat misleading because it suggests a sort of recognition of physical differences between the events which it discriminates, while we are concerned with a process that creates the distinctions in question. The same is true of most of the other available words which might be used, such as to sort out, to differentiate, to classify. So why does this matter? Well, aside from the glaring matter that of course we are now understanding all of cognitive processing and market processing as being largely about discrimination and the creation of distinction, something that lots of people talking about AI and algorithms say all the time.

For Hayek, the important thing is that the world is not comprised of clearly discriminated things or recognizable entities. Rather, it's a world of processing. So there are no clear-cut subjects or objects. Rather, there is a process which creates these relations. So everything's relational and in some sense kind of algorithmic. What is key in this statement is that perception creates distinctions. The world is not full of preexisting, already known objects or subjects; to recognize, and also to discriminate, is always a relation for human beings, one that Hayek insists cannot be known by ourselves. So it's really important here that, again, things always have to be put into relational networks in order to happen. The individual cannot in some sense be conscious. In being a process, this perception also unfolds in time, and this temporality lent mind a kind of self-organizing property. So nothing is preexisting in stable form. It's always kind of self-organizing, which is the fantasy of neoliberalism. This denoted that human cognition could not be reduced to its material basis. So minds, like markets, could basically never be fully calculated. Even if you knew where every neuron was, you wouldn't be able to predict the action of the network as a whole.

So this brings us to the idea of the networked decision maker. This is obviously not a conscious individual making enlightened decisions like these founding fathers of the United States here are making. So this is not an enlightenment vision of liberal reason. Nor, however, is it the calculative reason or rationality or expertise of the RAND expert. Rather, I argue, it anticipates the idea of a networked decision maker, where only the market can optimize decisions at scale by networking all these unknown processes of perception and cognition. However, of course, we can ask how these older histories of ideas of freedom, agency, and liberalism are going to intersect with this new vision of non-conscious networked self-organization. And that indeed is the question of the age. But of course, this would leave us with some questions about where, in this formulation of non-conscious but directed little agents all networking together to come up with the world, the state really sits. And here, well, Hayek had some very interesting things to say. He said equality of the general rules of law and conduct is the only kind of equality conducive to liberty.

And the only equality which we can secure without destroying liberty. So what is interesting is that for Hayek, the only function of the state is like an archive, literally, with no organization, time, or planning. It's solely to provide a kind of library of operations, if you will, for markets or people to network together to make their choices. And in that, he also suddenly produces an incredible reduction in the idea of law. Laws are here only in the sense of protocol or algorithm, something to be decontextualized, like a tool that can be replicated. So the general rules of law and conduct are the only ones conducive to liberty and equality. So not interpretation or translation, but rather laws being like algorithms that set the stage for processes. But somehow not regulatory laws, of course. So the state is a repository, or operating system, for generalizable processes that infrastructure markets and other systems, but does not guide them with any will or plan. It's kind of like a library in computer languages. And so this is a critical reformulation of the very idea of the state.

And in fact, it's one that Hayek quite loved. We discover, he said, that many of the institutions on which human achievements rest have arisen and are functioning without a designing and directing mind; the spontaneous collaboration of free men, and they're usually men, often creates things which are greater than their individual minds can ever fully comprehend. So: join the network as the route to freedom. And while Hayek initially refrained from making direct analogies between minds and markets, by 1977 he's openly saying that each member, neuron, or buyer or seller acts in a way that is not planned or centralized or even conscious, and that the price and market system is in that sense a system of communication. So to summarize this section of the talk, we have a series of fairly important and pivotal transformations in concepts of the market. The forms this thought took are manifold, and I want to accentuate that while I'm focusing on Hayek, this was a broader tendency in neoliberal economic thought, right? So there are three major changes. First, Hayek posits that markets are about coordinating information. Markets are about processing information, not matching supply and demand. That is a fundamental break, and an interesting question for labor experts here, about this move away from producing value and from the older indicators like GDP and GNP.

Second, Hayek's model of learning and using knowledge, both in psychology and economics here, is grounded in the idea of a networked intelligence embodied by networked systems like markets. And finally, it's an environmental intelligence. Markets allow the creation of knowledge outside of and beyond the purview of individual humans. Notions of cognition and decision making are dispersed into the world and possessed, as I mentioned, by these networks. The market, as the infamous neoliberal Milton Friedman is famously paraphrased as saying, is an engine, not a camera. But if it's an engine, what form of machine would it be? In 1956, a series of computer scientists, psychologists, and related scientists embarked on a new version of computing that they titled artificial intelligence. I won't go too far into that, since this is obviously a room of pretty expert people on this one. One model, however, aspired not to be about representation or language but, like the market itself, to be a machine whose model was the process of learning: mainly Frank Rosenblatt's perceptron, which you're seeing here.

Sorry, I jumped. In his initial paper detailing the idea of the perceptron, Rosenblatt mentioned specifically that a small number of theorists have attempted to integrate specific physiological and psychological information into their theories, and that they have set out a descriptive model which suggests possible principles of organization for, guess what, statistical learning with layers of nets. So a central tenet of this approach, as you know, is that neurons are mere switches or nodes in a network that make small little logic decisions: yes, no, true, false. And they operate just like the little cat example I showed you earlier. [unintelligible] Such a machine can, by contrast, draw on data that are the result of the judgments and experience not of one individual but rather of large populations.
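Rosenblatt's learning rule, a weighted sum, a threshold, and a small correction on each mistake, can be sketched in a few lines. The data and parameter values here are invented for illustration, not taken from the talk or from Rosenblatt's paper:

```python
import numpy as np

# Sketch of the perceptron learning rule: a weighted sum and a threshold,
# trained by nudging the weights whenever the yes/no decision is wrong.

def train_perceptron(X, y, epochs=20, eta=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0   # yes/no "switch" decision
            err = target - pred                 # -1, 0, or +1
            w += eta * err * xi                 # adjust only on error
            b += eta * err
    return w, b

# Toy linearly separable data: logical AND of two binary features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w, b = train_perceptron(X, y)
preds = [(1 if xi @ w + b > 0 else 0) for xi in X]
print(preds)  # learns AND: [0, 0, 0, 1]
```

No ontology of "AND" is ever described to the machine; the decision rule emerges from repeated exposure to examples, which is exactly the fantasy of learning without representation discussed here.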

Intelligence here is reformulated, in analogy to the market, as networked and capable of evolution through population-level coordination of data. And of course the key feature of Rosenblatt's perceptrons is the fact that, in theory, you don't have to represent, you don't need to explain to the computer the ontology of something before it can recognize it. So there's a kind of fantasy here, I'm not saying it's a reality, of no representation, no planning, coming out with automated self-organization. The hope being that these small operations might culminate in producing more sophisticated thoughts while evading the problem of actually having to describe or represent a solution. And while this is of course not possible, this contradictory and unique concept of the human brain, the desire to evade representation, continues to fuel things like unsupervised learning and the like. And while now we're kind of drowning, we have too much data so we have new problems, the fantasy of being able to get beyond the ontology of the database or the archive persists. So there are some critical little summary things to think about here. One of the key things being forwarded, once situated in relationship to Hayek's ideal of economy, is that non-conscious or automated decision making has now come to be equated with freedom and liberty and, of course, wealth.

And in fact, since for Hayek freedom from coercion is the only freedom that can be called liberty, and coercion is equated with any attempt to impose upon a society a deliberately chosen pattern of distribution, then, for example, forcing integration of schools or reallocating wealth to different populations by plan would also equate with a challenge to human liberty and freedom. So this is a fairly amazing thing. I mean, maybe it seems really obvious right now, freedom, freedom, we all love freedom, certainly in the United States of America. But this is a fairly amazing transformation for any liberal conception of freedom: freedom becomes joining networks, essentially, in non-conscious decision making, as a dominant ideology. So while Hayek can be interpreted as allowing that things like redistributing wealth or healthcare might be necessary conditions to enable individuals to exercise non-coerced choices in the market, neoliberal discourse has of course focused on asserting the seemingly natural and never fully calculable nature of market processes, obviously in relationship to racial, sexual, and other contests over power and equity in the United States. But wait, before we sink into this, there seem to be a few little problems in our networked world with constantly emerging intelligence.

So the very features that made such systems evolutionary and emergent were also their terminal points of failure. The stochastic property of nets led to unpredictability and potential increases in entropy. So, just as human brains could be trained, and everyone went kind of psychotic in these studies, how did human beings themselves maintain their stability in the face of environmental stresses? How do nets know if they're being trained on errors or manipulated? In short, how do you know if a signal is coming from inside or outside a net, and from when? And if a system is always adapting, how does it not mutate to the point of extinction and psychosis? And neural network researchers quickly discovered that errors in weighting might propagate and exacerbate error in general, and positive feedback might lead to oscillation and instability. And of course, things like backpropagation were critical in finally making these technologies work, in order to correct for these problems. So we're constantly having to struggle with the computer's inability, right, to tell the fake image from the real one, or to tell when the image is real.
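The oscillation and instability problem mentioned here can be seen in miniature with plain gradient descent on a one-dimensional error surface: if the corrective step is too large, each update overshoots and the error grows instead of shrinking. A minimal sketch, with invented numbers:

```python
# Minimal sketch of how over-correction destabilizes learning: gradient
# descent on loss(w) = w**2 with two different step sizes.

def descend(eta, steps=10, w=1.0):
    history = [w]
    for _ in range(steps):
        grad = 2 * w        # d/dw of w**2
        w = w - eta * grad  # error-correcting update
        history.append(w)
    return history

stable = descend(eta=0.2)    # |1 - 2*eta| = 0.6 < 1: shrinks toward 0
unstable = descend(eta=1.1)  # |1 - 2*eta| = 1.2 > 1: oscillates and grows

print(abs(stable[-1]) < 0.1)    # True
print(abs(unstable[-1]) > 1.0)  # True
```

Each update multiplies the error by a fixed factor; below magnitude one the feedback is corrective, above it the same feedback amplifies the error at every step, which is the oscillation-and-instability failure the researchers worried about.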

And so we're constantly having to struggle with the fact that we're not really sure when the image was taken, or any of these other questions that, again, we've just talked about. So neural network researchers only reflected a broader discourse repeated by cyberneticians, politicians, scientists, social scientists, and economists. What if networked feedback loops fed the wrong positive feedback, for example, with nuclear confrontations leading to network instability, if not, you know, extinction, for real? In the post-war period, economists also obsessed over these problems. They wanted to avoid the sort of market failures or shocks, if you will, that led to the rise of totalitarian regimes in Europe after the First World War, and within the context of the Cold War, such historical memories of market failure joined with new concerns about the future survival of democratic and capitalist societies. The question, however, was about decision making. Were populations sound decision makers? A history of populist democratic fascism and rabid anti-communism, such as Senator McCarthy's here, might suggest otherwise. So Richard Hofstadter's path-breaking analysis of Senator McCarthy's anti-communism stands out in this regard.

The paranoid style, he argued at the time, understands the world in terms of conspiratorial behavior among targeted groups, overstating the possibility of, or fantasizing about, prediction and control, when in fact in these networked systems you can never do that. So, in short, too much data might also produce ecological fallacies and false patterns, and drive these paranoias. However, these paranoias provoke real problems for the concept of the invisible hand. Very basically, how do you know if the market's working? Is it the invisible hand just doing what it's doing? Or is it the deep state manipulating the prices? You can never tell. You're just set up for paranoia, right? Economists, like technocrats, had to provide new decision making that might evade this problem of conspiracy and paranoia. Because after all, if there's no representation, then how do we even know what the truth is, right? We're all stuck in these nets. And the solution is, maybe our machines will just tell us. So ironically, even as this was happening, the solution, neoliberals argued, was to increase automation and introduce more reflexive data analytics, more computing, more cybernetics, which in theory is going to somehow rectify this problem of potential paranoid delusion: we can't tell whether patterns are the emergence of some sort of planning, which is what you never want, of course, or simply the market supposedly self-organizing.

So to recap: this concept of an intelligence that's autopoietic, self-organizing, and cybernetic found itself caught on the problem of what would happen if the messages circulated were false, were noise, and now the two weren't really clearly separated. So what would truth be? Just to remember where we've come: for Hayek, the individual recognizes the limitation of the powers of individual reason. So all of us, subsequently, cannot know the truth. We cannot know our world. And we cannot make objective decisions. And consequently, because of this limitation, we advocate freedom. So this inevitable universal ignorance is the ground of our liberty. It is only freedom without coercion that will allow self-organization. And that freedom is now equivalent to the absence of conscious, organized decision-making. So freedom equals non-consciousness, but does that equal truth? Wait, I'm confused. So freedom equals non-coerced, but not reasoned, decisions. And if markets equal many free decisions, then freedom equals market participation. And truth equals market participation. So if the market, or the network's action, and we can extrapolate to social networks now, makes the decisions, then decision-making at scale is made only by markets that are now networked machines.

And of course, all planning is coercion, i.e., affirmative action, and thus against the self-organizing system and evolution, and certainly against life and liberty in this new formulation. What's even more amazing about this concern about truth, falsity, and paranoia: by the 1980s, does it even matter? So in the 1980s, at the height of the rise of algorithmic trading and derivative instruments in the market, the computer scientist turned financial guru Fischer Black, one of the creators of the automated derivative markets, whose work was later recognized by the Nobel Prize, openly says, and he was a student of Marvin Minsky's, actually he was not an economist, he just invented these technologies, but he also was a real fan of Hayek. He says: the effects of noise on the world, and on our view of the world, are profound. Noise in the sense of a large number of small events is often a causal factor much more powerful than a small number of large events can be, i.e., plans. Noise makes trading in financial markets possible, and thus allows us to observe prices for financial assets. Awesome. The head economist says: we make money off of fake news and data and noise.

All right. And he even builds a financial technology to do that, which you can ask me all about later: the Black-Scholes derivative pricing equation. So you can make dollars from fake news and noise, for some, of course. If discrimination is the key to computation, then finance realizes the monetization of difference through the automation of decision-making at scale through financial instruments. And it's important to highlight that the very purpose of these instruments, which were highly compatible with computers and in many ways actually introduced computing into the market at scale, was, of course, to automate decision-making, at this point in terms of how we buy and sell futures. But the rise of these instruments, of course, came with an incredible rise in income disparity, something that a lot of people have definitely demonstrated regularly. So you can make money from noise, but not everyone can. Hayek himself espoused an imaginary of this data-rich world that would be increasingly calculated without human consciousness. He was arguably very fond of quoting Alfred North Whitehead's remark, and this is one of his favorite things to repeat and something he cited in his original famous essay.
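The Black-Scholes equation mentioned here has a closed-form solution for a European call option, and it is short enough to state directly. This is the standard textbook formula, sketched with invented example values, not anything specific to the talk:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: current asset price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility of the asset.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example: an at-the-money one-year call.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
print(round(price, 2))  # roughly 10.45
```

Notice what the formula needs: no story about the company, no plan, no representation of the underlying business, just a price, a volatility, and a clock, which is exactly the automation of decision-making at scale described here.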

It is a profoundly erroneous truism that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations we can perform without thinking about them. The less thought, the better. All right. So now we're making money off of fake news, thinking less. Society is on an upward trajectory. So this is a key feature linking these together: the neural network, but also calculative technologies like derivative pricing, become the ideology and the technologies for finding freedom through non-conscious decision making and self-organization. These are vehicles to automate decision making at scale for populations. The neural network joined to finance becomes the ideology and technology by which to automate away the problem between self-organization and paranoia. More data, feedback, self-organization, more information being free would alleviate the problem of ecological fallacy and technicize the political. Rather than having the political will to control and predict populations, supposedly now, of course, networks will do it. So this becomes an ideology that scales all the way up into our environments and has now taken over in mass prevalence. So today, as we think about our attention economies as linked into these alt-right post-truth economies, I'll conclude with this.

If we can't find common ground through the ideology and technology of the neural network, where does that leave us? The emergence of computationally driven markets has transformed many of our ideas of freedom and liberty, and has of course also transformed the nature of politics. But we should not only despair, even though this is a very despairing talk. As theorist Randy Martin has demonstrated, and I think it also comes at the end of a whole day of people talking about ways we actually use artificial intelligence reflexively, the ideology of the network can also be excavated, or used to demonstrate that it is indeed socially produced and not natural. Our brains don't just work like this, and our markets don't just work like this; this is not a seemingly naturalized order of things, and in fact the collapse between the biological and the technical is a key feature of this discourse. So in short, this history does not mean evading the power we've also inherited from our machines.

The apparatus and epistemology of the neural network have also opened onto positive potentials; some of the stories I couldn't tell today are about things like neurodiversity and neuroplasticity, and maybe even the new forms of collectivity that have emerged, even as it has of course enhanced the financialization of life and the necropolitics of neoliberal economics. As cultural theorist Randy Martin has argued, rather than separating itself from social processes of production and reproduction, algorithmic finance actually demonstrates the increased interrelatedness, globalization, and socialization of debt and precarity by tying together disparate actions and objects into single assembled bodies. So just like the way that people, both middle class and lower class, were joined together in the subprime offering; at that point it didn't become a social movement, but these moments of crash, and I guess RBYN talks about this, also offer moments of visibility. In many ways AI, as many of you have demonstrated, is very capable of reflexively demonstrating and diagnosing the sociological nature of our societies. And of course, as many people like Paul Edwards, and, I don't love Benjamin Bratton, but, you know, have suggested, our new kinds of networks also make things visible that were not there before.

But just to end, I'm going to end on one statement made by Hayek that we need to complicate. Hayek once said: from the fact that people are very different, it follows that if we treat them equally, the result must be inequality in their actual position, and that the only way to place them in an equal position would be to treat them differently. Equality before the law and material equality are therefore not only different, but are in conflict with each other, and we can achieve either one or the other, but not both at the same time. With these words, he states the fundamental dilemma of neoliberalism. To be free, we actually have to be in relationship to each other; we have to be in these networks. So you can only gain freedom through relationality. But he also wavers: does liberty denote equal treatment, and therefore a generic law, or differential and situated treatment, which might denote recognizing history, or, for example, planning or some sort of redistribution project, right? The response of neoliberal discourse has been to automate the relationship between inequality and the law, thus obscuring its social character, and to extract value from the differences between humans, while maintaining that such relations emerge evolutionarily and are thus nonintentional but natural and necessary.

And might this discourse be disrupted? Recalling the argument that difference is the foundation for freedom or liberty, can we push this neoliberal imagining until it folds? This tension might be a source of a possible freedom through relations, but only if we retrieve these other questions of historical situatedness, of the necessity of translation and interpretation. The fantasy of an archive of processes of differentiation might then be mobilized to new ends, mainly to recognize the permeable, political, and situated nature of social orders rather than their self-organizing and biological inevitability. The future, I argue, lies in recognizing what our machines have perhaps made visible with every single crash, with every glitch, every time we replay them and recognize their machineness, which has perhaps always been there: mainly the sociopolitical nature of our seemingly natural thoughts and perceptions. In that all computer systems are programmed and therefore planned, they also force us to contend with the intentional and therefore changeable nature of how we think and perceive our world. Historical consciousness lies outside these systems and therefore might always be a rare source for reimagining and planning our technologies. Thank you, Orit. Just to close the gap to generative AI, I would like to mention that the Black-Scholes formula you mentioned, which is of course a device to predict option prices, future prices, et cetera, is a very close relative of the diffusion formula utilized in diffusion models, which are basically the most prevalent models to generate images and video.

Both are based on Fourier's heat equation, which basically maps the spread of heat through a medium, and both can be transformed back into this core thermodynamical formula, which is a very complex matter. So basically, this also underlines what I maybe tried to suggest before: that the so-called archive of machine learning is a thoroughly financialized construction, and that generating something from this archive is almost like a financial operation, where you treat the archive, or the latent space, or the repository of preexisting material, almost as the underlying, the way that in finance we treat the underlying asset we draw derivatives on. And this is why I think that the process of generation is less generative than derivative. These are derivative images. It's also something that Jonathan Beller has been talking about in his discussion of AI and finance. And to add one further layer to it, I think they are subprime images, because they basically treat the assets as things to draw derivatives from. Yeah. And I know you did work on liquidity; you had an older piece, back a couple of years ago, on liquidity and the subprime situation in Asia.
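The kinship claimed here between option pricing and diffusion models can be written down explicitly. This is a standard textbook reduction, sketched in notation of my own choosing rather than anything from the talk:

```latex
% Black-Scholes PDE for the value V(S,t) of an option on an asset S:
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - rV = 0 .
% Under the change of variables S = e^{x}, \tau = T - t
% (with a standard rescaling of V into u), this reduces to the heat equation:
\frac{\partial u}{\partial \tau}
  = \tfrac{1}{2}\sigma^2 \frac{\partial^2 u}{\partial x^2} .
% The forward process of a diffusion model,
%   dx = -\tfrac{1}{2}\beta(t)\,x\,dt + \sqrt{\beta(t)}\,dW ,
% has a marginal density p whose Fokker-Planck equation carries the same
% diffusive (heat-kernel) term \tfrac{1}{2}\beta(t)\,\Delta p.
```

In this sense both the option value and the image-generating process are, at bottom, solutions of the same diffusive dynamics, which is the formal content of the remark above.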

Yeah, it's kind of linked. But key terms, but also concepts that bring things together. So thank you. I have a question. The way you now presented Hayek, as a kind of information theorist, has obviously very strong resemblances to cybernetics, but somehow Hayek really disliked cybernetics. Can you explain where the differences between the two are? You may know more than me, but from my point of view, or at least from reading Philip Mirowski and a couple of other people who have worked on him, he didn't dislike cybernetics at all. I mean, he liked systems theory; in fact, he got more and more into it. Because for a while, you know, it's really important to remember that neoliberalism wasn't a very successful idea for very long. I mean, in 1945, people thought these people were crazy. The Keynesian state was the way to go. They didn't really start becoming a popular economic idea until the 70s and 80s, really.

And as the collapse, particularly of the U.S. and the U.K., didn't happen, Hayek increasingly, but not just Hayek, in general you see a move where people start appropriating discourses from systems theory and cybernetics, and particularly biology, to sort of support or buff up their theories against what they see as a very socialized Keynesianism, but also to explain why their predictions hadn't happened. Because the U.S. and the U.K. didn't collapse in the 50s and 60s; that didn't happen. So you see a move there, and it speaks to something from the previous panel, where I was thinking a lot also about the relation between evolution and history. That's a different thing I like to work on. But the fact is, evolution and history aren't really, we don't think about them the same way. We think of history, and this is very reductive, as the realm of human action, not that we can create these separations anymore with the Anthropocene and so forth. But evolution, of course, has this constant idea of emergence and forces and actions, in a way that makes it so critical for people in A.I. to really incorporate evolution.

Right. So I think, you know, if you think about evolutionary biology, these are the discourses of A.I. They're not discourses anymore of technology or socio-technical systems; they're very much biologized, and often you see them symbiotic with transferring ideas of evolution as well. So I don't know if that helps, but I'm not sure he disliked cybernetics, actually. Yes. And also, if you start viewing these kinds of bets on a certain nature as truths, then you invite crashes. You invite not only financial crashes but, I thought, in relation to your comments about conspiracy theories, also crashes of truth systems and methodologies, you know, to produce and verify truths. So if you start thinking that truth is produced in a way that basically evolves on its own, by kind of market self-organization, then you're going to end up with crashes of these truth systems, which are very similar to financial crashes. Exactly, but the interesting thing with financial crashes is that, from a certain perspective, they're not crises. The instrument is doing what it's supposed to do. It's creating a moment of redistribution.

Yeah. It's an opportunity to redistribute wealth and a new frontier for finance. So from within the instrument, that crash isn't catastrophic. It's only if you're a poor homeowner that it's a catastrophe and you're out of a house, right? Or whatever other horrible things happen. It's like thinking that evolution proceeds by way of extinction. Right. Yeah. But, yeah, exactly. It's particular. Thank you. Thank you very much for the talk and for the call-out. I really enjoyed it. I think we sorely needed it. We needed this moment of finance, not only as a metaphor. I mean, we are constantly facing these beautifully poetic questions from the audience, especially from your students here, about alienated labor and why this is always happening. I love how historical the students of fine arts are; just something I miss. So now I'm thinking, okay, there's this other elephant in the room, which is that finance is not just a symbolic model, and that one answer to these questions is profit. And I feel like this is the stupid remark to give towards the end of a conference that I didn't attend in full.

But I feel like we miss that element when we just make metaphors. So we had Jameson talk about derivatives, for example, and he also made the conclusion that the contemporary artwork is itself functioning as a derivative. And now we have kind of the opposite situation, where you're talking about generative art becoming derivative. And we keep on having this discussion about whether the medium is new or whether this is a recurrent, cyclical topic. Is this a new form of production? Or is AI actually just the realization of what began in the 80s, or maybe an amalgamation of derivatives, financial markets, and so on? Maybe a shorter question would be: is AI free, even in the neoliberal definition? Or is it a more liberal definition of freedom? So in terms of ideology critique, who is free, if nobody is free after the end of this talk? That would be for the talk. But then, as a response to maybe just these last two panels, I just want to, because we were talking about things outside our grasp, which was very beautiful, I thought, when we were talking about Kevin's presentation and about the disaggregation of technology in order to get at some kind of a critical artistic.

And so we have things outside of our grasp. We have your lovely quote, which nobody bothered reading to the end, where the human is outside of history. There's always something outside whenever we talk about AI. And then my question would also be, since you're a finance expert, maybe I can hypothesize: is it also what was outside of capital that's becoming somehow insourced within the operation of capital, and should we include that in our answer to the question? Yes. I think it's a question about why poetic labor keeps on being alienated out of production and then brought back in-house, this continuous question of history. So we have things outside of our grasp as human subjects, human subjects outside of history, but then we go back to the operation of not just finance but of the social system, the power structure, and so on, which keeps on insourcing something. And AI seems to be doing this as a sort of artist in contemporary society. So, two questions: that one, and is AI free? I'm sorry, was the place of the artist in the contemporary system one of the questions?

I don't know if I can. I'm trying to sublate. I'm doing a very poor impression of what Hito is doing much more marvelously, which is switching the places of the contemporary artist and capital, or the contemporary artist and the derivative; they seem to switch positions. I mean, that's an interesting observation, and I know about that discourse, but I'm not an expert. So, I think you asked a couple of brilliant questions, and I'm not sure I have an answer. But one way to think about it is a bigger question, I think, that all of us think about with computing, whether it's thinking about coloniality, history, or other things that have come up: what stays the same and what changes in the field of history? And how do we specifically identify that? Of course, the 80s aren't that long ago, even though they seem long ago. So I would say some of this is fairly new. But I'm a historian, and I guess I've learned some humility from Hayek, so I know that I don't actually know everything that's going on now. And actually, I think that there are still things for us to battle over.

And I think what you're talking about, above all, is the question of economy. I'm not going to talk about capitalism with a big C, but I am going to say that I think we do have battles over where we produce value, what we find valuable, and where value is produced, even if we're using certain types of financial instruments. I mean, I'm not saying it's good, and obviously there are lots of problems with green capital, but, you know, the city of Houston creates a bond market in order to guarantee that poor people actually get insurance coverage rather than just rich people, because it's hard to get people's property evaluated. I mean, you can assign value in lots of ways in these instruments, right? Obviously, that's not a dominant market approach, but that's politics, and we have to decide that. So, historically, I think we can obviously understand the different place of artistic practices in different social and cultural contexts, what kind of value is assigned to them, and where people can get their money.

And so I think that's a really important question, and how it relates to the question of finance. And the question of how the art itself is derivative, I mean, that's about how you, right now, are using AI, or thinking about how you're going to use it. That's part of the struggle: what economies are we going to produce, how is the material the AI is being trained on going to be evaluated, and then how are we ourselves going to be evaluated? And how are we using that in terms of economy? That's not a specific answer, because I'm not informed enough, but I think that is something that appears here. And what I can say here is that it is fairly recent that this limited understanding of freedom has emerged to replace the idea of civil rights, which, at least in the United States, I take from the moment these acts were passed, only in the 1960s. I mean, you only have these movements for enfranchisement and legal recognition at certain periods of time. And for me, I guess one of the questions is about diversity.

Diversity of institutions, diversity of law, diversity in the economy. And one of the questions is whether or not, just like the sixth extinction, we're going through some sort of extinction in economies, where everything is becoming just one, or whether we're going to have a new world, whether there's still a diversity in the ways we're producing value. But I think that's also linked to things like diversity in institutions: corporations do certain things; maybe they should be kept separate from universities. You know, these are the battles that we're currently fighting in the political field, and my only answer is that I don't know yet, but I know we're fighting it. Or that this is the conflict. Okay. So I guess there is no further question. Then thanks a lot, first of all. Thank you so much for joining us today, and just as much for the inspiring talks we witnessed today. Thank you.