Source: https://carrier-bag.net/video/the-opticality-unconscious
Date: 18 Mar 2026 08:36

The Opticality Unconscious

Amanda Wasielewski
Cite as
Wasielewski, Amanda: "The Opticality Unconscious". Carrier Bag, 28. May 2024. https://carrier-bag.net/video/the-opticality-unconscious/.

Lecture by Amanda Wasielewski

Inspired by Rosalind Krauss’s book The Optical Unconscious, Amanda Wasielewski’s lecture explores how a hybrid practice of writing and image creation can shed light on the often-derided strictures of mid-century modernism in New York, namely the ideas of opticality and visual autonomy.

The lecture investigates the relationships between key Modernist figures such as Krauss and seeks to generate “new” images that elucidate these theoretical perspectives. Wasielewski uses AI image and text generation techniques not only as a means of executing the project but also as tools of knowledge creation. Can AI methods refresh, renew, or otherwise add new insights to the art theory of the past?

Recorded on May 28, 2024 at AdBK München / Emergent Digital Media class


Read full transcript (generated by Whisper)

Amanda Wasielewski is an artist and academic interested in the histories of art, technology, media, architecture and urbanism. She is associate professor for art history in the department of archives, libraries and museums at Uppsala University, and she currently writes for both academic and non-academic publications. Her artistic work is shown internationally and recently deals a lot with technological image production. Today's lecture is titled The Opticality Unconscious. It's new material, so we are very happy to get a glimpse into it. It will look into modernism, as far as I understood, of mid-century New York, while using generative media as an explorative tool. Thank you. Thank you so much Francis and Hito, and thank you guys for inviting me here. I'm really honored to be part of this. Okay, so yeah, the title of my talk is The Opticality Unconscious, and now I've added this little subtitle in case that seems sort of impossibly mysterious, or not necessarily impossible but possibly opaque and academicized: how to explore the imperceptible and perceptible in formalist art theory through generative AI.

So you can think of this talk as the theoretical background to a kind of method development, perhaps, but yeah, like Francis said, it's new material, so I hope it kind of holds together and makes sense for you. Sorry to the experts in the room, but I will explain a little bit, and if I don't explain enough, feel free to interrupt me or just ask. Because this talk revolves around a kind of theoretical context that to me is quite familiar, coming out of a sort of American educational context, but I'm not sure how familiar it might be to you all. So this talk is inspired by this book, or this is the cover of the book, from 1993 by Rosalind Krauss called The Optical Unconscious. And Krauss is an art critic and art historian who was one of the founding editors of October magazine. And I've become really fascinated with this book because it's such a weird book, actually, and it was maybe Krauss's only kind of big experiment with the form of her academic writing, which is otherwise very tight and very rational and circumscribed and neat a lot of the time.

Argumentatively very, what do you say, just kind of tied together. So this book is not that. She calls it The Optical Unconscious, which borrows a term from Walter Benjamin, which he first used in his essay, Little History of Photography. But then also he alludes to it, or kind of drops it in, in The Work of Art in the Age of Mechanical Reproducibility. But he never really develops the concept. He kind of mentions this term, but he doesn't really develop it at length. So this is the quote from Work of Art where he writes: the camera introduces us to unconscious optics as does psychoanalysis to unconscious impulses. So what he's really getting at here is that there's an idea of the camera, or photography, capturing something or exposing something that, you know, is not really perceptible to us. It's kind of decoding something, the way that psychoanalysis decodes the mind with its unconscious impulses. But, you know, Krauss picks up this term, and she kind of picks it up irreverently. She doesn't really, it seems, care so much what Benjamin meant by it, or even about exploring that.

But she kind of uses it and adapts it to her own purposes over the length of the book. And so that was something that also kind of intrigued me here. I call this talk opticality rather than optical because opticality, of course, is a term from the mid-century New York milieu that Krauss was part of, a milieu that was led, or said to be led, by Clement Greenberg and his theory of art. And so this word opticality is Greenberg's word, which he used to describe mid-century painting in New York and its kind of purity to its visual medium. So painting as pure to its visual medium. And then his protege, Michael Fried, later also used the descriptor. So in this talk, I'm interested in medium specificity and the kind of purity of mediums in relation to our contemporary multimodal models, which I'm sure all of you are familiar with: the big three here, DALL-E, Midjourney and Stable Diffusion, and then any variations we might find within that. So in particular, I'm interested in text-to-image models, but also interested in how large language models are used in general.

So I'm going to start with Krauss. And throughout this talk, I'll kind of intersperse passages from her book and try to reflect on them in terms of this contemporary field. So her book begins like this. This is the index, or the table of contents, for the book. And she's already sort of experimenting with form here, where she's using little icons, little pictures instead of text, to label each of the chapters. And the book is a meditation on formalism in general on one side, the kind of art theory that revolves around the study of forms in art and what those mean, but then also on the idea of vision and visuality on the other side. So she's really doing a meta-reflection not only on her own thoughts on this field of opticality and vision in the sort of late modernist work, but she is also looking at her kind of colleagues. And so it's a kind of experimental book. Has anyone read it, by the way? Anyone in here? One person at least. It has these sort of passages where it veers into kind of personal reflections or slightly poetic or literary segments.

And then it kind of jumps back into her academic writing. And you get less and less of that as the book goes on. So you could say it's not a very consistent book, or that it increasingly just goes into her mainstream academic writing. But I was interested in these kind of personal anecdotes, these kind of personal reflections, which she usually signals with italics. So I'm going to read a passage in a moment from the book. But I wanted to first introduce this, because this is what she's discussing in that passage: a work by Max Ernst called Une Semaine de Bonté from 1933, which was a kind of series of pamphlets that was often called a novel, although, I mean, you could question whether it is a novel, actually. It consists of images like this, which are kind of remixed Victorian imagery, often, you know, in a surrealist mode, strange juxtapositions. And each book is, as you can see here, color coded. So Krauss writes: And in the grip of the art historical imagination, that imagination is determined to, quote unquote, read Ernst's novel, to narrativize it, to give it a shape, a story.

To give it a storyline. It has chapters, after all, does it not? It is a bildungsroman, goes one explanation. Conception, infancy, childhood, adolescence, adulthood, senescence. The life cycle patiently traced, elaborated, returned to its beginnings. Each of Ernst's novels is mined for its compositional principle. Une Semaine de Bonté is seen as following Sade's 120 Days of Sodom or Lautréamont's Les Chants de Maldoror. All of this itself woven on the loom of the Seven Days of Creation. It's an alchemical novel, one of them insists, to which another rejoins that the only alchemy in question is Rimbaud's Alchimie du Verbe, since the designation of a different hue for each section of the book recalls the poet's imperious, coloristic baptism of the vowels: A noir, E blanc, I rouge, U vert, O bleu. So, I'm interested in all the citations in this quote, and I'll go through them kind of one by one. But first off, to start with the last mentioned, this kind of assigning of colors to letters is a very common form of synesthesia. Synesthesia is a kind of psychological or physiological condition where people have sensory experiences that don't match the sense through which something is normally perceived, so you could taste color or you could smell sound.

But one of the most common forms of synesthesia is grapheme-color synesthesia, which is what's described in that last line, where certain people have in their head, in their mind, letters, or numbers, or even concepts that are attached to a color. So you can see it illustrated here: people might think, oh yeah, three is always green, and there's no real explanation for it, but they just know, three is green. And so I'm interested in this concept of synesthesia in relation to multimodal models, because they also connect two seemingly incommunicable modes, text and image, right? So text and image in machine learning models are not just sort of exchangeable tokens; instead they're conceptually tied together, just like a three may be, for a synesthete, you know, always green. And so we tend to think of synesthesia as a kind of disorder, or some people think of it as a kind of disorder, that it doesn't make any sense, it's not rational, it doesn't make any sense to smell color, right?

It seems an irrational exercise. But I want to kind of propose that maybe there's something to that, and there's something we can use from it to understand multimodal AI models. And so I've also proposed this term ekphrastic synesthesia, which is another kind of heavy, academic-y word, but I'll make a case for using it here, because the word ekphrasis is a word for a certain poetic form which describes art. So if you write this type of poetry, ekphrastic poetry, you describe things that are visual, you describe works of art. But it comes from the Greek word for description, and etymologically also means to sort of explain or interpret or show. And so I want to make a case that this kind of synesthesia is a kind of pathway to making known, that something is exposed, just the way that Benjamin thought about the optical unconscious, the camera as a tool to expose something that was in view but hidden from view, and that multimodal AI models can perhaps also expose something in this way, in this synesthetic way. Okay. So, for those who aren't familiar with this, and maybe some of you are, the sort of multidimensional latent space of text-to-image models based on CLIP combines text and image semantically in one model.
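As a toy illustration of that shared space (the vectors below are invented for the example, not outputs of an actual CLIP model), the point is simply that a matching text-image pair lies closer together, by cosine similarity, than a mismatched pair:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-made toy embeddings. In a real CLIP model, separate text and image
# encoders are trained so that matching pairs land close together in one
# shared multidimensional space.
embeddings = {
    "text:apple":  np.array([0.90, 0.10, 0.00]),
    "image:apple": np.array([0.85, 0.15, 0.05]),
    "image:car":   np.array([0.00, 0.20, 0.95]),
}

sim_match = cosine_similarity(embeddings["text:apple"], embeddings["image:apple"])
sim_mismatch = cosine_similarity(embeddings["text:apple"], embeddings["image:car"])
# The word "apple" sits next to the apple image, far from the car image:
# the two modes are not translated into each other, they co-inhabit one space.
```

This is what lets a prompt "find" an image: text and image are compared directly, in the same coordinate system, rather than one being a description of the other.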

So as this diagram sort of simply illustrates, the word apple and the image of an apple are considered sort of semantically indistinguishable concepts for the purposes of this space. The way that the model formulates them, to a certain extent, no longer separates them as one being descriptive of the other, in a certain sense. So the question is, can text, a prompt, produce an image? Or can an image produce a text? Well, here is a longer quote from Rimbaud's Alchemy of the Word, in A Season in Hell, where he writes: I turned silences and nights into words. What was unutterable, I wrote down. I made the whirling world stand still. The worn-out ideas of old-fashioned poetry played an important part in my alchemy of the word. I got used to elementary hallucination, and so I explained my magical sophistries by turning words into visions. So, Rimbaud being quite credibly a synesthete, and then explaining how he's turning his words into visions, I thought was intriguing here in this context of generative AI. So I think there's a possibility here that generative AI can open up a kind of thinking outside the rationalist paradigm that Krauss describes in her book, because she seems to sit on the fence between the sort of standards of modernism, modern formalism, and something more, something that is not delimited by the purity of certain mediums for certain purposes. In Les Chants de Maldoror, by the Comte de Lautréamont, there's this quote that inspired the surrealists, which is often repeated, and many of you are probably familiar with it, that says: the chance encounter of an umbrella and a sewing machine on an operating table. And so for this, I tried to sort

of recreate this famous phrase in Midjourney, and I came up with this image, which I thought was fairly interesting. We get the objects we would have expected: there's an umbrella, maybe not the kind of umbrella you might have predicted, but there's an umbrella, there's a sewing machine, and there's a table. But there's not an operating table or a surgical table, which is what the quote alludes to. You could think of this in terms of, you know, sewing machines being overwhelmingly presented in the training data on a certain kind of table. And so it's very difficult to kind of statistically exit that association. And so you get here what looks like a sewing machine table. But then, the part that was interesting to me is you get this other object, this other sort of piece of furniture that kind of, like, wandered into the scene. And it's a completely irrational object. It's very difficult to describe or conceptualize. It looks like some kind of piece of furniture, maybe a stool or something like that, draped with fabric that looks like a kind of doctor's clothing, surgeon's clothing. And then some kind of surgical pan, maybe, on top of it. But I was interested in this sort of chance encounter with this object that came into the scene because of associations, pixel combinations

that are in the same corner of the latent space that was harnessed by writing this prompt, and in understanding what the appearance of this object means in terms of this kind of synesthesia that I'm describing. Okay, so back to Krauss's book. Krauss begins the book with an image of John Ruskin. Well, with a little exercise in ekphrasis, where she doesn't actually reproduce this painting, but she quite clearly describes it in the opening lines of the book. She writes: And what about little John Ruskin, with his blonde curls and his blue sash and shoes to match? But above all else, his obedience to the silence and his fixed stare. Deprived of toys, he fondles the light glinting off a bunch of keys, is fascinated by the burl of the floorboards, counts the bricks in the houses opposite. He becomes an infant fetishist of patchwork. The carpet, he confesses about his playthings, and what patterns I could find in bed covers, dresses, or wallpaper to be examined, were my chief resources. So John Ruskin was an art critic in the mid-19th century who was famous for kind of reviving interest in medieval art and architecture, and who was also interested in and associated with modern painters of that time who were similarly interested in medieval revivalist movements. And Krauss treats him fairly snidely in this text.

She describes the kind of abusive childhood he had, where he wasn't allowed to sort of interact with his environment. So he became this kind of eye who could just sit there and observe. He wasn't interacting with anything. He didn't have toys. He could just watch. He looked. And he also, it just so happens, became one of the first thinkers to come up with an idea similar to Greenbergian formalism, namely that artists should be true to the materials that they use in their work. In The Stones of Venice from 1851, he wrote: the workman has not done his duty, and is not working on safe principles, unless he honors the materials with which he is working. If he is working in marble, he should insist upon and exhibit its transparency and solidity. If in iron, its strength and tenacity. If in gold, its ductility. So there was this kind of truth-to-materials ethos in Ruskin, which came, according to Krauss in this passage, from this kind of childhood of only being able to watch, not being able to interact. And he also watched the sea. She writes: The sea is a special kind of medium for modernism because of its perfect isolation, its detachment from the social, its sense of self-enclosure, and above all, its opening onto a visual plenitude that is somehow heightened and pure, both a limitless expanse and a sameness, flattening it into nothing, into the no-space of sensory deprivation. The optical and its limits. Watch John watching the sea. So there's this idea

that, as she writes, Ruskin's view hunting is a means of transforming the world into a kind of machine for producing images. He becomes this kind of view hunter who captures images almost like a camera, because that's what he is. He's just an eye. He's an eye that sees. And we all know that our experience of the world typically is not that. It's not just that of a singular viewer; it's a multisensory experience, right? So what about our machines for producing images? Are we view hunting? You know, when you write a prompt into a model, when you use a GAN, are you an image creator? Are you just view hunting? Or are you performing some kind of reverse ekphrasis? Is it an alchemy, or is it a transformation? So Krauss, in one of her more personal passages, talks about this and talks about the breakdown of modern rationalism. She writes, and she's referring here to Michael Fried, a critic who wrote the essay Art and Objecthood in 1967, which famously argued against minimalism as an art movement. She writes: I remember reading Michael's last sentence, and she calls him Michael here at first, which I thought was sort of interesting, presentness is grace, with a dizzying sense of disbelief. It seemed to shake everything I thought I'd understood, the healthy, enlightenment-like contempt for piety, the faith instead in the intellect's coming into an ever-purer self-possession, the oath that modernism had sworn with rationalism. And to show that the final sentence was no accident, Michael Fried had prepared for it from the first with a passage about Jonathan Edwards' faith that each moment places us before the world as though in the very presence of God in the act of creating it.

It didn't seem to me that anything about this could be squared with the robustness of most of Michael's earlier talk about modernism, like the time we were speaking about Frank Stella, and Michael asked me, do you know who Frank thinks is the greatest living American? Of course I didn't. Ted Williams. And Michael covered my silence with his own glee. Ted Williams sees faster than any other living human. He sees so fast that when the ball comes over the plate, 90 miles an hour, he can see the stitches. So he hits the ball right out of the park. That's why Frank thinks he's a genius. This was by way, of course, of inducting me onto the team, Michael's team, Frank's team, Greenberg's team, major players in the 60s formulation of modernism. So I was thinking about this idea of, you know, the ideal of seeing, seeing faster. But what is seeing faster here? It's also kind of seeing slower. It's seeing more detail. It's seeing with more precision somehow, but it's also seeing faster. So how does this maybe relate to the world of AI today? I was thinking about headlines like this: OpenAI launches GPT-4o, a faster AI model that hears and sees better. So what does it actually mean to hear and see better?

What, you know, what's better about this experience? You could say it's something like standard computer vision, but it's not that at all. [passage unintelligible in recording] There's a resonance with those questions in Krauss's book as well. She writes: the Gestalt psychologists have told us that if no figure detached from ground, then no vision. So the idea is that if you can't differentiate a figure from its ground, then you're not seeing at all, actually. And this is also the principle that computer vision researchers have taken up when trying to replicate the idea of vision from a computer's perspective.
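That figure/ground principle is easy to see in code. Here is a minimal sketch (a toy array standing in for an image; everything here is invented for illustration) of the crudest possible figure-ground separation, global intensity thresholding, which computer vision builds on before anything like object recognition can happen:

```python
import numpy as np

# A toy 8x8 "image": dark background (ground) with one bright patch (figure).
image = np.zeros((8, 8))
image[2:5, 3:6] = 0.9  # a 3x3 bright square

# The simplest figure/ground separation: everything above a global
# intensity threshold counts as figure, everything below as ground.
threshold = 0.5
figure = image > threshold

n_figure_pixels = int(figure.sum())     # the 9 pixels of the bright square
n_ground_pixels = int((~figure).sum())  # the remaining 55 background pixels
```

If no pixel crosses the threshold, nothing is detached from the ground, and in the Gestalt sense the system "sees" nothing at all.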

So, in other words, object recognition has been a large foundation of the research in computer vision, and also of the applications of computer vision: the idea that we must recognize what objects are in a scene in order for it to count as vision. Now we might say, okay, we think actually vision might be more than that, but it has nevertheless become kind of the standard form for computer vision research. Including, you know, these kind of memes that came up on Twitter, where people got these Google reCAPTCHA things asking for bicycles. But, you know, is the drawing of a bicycle a bicycle? The famous Magritte conundrum. Okay, so this reminded me of a famous story, an apocryphal story, so not true, called the tank classifier problem, which is a kind of cautionary tale: during the Cold War, the story goes, a classifier was trained to detect tanks in photographs, but it turned out to be categorizing the images by incidental features of the background rather than by the tanks themselves. It couldn't tell what we were interested in; we weren't interested in the background, we were interested in the foreground. We were interested in the objects, not what came behind them. So that story is not true, but it is used as a kind of cautionary tale, and in fact, in more recent times, there have been a number of studies that have illustrated this in real life, around COVID-19 and lung scans. For example, researchers, with this image, were trying to automatically detect COVID cases from lung scans, but they found that the model was actually categorizing based on what was around the outside edges of the scan. And why that was categorizable or

relevant for the model was because people lying down tend to have more severe cases of COVID, and it was actually categorizing based on body position, which it discovered from the external, border areas of the scans. There were also other studies that accidentally categorized by children versus adults, or by the kind of imaging equipment that was used. So there are real-life examples of this tank classifier problem, and it raises the question: what does the machine learning model actually understand, and can computer vision see if it doesn't actually distinguish a figure from its ground? Okay, so, returning to Benjamin for a minute, this is the longer quote from Little History of Photography where he explains this concept, the optical unconscious. The idea here is, he says, that photography, with its devices of slow motion and enlargement, reveals the secret. It is through photography that we first discover the existence of the optical unconscious, just as we discover the instinctual unconscious through psychoanalysis. So it's the idea of, you know, is it seeing faster? Is it seeing slower?

It's revealing something in any case. And it has something to do with time. So, photography as a technology is a kind of means of visual investigation of time, of what's happening beyond perception, a kind of symbolic decoder technology. Pamela Lee, in her book Chronophobia, critiques and discusses this in relation to Michael Fried's Art and Objecthood text, which she calls chronophobic, afraid of time. And she writes: grace may not be forthcoming after all, for redemption is hardly possible without an end. And so we had this idea, to return to this concept of the sea, you know, the sea as this good medium for modernism because it's limitless, it's endless. But the sea isn't endless; it's cyclical, it's not flat, it's multidimensional. And so there's this kind of misunderstanding in Ruskin standing there as the eye looking out over the sea. Yes, it's flat. Yes, it's expansive. Yes, it's limitless. But he's not having an experience of it. He's just an eye. And so then we get to this idea of the purity of the eye versus distance, again, and maybe we're learning more through a kind of multisensory experience, a polluted experience, you might call it.

Okay, so Krauss, I'm not going to promise this is the last quote from Krauss, but she wrote: I started calling the hare I was chasing over this historical terrain anti-vision, but that anti sounded too much like the opposite of a pro, the all too obvious choice for which would be pro-text, which was not at all the case of what I was tracking. The name that gradually took over was the optical unconscious. And so in this talk, I'm trying to maybe expose a little bit of the method, the way that Krauss does, by continually going over these terms. She discusses what she considered first, and then changes. She discusses the kind of methods she's used previously and questions them. She writes: Then it's language, one might say. It's text that's the refusal of vision. It's the symbolic, the social, the law. But that makes no sense, one would have to add, because modernist visuality wants nothing more than to be the display of reason, of the rationalized, the coded, the abstracted, the law. The opposition that pits language against vision poses no challenge to modernist logic. For modernism, staking everything on form is obedient to the terms of the symbolic. No problem, it would say. So is there something nonsensical under the symbolic then? And this is where I return to synesthesia. The symbolic, the law; language versus vision. I'm returning to this idea of the combined text and image, and our sort of perceptual understanding across these different forms. What is disobedient in terms of the symbolic here? You know, we could go to Krauss's figure-ground diagram, or even this classic Peircean semiotic diagram, and think: how are these categories actually being

compressed, conflated? How is the three actually green? [passage unintelligible in recording] And what does that mean, essentially, in a generative AI model? Does it expose a kind of conflation between concepts? Does it expose the sort of mechanisms behind why we might separate text and image? The optical unconscious will claim for itself the dimension of repetition, of time; it will be something cyclical. What's going on? So I'm going to return to my little weird object from the image, this surgical stool, fabric, pan. Yes, you could say it's very logical. It's just the pixels, this kind of pixel combination that was in this little bit of the latent space of the model. And they came in because the words you used were associated with these pixels. Simple, very rational. But what if we think about it in terms of its sort of anti-opticality, or what Krauss is toying with in this idea of anti-vision? It's not pro-text, but it's sort of a hybridity. It has to do with time and repetition. It is this kind of chance encounter. What are the consequences of generative AI's ekphrastic synesthesia? It's a kind of collapse that's happening.

So a lot of you are probably familiar with this Joseph Beuys piece, where he cradled a dead hare and showed the hare his art exhibition. The audience wasn't allowed in the exhibition, and he explained pictures to a dead hare, sometimes, you know, lifting the hare's paw to kind of touch the works, kind of puppeting the hare. Well, I was thinking about this in relation to explaining pictures to a generative AI model. And, you know, this idea of how do we teach computers to see artworks or images? The first step is to kind of datify it, or codify it. But then, you know, what happens on the other side? What happens on the generative side? How do we explain pictures? And what does the model actually know about pictures? What is the sort of liveliness of this hare? We know it's dead. We know that AI is not conscious. It's not sentient. It's not alive. But that doesn't mean it doesn't have some kind of liveliness somehow embedded in it. So, you know, as Jussi Parikka says: so what is an image if it's lost somewhere inside the machinations of photography and data sets and machine learning?

Some decades ago, scholars could still have responded by saying that images are like language. But I think, and I hope, some of the theoretical threads from this talk have pointed to ways that images aren't like language, but that the two can nevertheless be associated with each other. The three is green, in a certain sense. And so, Emily Bender, a computational linguist, maybe some of you are familiar with her, has been a big activist against the kind of AI hype. With Alexander Koller, she wrote this famous paper about an octopus who intercepts a conversation, and it was supposed to reveal the idea that AI models, large language models, don't actually understand. Their basic point was that they don't understand text or conversation because they don't have any communicative intent. And so by Bender's assessment, large language models are kind of meaningless, even laughable sometimes, because they have no communicative intent. But I'd like to propose that they actually reveal something else, something that's not about communication, that points to this concept of the optical unconscious, or maybe even the opticality unconscious.

Or ekphrastic synesthesia. The opposite of that being a kind of pollution or corruption. Is it productive or is it negative? So how to navigate the sea of generative AI in terms of method? Well, one way to think about it is in terms of command. Are we commanding the models through text prompts, entering a text the way a computer command sets in motion a process? But if something different comes up each time, then it's not truly like a classic computer command. Many theorists have recently talked about it as a kind of search. Roland Meyer, whom maybe you guys heard recently, has called it a sort of generative search, or talked about text-to-image models as generative search. But even though the design is kind of reminiscent of search, I think, as recent examples like the failures of Google's automated AI search have shown, there's something else happening here. It's not simply looking in a body of data for certain concepts; it's combining, it's mishmashing, and creating something different from that data. It's creating things that are wrong, yes, but maybe there's something also interesting in how those things are wrong.

Okay, I'm going to end with this. Roger Kimball, an arch-conservative critic around Krauss's time, wrote for The New Criterion. He wrote a really scathing review of her book at the time, headlined "Feeling Sorry for Rosalind Krauss," where he says one problem with Professor Krauss's new history of modernism is that, as a history, it isn't very accurate. And yes, generative AI as history is not very accurate. It's not rational. But as a kind of ekphrastic synesthesia, or a kind of irrational topography, I would like to propose that maybe there's something generative there. Thank you. Let's start the discussion. Thank you so much, Amanda. Maybe the art historians could go first. I feel honored. Thank you very much for the very thought-provoking lecture. I thought it made lots of sense, even though we need to fill in the blanks, I feel. But I think this move to combine generative images with an art historical argument is very logical. And I think using Krauss like this makes sense. What I started to wonder, the more you were showing us about the generative potential of images, was what kind of structure do they have? But maybe there's a much more basic question: how would your project look if you were working with other books by Krauss?

Yeah. This was my main idea listening to you talk, because I couldn't quite remember fully all the arguments from this book, but I remember the memory cards, and I immediately started to think: okay, if AI is producing the optical unconscious of today for us all to see or to face, isn't it more like a test? When she's using memory cards in the book, after she has a stroke, they are used as a test. It's a test image to which we react. We either remember how we got to this picture or we don't, or she's going through this test. And I'm thinking, is this part of your argument, basically, that this is what AI images produce for us? And maybe I'll stop there. Yeah, I mean, thank you for that. Yes, in a way, I think, because Krauss isn't really, maybe I'm not being generous enough to her, but she doesn't seem to really care what Benjamin meant, in, like, pulling it apart.

She just likes this term. It's a cool term, optical unconscious. It's a really cool term. So I wanted to go back and be like, okay, this is kind of what he meant, and this is what she's doing, and I think it has to do with this idea of a kind of experiment, an experiment that reveals something. A kind of psychoanalytical exercise, like, say, the exquisite corpse, the foldable surrealist game, is a kind of test. It's a test of: how do I see the unconscious, actually, which I can't picture because my rational mind keeps getting in the way? So this to me is also a kind of test or experiment, an exercise, at getting to that. Very similar to what you described with the memory cards. Yeah, I was thinking, when it comes to the optical unconscious, you made the relationship to time, and I wonder how much you support that. Would there be an optical unconscious without time?

Because I was playing with it, and I could see it myself, but I was wondering how you would feel about it. Yeah, I think that's a really good question, and I guess it's something that's a little bit unresolved here. But I've been thinking a lot about how time factors into machine learning models, or our foundation models, the big models. Because those, in a way, are like time capsules captured at a certain moment. And we can't make a new foundation model every day. We could argue that you can't ever make a foundation model again in the same way, because it scraped so much data from the web, and that required so much effort and money and time. And now that data online is all polluted with AI-generated data, which means you get model recursion, which could be a huge catastrophic problem for building a model. So the idea is that our models are going to be stuck in 2019 forever. It's a question that I've been thinking about. But also, in relation to this talk, I was thinking about endlessness versus time, chronophobia versus the conception of time and multidimensionality, and whether it could fit in.
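The "model recursion" worry mentioned above, training new models on the synthetic output of older ones, is often discussed in the machine learning literature under the name model collapse. A minimal toy sketch of the dynamic (my own illustration, not anything demonstrated in the lecture): repeatedly fit a Gaussian to samples drawn from the previous generation's fit, and track how the variance drifts away from the original data.

```python
import numpy as np

def generational_fit(n_generations=300, n_samples=50, seed=0):
    """Fit a Gaussian to data, then refit each new generation
    only on samples drawn from the previous generation's model."""
    rng = np.random.default_rng(seed)
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # the "real" data
    mu, sigma = data.mean(), data.std()
    variances = []
    for _ in range(n_generations):
        variances.append(sigma ** 2)
        # the next generation trains only on synthetic output of the last fit
        synthetic = rng.normal(loc=mu, scale=sigma, size=n_samples)
        mu, sigma = synthetic.mean(), synthetic.std()
    return variances

variances = generational_fit()
print(f"generation 0 variance: {variances[0]:.3f}")
print(f"final variance:        {variances[-1]:.3f}")
```

Because each generation's variance estimate is noisy and zero variance is an absorbing state, the estimated spread performs a random walk that tends, over enough generations, to wander away from the original value; the tails of the real distribution are typically the first thing to disappear.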

Because there is something about this, at least with Benjamin and the camera. If anyone's familiar with Eadweard Muybridge, he famously did these photographs of a horse running, and for the first time they captured the horse's hooves off the ground. People didn't know that; they didn't know that the hooves necessarily all lifted off the ground together. It was a way to see something that was completely artificial, because no one could actually see this, but it was supposed to reveal something. And reading Benjamin, I kind of thought that's what he was thinking of when he came up with this, in a way: revealing something which is true, but not true in the sense that nobody sees it like that. So it's a kind of slowing down, a speeding up; there's something to do with time. I haven't quite homed in on it, but I think it's there somehow in AI models, too. There's some idea of time that's present with us. I was wondering what kind of definition of the unconscious you are using? What kind of version of psychoanalysis? I was assuming it is a Freudian one.

But there have been several other updates. And one could also argue, I think convincingly, that the contemporary unconscious is completely differently formatted, also by technology. If you're using an idea of the unconscious related to a technology called the mystic writing pad, then it may not capture a contemporary unconscious, which probably has moved on quite a bit. And I think that one of the very interesting questions that could be drawn from this is to ask how human unconscious states are altered by statistical machines. And I think that there will be vast changes. There have already been vast changes through social media and all sorts of other digital media. And for me, if I were to apply any variation of psychoanalysis to statistical models, then it would be the Lacanian one, right? That is, the real. It is something that is absolutely foreclosed, which we can know nothing about, and which is a traumatic heap of shit, right? But I think we have already moved beyond Lacan. So where are we now in terms of the unconscious? Yeah, I mean, Krauss obviously references Lacan quite extensively. And there's also the kind of Jungian concepts that come in there as well.

But people have bandied about the idea of the collective unconscious as a sort of underlying premise of the machine learning model. For me, though, what I'm getting a hint at from Benjamin, actually, is again this kind of not caring for context. The unconscious, I think, is for him kind of a metaphor. It's not actually about the unconscious at all, but about the idea of something underlying, something that is imperceptible, though not necessarily the psychoanalytical concept of the unconscious. So, thinking about it as a kind of… I think you just caught me at the notion of the underlying, because, you know, it's also a financial term. And that would be so interesting, because the underlying has more or less disappeared from contemporary finance, right? It has been replaced by all sorts of derivatives, et cetera. Yeah. So, what would be the equivalent of the underlying asset in finance within a statistical model? That would be really interesting. Yeah. Are we already dealing with derivative encodings of reality within a model, basically? Yeah. I'm not sure if there is an underlying, honestly.

I mean, I don't know. In another essay recently, I've just sort of tried to resurrect Baudrillard in that direction, I guess. In terms of, yeah, is there a kind of… I mean, I was also just at a computer vision conference where everyone was talking about the ground truth. And I was the only humanist in the room being like, the ground truth? What do you mean by that? And they're like, what do you mean? Ground truth is ground truth. We're scientists. Yes. Thank you for that. That's really interesting. I was also wondering in this context, actually, about authenticity. Like, if I think of AI, I think of feeling uncomfortable for some reason. And what do you think of, or how does the term authenticity fit in here? What can we trust, or can we still trust? Yeah. I think that's a really important question, because, you know, what relationship to the data do the images that are produced actually have? And what kind of model of authenticity is present there? I think that there is something…

You know… Not necessarily in terms of a kind of reality, but there is a kind of reality of the model that is not necessarily matching. It is a model. It's the same as if you have a globe of the world. That's a model of the world. It's not the world, but it has a resemblance to it. And I think there's something interesting happening in that resemblance. You could say the globe is the fake world, right? That's not the authentic world. But that doesn't mean it doesn't still have some meaning in it as a model, that there's something to be learned from this globe. And we can say, oh, this is a fake globe, you should get rid of it, it's not the real world, it's missing all these things, right? It's missing the river here, it's missing these kinds of geographical features. So in that way, I think there's something… The model, yes, is not authentic if you want to talk about it in terms of being an exact replica or an exact reality. And so it creates fake things, right?

But maybe those fake things aren't fake; they're just models. I don't know. Yeah, but models are also determined. For example, just sticking with my former point about finance: diffusion models are based on the exact same mathematics, namely so-called SDEs, stochastic differential equations, as a lot of financial instruments, right? That's also how option prices work; it's basically the exact same mathematics. Now one could ask what kind of reality the price of an option is based on. What is its relation to reality? And I think one could suggest that it may be quite similar, in finance, but also in climate prediction models, et cetera. Yeah. Yeah, I mean, I think this is the goal of a lot of science: to create a model that is accurate, right? No, no, it needs to be plausible. It is not accuracy. No, no, accuracy is so nineteenth century, maybe. Yeah. Okay. Yeah. Yeah, maybe to steer the discussion back a little bit, I would like to ask again about the background, about the basic concept of opticality, because for me it's still open. If you could work on this point for us a little bit: why did you choose opticality as your focal point?

How does it maybe differentiate from the visual, or maybe even the image, in relation to generative media? Yeah, I chose opticality because what I'm really interested in is Frank and Rosalind and Clem and Michael. I'm interested in this group of individuals who came up with this term. And, I guess, the next step is that I want them to productively talk to one another. So that's why I'm using opticality, because that's their term. I'm sorry, that's not a very good theoretical reason, but it is the methodological reason for choosing that term. Because I'm interested in that particular brand of formalism as being kind of this last obsession with the visual, to the extent of Michael Fried going to this quasi-religious moment, where it's about a kind of modernist experience, which kind of doubles back again. It kind of implodes the modernist project at that moment.

They kind of over-intellectualize themselves. And I don't know, I feel like that might be what Krauss is trying to deal with in this book. I'm not sure yet. Because by '93, of course, think about the context of art in 1993. This was all, who cares? Formalism was a dirty word. Maybe it still is. So I don't know. Yeah. Thank you so much for the lecture. I have two questions for you. First, I'd be interested in knowing whether you've worked around other interpretations or workings of Benjamin's essay, particularly "The Work of Art in the Age of Mechanical Reproducibility." For example, from 1992, there's Susan Buck-Morss, who gets quite close to what Krauss was doing, in "Aesthetics and Anaesthetics." And there are quite a few other examples as well. So that would be my first question, whether you've also worked with other interpretations. And my second question would go towards your methodology, basically, what you were outlining at the beginning of your talk, and particularly how you were outlining the development of our usage of time, or our usage of history now, and how this has climaxed to a point that one could even think of as ahistorical, or post-historical.

Yeah. And I'd be quite interested to have your take on how one would then approach this, in the sense of thinking about how to write about it, or how to envelop it, and whether you'd say that our tools would need to change. I mean, I'm wondering if it's even enough to go about these terms in the same way that one would have done back in Krauss's time. So that's kind of the question: going beyond consciousness, or going beyond outlining specific parameters, and thinking about how to engage with them. And yeah, that's my question. Okay, thanks. Yeah, I mean, I suppose that in terms of method, I'm really interested in the carrying on of use, and, you could possibly say, abuse, of these concepts. I think through Krauss's whole book, she's sort of questioning the concepts, but also playing fast and loose with them. And so I guess I'm kind of doing the same thing. I think there's something imprecise about our understanding, or the type of understanding we want to get out of the products of generative AI.

And so I'm trying to think around it rather than directly attack it. In terms of Benjamin, I've definitely looked at the other texts on his writing, and I think there are potentially productive uses there. But because he didn't develop much, or wrote so sparsely, he's become a kind of talisman for a lot of people to use for their own purposes. And I like that as well. There's something about these orphan theories that find a home long after he lived, that get used and reused, and maybe used in ways that they shouldn't be used at all, actually. Yeah, I hope that addresses your points. Maybe the question is a little bit off topic, but do you see something that could be called optical consciousness in terms of AI? Optical consciousness. I mean, maybe you could think of optical consciousness as when we get what we want from AI, which doesn't happen all the time.

But the reason a lot of these tools were developed is so that we could use them and get exactly what we ask them to produce. Like, you know, a photograph of Clement Greenberg and Rosalind Krauss at an art exhibition. I mean, I don't think I quite got it, but something close enough, I guess. Maybe that. I don't know. I'd have to think about it. So thank you. You're welcome. Do you have a thought? No, I just lost my train of thought. Yeah. I don't know if you meant this, and I don't know the story in English, but when the Bundeswehr, I think, was practicing image recognition, the system recognized not the Panzer but something else instead. That made me think: we train the machine into learning about images, but in the end it also depends on the images, on which kind of material we fed the machine to learn and to recognize something. And the bias, which also can happen. And I think that was also behind it, when I asked you about authenticity, what I've recently been thinking.

Yeah, or, as you probably already know, the context also determines what kinds of images are put into it. Like, where are the images from the Middle Ages? Are they also represented? Can we also gain a truth about the Middle Ages then? Or do we really need images for everything? That is also a question. Yeah, I mean, I think there's a difference with the kind of unsupervised learning experiments, where the model is trying to get around the problem of what is the object and what is the background, because that's a kind of purely visual problem, in a way. But in terms of what's left out, or what's lacking, there's a lot. And one other thing that's important to me, but really not important to all of the people working in computer vision that I meet, is the fact that we're dealing with the digital. Everything is digital photographs. So when they say, we're categorizing a whole data set of human faces, I say: no, you're categorizing a whole data set of images.

You're categorizing a whole data set of photographs, digital photographs of human faces. And they're like, oh, God. But I think that's important. So, yeah, I'm sure many of you do, too. The output is not very optical at all. So I'm wondering what happens with opticality in an age when the stage of optics has been left behind by those contraptions. Because there is no more linear relation between the indexes of reality and the photographic representation. It's basically a statistical approximation. At some point in the learning there are the artifacts of that, because the paradigm of vision wouldn't exist without it. But the output is not optical; that's what I meant. Yeah. So in a way, it's like money laundering, but with images: what comes out is no longer an image. Yeah. No, I mean, I think you're right. But I think you can turn back and forth on this question, because a lot of artists are really angry about the use of their works being trained into models, and their names being used in prompts.

And they're saying, this is very derivative; this is my work that's being replicated. Yeah. I mean, in a legal setting, right, if you take a copyright case, they're going to talk about how derivative or not the work is. And yeah, I think you can't get away from the fact that there's a relationship between the output image and the training data. But that relationship is so processed. So I'll try to combine a few questions, if I can, into one, hopefully: thinking about history, which you raise, and finance, which he is raising. I'm thinking like this: with AI-generated images and these models, the image finally has its Bretton Woods moment. You don't have a guarantee of gold behind the dollar anymore, and now we finally have this for images. Does this change the conservative critic's assessment of Rosalind Krauss's methodology? If it was not applicable to the history of modernism, would this be your avenue to reevaluate it? How would we think about her argument now? And similarly, after seeing AI images, I was always thinking: to what extent is Jameson wrong when he makes arguments about derivatives, or rather ahead of his time?

And only now do we have proper proof, right? And I think that's a postmodern image. But this is in the back of my head when I make this leap. And then I'm thinking the causality must be somewhere there. And then there's a financial crisis, and we have causality reasserting itself somehow over the model. But this is an exception to our rules. I assume the question of history can be raised again, but I'll ask the humble art historian first. Yeah. I mean, I think that's really interesting. What can I say? The idea of stochasticity, that's the Bender stochastic parrot argument, the randomness. But the idea of chance is in here the whole time, a certain kind of chance. There is a chance, a very, very small chance, that all of the oxygen molecules in this room move to that side of the room, and everyone over here suffocates. But it's not going to happen. It's a statistical chance.
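The oxygen thought experiment above can be put in rough numbers (a back-of-the-envelope illustration of my own, with an assumed order of magnitude of around 10^27 air molecules in a lecture room): if each molecule independently sits in either half of the room with probability 1/2, the chance that all of them end up in one chosen half is (1/2)^N.

```python
import math

def log10_prob_all_one_side(n_molecules: float) -> float:
    """log10 of the probability that n independent molecules, each
    equally likely to be in either half of the room, all end up
    in the same chosen half: (1/2)**n."""
    return -n_molecules * math.log10(2)

# Assumed molecule count for a lecture room (order of magnitude only).
N = 1e27
exponent = log10_prob_all_one_side(N)
print(f"P(all molecules on one side) ~ 10^({exponent:.3g})")
```

The probability is around 10 to the minus 3×10^26: possible in principle but so improbable that it never happens, which is exactly the "weight of probability" point made next: statistical models live in the gap between the possible and the probable.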

But it's not probable. So there's something to do here with randomness on one side, but also a kind of weight of probability. I think that's what's going on. And to your point about postmodernism, I think there's really an opportunity here to rethink a lot of this '80s and '90s postmodern theory in light of these things. Ten years ago, maybe, everyone was like: it's old, it's outdated, it's not relevant to our world. But I do think there's an opportunity to look back at that stuff too. Yeah, thank you for the talk. I wondered also, as an art historian, when you showed the picture of the sewing machine and then this strange new object close to it: what would be your opinion, what does it very practically reveal? Because always when I look at these generated images, I feel like mostly it reveals maybe some correlation in the data. But it also seems to reveal to me that it tries to communicate with me, maybe also in the way of selling; it tries to satisfy my vision, based on the prompt. Which I feel is a difference to the globe model, because the globe does not really try to sell me anything actively. Maybe it tries to sell

me some sort of depiction; maybe it shows one country smaller than it actually is. But there's always this kind of action; I feel like it's a little bit more of a challenge to me, in the way these strange objects happen. Yeah. I mean, I would disagree; I think maps do sell, if by selling you mean they have a very particular kind of ideological and political point of view that they're embodying, in one way or another. But I also don't think I would assign quite that much agency to prompt writing either. There are a lot of layers as well, like with commercial models: things like the filters on racist imagery or racist language, or those kinds of attempts to just give more diverse imagery in the output. All those other layers are happening kind of programmatically, behind the model or on top of the model. So I'm not sure that the model is a kind of beast that wants to satisfy its master, necessarily. But, indeed, there's something programmatically happening that is trying to be a good product. Somebody who's making a commercial AI model wants their product to be

useful for people. We might like to see all the weird images, but if you need stock images, you don't want weird stuff; you want what you expect to see. So what's the boundary? I mean, the models will get more and more normative, I assume. There will be layers added to these commercial models that make them give you what you want. But I think the model itself is still the weird thing it is, with all of its embedded ideologies, and we might try to massage certain aspects of that away, but it's still kind of in there somehow. Thank you also for the talk. There was this idea that the outcome is not objective anymore, and that at some point we are just, like, stored brains somewhere, and we just have avatars running around. But you also showed this picture with the chess patterns, yes, exactly. And of course there are some patterns that run completely in the wrong direction, but it's still a pattern. And then there were these people that made an IQ test, where they had to follow a route of numbers and it's just

now, what has to be the next number that has to come out? And there were these genius people that came to a number where everyone was like, what is this? And afterwards it was proven that the pattern they chose was way more complex, but still logical, and provable. So I think: let every AI run in a direction and at first prove a pattern, and then have an AI checking another AI, to check another AI, to check another AI. Let's just go bonkers and check each other. I think I'm not afraid of this. Yeah, it's like a kind of very large adversarial network, where, you know, GANs have different neural networks testing and checking each other, basically. So everything checking everything. But then I assume that, depending on which direction things go, there's a kind of logic that wouldn't be a logic we would plan, that would come out of this vast pattern recognition of pattern recognition recursion thing that you're describing. Yeah, intriguing idea. Thank you so much. Yeah, thanks. Thanks, everyone. Thank you.