Source: https://carrier-bag.net/video/mid-life-mid-art-mid-journey
Date: 21 Mar 2026 09:41

mid-life, mid-art, mid-journey

Cite as
: "mid-life, mid-art, mid-journey". Carrier Bag, 15 January 2026. https://carrier-bag.net/video/mid-life-mid-art-mid-journey/.

Students of the Emergent Digital Media class discuss aspects of mid-life, mid-art, mid-journey and provide often surprising analysis of the most current generative artificial ‘intelligence’ applications.

With the participation of Guillaume Menguy, Angelika Lepper, Kristina Cyan, Lisa Meinig, Vasilii Vikhliaev, Vincent Entekhabi, Anja Lekavski, Mira Schienagel, Andre Bagh and Sofian Biadsi.


Read full transcript (generated by Whisper)

All right, so we're going to be doing these shorter-format presentations with students of the EDM class. We are the first group, so Kristina, Angelika, and myself, Guillaume. And we've been assigned these topics with the word mid every time, so mid, this new kind of internet slang for mediocre, lukewarm, somehow devoid of value. And I will start by just showing some mid art, right? Just showing some slop, you know. So this is the front page of the Lexica website in 2022 and 2025. Of course, there's some kind of progression in the aspect of these images that I'm interested in. I want to maybe talk about some characteristics of these images. 2022 is an archived version, so many of the images have disappeared, which is also, I think, quite interesting. But maybe if we pay attention and actually look at the images themselves, we can begin to see what they're made of and also how they're evolving. I mean, three years ago is basically, like, 30 years ago. When we look at some of these properties, a few of them are very striking.

So the contemporary slop image has this quality of being smooth to the point of being basically ungraspable, right? It's completely slick. And this is an artifact of the noise being removed to such a degree, right? There are really no asperities on the surface of the image. This is the first point that I want to bring up. Then there are other points. I'm in the corner there. And there's this idea that these images, of course, are diffused from patterns of noise, and the noise is repeated throughout the plane. As you can see, maybe if you squint your eyes at the image on the left, this noise is repeated, which used to create these duplication effects inside the image. And this could sometimes be just an annoying effect. In the case of the car, it doesn't look too good. But maybe it was also the production of some kind of accidental aesthetic. And this is what happens when you increase the resolution of the image too far beyond that which it has been trained on. So you have these 512 by 512 pixel patterns of noise, and when you tile them across the space, it produces these doubles, these kinds of duplications. This is another example of the visual characteristics of these images. Maybe a better example would have been Deep Dream, actually. But essentially, the way this diffusion process, this denoising, happens is that it happens more or less uniformly across this initial noise.
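The tiling effect described here can be sketched in a few lines. This is an illustrative toy, not the actual diffusion pipeline: it just shows why repeating one trained-resolution noise patch across a wider canvas makes every structure appear twice.

```python
import numpy as np

# Illustrative sketch: a model trained on 512x512 noise, naively asked to
# cover a wider canvas by tiling, repeats the same latent pattern -- the
# source of the "doubles" described above.
rng = np.random.default_rng(0)
patch = rng.standard_normal((512, 512))  # one trained-resolution noise patch
canvas = np.tile(patch, (1, 2))          # 512x1024 canvas built by tiling

# The two halves are bit-identical: anything diffused from the left patch
# reappears on the right side of the generated image.
assert np.array_equal(canvas[:, :512], canvas[:, 512:])
print(canvas.shape)  # (512, 1024)
```

Real samplers seed the whole latent at once rather than literally tiling, but the periodic correlations that produce duplicated cars and faces follow the same logic.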

And the more it goes, the more it attempts to fill the space with details somehow. And so in many of the slop images, at least a few years ago, we used to see this very fractal quality, these recurring motifs that happen again and again. These are of course more visible when we're looking at moving images, and maybe this is the next point to make. But I just wanted to also put it next to this very ancient Aristotelian idea of horror vacui, right? I didn't write it on the slide, but this is the idea that nature somehow, some kind of entropic force, attempts to fill the whole space with matter distributed more or less uniformly. So this is kind of a problematic situation now, because we're going from spam, which is supposed to be these discrete packets of information, usually text, but with a very defined boundary, to this slop, which has a very sludgy quality. But yet at the same time, it seems somehow that the characteristics of the images that are supposed to be slop are tending towards the disappearance of the properties that I mentioned, right?

Because this liquidity, sorry, kind of catching a stray bullet here. Anyway, this kind of liquidity is actually disappearing from the newer models, right? This has been mentioned already, but there's a kind of eugenics happening where many of the effects that I mentioned, these duplications, these fractals, these motifs which could somehow constitute, you know, the grain of a piece of wood, which could constitute a materiality that the artist could insert himself into and try to do something with, these properties are being removed from the more contemporary models, and especially of course on the platforms, right, on the Midjourneys and DALL·Es. And so slop now mostly refers to that which doesn't look like slop anymore. It fails to emulate the actual intrinsic properties of this technical way of producing images, which does produce interesting and totally new images in this way. And of course, as I mentioned, this is mostly visible when we move across latent space, so the moving image usually allows us to more accurately perceive some of these strange perturbations. This one is quite an old work, of course, since in the GAN aesthetic we see a lot more of this sloppy quality. Somehow it's more liquid. But yeah, in any case, I guess that's kind of the end of my presentation. What I just wanted to point out is that the contemporary sloppy image, which is actually settling into a fixed identity where all these strange perturbations are being eliminated, is maybe not the final say, and that there's still some room within, you know,

open source models, with all kinds of fine-tunings or even just by messing with parameters that are directly available. There are still ways to produce images that are actually unsettling and are somehow vectors to escape the slop. And with this, I will leave it to Kristina, I think. Thank you. Thank you, Guillaume. I will continue to talk about slop, but maybe about another quality of slop, or at least its social role. I would like to focus on the aspect that slop is an emergent term for the carelessly automated noise outputs of AI, like web pages, images, and designs. It's a kind of junk of the internet, actually, like generative spam, as we already mentioned. Content that looks like something, sounds like something, but isn't really anything, at least anything meaningful, or is something mediocre: unnecessary and impossibly unmemorable. What's interesting about the nature of slop is rooted in the fact that it reflects the average statistical result, a result that fits the expected form. But not only that: I would like to talk about how generated content turns the median result into a standardization. And I would say that's the key point of this presentation.

This standardized presence of slop garbage already has the potential to shape the way we live and the way we see, and it's becoming the mainstream aesthetics. And actually, I wonder what the rise of slop means for generative art and, further, for visual culture. Imagine the nightmare: you have access to a super powerful tool which can produce any result, anything you can imagine, and still you end up with something which is actually super average, mediocre, and constantly frustrating in that it's repeating the same pattern. I think that's the worst that could happen in art at all: to end up with a mishmash of clichés. But I would point out that this standardization is a very important part of the way we live, and I think that's the key point. These processes concern us not only in themselves but because we don't really pay attention to them, and yet the result does something.

And this is happening, and this concerns me, not only because of the existence of these images, but also because of the numerical supremacy of these images and their dominance over all the rest. The statistics of this year show, which is important in the global context, an average of 35 million images generated per day, and I guess there's quite a lot of slop in it. So generative content, including slop, is no longer the exception but is somehow becoming the expectation. This overwhelming presence of slop is the mechanism of standardization, and it starts to define the new norm now. And it's not just about quantity, it's about saturation, about dissolving into the visual landscape and shaping standards by accumulation. Another thing that shapes the new norm is wording, a new vocabulary. On that note, I actually came across an English teacher recently who helps their students improve their language skills by using precise vocabulary. Among these words you can meet terms like misleading, scrutinize, erode trust, wade through, and so on. This refers to the fact that words describe reality, and existing phenomena often become recognizable only once we have language to describe them.

We could conclude that the emergence of generative images, and I'm referring mainly to the slop images now, has brought with it the vocabulary to define and understand them, but it's also bringing us a new world, or a new shape of the world. Thus slop integrates into the tangible part of visual culture. But there's something more to it, and there's a really peculiar phenomenon further on. Here I would like to show how. So this was an attempt to follow how machines see people, a historical observation made by analyzing paintings from, yeah, the Middle Ages till now. But I would like to speak about something further: how AI depicts people now. And I guess slop plays a pretty big role in it. So how does the dominance of mediocre images affect our perception, and how does this kind of content carry a political agenda? Image generation might appear to be a neutral tool at first glance. But because the generative process reduces to statistical averages, it ends up creating a standard, and I would argue that this standardization carries its own political agenda.

It reminds me of something really precise: a society where everyone looks the same, consumes the same, and strives for a common goal. I would refer to another word here, because I guess visual information plus words form certain ideologies. I would refer to a really particular word, stroi. Unfortunately, I didn't find an equivalent for this word in any other language, but the word stroi appears constantly, especially in military, educational, and social contexts. It describes something really homogeneous, but it always refers to people who are aligned in one stroi. It's applied to domestic situations, but also to highly military content at the same time. So, in a way, the supremacy of generative slop creates a new kind of stroi, not enforced by ideologies, but created by statistical repetition. And it's not a system of rules, but of averages. Standardization here equals a certain kind of totalitarianism. It doesn't oppress, it aligns. And totalitarian order comes not only from a person or a political agency; right now it's supported and enforced by algorithmic system structure.

So how are we getting this average result in the generative process? I want to come back to the idea of the standard normal distribution and talk about what it means for making art at this moment, like how we can get through that. Therefore, in the sense of the standard normal distribution, I would like to speak maybe more about strategies. And I would like to first consider not only human-made strategies, but the technological environment and its applications. So we have to be aware of the context, of the tools, when we're creating art. Speaking about this distribution as an example, I would like to imagine where the future of generated art might fall. And let me exaggerate here a bit, taking a dystopian perspective. What if one day all non-generated art, which as you see falls behind this gray area, will be pushed into a really niche, marginalized minority category? Such a label has already emerged; these days I call it degenerative art, something beyond generative art. And I would consider this an act of resistance: not to be drowned in the seamless statistical noise.
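The point about the standard normal distribution can be made numerically. This is a toy illustration of my own, not the speaker's example: when outputs are pulled toward the statistical average of a distribution, individual variation collapses toward the mean.

```python
import numpy as np

# Toy illustration: 10,000 diverse "outputs" drawn from a standard
# normal distribution, each a 64-dimensional sample.
rng = np.random.default_rng(42)
samples = rng.standard_normal((10_000, 64))

# The averaged result: every dimension sits near 0, the center of the
# distribution -- the "super average", mediocre result.
mean_output = samples.mean(axis=0)

print(round(float(samples.std()), 1))      # per-sample spread, roughly 1.0
print(round(float(mean_output.std()), 1))  # spread of the average, near 0.0
```

The individual samples keep the full spread of the distribution; the average has almost none. That is the statistical mechanism behind "the median result becomes the standard."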

I would like to speak about one last thing, sorry: how artists can critically approach technological specifics, and the importance of questioning where this tool might develop. I want to open up a question: what would degenerative art look like, rethinking the way we operate technologies on a practical level? Would it be less machinic, or less censored, and more surprising? I don't know. I would like it to still contain some disturbance, and probably images that are half done, imperfect, that contain noise, as we pointed out as well. And pictures that also preserve a trace, an honest visibility of the process by which the picture was made. And I don't know yet how, but I encourage everyone to stay weird and stay human. Thank you. Hi, my name is Angelika, and I'm an artist researcher and also a film editor. So that's why I will start with a little example from film editing. In its version 16.1, DaVinci Resolve implemented the Boring Detector, a function that scans through the editing timeline to find clips that are longer than a set length, for example 10 seconds. The function was heavily discussed on Reddit, and I posted a little quote from Reddit, as being a misconception of the role of the film editor's labor in the editing room.

The function seems to be a sign of contemporary filmmaking being increasingly shaped by algorithmic and computational interventions and conceptions, raising questions about the fate of artistic singularity and the proliferation of what might be termed mid-art in that context: works that are neither objectively commercial nor radically experimental, but rather occupy something like a median space of cultural production, here in moving images. Jonathan Beller, who held a talk just recently this semester, which I found really electrifying, analyzes in his really famous book from 2006, The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle, how cinema and visual media have transformed human attention into a form of labor integral to capitalist production. Beller posits that in the regime of computational capital, attention itself has been subsumed as a form of labor. The screen becomes the site of value extraction, where viewing is no longer passive but productive. Each interaction of the viewer, by scrolling, glancing, clicking, and sharing, contributes to the circuits of capital accumulation. And let me just add that in terms of the editing room, the film editor is often referred to as the first viewer of a moving image work.

In Yves Citton's The Ecology of Attention from 2014, Citton extends Beller's critique by arguing that our attentional environments are increasingly engineered to privilege certain modes of perception, namely those that are most easily commodified and measured. The ecology of attention, once characterized by a plurality of modes and rhythms, is now subject to a process of homogenization as algorithmic systems optimize for what is most likely to sustain engagement. Yeah, I shortened it a bit. Well, in the context of nowadays, of 2025 contemporary filmmaking, this logic is operationalized through predictive analytics, audience segmentation, and algorithmically informed narrative design. And, um… Sorry. Okay. It should start, but it doesn't. The ascendancy of AI-driven tools in scriptwriting, automated editing, and even previsualization has further entrenched these tendencies. Studios as well as independent creators are incentivized to replicate proven formulas, as predictive models forecast human audience preferences, and thus the mediocre, as what most people in the audience would follow, fulfills the prediction. The consequence is a recursive feedback loop in which novelty is subsumed by the logic of optimization and the field of cinematic possibility contracts around the median of past successes. Or, as Ray Nayler puts it in his article AI and the Rise of Mediocrity in Time Magazine, published just shortly before this conference, I quote: because they cannot truly innovate, everything that predictive language and image models produce will be a sequel to what came before.

Not an original idea, but a mashup of our old tropes, repackaged for our consumption. This was already a dominant tendency in our commercial industries: to simply take what has been done before, tweak it a little, rebrand it, and call it new. As a result, AI will fill the world with grindingly average texts, passable but derivative illustration and video, and unoriginal but functional new product designs. End of quote. Let me remind us of the technical conditions of the moving image. Such an image is ultimately the conversion of a light measurement. With streaming services, above all Netflix as the globally most far-reaching streaming platform, with over 300 million paying subscribers as of early 2025, we are confronted with, I quote, a platform-governed economy of visual desire, end of quote, as Simon Rothöhler put it in his lecture on Monday this week here at the Academy. Visual desire is at the heart of every technical image, or this above-mentioned light measurement. The infrastructure made available by Netflix is based on adaptive bitrate streaming technology, and this refers to the idea of compression that we were talking about earlier today.

This means, as you might know, that films are not stored on a central server somewhere in the United States, but are instead distributed, according to a logic of anticipatory demand, across so-called Open Connect servers, which store fragmented data packets in close proximity to potential users. A key condition of Netflix's movement into the digital world is that the uninterrupted visual stream that is immanent to its infrastructure is premised on the illusion of individualized streaming. Its seamless accessibility relies on predictive mechanisms of data retrieval. The algorithms that ensure the uninterrupted availability of the digital image are the most important part. As Simon Rothöhler phrased it, I quote: the image pipelines, that is, the infrastructural systems required to stream moving image content largely without buffering, glitches, or resolution loss, are already immanent to the digital image itself. If, as Beller and Citton argue, our capacities for perception and attention are being systemically reorganized by computational capital, and, as discussed through Simon Rothöhler's predictability of the viewer's visual desire as an immanent quality of the technical image, isn't it the task of artists and scholars to theorize and enact modes of resistance, practices that reclaim the plurality, unpredictability, and criticality of attention against the gravitational pull of the average?
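The adaptive bitrate logic mentioned above can be sketched very simply. This is a generic illustration of the technique, not Netflix's actual algorithm, and the bitrate ladder below is hypothetical: the player measures recent throughput and picks the highest rendition it can safely sustain, which is what makes the stream appear uninterrupted.

```python
# Hypothetical bitrate ladder (kbps), lowest to highest rendition.
# Not Netflix's real encoding ladder -- an illustrative stand-in.
LADDER_KBPS = [235, 375, 750, 1750, 3000, 5800]

def pick_bitrate(throughput_kbps: float, safety: float = 0.8) -> int:
    """Choose the highest rendition below a safety fraction of the
    measured throughput, falling back to the lowest one under
    poor network conditions (quality drops, playback never stops)."""
    budget = throughput_kbps * safety
    eligible = [b for b in LADDER_KBPS if b <= budget]
    return eligible[-1] if eligible else LADDER_KBPS[0]

print(pick_bitrate(4000))  # -> 3000: highest rendition under 80% of 4000
print(pick_bitrate(200))   # -> 235: degrade to the floor rather than stall
```

The design choice is the point: the system is built so that the image always arrives, trading resolution for continuity, which is exactly the "seamless accessibility" the talk describes.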

I think that's why we're here today. And I finish with that. Thank you. Hi, I'm Lisa, and I'm in the group Midlife with Vasilii and Vincent. And, yeah, I'm starting. Thank you. As Joanna Zylinska described, to train so-called AI systems, a semiotic flattening of images has to happen. Referring to Nicolas Malevé and Katrina Sluis, this entails a move from representation, which is mediation through photographic technology, to representativeness, which presents a new form of representation: the representation of images by samples of images. An image of a house has to resemble other images of houses on a certain constitutive level, so that its houseness can be filtered out as a sequence of data points. So this outcome is not a surprise, and I could repeat this step with other generative AI. The outcome is not wrong, but it results in averaging or flattening how we perceive our world. If an object has characteristics that are not considered relevant, or does not resemble the others closely enough, there's a higher chance its representation and visibility will fall away. This also results in creating stereotypes and biases, as we all know, and has real effects on our daily lives.

Of course, maybe the problem will be solved with a better data set, but we can also read these outcomes as a reason to continue working on the problems of representation in our society that AI can't solve for us. Education and critical thinking are very important for that. Human learning is complex,