Source: https://carrier-bag.net/video/surrogate-data-and-ungovernable-data
Date: 21 Mar 2026 09:38

Surrogate Data and Ungovernable Data

Elisa Giardina Papa
Cite as
Giardina Papa, Elisa: "Surrogate Data and Ungovernable Data". Carrier Bag, 5. November 2024. https://carrier-bag.net/video/surrogate-data-and-ungovernable-data/.

In this talk, Elisa Giardina Papa will outline the theoretical and archival research which informs two of her video installations, Technologies of Care and Cleaning Emotional Data. Presenting images she collected while working as a “data cleaner” for various AI systems, she will address the ways in which machines are disciplined and trained to see. Tracing, bounding-boxing, and labeling are key operations used to teach machines to separate Data from data, signal from noise, and orderly things from disorderly ones. They are also, Giardina Papa argues, the onto-epistemological operations of modern imperial and colonial conquest. Ultimately, this talk will be an invitation to reflect on modes of seeing otherwise which remain radically unruly, irreducible, and incomputable.


Full transcript (generated by Whisper)

Thank you so much, Francis. Thank you, Hito, for inviting me. I've been in conversation with Francis for a couple of years now. We started during the pandemic, so I'm finally happy to meet in person, and happy to be in conversation with all the panelists, some of whom I know. So, in this talk, I will discuss some of my artworks, specifically one, outlining some of the questions and reflections that guide my practice, and I hope that these insights might contribute to the topic of the conference, which is: if AI is the answer, what was the question again? Okay. So, I'd like to summarize my work as an ongoing commitment to retrieve and reactivate forms of knowledge, perception, and desire that have been disqualified and rendered nonsensical by hegemonic demands of order and legibility, demands of the past and also demands of the present. So, in my projects, I usually sift through AI training data sets, but also censored cinema repositories and, for example, heretical inquisitorial trials of the 15th century. In that case, the hegemonic ordering device that I was investigating was the Spanish Inquisition, which was active in Sicily, which is where I'm from, while also being active in the conquest of the New World, the Americas.

So, and I'm doing some slides with text also because I'm jet-lagged. I arrived yesterday from New York, so bear with me; I need some text not to lose my path. So, the tension that I usually try to analyze with my work is the tension between order and extraction versus excess and unruliness, or disorder. That is, I try to trace how past and present forms of extractive capitalism and colonialism or imperialism, and their past and present technologies, have strained our capacity for living, laboring, imagining, and desiring in common. And I try to pay attention to those parts of our lives which, nonetheless, I believe, remain radically unruly, untranslatable, undepletable, and incomputable, or uncomputable. So this is the tension through which I also look at the technology that is one of the topics of this symposium, right? AI generative models. And I believe that instead of calling them generative (and Hito was talking just one second ago about the etymology of generative), maybe we should start referring to them as consumptive models. These models are structured around consumptive behaviors that function as means of extraction or depletion rather than of renewal or creation, which is what the term generative should point to.

So they deplete natural resources, they deplete labor, they deplete users' affective and creative qualities, without generating anything that restores what is extracted. So there is no redistribution, either in terms of economy or of commons, and there is no regeneration, as I see it. Okay, so now, to go back to the title of the talk, which I called Surrogate Data and Ungovernable Data, I will use one of my artworks, Cleaning Emotional Data, and outline some of the research that underpins the work. It is a video installation, Cleaning Emotional Data, which I made in 2022. And it's a work that addresses the ways in which machines are disciplined and trained to see: a visual account that documents the methods currently used to teach AI to capture, reduce, order, and render the world, while instead considering everything in our lives, embodiments, and desires that defies these normative modes of categorization. That is, it is a work in which I try to pause and reflect upon that which keeps flickering in and out of any possible taxonomization, and exists only at the very edge of definition.

The work actually started as research for my PhD program. I was at Berkeley at the time, doing my PhD, and I was also back in Palermo, which is the city where I'm from. I was writing a paper on machine vision, and I wanted to understand more about the training of machine vision. And so, while doing the preliminary research, I ended up working for about three months as a data cleaner and human trainer for AI systems. I thought that that was the best way, really the only way, for me to try to understand this work of cleaning data and training AI systems. So I worked for so-called human-in-the-loop companies that provide clean data for the training of AI systems. I'm using a lot of technical jargon, like human-in-the-loop and data cleaning, but what I mean by human-in-the-loop is simply that there is a human worker inside the supposedly automated loop. And what I mean by clean data is data which are reduced, optimized, sanitized, tamed, and made to behave, to sustain the fiction that everything can be computed. And specifically, I want to talk about the relationship between clean data and surrogate labor, clean data and onto-epistemological flattening, and finally, dirty, disorderly, heretical, excessive, incomputable, and queer data.

So let's start with the relationship between clean data and surrogate labor. Here the question is how and why extractive computation needs surrogate, invisible labor in order to function and to maintain the fiction of progress, autonomy, and automation. Clean data can be understood as a further instantiation of what, for example, Farocki has termed the operational image. Building on Roland Barthes' notion of operational language, Farocki defines operational images as images that labor: images that fulfill a task within a technological operation. The notion of the operational image is then further elaborated by Trevor Paglen in what he has termed the invisible image. Paglen depicts, we could say, quite a dystopian scenario, according to which images are now made by machines and for machines, with the human rarely in the loop, to the extent that the human is no longer looking at images; rather, it is the images that are looking at the human. And this is true. But one could also ask which human exactly is excluded from this loop, right? Because thousands and thousands of workers are actually needed, and we have known this for a while now, to support the illusion and the fiction of automation.

And what you see here are some fragments of the many cleaning tasks that I did when I was working as a data cleaner. Thousands and thousands of data cleaners, for example, are needed to continuously label, categorize, annotate, and validate massive amounts of data, thereby enabling machine vision to function. The human is not excluded from the loop; it is only invisibilized within the loop. So what becomes invisible is not the image, but the laboring human within the image. And this human-fueled automation, or fauxtomation, fake automation, as Astra Taylor, for example, defines it, is part of what has been termed the gig economy: a live, rating-based marketplace with in-app payment systems, in which workers, as freelancers, perform under-minimum-wage work. And the gig economy maximizes labor arbitrage by moving quickly across the geopolitical landscape to extract work from the most economically troubled country at any specific moment, and likewise from the most economically vulnerable members of a population within a given country. And this new technical differential exploitation is facilitated, of course, by the process of abstraction that platform capitalism imposes upon its workers.
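
To make concrete what one unit of this cleaning work produces, here is a minimal sketch in Python. It is my own illustration, loosely modeled on the COCO annotation format; the image path, IDs, and field names are hypothetical, not the schema of any actual platform described in the talk.

```python
# A minimal sketch of what one micro-task of "data cleaning" produces:
# a bounding box traced around a region of an image, plus a label.
# Loosely COCO-style; all names and IDs here are invented for illustration.
import json

annotation = {
    "image_id": "batch_07/img_00342.jpg",  # hypothetical image in a task batch
    "bbox": [112, 48, 230, 310],           # x, y, width, height in pixels
    "category": "person",                  # one label from a fixed taxonomy
    "worker_id": "anon-4471",              # the invisibilized human in the loop
}

# Each such record is one unit of piecework: trace a border, name the enclosure.
print(json.dumps(annotation, indent=2))
```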

That is, in the gig economy, workers are reorganized as an assemblage of disembodied, divisible packets of time that can be activated according to demand. The problem, of course, is that a disembodied packet of time doesn't have any demands, doesn't have any workers' rights. So this is the problem. The workers enfolded in the invisibilized socio-technical assemblage that is bolstering AI are often described by those who contract them as temporary placeholders for a future of full automation, right? A future in which forms of labor that have been historically considered unskilled or non-creative will eventually be replaced by machines. But once again, one could ask: what is this particular understanding of technology a proxy for? For example, according to Kalindi Vora and Neda Atanasoski, the claim that automation technology can act as a surrogate for historically devalued work and workers is based on gendered and racialized imaginaries that have been used to separate the human from the less-than or not-quite-human other. And Vora and Atanasoski mobilize the concept of surrogate humanity to inscribe today's automation into a longer history of unfree and invisible labor. And in doing so, they clarify that automation as a proxy for devalued work recapitulates histories of disappearance, erasure, and elimination that are necessary to maintain the transcendental Man, Man with a capital M, as the only agent of historical progress.

So here we see the ways in which patriarchal racial capitalism keeps hiding behind the naturalization of technology in the form of a prosthetic extension of the autonomous, transcendental Man. Within this framework of surrogacy, the invisibilization of the data cleaner can be seen, maybe, as a process of double surrogacy. That is, via a mirroring effect, machines are increasingly posing as surrogates for invisible or devalued workers, while devalued workers are increasingly posing as surrogates for the machine. In other words, if science-fiction capitalism keeps promising a future in which intelligent machines will eventually take over pauperized labor, this form of capitalism also keeps hiding, and will keep reproducing, that pauperized labor in order to fulfill its initial promise. That is, the promise of autonomy and automation is based on differential exploitation: it is a promise made to the few, based on the programmed exploitation of the many. Okay. So here, finally, if we were to open the black box of the algorithmic enchanted machine, which AI capitalism claims essential for this constant re-origination of the idea of progress and of the autonomous man, we may find inside, I think, nothing more than an intensification of the old entanglements of capitalism and colonialism, technology and surrogacy, the autonomy of the few and the programmed exploitation of the many.

Okay. So now I move to the second point: clean data and epistemological flattening. The question here is why extractive computation needs an epistemological flattening of the complexity of the world in order to function. Or, within the framework of generative AI models, we could ask how and why extractive computation needs the construction of the probable average out of the complexity of the world in order to function, and what the default assumptions and axioms are upon which this reduction is executed. So let's consider, for example, AI-based emotion recognition systems: the attempt to extract the hidden truths of interior emotional states using machine vision, the attempt to make faces behave so that emotion can be extracted. To give you some concrete examples, these are some of the tasks that I did specifically for affective computing models. For instance, the labeling and rating of emotion in composite, computer-generated faces, which are quite reminiscent of Galton's composites. This is a Galtonian experiment; I probably don't have to summarize the Galtonian experiments here, but just to give you a visual reference. And also the labeling of micro-expressions in videos according to strict categories, the seven categories of anger, contempt, disgust, fear, happiness, sadness, and surprise.
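
To give a sense of how rigid such a taxonomy is in practice, here is a minimal, hypothetical sketch in Python of what a seven-category labeling task reduces to. The task format, identifiers, and rejection rule are my own invention for illustration, not the interface of any actual platform:

```python
# A hypothetical seven-category labeling task: every face in every frame
# must be forced into exactly one of the seven admissible categories.
# There is no label for "ambivalent", "both", or "none of the above".
EMOTION_LABELS = ["anger", "contempt", "disgust", "fear",
                  "happiness", "sadness", "surprise"]

def label_frame(frame_id: str, annotation: str) -> dict:
    """Accept an annotation only if it fits the closed taxonomy."""
    if annotation not in EMOTION_LABELS:
        raise ValueError(f"{annotation!r} is not an admissible emotion")
    return {"frame": frame_id, "label": EMOTION_LABELS.index(annotation)}

print(label_frame("clip_042/f0013", "surprise"))  # fits, so it is accepted
try:
    label_frame("clip_042/f0014", "ambivalent")   # anything outside is rejected
except ValueError as err:
    print(err)
```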

Or the rating, for example, of the confidence of a person based on their countenance. Also the animation of 3D avatars with my own emotions: this was a zebra with which I was supposed to perform a surprise affect; this was a bat with which I was supposed to perform a disgust emotion. And one of the tasks that I did, which is actually interesting, because the tasks in which you use your own face and body are the ones that are better remunerated. They pay you more, because maybe the extraction is more. I mean, of course, the extraction is more. So, for example, I also did the production of videos of my own emotional expressions as a contribution to, I guess, an affective computing database. And I want to focus for one second on this last task, because this was actually a turning point in the research for me. Because right at the beginning, when I created these videos of my facial expressions, some of them were rejected. It seems that my expression did not fit the crude categories it was supposed to fit into, right? My face was not happy enough, nor happy in the right way.

And I'm actually not sure if the rejection came from an algorithm or, for example, from another worker who, maybe due to cultural difference, understood my facial expression in a different way. And because of this rejection, I became more and more interested in the assumptions upon which AI emotion recognition is based. That is: how would a machine infer an emotion from my face? How could a machine label my face as happy, or as not happy in the right way? And against which idea of a real, or average, or probable happy face was my face, my happy face, evaluated? And more generally, who gets to emote properly, and who does not? And to understand what is called the ground truth, the ideal expected result of AI emotion recognition, we actually need to go back to the 19th century, and to the so-called Duchenne smile, a smile that crinkles the muscles around the eyes and that can supposedly be scientifically proven as true. That is, we need to go back to the myth of universality, transparency, truth, and objective measurement, according to which emotions are universal, they are finite, and they can be fully revealed, made transparent, reduced, and measured within an ideal scale that can provide the ground to make comparisons and judgments.

As, for example, Édouard Glissant would say, no opacity is allowed in this kind of system of detection. So we need to go back to the distressing medical and technological experiments that the French neurologist Duchenne de Boulogne, teacher of Charcot, made in the 19th century at the Salpêtrière. And for clarity, Charcot, who was Duchenne's student, is the one who invented female hysteria, right? In The Mechanism of Human Physiognomy, published in 1862, Duchenne links modern physiology and psychology with older ideas of physiognomy, the pseudoscience according to which it is believed possible to access interiority from exteriority, personality type from face type. So he combined old theories with what was, at the time, new technology, to present what he calls a universal map of emotion. And in this collection of photographs, which are also reproduced in Darwin's The Expression of the Emotions in Man and Animals, Duchenne triggers, with electric probes, the muscular contraction of the faces of his patients to produce what he believed to be the universal taxonomy of the true emotional states. And I'm not showing the full photos here, because of course there is a huge problem of consent between Duchenne and his patients; I'm just showing the technology.

So, Duchenne with his electric muscular-contraction technology. Similar to what Foucault, for example, analyzed in Charcot and the Salpêtrière, what is essential here is the construction of an apparatus for the production of truth: an apparatus constructed out of the old pseudoscience of physiognomy, the modern theories of physiology and psychology, and what at the time was understood as a new technology, short-exposure photography. So, once again, a technology that was understood as able to reveal and to read something that the human eye was not able to read. And here I want to pause for a second, because I'm wondering about this: the construction of an apparatus for the production of truth seems to be still key in AI identification systems, right? But I'm not sure it's still key in generative AI systems, because rather than the construction of truth, these models seem to perform an approximation of a probable average, a simulation. But let me leave this open, because maybe we can explore it further in the question and discussion moment. Now, to go back to the history of emotion recognition: I'm tracing this history because 100 years later, the psychologist Paul Ekman, with funds from ARPA, the research arm of the United States Department of Defense, builds upon Duchenne's experiments to claim, once again, that emotions are finite and universal.

And of course, there is a strong scientific controversy around this method, which is considered by the scientific community at best incomplete and at worst bogus. But this method has been used, and in some cases is still used today, only slightly revised, by the CIA, the FBI, and the TSA, for which Ekman has been a consultant, and it is also used for AI-based emotion recognition systems. It is likewise used for CGI animation (Ekman has also been a consultant for Pixar) and in the Animoji implemented in your smartphone. So it is a method that crosses the military-industrial-surveillance-spectacle tech-industry complex, from 19th-century physiognomy to Hollywood, to your phone, and then to the FBI. And of course, we are seeing a number of emerging startups, as always, that are promising to fix the problem by increasing data diversity and accuracy, which basically means enlarging the extractive zone, and also accelerating the extractive zone, as a way to solve this problem of accuracy. But the point is that it doesn't matter how big the data can become, how much the extractive zone can accelerate: the operations of these systems, as I see it, remain the violent epistemological operation of trying to reduce life into clean, computable, docile, well-behaved data.

So the question that we keep posing, I think, in the last few years, in the last decade, is still the same, right? Why are contested scientific claims and discriminatory, dispossessing imaginaries bolstering so many facets of the AI industry? And by asking this question, I think we are also pointing to the fact that most of the time, while we are looking at the technology of today and of the future, we are not actually looking into the future, but into the past. Something I think we need to start doing is to disengage from the forever-new of new media, and re-inscribe AI operations into the longer history of capitalist, imperial, and colonial technologies. So, give up on the newness of the new, and the newness of new media. Because the tantalizing bid of the AI industry relies on the fantasy that technology will be forever new by remaining forever pastless, right? An artificial intelligence is, after all, an intelligence with no markers and no past: a transparent, disembodied reason, unfettered by any physical, economic, class, or geographical markers, and likewise devoid of any history of exploitation or dispossession. And yet, when you look at the operations of machine vision specifically, segmenting, tracing, bounding-boxing, and labeling are the key operations used to teach machines to see.

To teach machines to separate Data from data, signal from noise, and orderly things from disorderly ones. And, as I said, these are also the operations of a recursive, normative, hegemonic ordering of the world. This can be felt in an almost granular way when you spend some time cleaning data, right? The two operations that I performed over and over again as a data cleaner were actually those of bounding and naming: that is, to trace a border and to define the traced enclosure. But, as Mezzadra and Neilson tell us, borders are, of course, spatial arrangements, but they are also identitarian and cognitive processes, right? Spatial borders structure the movement of bodies; cognitive borders, by establishing conceptual hierarchies of identity and difference, structure the movement of thought. So borders, as both cognitive and spatial arrangements, can also be seen as one of the primary tools of conquest, of settler colonialism. Okay, so now to the last point, which is disorderly, heretical, excessive, incomputable, queer data, which nonetheless, I believe, persists beyond extractive computation. While repeating over and over these operations of bounding and naming during the period of data cleaning, I also started to collect images that, to me, seemed to resist AI's orderly impulse.

For example, that of a couch-woman. This was an image in one of the batches of images that I was supposed to clean and segment. In this instance, the algorithm, I believe, could not identify where the piece of furniture began and where the woman ended, right? The image of the woman was leaking into the couch, and that of the couch was leaking into the woman. So it started to look to me like this unbounded subject that leaks onto and within things. And to facilitate algorithmic detection, I had to outline the contours of both the couch and the woman and then label them as couch and woman. That is, I had to pronounce them, via a performative utterance: you are a couch, and you are a woman. But while doing that, I also started to ask myself: what if I could make the algorithm unlearn? What if I could tell the algorithm that the only thing we should start to do now is to unlearn how to know the world, unlearn how to know the other? So I started to think in terms of machine unlearning. And this made me think, for example, of the writing of Ariella Azoulay and her proposal of unlearning imperialism.

How do we do that, right? How do we make the algorithm unlearn? And this image also made me think of how Mel Chen, in Animacies: Biopolitics, Racial Mattering, and Queer Affect, defines queerness in terms of the social and cultural formation of improper affiliation. So queerness starts to describe an array of subjectivities, intimacies, beings, and spaces located beyond the normative, of course the heteronormative, but the normative in general. So I started to think about these images I was collecting as glimmers of improper affiliations, persisting beyond the normative ordering devices of AI. And it's quite interesting how generative models now seem to actually break, to fail, and to doubt precisely in those operational separations, right? In those moments of encounter: of object with subject, subject with subject, object with object, foreground with background. And in these encounters, it seems that instead of reaffirming separation, as in the Cartesian logic, and instead of reaffirming solidity, as in Newtonian physics, they seem to propose something else. Something that made me think, for example, about difference without separability, which is what Denise Ferreira da Silva is asking us to consider.

She proposes it as an experiment that we need to do in order to reimagine the social without the deadly distinctions and lethal reordering devices of modernity, devices of modernity which are leaking and bleeding into our AI systems, even if these systems are proposed as new. Now, I'm not saying that we should start to imagine a political project from the moments in which the logic and the aesthetics of the instrumental reason that is generating these images are failing. But maybe, what if these images are telling us that it is time to imagine an otherwise? That it is time to do without the deadly distinctions and lethal reordering devices of modernity, which are leaking, and which are going to keep leaking, into what we are calling the new AI systems.

I was wondering, you know, I mean, two questions maybe, or one comment. First of all, it seems to me as if the phase of, you know, needing humans to train machines to see is, in a way, over. That knowledge has been ingested already. The processes of training now almost do not require any human guidance anymore, at least when it comes to machine vision.

I think in a way it has already burned through that phase of even having to rely on human input. So it seems to me we have reached another stage, in which this layer of human knowledge, this layer of human contribution, has been made obsolete already. I see it when, for example, heaps of micro-workers are being dismissed because they are simply no longer necessary, because machines are now basically doing zero-shot training, et cetera, et cetera. So maybe you would like to comment on this most recent phase. That's number one. And number two is another question, a more general one. You showed, basically, in the last stage of your presentation, generative images of malformed limbs, for example, as examples of a sort of glitch which might indicate some sort of malfunctioning of the system. But what if exactly this is precisely the point of the system? You know, to malform and to misproduce bodies in this way. What if AI systems are not something which people who want to resist them have to unlearn, but what if they are in themselves already vast systems of unlearning and de-skilling, and basically of making human contribution, human thought, and human labor as superfluous as possible?

So that would be the second question, for you to comment on: the idea of unlearning, and the idea that AI systems are already systems of unlearning. We don't need to unlearn them; they are doing it for us anyway.

Well, I don't like this linear idea of progress. I don't think there is now one AI system that will make the other one obsolete, right? Because I do think that the AI systems that work on identification and surveillance will keep being there. So we will have both systems. And in terms of the first question, of whether at this point these algorithms are already trained on our faces and don't need to be trained anymore: I think they keep being trained, right? We keep being the source, all the time, for the training and the perfecting of these systems. And in terms of the human being obsolete at this point, the worker being obsolete: it's partially true and partially not true, because now there is this new labor that is called the labor of the humanizer, right? Which is also interesting, because the humanizer can be both a software and a human, because now we start to have this problem of synthetic data, right?

So there is the production of synthetic data, which are produced by a generative AI system and then fed back into these systems, to the point where there is this kind of impoverishment of the data. So then you do have the need for this new figure, the humanizer, who goes back and inputs new human data to avoid the problem of the impoverishment of the synthetic data. So that is, more or less, my answer to the first question. And the second one, about the glitch, about whether these algorithms are already unlearning: it might be, it might be. And I'm not, again, looking at glitches as political possibilities, right? To me it is more about trying to understand the kind of episteme that is coming to an end. And of course we are not able to recognize it, right, because we are part of the episteme itself, so it's difficult to recognize what is coming to an end. But I'm wondering if this idea of strict categories, of order and distinction, is coming to an end, for the worse or for the good.
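
As an aside, the "impoverishment" described here is often discussed in the machine learning literature as model collapse. Below is a toy sketch of that feedback loop in Python, my own illustration rather than any production pipeline: a caricature "model" fits a Gaussian to its training data, samples preferentially near its own average, and each generation trains only on the previous generation's synthetic output.

```python
# Toy, hypothetical illustration of the "impoverishment" loop: each
# generation of a model is trained only on samples drawn from the previous
# generation's output. Because the model favours its most probable outputs
# (samples within one standard deviation of the mean), diversity shrinks.
import random
import statistics

def train_and_sample(data, n=1000):
    """Fit a Gaussian 'model' to data, then sample from it, keeping only
    probable outputs (within one standard deviation of the mean)."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    samples = []
    while len(samples) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= sigma:  # the model prefers its own average
            samples.append(x)
    return samples

data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # "human" data
for gen in range(6):
    print(f"gen {gen}: std = {statistics.stdev(data):.3f}")
    data = train_and_sample(data)  # next gen trains on synthetic data only
```

Run for a few generations, the printed standard deviation shrinks toward zero: a simplified picture of data trained on its own outputs losing diversity, hence the need for fresh human input.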

And I wonder if this is exactly what these generative systems are doing, for the bad and for the good.

Okay. So, questions, please, from the audience.

Thank you for the talk. I have a question about the work of a click worker, or maybe the automated process now. So, if a truth is being constructed through, let's say, the work of clicking, is it possible even to disobey this process, since wrong answers, or answers that differ too much from the average, are being sorted out either way, and you're getting paid to follow a certain set of guidelines? Is it possible to disobey when you are the click worker, or does it even make that much of a difference now, if the same process is being automated?

No, I don't think it's possible to disobey, because you're not really a worker there. You're like a bucket of time, a quick answer to a yes or no, a matching of an image with a concept, right? So somehow you're also becoming an average, right? Whatever deviates, the system is already designed to exclude.

So, I mean, it's probably as in politics, right? Either we disobey altogether, at least a lot of us, or the single act of disobeying will not change too much. Sorry to be negative. No, but it's true. But that is also, you know... at one point, when I was cleaning this data, I was like: what if I start to just change all these categories? What if I invent new categories, even categories we don't need? And we do, I mean, we do change things little by little. We do lie all the time when we give our information on the internet. But how that can then loop back into a change of these kinds of systems, I'm not sure.

Thank you. I have a question on the question of automating labor, and also automating creative labor. In a way, when we started to talk about creative industries in the 1990s, it kind of prefigured this notion of industrial automation. And I think that's a very important thing. And so I wonder whether you see this as a continuation, where first manual labor was automated and then other forms of labor, cognitive labor, however you want to call it, were automated.

Or whether you see, in the way this is being done, a real break in that logic.

Yeah, exactly. I mean, I'm thinking now, for example, of workerism and autonomia in Italy in the 70s, right? Because they also imagined that machines and automation could be part of a political project, right? Machines would do the work that we don't want to do, and then we could devote our time to more creative endeavors and also to free time: to just spend time, to socialize, to think, to get together, to become a we in different moments. But that needs to become a political choice: what we can automate and what we cannot automate.

Yeah, I have the feeling that what's being automated right now is already the leisure time, right? The machines are already doing the leisure part for us. They are making the art for us, they are enjoying life for us, et cetera. So that's the part that's being automated right now. Not the labor, that's still for the humans. The art, that's for the machines now.

Exactly. Yes, please.

Thank you for the presentation. I was really interested in what you were saying about wanting to draw attention to the aspects of our lives that are not computable, that fall outside the bounds.

I found it really interesting how you demonstrated this in the presentation, both from your experience working as a click worker and by tracing these histories. I'm really interested in how you see that element in your practice: how your experience of work translates into practice, or, when you're creating a work, how the experience you're trying to give differs from, say, a presentation. I guess: how does drawing attention look in practice versus how it looks in a presentation, if that makes sense?

Okay. Yeah. I mean, to me, the statement that now everything can be extracted is important, because I also think that... I mean, I just don't think that it's true, right? And I also think that the solution is not necessarily in the future; it is in the present. We are already doing things all the time that are not fully extractable. And in terms of my own practice, I have to say that I had to take a break a little bit.

I had to take a break from my investment in technology. So the project that I did after this one, and that I am working on at present, is a project about the Inquisition of the 15th century, in which I try to reactivate the tales of the donne di fora, this Sicilian mythology of women who are part human, part animal, part feminine, part masculine. Anyway, it is a queer mythology that was important for me growing up, and it is one of those mythologies that I think it is important to recirculate into the world. So I guess in this period I'm doing work that is maybe more about regeneration. That is, I'm bringing dismissed forms of knowledge, desire, and stories back into circulation, believing that that could do something.

Yeah, it's interesting how the Inquisition work almost comes as a break, you know, from dealing with AI.

Thank you very much for your presentation. This is probably more like a question to a question, because you raised this question about physiognomy and emotional transparency finding a resurgence in biometric AI research.

So I think it is a really important question, and I'm really interested in these AI recognition systems. I find it super interesting. Of course, for me, it seems that this works as a feedback system: for robots, kind of giving feedback on a certain emotional state of a face, maybe for a human. But I also find it interesting that, for example, on Facebook the thumb kind of changed; in the meantime, you have several different possibilities of choosing a certain emotional state, you can respond with a heart or whatever. Which is kind of a mass feedback, basically, giving the company, in this case Facebook, or Meta, an overview of a state of emotion as a crowd system, showing that, in a certain area, or because of a certain political reality even, the company

can actually measure emotional feedback. And I wanted to, I don't know, maybe ask you to respond to this. What do you think? Because I thought you had raised this question, but maybe I didn't follow enough what you thought about it: emotional feedback.

Yeah. I mean, I wanted to underline the relation between physiognomy, those experiments of the 19th century, and emotion recognition today, to underline how a lot of these systems are based on this oversimplification. And then those kinds of questions: why are they using this pseudoscience? It is because it is made of calculation. It's something that is calculable; it can be reduced and transformed into mathematics. And because of that, the industry doesn't pay any attention to the fact that it is using these crazy theories, because they are theories based on reduction and mathematics. Then, in terms of the other point... sorry, also because today is election day in the United States, and I'm based there.

And I'm thinking about how even politics right now is just reduced into affect. When Kamala Harris entered politics, it was understood as joy, you know. Everybody was just talking about the affect that she's creating. There is no program, there is just affect: there is hate and there is joy. So it's becoming this kind of emoji politics, right? In which the rating, the trying to understand what the emotions and the affects of the population are, seems to be what politics is about. So I don't know if this is a good answer to your question, but it is what is on my mind, also because today is election day in the United States.

I just thought it interesting that you brought up this 19th-century image of the electric impulse. Because basically, as Shoshana Zuboff, I think, argues, there is this question of changing behavior, of how the systems we use are changing our behavior. And I think this is probably the wish to come as close as possible to the skin or to the muscle, you know.

To change the behavior in such a way that it electrically, electronically, basically changes your affect. Stupid fun fact, an anecdote: a colleague of mine, Trevor Paglen, once made a work where he was trying to diagnose emotion via some emotion analysis and detection system which he had downloaded at that point in time; it was many years ago, and it didn't work well. He tried with his friends first, who were all programmers, and he told them: look happy now, look sad, look excited. So he had to ask me to do it. And I was only capable because my daughter at that point in time was still very small, so I was used to making all sorts of exaggerated faces, you know. And the machine was able to read it. But this was the inversion, right? The machine was eliciting the behavior from me, the affect, or the performed affect, the emotion. It was not a real affect. But on the other hand, it is also, I think, completely naive or pointless to just hope for the possibility that those systems might not be able to recognize emotions, because they improve at such breathtaking speed.

You know, and there are so many data points. Last week we saw a great work in Zurich: a woman made a work about the many ways in which a phone tries to extract from women the dates of their ovulation and period. There are so many proxies through which you can do it, and there is so much biopolitical knowledge basically extracted by phones now, that emotion is easy by now, I guess.

I mean, one thing is this fundamental question of whether there is a relation between your emotion and your face. But in a way, it doesn't really matter, because these things are so normative. So if you want to make a video that can be read by the algorithm, you better use these expressions. And you can see that in TikTok videos, in YouTube videos, on and on and on: these exaggerated faces that you used to make for babies, or all of that. What is happy, what is sad: this is being used, right? So these are performative, not just in the sense that they can be encoded, but they also set the boundaries within which people have to act.

And they know that, and they act accordingly. So these things, even if they might be absurd, or might be scientifically questionable, they become real. That's what I also meant before: first it's there, and then it's made real.

I was wondering, both in your presentation now and also in Felix's presentation before, when talking about these ongoing simplifications and this putting of stuff into categories: it often sounds as if this is something that comes from the technology and its need for computability. But then, if we look at the political mirror and the political times we're going through, I'm actually wondering: is this just feeding a need that was there before, politically, where people are looking for simpler systems, simpler worlds, for being able to categorize people, being able to tell people how to fit into one of these boxes? So I wonder how you see this relationship between ongoing political changes that tend toward more categorization, more clear-cut wanting to put people in a box, and the technological needs, where computation is often easier if you can label things specifically.

Can you repeat the last part?

So, it is the relationship between the politics of categorization,

as a form of control and order, like creating strict categories through which we can reorganize society, and the computational and technical need for categories. Like back when we had only a few classifiers, when you would need, as you were saying, to label things, you could only select, let's say, the 80 categories of COCO, or this and that. So there was this need to have specific, uniquely identifiable properties for computation to be more efficient. And what I meant with politics is maybe not only the urge to control, but also this longing for the simple and graspable, where a lot of these reactionary politics go back to stereotypes of the 19th century, basically, because people want to have these easily graspable worlds, with good and bad, and men and women, and all these things that are easily categorizable and binary on some level.

I mean, I think... maybe it's a question for both of you, I guess; I was not sure if you also wanted to answer. But something I will say quickly, of course, is that these technological systems and the politics we have been living with for a long time

Are going in the same direction of like order and not just categories but hierarchical categories. Because these technologies that we are using are coming from a specific point of view. From specific values. From like a specific history. And that's why I finished by thinking and is now fully solved in my mind. But I wanted to talk a little bit about what. For example Denise Ferreira da Silva is talking about in terms of like how do we do without this like deadly distinction. And later reordering devices. And the way in which she's looking at it is just like even like thinking about. Maybe we should start to think about physics in a different way. Like we need like the kind of like scientific knowledge to abandon even like certain scientific knowledge of modernity. Right? So. They do the same thing because they are coming from the same world view. And I think it is time to try to reimagine what instrumentalism could do. If we start to think it from a different set of values. Which are not those of order and hierarchical hierarchies. Okay. Thanks a lot. We will release you now from our conference inquisition.

You must be super jet-lagged. Thanks a lot. Thank you very much. Thank you.