

The work of art in the age of the synthetic reproduction of its aura

Stefan Peetri (3/2022)

Stefan Peetri's essay on artificial intelligence and deepfakes in contemporary visual culture.

"…the imaginary power and wealth of the double – the one in which the strangeness and at the same time the intimacy of the subject to itself are played out (heimlich/unheimlich) – rests on its immateriality, on the fact that it is and remains a phantasm. Everyone can dream, and must have dreamed his whole life, of a perfect duplication or multiplication of his being, but such copies only have the power of dreams, and are destroyed when one attempts to force the dream into the real." – Jean Baudrillard1

Art history is one of the few fields that has treated copies as forgeries. It is hard to find institutions and individuals who care more about the authenticity and origin of art objects than art museums, insurance companies and art collectors. Paradoxically, however, we live in a culture in which people are increasingly expected to prove their authenticity and origin to machines: through biometric mechanisms such as facial recognition, "I'm not a robot" captchas, electronic identification applications and the like. The paradox lies in the fact that as humans, we are expected to authenticate ourselves to systems and machines designed to produce copies and clones.

In 1935, Walter Benjamin published his best-known essay "The Work of Art in the Age of Mechanical Reproduction" (Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit), which was mainly driven by a political agenda: the introduction of new serial mass technologies was seen as a democratic and communist advancement. While the political ideas of the text are by now outdated, Benjamin predicted that with film and photography, mass-reproduced works of art would lose their relationship to the original. Film and photography were, after all, created with reproduction in mind.

Yesterday, for example, I went down to the market to buy some potatoes and vegetables with a plastic bag that carried the image of Jan Vermeer's "Girl with a Pearl Earring" (Meisje met de parel, 1665). To what extent is the unique "aura" of a work of art preserved in a plastic potato sack? For Benjamin, the "aura" of a work of art captivated us with its sublimity, with its unique here-and-now moment that was tied to a particular location, such as the Basilica of St. Francis of Assisi. Of course, a work of art could be transported, but it never physically appeared in two or more places at once.

The digital reproduction of artworks with the use of artificial intelligence (AI) or synthetic media turns Benjamin's idea of mechanical reproduction and the loss of the aura of a work of art on its head. Algorithms and digital platforms are increasingly personal (echo chambers with a sacred connection to the church where one is alone with God), making it possible to mass-reproduce authenticity and aura. This is not the authenticity and aura of Benjamin, but a machined authenticity, a synthetic aura. Today, we see an important shift in visual culture, as internet users increasingly cooperate with AI (for example, Midjourney or DALL-E 2) to produce new imagery endlessly.

Below, I would like to discuss deepfakes as one of the most striking examples of the reproduction of synthetic authenticity and aura in the "post-truth" era. Deepfakes anticipated the current boom in post-anthropocentric visual culture. To understand this, let us embark on a journey through surrealism, computers, algorithms, surveillance structures, pornography, pareidolia and technoanimism, to see how deepfakes emerged through art history on the one hand, and through technologies used to control people on the other.


A brief history of deepfakes

Forging schemes rely on authenticity schemes. Umberto Eco has regarded the sign as the thing that makes lying possible. He claimed that "to prove that a fake is a fake, it is necessary to provide proof of authenticity for the presumptive original" – therefore, "saying something false is an alethic problem, meaning a problem related to the notion of aletheia – that is, truth".2

Oxford Dictionaries selected "post-truth" as the 2016 word of the year. In the same year, Microsoft used deep learning on Rembrandt's works, in turn deepfaking new "originals". The term "deepfakes" itself came into use a year later, when the Reddit user DeepFakes used this same technology in pornographic contexts, attaching the faces of women celebrities like Gal Gadot, Emma Watson, Katy Perry or Taylor Swift to the bodies of various porn stars and posting the results on a Reddit forum. This led to the emergence of various forms of revenge porn, such as swapping the faces of ex-girlfriends onto the bodies of porn stars. The following year, US actor and director Jordan Peele published a deepfaked video of Barack Obama cussing at Donald Trump, meant as a warning about the ethical and moral dilemmas presented by new technologies. By 2019, a total of 14,000 deepfakes had been counted online, 98% of which were pornography and all of which involved the sexual objectification of women.

My aim here is to focus on the use of deepfake technology that contrasts with its dystopian potential. In other words, on works of art that rely on the same technology while managing to criticise post-truth epistemology and explore the cyborg nature of images. In one of his letters to Benjamin, Bertolt Brecht said that instead of dealing with what is old and good, we should deal with what is new and bad. Benjamin compared the cameraman to a surgeon who loses his natural distance from the world by penetrating "deep into the subject's tissue",3 hacking into her, zooming in until she is finally free of her shell. Deepfake technology has less to do with liberation and more with transformation, the (im)perfect cloning of someone, claustrophobic rather than emancipatory.

In the context of today's algorithmic or AI art, the concept of deepfakes can have a debilitating effect. In Mandarin, for example, deepfakes are known as huanlian ("changing faces"). This expands the post-anthropocentric principle of this technology and bears no traces of the moral and ethical techno-panic that the word "deepfake" has evoked. Transformation also helps to emphasise the process-driven nature of algorithmic or automated art practices, which move beyond the anthropocentric fetishism of authenticity and essence or substance that is characteristic of Western philosophy and that also links the artist to the idea of the romantic genius. More precisely, transformation, as the processual rethinking of the human body using high-tech computational techniques, could help us find a more current language for dealing with the digital representation of objects and forms of representation.

Although deepfakes were first used to produce pornography that degrades women, or straightforward techno-panic, as in Peele's video, the artist Josh Kline had already released his video installation "Crying Games" (2015) a few years earlier, in which he digitally rendered the faces of the leading figures of the Iraq War (George W. Bush, Tony Blair, Dick Cheney, Donald Rumsfeld, Condoleezza Rice) onto the bodies of actors. In the video, they appeared, like prisoners clad in grey, tearfully regretting their actions, telling the viewers how sorry they were. "Illusion is a weapon",4 writer William S. Burroughs wrote in 1970, and Kline seems to have understood this well, using the possibilities of the new simulacra to criticise the apologists for the Iraq War. Deepfakes are the perfect simulacral weapon that, even if failing as forgeries, produce an affective reaction because of their personal connections to real people. Appearing authentic in spite of being fakes.

Or were the wartime speeches of the real Bush or the real Blair somehow more "authentic"? In the shadow of the current war and genocide against Ukraine, we have seen deepfakes from both sides of the front, and watching the "real" Vladimir Putin on the screen is not unlike watching Kline's video. Humiliating revenge porn, political propaganda and digitally "rejuvenated" Hollywood actors represent the ambivalent relationship of modern visual culture with animātiō (Latin for "bestowing of life"), which is reminiscent of Victor Frankenstein's initial sense of fascination and horror (heimlich/unheimlich) toward the creature he gave life to.


Technoanimism as a high-tech aura

Ever since the 1950s, computer science and engineering have given rise to new movements in art: algorithmic art, digital and crypto art, bio art, interactive art, AI art or deep transformation art, post-internet art, etc. Nam June Paik, an artist associated with Fluxus, wrote in 1969 how he would like to shape the TV screen canvas as precisely as Leonardo da Vinci, as freely as Pablo Picasso, as colourfully as Pierre-Auguste Renoir, as profoundly as Piet Mondrian, as violently as Jackson Pollock and as lyrically as Jasper Johns. Often considered the first example of an AI artwork, AARON (1972–2016) was a series of computer programmes developed by the engineer and artist Harold Cohen; their more than 40-year collaboration amounted to four decades of techno-animist creative interaction producing algorithmic art.

Video artist Mark Leckey has said: "The more computed our environment becomes, the further back it returns us to our primitive past, [---] back to an animistic world view where everything has a spirit, rocks and lions and men. [---] The other thing that fascinates me is that the networks and devices we all use are written and produced by these very logical, mathematical processes – algorithms assembled by autists – which then generate the undisciplined and voluptuous excesses of the digital realm, whether it be video or music. Something vital and mortal emerges from something as cold and lifeless as code."5

Deepfakes are a phenomenon consisting of inanimate algorithms, cultural constructs, human bodies and other objects – a phenomenon known in philosophy as emergence: a cyborg image that uses the human body and digital technologies for its growth, but is not reduced to them. What Leckey is saying is that digital objects establish aura in a post-anthropocentric way, creating their hic et nunc relationship through metadata and network relationships. Digital copies are part of the original's cultural journey through a collective visual culture. The authenticity of digital objects lies precisely in their transformability. Memes and other imitations of digital objects add new layers to the original, equipping it with a kind of biography and geography, and the viewer experiences the uniqueness of each transformation, which leaves no room for identical copies.

As an example, let us think of Katja Novitskova's work. The artist treats images like quantum physicists treat matter: the Cartesian distinction between representation/reality and nature/culture disappears; in other words, there is a self-organising principle that exists in matter, and for Novitskova, this also exists in the visual culture that we produce. In her work, the artificial world of computers, with its brands, algorithms, androids and data sets, is as involved in creating our environment as stones, viruses, glaciers or tigers.


How do machines see us?

The archetype of machine learning, machine vision and synthetic media was built in 1957 at the Cornell Aeronautical Laboratory in Buffalo, New York, as a pattern recognition system. Called the Perceptron by its creator, Frank Rosenblatt, it was the first synthetic model to link together perception and vision. This also made it the first form of artificial vision – one of the most important forms of vision in today's society. It is used in the smartphones we all carry in our pockets, enabling functions like focusing or zooming. It is present in security cameras, automated production lines and missile defence systems; it is used in live broadcasting, etc.
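The Perceptron's learning procedure is simple enough to sketch in a few lines: a weighted sum of inputs is thresholded into a binary "percept", and the weights are nudged toward every example the model misclassifies. The following toy sketch (its data and parameters are invented for illustration, not drawn from Rosenblatt's hardware) shows the rule recognising a trivial pattern:

```python
# A minimal perceptron in the spirit of Rosenblatt's 1957 model:
# threshold a weighted sum of inputs, and learn only from mistakes.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature tuples; labels: +1 or -1."""
    w = [0.0] * len(samples[0])  # one weight per input feature
    b = 0.0                      # bias (threshold) term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if pred != y:  # nudge weights toward the misclassified example
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# A trivially "recognisable" pattern: fire only when both inputs are on.
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(data, labels)
```

For linearly separable patterns like this, the rule is guaranteed to converge; its inability to learn anything more entangled is precisely what made later, deeper networks necessary.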

In his book "The Vision Machine" (1994), philosopher Paul Virilio has called the Perceptron a historical turning point, marking a breakthrough in the autonomy, the "aura" of human perception – an invasion of human vision.6 It is worth mentioning that Virilio originally trained as a painter under Henri Matisse. This is similar to how Benjamin saw photography and film replacing the distance inherent in painting nearly 50 years earlier, only for Virilio, artificial vision and synthetic media have a much more far-reaching effect. If information can be transmitted from any point, concepts such as proximity, distance or range of vision lose their meaning. The eye is no longer confined, i.e. the object and subject become a matter of touch. For Virilio and Benjamin, it is the dimension of space or horizons that disappear, while deepfakes also "liberate" a person from their tissue.

Artificial vision also represents an important transformation of visual culture for photographer and theorist Trevor Paglen, namely in the ways of seeing. Paglen acknowledges that the mass production of images by machines and for machines results in "invisible images" that constrict the functioning of public society.7 Machines now take more pictures than humans, and they don't do it for humans, but mostly for other machines. Since 2016, machines have produced more images than all of human culture up to that point in time. The human eye no longer has the last word in confirming the authenticity, content and origin of culture. From now on, the human intellect will have to rely on synthetic intellects to distinguish between originals and copies. Herein lies Paglen's Frankensteinian moment: he is both horrified and fascinated by this new technology.

Paglen has delved deep into artificial vision in his hypnotically fast-paced video "Behold These Glorious Times!" (2017). The video was made using artificial neural networks and images from ImageNet, the world's largest visual database. It shows things, animals and people. Paglen's video aims to demonstrate how artificial neural networks learn to "see". In the video, we see how the images are broken down through each level of the artificial neural network, becoming increasingly abstract until all that is left are black and white shapes reminiscent of abstract realism. We see how artificial vision contrasts with human vision, we are invited to acknowledge this post-anthropocentric condition or "machine realism"8 that contemporary visual culture is all about. It's a kind of black box, which is amusing, if not horrifying, because even Google programmers or artists working with machine learning are unable to explain exactly how neural networks create their output.

In many ways, this is reminiscent of Gilles Deleuze's take on the mechanical gaze of the camera, which is able to overcome the empirical limitations of the human gaze. As new ways of seeing, machine and computer vision should be regarded as species belonging to the phylum of this original machine vision, as representatives of the evolution of vision – the perception of photons – that began with photosynthesis. For Deleuze, the camera was able to reduce time to an inhuman "speed", such as the one we see in Étienne-Jules Marey's chronophotography or slow-motion films. This subjected the human body to a new kind of gaze, and time to the multiplicity and reproducibility of montage, making it possible to experience reality in an increasingly abstract and surreal way.


"A rose with any other face" – the face as a system of authenticity and aura

With deepfakes, which are undetectable to human perception, we see, for example, a flickering lip, a glimmer of sweat, a wrinkled forehead, a reddened face, just as we might see them in a real photo or video. While the people we see are not real, they seem real to us. A good example is the website that produces a portrait of a new "real" person every time you hit the refresh button. The authenticity of a deepfake, its aura, is in its face, with its dynamic micro-details that are the hardest to reproduce through animation. The face is the aura that epitomises the uniqueness of various people: the genius of actors, the superficial beauty of divas, or their depth – as in Andy Warhol's silkscreens of Marilyn Monroe – or celestial grace, as in early Renaissance paintings of saints.







The surreal qualities of faces that have been rendered through psychedelic filters can be experienced with Google's Deep Dream Generator, an algorithmic image converter that uses software that preceded deepfakes. Deep Dream, which for some reason started dreaming up dog faces in the images uploaded by humans, was trained using the ImageNet visual database. Yet even the creator of the image generator, Alexander Mordvintsev, could not explain why the machine began "seeing" dog faces.
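Mechanically, Deep Dream runs gradient ascent on the input image itself: instead of adjusting the network to fit the image, it adjusts the image until a chosen layer of the network responds more strongly, so the picture drifts toward whatever pattern – dog faces, in Mordvintsev's case – that layer has learned to detect. A framework-free toy sketch of this idea, with an invented linear "detector" standing in for a real convolutional layer:

```python
# Toy sketch of Deep Dream's core move: gradient ascent on the INPUT.
# The "detector" below is an arbitrary template, not a trained network;
# amplifying its response pulls the image toward the detector's pattern.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def amplify(image, detector, steps=50, lr=0.1):
    """Maximise the activation a = detector . image by ascending d(a^2)/d(image)."""
    img = list(image)
    for _ in range(steps):
        a = dot(detector, img)
        grad = [2 * a * d for d in detector]  # gradient of a^2 w.r.t. each pixel
        img = [p + lr * g for p, g in zip(img, grad)]
    return img

detector = [0.5, -0.2, 0.3, 0.1]   # stand-in for a learned "dog-face" feature
image = [0.1, 0.1, 0.1, 0.1]       # stand-in for an uploaded photo
dreamed = amplify(image, detector)
```

The opacity the essay describes enters one level up: in a deep network the "detector" is itself the product of millions of trained parameters, so while the ascent procedure is transparent, what any given layer has actually learned to want is not.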

Art theorist Raivo Kelomees has pointed to the connection between surrealism and art created with AI.9 He refers to surrealist techniques such as frottage and grattage. The first of these was Max Ernst's method of rubbing the canvas against a textured surface to produce random patterns and landscapes, while the second involved scraping away thick layers of paint to generate chaotic forms. In part, this is a case of pareidolia; that is, the natural inclination to perceive meaningful images in random patterns.

Eyes and faces permeate Ernst's paintings in a duality of horror and psychedelia similar to some of the outputs of Deep Dream. Let us look at his "The Temptation of Saint Anthony" (1945). The resemblance to the evolutionary tendency of humans to see faces in power sockets, car panels, tree bark, coffee grounds, etc is striking. In a way, surrealism was the first art form of the "optical unconscious",10 or an attempt to express the emergentism of the unconscious through random patterns and shapes produced through automated artistic or writing techniques. Deepfakes are the apotheosis of such pareidolic culture. Faces as one of the main claustrophobic hidden hands of the unconscious.

Gilles Deleuze and Félix Guattari have considered social constructs, such as the human face, as abstract machines that produce binaries. Let us think of small villages where, in contrast to the anonymous big city, it is common to greet people you see on the street. In part, this practice serves as a collective social facial recognition system that produces binaries: outsider and insider, enemy and ally, male and female, human and animal, black and white, etc. Sometimes seen as the father of facial recognition, Cesare Lombroso became known for creating the first standardised techniques for recognising criminal facial features in humans. These techniques, it turned out, were deeply eugenic: by privileging facial features believed to represent a "more highly developed intellect", it became possible to discriminate against faces considered either primitive or degenerate.

Facial recognition is closely related to deepfaking, as both of these reflect the "face" of the prevailing cultural and political power. Let us think about the social stigmas that afflict those wanting to either hide their faces (e.g. the ban on wearing hijabs in the West) or disfigure them (various piercing and tattoo cultures). They are dangerous to the regime because their faces lose their Western and Christian uniqueness. It is possible that without this facial fetish (e.g. as an expression of desire or suffering in pornography, or as an expression of the soul, as Christianity depicted in Jesus Christ), artificial vision or facial transformation software that identifies, reconstructs and clones faces would not have such an important role. After all, the uniqueness and authenticity of a person is reproduced through their face.

At the same time, these face-hacking machines also present the opportunity to reverse this facial fetish, to make faces inhuman and unrecognisable, using the alienating gaze of artificial vision and the grotesque. As, for example, the artist duo Shinseungback Kimyonghun managed with their painting "Nonfacial Portrait" (2018), which was made by distorting a face until artificial vision programmes could no longer recognise it as one.

In Gillian Wearing's video "Wearing Gillian" (2018), we see the artist constantly shifting from one body to the next in an exploration of the liminal space created between humans and machines by deepfake technology. To create the piece, the artist asked random strangers online to lend their bodies to her face. The plasticity of identity that can be seen in the video evokes a sense of estrangement. "Watching me being me alienates me from me, and I do not recognise myself" – this is how Wearing describes her experience in the video. The mechanical nature of the face as a mirror of the soul is highlighted as an object of recognition in the power and control matrix, much like a QR code in smartphone software. The self-simulacrum raises questions about how unique or authentic this real self really is and who controls it.

The interactive media installation "TRUST AI" (2020) by Bernd Lintermann and Florian Hertweck explores this topic. The installation features a holographic image of a human face that looks at the viewer as it learns to reproduce their face in real time using a built-in facial recognition system. In other words, the AI – the simulacrum of the stranger with whom the viewer initially established contact – is ultimately the viewer himself. The work aimed to demonstrate the ability of the facial recognition mechanism to hijack human identity.

As Deleuze and Guattari note, we "don't so much have a face as slide into one"11 – like a USB stick sliding into a machine. In considering deepfaked faces, perhaps it would be more fitting to think of art created using synthetic media as "synth-reality" (a synthesis or a face of the synthetic and the real, like electronic music that is subject to the endless transformations of a synthesizer), rather than surrealism.


Forgery or transformation: the paradox of post-anthropocentric art

Alas, it is impossible to return to the time before deepfakes, just like it is impossible to return to a time before the internet or digital technologies, because the mass adoption of such technologies requires a fundamental change in the fabric and the technological foundation of society. You can't put the machine back into the bottle.

Following Karen Barad's quantum physics-steeped philosophy, Sabine Himmelsbach has called this "entangled reality".12 We can think of this entanglement as the interdependence of humans and machines. In other words, how artificial and synthetic intelligence need humans to expand their reach, and the other way around. In a "post-truth" world, representation and reality do not just become discussion points within a framework of absolute relativism. Rather, they call for a focus on how the various human and non-human factors are interwoven in our society.

The modern networked and globalised culture of simulacra, where the lines between science fiction and social reality appear as an optical illusion, dilutes binaries, creating "leaky distinctions" – first between "animal and human", then between "animal-human (organism) and machine" and finally between "physical and non-physical".13 This famous definition of the cyborg by Donna Haraway is compelling, as deepfakes embody all three transformations listed above. As embodiments of pure energy, living pixels, individuals on the electromagnetic spectrum, they are cyborg images that herald a new synthetic aura, a new (artificial) viewer (emancipated or enslaved?) and a new aesthetic sense. Let us wait for the time when the pixel becomes an embryonic cell that will bring forth a new organism.

Perhaps one conclusion we can draw from the deepfake phenomenon and its potential for modern culture more broadly is that artificial intelligence is not a creative or conscious being. It would also be damaging (and would ultimately grant all the power in this area to internet trolls like DeepFakes) to withdraw from the challenge posed by deepfakes altogether. It is worth remembering that people are not much more than fairly predictable mediators of images and texts who follow prescribed cultural and social algorithms: gender, national, ethnic, racial, consumerist, etc. Deepfakes or transformations allow us to take a detached glimpse at humanity as it lives with its techno-addictions in the midst of ecocide, but from an estranged position, with a synthetic aura instead of a human one. As Baudrillard suggested – the double elicits a strangeness in that which is intimate.


1 Jean Baudrillard, Simulacra and Simulation. Ann Arbor: University of Michigan Press, 1995, p 95.

2 Umberto Eco, On the Shoulders of Giants. Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 2019, pp 171–190.

3 Walter Benjamin, The Work of Art in the Age of Mechanical Reproduction. Ed. Hannah Arendt, Illuminations: Essays and Reflections. New York: Schocken, 1999 [1969], pp 217–252.

4 William S. Burroughs, Electronic Revolution. Ubu Classics: 2005 [1970], p 12.

5 See:

6 Paul Virilio, The Vision Machine. Bloomington & Indianapolis: Indiana University Press, 1994 [1988], p 70.

7 Trevor Paglen, Nähtamatud kujutised. – Väike fototeooria lugemik: Produktiivsest teadvusest reproduktiivsete kujutisteni. Eds. Neeme Lopp and Marge Monko. Tallinn: Eesti Kunstiakadeemia Kirjastus, 2021, pp 345–356.

8 Ibid.

9 Raivo Kelomees, Concept Transference in Art and AI. – The Meaning of Creativity in the Age of AI. Eds. Raivo Kelomees, Varvara Guljajeva, Oliver Laas. Tallinn: Eesti Kunstiakadeemia Kirjastus, 2022, pp 106–122.

10 Rosalind Krauss, The Optical Unconscious. Cambridge, Massachusetts: MIT Press, 1993, p 53.

11 Gilles Deleuze, Félix Guattari, A Thousand Plateaus. Minneapolis: University of Minnesota Press, 2005 [1980], p 177.

12 See: Sabine Himmelsbach, Entangled Realities. How Artificial Intelligence is Shaping the World. – The Meaning of Creativity in the Age of AI. Eds. Raivo Kelomees, Varvara Guljajeva, Oliver Laas. Tallinn: Eesti Kunstiakadeemia Kirjastus, 2022.

13 Donna J. Haraway, Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991, pp 151–153.


Stefan Peetri is a cultural critic who explores the intersections of pop culture, cybernetics and contemporary art. He has studied philosophy and cultural studies at Tallinn University.
