8 November 2019
Can humanity withstand the assault of deepfakes?
Peter Isackson

In early 2019, Foreign Affairs took a detailed look at the social and potentially political implications of the emergence of a phenomenon called "deepfakes," the result of recent advances in digital technology. The article describes the kind of manipulation of images and sound that now makes it possible, starting from disparate components of reality, to fabricate convincingly realistic documents that appear entirely natural. It is the perfect device for creating and maintaining an increasingly hyperreal world.

The article provides readers with the basic scientific principles behind the technology, which combines artificial intelligence, big data and the digital manipulation of image and sound. The authors present deepfake's practically limitless power of hyperrealistic simulation. Its emergence and future exploitation may require amending the traditional proverb to: "Fool me once, shame on you. Fool me twice…" and we can both be sure it's deepfake technology that has pulled off the trick.

The article explores in considerable depth the potential for distortion and manipulation. It goes on to speculate at length about the measures that may eventually serve to combat its effects or neutralize its insidious power. The authors reach a largely pessimistic conclusion: There is no easily implementable defense.

Deepfake means that what everyone now recognizes as the contamination of our media — and particularly our social media — by “fake news” has taken one enormous and worrying step further. Unlike fake news, deepfake doesn’t just appear in the form of narrated information, which usually boils down to the biased interpretation of reported facts. Instead, it imposes itself as a substitute reality in the form of a far more convincing presentation of “alternative facts” than US President Donald Trump’s and other politicians’ brazen lies. It’s more than words. It resembles concrete reality but may be entirely imaginary.

The authors of the article, Robert Chesney and Danielle Citron, worry about its use in politics, a domain in which truth already has some difficulty in making itself known: “Deepfakes have the potential to be especially destructive because they are arriving at a time when it already is becoming harder to separate fact from fiction.”

Here is today’s 3D definition:

Deepfake:

The ultimate technological achievement of hyperreality that enshrines both narcissism and its opposite in highly realistic and difficult-to-detect digital manipulations of audio or video

Contextual Note

The last 10 words of our definition are borrowed directly from the Foreign Affairs article, which defines deepfake in these terms: "highly realistic and difficult-to-detect digital manipulations of audio or video." Chesney and Citron describe the phenomenon of deepfakes as a direct result of the encounter between sophisticated digital technology and social media. That combination promises to amplify the already worrying potential of social media to validate false ideas and encourage the behaviors associated with the dissemination and consumption of fake news.

The article stops short of analyzing the central component of social media culture that makes fake news not only effective but also inevitable: the narcissism that motivates people to use these media in the first place. Social media works like a two-way mirror. First, it projects each individual's image outward for maximum display, exposure and hoped-for feedback. At the same time, it serves to fix and confirm the values associated with that artificially constructed self-image.

One of the solutions that Chesney and Citron propose to neutralize the worst political effects of deepfakes is what they call "lifelogging," which they define as the "practice of recording nearly every aspect of one's life." This would enable politicians to create a complete log of all their activities and thus to call out and contradict deepfakes of their actions and words. The authors even imagine commercial services delivering and managing a person's lifelog. They appropriately dismiss the idea as unfeasible because it would inevitably be perceived as "invasive," leading to the creation of "a massive peer-to-peer surveillance network." As if that weren't what the commercial internet has already become!

This nevertheless illustrates how easy it has become, even in our speculation, to slide into the acceptance of a form of hyperreality that replaces our perception of reality. In a society dominated by hyperreality, the spontaneity of our social existence exists only to be captured and codified in the form of big data. The image produced by the data and the logical (i.e., repetitive) patterns discerned by artificial intelligence end up replacing the dynamics of real social interaction. And we, in turn, end up accepting that as the basis of our social reality.

In other words, and despite the authors' worries concerning political responsibility and decision-making, because the technology exists, attracts users and may thus generate income and profit, deepfakes, some made for entertainment and others for manipulative deception, will be a feature of the world we are now being invited to join. In a technology-dominated society, reality and hyperreality are converging. The boundary between them will disappear, and attempting to separate them will contradict our commitment to a free-market economy. To a great extent, reality and hyperreality have already merged. Deepfakes will simply add a new dimension to the game of illusions.
