Hannah Jane Parkinson 

Let’s get real before deepfake videos corrupt our democracy

As Mark Zuckerberg has discovered, you can make anyone say anything, says Guardian columnist Hannah Jane Parkinson
  
  

A Belgian political party created a deepfake of Donald Trump as a lighthearted way of debating climate breakdown. The clip went viral. Photograph: Alex Edelman/AFP/Getty Images

“Deepfake”: it sounds like the title of a James Bond film. But deepfakes are becoming more dangerous, and frequently more sexually explicit, than anything 007 might get up to. A deepfake (the “deep” comes from machine learning’s “deep learning”) is a computer-manipulated video, and the next frontier of conspiracy theory and fake news.

Lots of us have been ringing the alarm bell for a while over deepfakes, but the nerdy language that explains their invention and evolution has perhaps meant that people haven’t paid attention. But this week an exquisitely realistic deepfake of Facebook’s founder, Mark Zuckerberg, uploaded to Instagram (a Facebook-owned platform), cut through.

The doctored video – a collaboration between artists and an advertising agency for the Sheffield Doc/Fest – plays to Zuckerberg’s increasingly autocratic reputation: “Imagine this for a second,” artificially generated Zuckerberg says. “One man with total control of billions of people’s stolen data, all their secrets, their lives, their futures …”

Unlike previous deepfake iterations, which have been glitchy or badly dubbed, the Zuckerberg one is sophisticated and smooth. And it presented the real Zuckerberg with a dilemma: would the platform remove it?

It has been allowed to remain. Instagram’s reasoning is that the deepfake did not break its content moderation policies. You might say this is a positive for free speech, given that the video portrays Zuckerberg in a negative light. But you could also say it goes against the tech titan’s recent promises to tackle disinformation and fake news online. Which it absolutely does.

Recently, Facebook also allowed a manipulated video of the US house speaker, Nancy Pelosi, to remain online – even though it had been edited to make her appear drunk and had been viewed millions of times. True to the current climate, this fake was shared across further social networks and picked up by dubious online news outlets. Facebook’s mitigation was that at least it hadn’t promoted it, instead pushing it to the bottom of users’ news feeds.

It should go without saying that manipulated videos are extremely dangerous. Originally, the tech was mostly put to use to insert celebrities into porn footage, often disseminated via onanistic Reddit communities. But as the tech involved has vastly improved and become increasingly accessible – there are now deepfake apps that any of us can download – the motivation behind these creations has changed.

Visual manipulation is nothing new; it has always been a powerful form of propaganda or a means to mislead. The most famous “photograph” of Abraham Lincoln – you know the one – is actually a print of Lincoln’s head grafted on to the body of somebody else. Many famous first world war photographs of brave Tommies going over the top were actually shot miles from the frontline, in relative safety.

There were also the pictures of “ghosts” attached to chain emails in the early 00s. But it’s a combination of the intense realism of deepfakes and the tools now at our disposal to share content so rapidly and indiscriminately that makes them a truly terrifying prospect.

It is common enough for people, especially older individuals online (so-called non-digital natives), to be taken in by fabricated tweets, very basic Photoshop editing and sockpuppet sites, or simply to be caught up in the online slipstream that rejects due diligence as an anachronism.

When many of us are already this naive online, these almost-indistinguishable-from-the-real-thing videos – and the higher skill threshold we assume for faking moving images – will certainly lead many astray.

In fact, they already have. Last year a Belgian political party created a deepfake of Donald Trump, which it intended as a light-hearted, attention-grabbing way of debating climate breakdown.

“It is clear from the lip movements that this is not a genuine speech by Trump,” a party spokesperson said. Except it wasn’t to a lot of people. The clip went viral. (With Trump, it does not help that so many of the real things he says are absurd to the point of parody and his enunciation so bizarre that genuine clips can seem computer-generated.)

Another manipulated video, made precisely as a warning about the chaos deepfakes could unleash, was created by the director Jordan Peele alongside Buzzfeed, and showed Barack Obama calling Trump a “dipshit” – before the fake Obama went on to explain that he was, well, fake. (It was uploaded to YouTube under the title: “You Won’t Believe What Obama Says in This Video!”)

So how do we tackle this new tech, which differs so much from the fun AI of, say, swapping your face with a cat’s on Snapchat? Or the augmented reality of searching for a Pokémon around the back of a newsagent?

The first requirement is that, to put it bluntly, people have to actually want to tackle it. It’s increasingly concerning, in a polarising world, how few people seem to seek out opinions that diverge from their own, or scrutinise things that adhere to their already held views. The internet has cultivated supercharged confirmation bias.

We need to be willing to spoil our own fun – acknowledging that, no, Trump did not say the thing that, had he actually said it, would be a great example of how moronic he is. And would probably earn us lots of retweets.

We need to give a toss about pushing back against less impressive hoaxes: footage and images that are authentic being repackaged as something entirely other (a photograph of a Liverpool FC parade in 2005 is frequently shared as an ostensible crowd at a Jeremy Corbyn 2018 appearance, to pick just one example).

But to do this, the general public needs to have the knowledge and tools to get better at deciphering what is real and what isn’t.

That is where the media must come in and, even more so, the tech companies that facilitate the wildfires of misinformation. Not for the first time, it strikes me that unless tech companies and the man-buns who run them are willing to sacrifice some profit for the prize of not spiralling headlong into a dystopian acid trip of a world, where democracy is something we can only squint at, their talk about battling fake news is in fact the most damaging fake news of all.

• Hannah Jane Parkinson is a Guardian columnist

 
