John Naughton 

JFK’s real message from beyond the grave – don’t believe everything you hear

In a world where every digital recording could be fake, our relationship with information has never been more precarious
  
  

John F Kennedy declares ‘Ich bin ein Berliner’ in Berlin on 26 June 1963. Audio from his old speeches has been used to recreate his final, unread oration. Photograph: Alamy Stock Photo

When John F Kennedy was assassinated in Dallas on 22 November 1963, he was on his way to deliver a speech to the assembled worthies of the city. A copy of his script for the ill-fated oration was later presented by Lyndon Johnson to Stanley Marcus, head of the department store chain Neiman Marcus, whose daughter was in the expectant audience that day.

The text has long been available on the internet and it makes for poignant reading, not just because of what happened at Dealey Plaza that day, but because large chunks of it look eerily prescient in the age of Trump. JFK was a terrific public speaker who employed superb speechwriters (especially Theodore Sorensen). His speeches were invariably elegant and memorable: he had a great ear for a good phrase, and his delivery was usually faultless. So his audience in Dallas knew that they were in for a treat – until Lee Harvey Oswald terminated the dream.

Last week, 55 years on, we finally got to hear what Kennedy’s audience might have heard. In an extraordinary piece of technical virtuosity, a team of sound engineers pulled 116,777 sound units from 831 of the president’s speeches and radio addresses. These units were then split in half, analysed for pitch and energy, and used to create the best approximation of JFK reading the text that could be achieved, given the recordings they had to work with.
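The technique is a form of what speech engineers call unit selection: each stored fragment of audio is scored on acoustic features such as pitch and energy, and the fragment that best matches what the target sentence requires is stitched into place. Below is a minimal sketch of that idea in Python, assuming toy NumPy arrays in place of real recordings; the function names, feature choices and distance measure are illustrative assumptions of mine, not the engineers’ actual method.

```python
# Illustrative sketch of unit-selection synthesis: score stored speech units
# by pitch and energy, then pick the closest match for a target unit.
# All names and data here are hypothetical; real systems are far more elaborate.
import numpy as np

def rms_energy(frame: np.ndarray) -> float:
    """Root-mean-square energy of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def rough_pitch(frame: np.ndarray, sample_rate: int = 16000) -> float:
    """Crude pitch estimate from the peak of the autocorrelation function."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame):]
    lag = int(np.argmax(corr[20:]) + 20)   # skip very small lags (implausibly high pitch)
    return sample_rate / lag

def select_unit(target_feats, unit_bank):
    """Index of the stored unit whose (pitch, energy) is closest to the target."""
    dists = [np.hypot(p - target_feats[0], e - target_feats[1]) for p, e in unit_bank]
    return int(np.argmin(dists))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Pretend each stored unit is a half-second clip taken from an archive speech.
    units = [rng.standard_normal(8000) for _ in range(100)]
    bank = [(rough_pitch(u), rms_energy(u)) for u in units]
    target = (120.0, 0.9)   # desired pitch (Hz) and energy for the next unit
    print("best matching unit:", select_unit(target, bank))
```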

Listening to it is a truly eerie experience – especially if, like me, you once saw and heard JFK in the flesh. Although the synthesis seems broadly accurate, it’s not quite perfect: the “Boston Brahmin” accent comes through loud and clear, but the cadences are sometimes wrong, and there’s an occasional clumsiness in the phrasing that Kennedy would have ironed out before delivery. But overall, the engineers have done an astonishing job.

Which leads to an unsettling thought: if they can do this with variable-quality analogue materials, what could they not do with current-day near-perfect digital recordings? In a way, we already know the answer. With contemporary technology, faking audio is now almost as easy as Photoshopping digital images. Video is a bit harder – unless you have access to the CGI technology currently used by movie studios – but that, too, will be commodified in due course. Add to the mix the capability of AI software and we will soon reach the point where it will become impossible to determine whether something is real or fake.

If that sounds dystopian, then I’m afraid it is. Researchers at the University of Washington in Seattle have developed a neural network that solves a big challenge in the field of computer vision – turning audio clips into realistic, lip-synced video of the person speaking those words. They demonstrated the results at a technical conference last August, showing amazingly realistic video of Barack Obama talking about terrorism, fatherhood, job creation and other subjects, generated from audio clips of speeches and existing weekly video addresses that were originally on different topics. The network modelled the shape of Obama’s mouth from the audio and mapped the result on to video recordings of him. All it needed was 14 hours of digital recordings.

Synthesizing Obama
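For the technically curious, here is a rough sketch of the core idea: a network learns to predict mouth shapes from audio features, frame by frame, and those shapes are then mapped back on to existing footage of the subject. The layer sizes, dimensions and names below are illustrative assumptions made for this sketch, not the published architecture; PyTorch is used purely for brevity.

```python
# Toy sketch of the audio-to-mouth-shape idea: a recurrent network maps a
# sequence of audio features (e.g. spectrogram frames) to 2-D mouth landmarks.
# Dimensions, layer sizes and names are illustrative assumptions, not the
# Washington team's published model.
import torch
import torch.nn as nn

class AudioToMouth(nn.Module):
    def __init__(self, n_audio_feats: int = 28, n_landmarks: int = 18, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(n_audio_feats, hidden, batch_first=True)
        # Predict (x, y) coordinates for each mouth landmark at every time step.
        self.head = nn.Linear(hidden, n_landmarks * 2)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, time, n_audio_feats)
        out, _ = self.rnn(audio_feats)
        coords = self.head(out)                        # (batch, time, n_landmarks * 2)
        return coords.view(*coords.shape[:2], -1, 2)   # (batch, time, n_landmarks, 2)

if __name__ == "__main__":
    model = AudioToMouth()
    fake_audio = torch.randn(1, 100, 28)   # one clip, 100 frames of audio features
    mouth = model(fake_audio)
    print(mouth.shape)                     # torch.Size([1, 100, 18, 2])
```

In the real system, the predicted mouth shapes are then composited back into existing video of the speaker, which is where those 14 hours of recordings come in.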

The direction of travel is therefore clear, and it puts our current ideas about dealing with “fake news” and disinformation into a more sobering context. Fact-checking and teaching media literacy won’t do the trick. In a world where people still say “I’ll believe it when I see it”, digital fakery will rule.

So we need to up our game. What’s happened is that – as Craig Silverman, the estimable media editor of BuzzFeed, put it recently – “the cues that people have used to determine the authenticity of information are in many cases no longer sufficient. Just about everything can be fabricated. We must keep this in mind as we look to establish new signals of trust, because they too will inevitably be fabricated and gamed.” Our cognitive abilities are being outwitted by the technology we have created, and we need to develop ways of navigating this ocean of digital misinformation.

Yes – but where to start? Well, says Silverman, how about a change of mindset? In the past we operated on the doctrine of “trust, then verify” – in other words, take things initially at face value and only check if we are suspicious. In a digital world, however, we have to turn this on its head: first verify, then – and only then – trust. This is all very well if we know how to verify, but as the University of Washington’s research shows, that’s getting harder with every passing day. Which explains why the pollution of our information ecosystem might pose an existential crisis for democracy.

What I’m reading

Picture perfect
Quartzy reports that New York’s Metropolitan Museum of Art has just made 400,000 high-res images available for public use. A treasure trove for presentations and browsing.

Let’s sleep on it…
Amazingly, people buy mattresses online – presumably relying on customer reviews to guide their purchases. Big mistake. An investigation by the tech news website Recode reveals that dodgy mattress review websites have replaced the sleazy salesman.

The ‘digital divide’
The NYT has reported that poor kids are spending more time looking at computer and television screens than their richer counterparts.

Time to unlike?
Is it time to delete your Facebook account, asks the Guardian’s Arwa Mahdawi…  

 
