Manisha Ganguly Investigations correspondent 

‘It’s not me, it’s just my face’: the models who found their likenesses had been used in AI propaganda

London-based Synthesia’s technology was employed to make deepfake videos for authoritarian regimes
  
  

Models found they had been used as digital ‘puppets’. Composite: Guardian Design

The well-groomed young man dressed in a crisp, blue shirt speaking with a soft American accent seems an unlikely supporter of the junta leader of the west African state of Burkina Faso.

“We must support … President Ibrahim Traoré … Homeland or death we shall overcome!” he says in a video that began circulating in early 2023 on Telegram. It was just a few months after the dictator had come to power via a military coup.

Other videos fronted by different people, with a similar professional-looking appearance and repeating the exact same script in front of the Burkina Faso flag, cropped up around the same time.

On a verified account on X a few days later the same young man, in the same blue shirt, claimed to be Archie, the chief executive of a new cryptocurrency platform.

These videos are fake. They were generated with artificial intelligence (AI) developed by a startup based in east London. The company, Synthesia, has created a buzz in an industry racing to perfect lifelike AI videos. Investors have poured in cash, catapulting it into “unicorn” status – a label for a private company valued at more than $1bn.

Synthesia’s technology is aimed at clients looking to create marketing material or internal presentations, and any deepfakes are a breach of its terms of use. But this means little to the models whose likenesses are behind the digital “puppets” that were used in propaganda videos such as those apparently supporting Burkina Faso’s dictator. The Guardian tracked down five of them.

“I’m in shock, there are no words right now. I’ve been in the [creative] industry for over 20 years and I have never felt so violated and vulnerable,” said Mark Torres, a creative director based in London, who appears in the blue shirt in the fake videos.

“I don’t want anyone viewing me like that. Just the fact that my image is out there, could be saying anything – promoting military rule in a country I did not know existed. People will think I am involved in the coup,” Torres added after being shown the video by the Guardian for the first time.

The shoot

In the summer of 2022, Connor Yeates got a call from his agent offering the chance to be one of the first AI models for a new company.

Yeates had never heard of the company, but he had just moved to London, and was sleeping on a friend’s couch. The offer – nearly £4,000 for a day’s filming and the use of the images for a three-year period – felt like a “good opportunity”.

“I’ve been a model since university and that’s been my primary income ever since finishing. Then I moved to London to start doing standup,” said Yeates, who grew up in Bath.

The shoot took place in Synthesia’s studio in east London. First, he was led into hair and makeup. Half an hour later, he entered the recording room where a small crew was waiting.

Yeates was asked to read lines while looking directly into the camera, and wearing a variety of costumes: a lab coat, a construction hi-vis vest and hard hat, and a corporate suit.

“There’s a teleprompter in front of you with lines, and you say this so that they can capture gesticulations, and replicate the movements. They’d say be more enthusiastic, smile, scowl, be angry,” said Yeates.

The whole thing lasted three hours. Several days later, he received a contract and the link to his AI avatar.

“They paid promptly. I don’t have rich parents and needed the money,” said Yeates, who didn’t think much of it afterwards.

Like Torres’s, Yeates’s likeness was used in propaganda for Burkina Faso’s current leader.

A spokesperson for Synthesia said the company had banned the accounts that created the videos in 2023 and that it had strengthened its content review processes and “hired more content moderators, and improved our moderation capabilities and automated systems to better detect and prevent misuse of our technology”.

But neither Torres nor Yeates was made aware of the videos until the Guardian contacted them a few months ago.

The ‘unicorn’

Synthesia was founded in 2017 by Victor Riparbelli, Steffen Tjerrild and two academics from London and Munich.

It launched a dubbing tool a year later that allowed production companies to translate speech and sync an actor’s lips automatically using AI.

It was showcased on a BBC programme in which a news presenter who spoke only English was made to look as if he was magically speaking Mandarin, Hindi and Spanish.

What earned the company its coveted “unicorn” status was a pivot to the mass market digital avatar product available today. This allows a company or individual to create a presenter-led video in minutes for as little as £23 a month. There are dozens of characters to choose from offering different genders, ages, ethnicities and looks. Once selected, the digital puppets can be put in almost any setting and given a script, which they can then read in more than 120 languages and accents.

Synthesia now has a dominant share of the market, and lists Ernst & Young (EY), Zoom, Xerox and Microsoft among its clients.

The product's advances led Time magazine in September to name Riparbelli among the 100 most influential people in AI.

But the technology has also been used to create videos linked to hostile states, including Russia and China, to spread misinformation and disinformation. Intelligence sources suggested to the Guardian that the Burkina Faso videos that circulated in 2023 had most likely also been created by Russian state actors.

The personal impact

Around the same time as the Burkina Faso videos started circulating online, two pro-Venezuela videos featuring fake news segments presented by Synthesia avatars also appeared on YouTube and Facebook. In one, a blond-haired male presenter in a white shirt condemned "western media claims" of economic instability and poverty, painting instead a highly misleading picture of the country's financial situation.

Dan Dewhirst, an actor based in London and a Synthesia model, whose likeness was used in the video, told the Guardian: “Countless people contacted me about it … But there were probably other people who saw it and didn’t say anything, or quietly judged me for it. I may have lost clients. But it’s not me, it’s just my face. But they’ll think I’ve agreed to it.”

“I was furious. It was really, really damaging to my mental health. [It caused] an overwhelming amount of anxiety,” he added.


The Synthesia spokesperson said the company had been in touch with some of the actors whose likenesses had been used. “We sincerely regret the negative personal or professional impact these historical incidents have had on the people you’ve spoken to,” he said.

But once circulated, the harm from deepfakes is difficult to undo.

Dewhirst said seeing his face used to spread propaganda was the worst-case scenario, adding: “Our brains often catastrophise when we’re worrying. But then to actually see that worry realised … It was horrible.”

The ‘rollercoaster’

Last year, more than 100,000 unionised actors and performers in the US went on strike in protest against the use of AI in the creative arts. The strike was called off last November after studios agreed to safeguards in contracts, such as informed consent before digital replication and fair compensation for any such use. Video game performers remain on strike over the same issue.

Last month, a bipartisan bill titled the NO FAKES Act was introduced in the US, aiming to hold companies and individuals liable for damages arising from the misuse of digital replicas.

However, there are virtually no practical mechanisms for redress for the artists themselves, outside AI-generated sexual content.

“These AI companies are inviting people on to a really dangerous rollercoaster,” said Kelsey Farish, a London-based media and entertainment lawyer specialising in generative AI and intellectual property. “And guess what? People keep going on to this rollercoaster, and now people are starting to get hurt.”

Under GDPR, models can technically request that Synthesia remove their data, including their likeness and image. In practice this is very difficult.

A former Synthesia employee, who wanted to remain anonymous for fear of reprisal, explained that the AI could not “unlearn” or delete what it may have gleaned from the model’s body language. To do so would require replacing the entire AI model.

The spokesperson for Synthesia said: “Many of the actors we work with re-engage with us for new shoots … At the start of our collaboration, we explain our terms of service to them and how our technology works so they are aware of what the platform can do and the safeguards we have in place.”

He said the company did not allow “the use of stock avatars for political content, including content that is factually accurate but may create polarisation”, and that its policies were designed to stop its avatars being used for “manipulation, deceptive practices, impersonations and false associations”.

“Even though our processes and systems may not be perfect, our founders are committed to continually improving them.”

When the Guardian tested Synthesia's technology using a range of disinformation scripts, the platform blocked attempts to use one of its stock avatars. It was, however, possible to recreate the Burkina Faso propaganda video with a personally created avatar and to download it, neither of which should be allowed under Synthesia's policies. Synthesia said this was not a breach of its terms because it respected the right to express a personal political stance, but it later blocked the account.

The Guardian was also able to create and download clips from an audio-only avatar that said "heil Hitler" in several languages, and another audio clip saying "Kamala Harris rigged the election" in an American accent.

Synthesia took down the free AI audio service after being contacted by the Guardian and said the technology behind the product was a third-party service.

The aftermath

The experience of learning his likeness had been used in a propaganda video has left Torres with a deep sense of betrayal: “Knowing that this company I trusted my image with will get away with such a thing makes me so angry. This could potentially cost lives, cost me my life when crossing a border for immigration.”

Torres was invited to another shoot with Synthesia this year, but he declined. His contract ends in a few months, when his Synthesia avatar will be deleted. But what happens to his avatar in the Burkina Faso video is unclear even to him.

“Now I realise why putting faces out for them to use is so dangerous. It’s a shame we were part of this,” he said.

YouTube has since taken the propaganda video featuring Dewhirst down, but it remains available on Facebook.

Torres and Yeates both remain on the front page of Synthesia in a video advertisement for the company.

 
