Robert Booth, UK technology editor

Meta says it has taken down about 20 covert influence operations in 2024

Firm names Russia as top source of such activity but says it is ‘striking’ how little AI was used to try to trick voters

Nick Clegg, pictured at a press conference at the Meta showroom in Brussels on 7 December 2022, said the relatively low impact of fakery using generative AI to manipulate video, voices and photos was ‘very, very likely to change’. Photograph: Kenzo Tribouillard/AFP/Getty Images

Meta has intervened to take down about 20 covert influence operations around the world this year, it has emerged – though the tech firm said fears of AI-fuelled fakery warping elections had not materialised in 2024.

Nick Clegg, the president of global affairs at the company that runs Facebook, Instagram and WhatsApp, said in a briefing that Russia was still the No 1 source of such adversarial online activity, but that it was “striking” how little AI was used to try to trick voters in the busiest ever year for elections around the world.

The former British deputy prime minister revealed that Meta, which has more than 3 billion users, had rejected just over 500,000 requests to generate images of Donald Trump, Kamala Harris, JD Vance and Joe Biden on its own AI tools in the month leading up to US election day.

But the firm’s security experts had to tackle new operations using fake accounts to manipulate public debate for a strategic goal at a rate of more than one every three weeks. The “coordinated inauthentic behaviour” incidents included a Russian network using dozens of Facebook accounts and fictitious news websites to target people in Georgia, Armenia and Azerbaijan.

Another was a Russia-based operation that employed AI to create fake news websites using brands such as Fox News and the Telegraph to try to weaken western support for Ukraine, and used Francophone fake news sites to promote Russia’s role in Africa and to criticise that of France.

“Russia remains the No 1 source of the covert influence operations we’ve disrupted to date – with 39 networks disrupted in total since 2017,” Clegg said. The next most frequent sources of foreign interference detected by Meta are Iran and China.

Giving his evaluation of the effect of AI fakery after a wave of elections in 50 countries including the US, India, Taiwan, France, Germany and the UK, he said: “There were all sorts of warnings about the potential risks of things like widespread deepfakes and AI-enabled disinformation campaigns. That’s not what we’ve seen from what we’ve monitored across our services. It seems these risks did not materialise in a significant way, and that any such impact was modest and limited in scope.”

But Clegg warned against complacency, saying the relatively low impact of fakery using generative AI to manipulate video, voices and photos was “very, very likely to change”.

“Clearly these tools are going to become more and more prevalent and we’re going to see more and more synthetic and hybrid content online,” he said.

Meta’s assessment follows conclusions last month from the Centre for Emerging Technology and Security that “deceptive AI-generated content did shape US election discourse by amplifying other forms of disinformation and inflaming political debates”. However, it said there was a lack of evidence on whether such content had any impact on Donald Trump’s election win.

It concluded that AI-enabled threats did begin to damage the health of democratic systems in 2024 and warned “complacency must not creep [in]” before the 2025 elections in Australia and Canada.

Sam Stockwell, a research associate at the Alan Turing Institute, said AI tools may have shaped election discourse and amplified harmful narratives in subtle ways, particularly in the recent US election.

“This included misleading claims that crowds at Kamala Harris’s rallies were AI-generated, and baseless rumours that Haitian immigrants were eating pets going viral with the assistance of xenophobic AI-generated memes,” he said.
