Rachel Leingang 

AI and misinformation: what’s ahead for social media as the US election looms?

Innovation is outpacing our ability to handle misinformation, experts say. That makes falsehoods easy to weaponize

Tech platforms may not have enough staff to address the spread of misinformation. Photograph: Dado Ruvić/Reuters

As the United States’ fractured political system prepares for a tense election, social media companies may not be prepared for an onslaught of viral rumors and lies that could disrupt the voting process – an ongoing feature of elections in the misinformation age.

A handful of major issues face these tech companies at a time when trust in elections, and in the information people find on social media, is low. The potential for politicians and their supporters to weaponize social media to spread misinformation, meanwhile, is high.

The belief that the 2020 election was stolen has actually grown, with more than one-third of Americans saying they don’t believe Joe Biden legitimately won the presidency, recent polls show. The continuing disinformation campaign from Donald Trump and his allies, spread in large part on both mainstream and rightwing social media platforms, has helped harden distrust in elections.

Companies racing to confront election rumors

Some companies, including Meta and YouTube, have changed their policies to allow misinformation about the 2020 election. Others are so new that they’re still writing policies for political content as they grow. TikTok, under scrutiny from the US government, has tried to steer away from political content in favor of entertainment. Some smaller platforms, such as Trump’s Truth Social and Telegram, have been designed specifically not to moderate content, allowing rumors to fester and eventually migrate to more widely used social media.

And after rounds of layoffs that hit trust and safety teams, platforms may not have enough staff to manage the amount of misinformation flying around the US, much less worldwide, in a stacked election year for democracies around the globe. Elections in other countries are probably more susceptible to misinformation’s impact because the remaining trust and safety staff and factcheckers will focus largely on US elections and on English-language content.

“Everyone wants to know if the companies are prepared and the unfortunate truth is we just don’t know,” said Katie Harbath, who formerly worked in policy at Facebook and now writes about the intersection between technology and democracy.

The unpredictable role of AI

Artificial intelligence tools could exacerbate the problem, too. Already, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home has shown AI’s ability to fuel political misinformation. On 8 February, the Federal Communications Commission (FCC) announced a ban on the use of AI-generated voices in robocalls to confront the rapidly advancing technology.

Faked images of Trump, made using AI tools, showed the former president being arrested, and AI-generated images of Biden in a military uniform have cropped up, too.

“What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration, our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” said David Ryan Polgar, founder and president of the non-profit All Tech is Human.

The potential for social media to disrupt elections shows the outsized roles the platforms play in daily life, Polgar said. They are private companies but are often treated as a public square. As a result, their content moderation can feel less like a private business’s decision and more like censorship by a quasi-governmental body, and broader conversations about what roles the platforms should play are tough to have on the fly when the country is so polarized, he said.

“The American public, in my interpretation, just in my experience, is really kind of torn with our relationship with tech platforms and what they should even do with the elections,” Polgar said. “It seems like we want them sometimes to exert this power. But then the power is not coming from any kind of moral authority, because it’s not like we vote tech companies and trust and safety teams in and out of office.”

Changing leadership

X, formerly Twitter, has changed its philosophy entirely under Elon Musk, who himself spreads election falsehoods. In addition to laying off trust and safety employees, the platform got rid of tools that flagged potential misinformation, relying more heavily on “community notes”, where readers can add context or competing claims, rather than official factchecks. Musk has also allowed swaths of previously banned accounts back on the platform, some of which were axed for spreading election misinformation.

The potential for X to accelerate misinformation is greater in 2024 without some of its guardrails in place, though it’s possible the platform has “discredited itself enough” for people to not put much trust in what they see on it these days anyway, said Chester Wisniewski, a cybersecurity expert at Sophos.

“Four years ago, the president of the United States used that platform as his primary way to communicate with the public – like it was literally the voice of the president’s office,” Wisniewski said. “And it went from that to cryptocurrency scams in four years.”

Ideally, tools that add factchecks or slow the sharing of misinformation help people determine what’s true and what’s not. Those tools won’t sway the readers most hardened in their political views, but they give the average reader additional context, Wisniewski said.

“People who don’t want the truth will never find the truth, and there’s nothing we can do about that,” he said. “Those are conversations we have to have individually with our friends and family when we feel that they’ve gone down the QAnon rabbit hole.”

Researchers hamstrung by Republicans

Not only could misinformation travel more freely this year, it will also be harder for researchers to track and flag. Tech companies are saying less about their plans to moderate political content, an area under intense scrutiny from Republican members of Congress. Led by the US congressman Jim Jordan, Republicans have investigated the platforms and their communications with the government and misinformation researchers, seeking to show that conservative views were silenced.

Their allies in the legal field have brought lawsuits, including a case pending before the US supreme court, that could severely limit the contact the government can have with social media platforms.

The supreme court case, which will be heard this year, will serve as a bellwether for where the Republican push against anti-misinformation work goes next. After years of Republican claims of conservative censorship, systematic investigations and lawsuits have found success in lower courts and in Congress and, even without a final court ruling, have achieved part of their intent: misinformation researchers have reported a “chilling effect” and have changed their work to some degree because of it.

Researchers’ access to platform data helps them tie together coordinated misinformation activity and analyze how it crosses from one platform to another, Harbath said. Even though smaller platforms may not have the reach of Facebook or X, they can still influence narratives, and bad actors can exploit loopholes across different social networks, she said.

“You don’t have to influence a lot of people for something like January 6,” Harbath said.
