Rachel Hall 

Social media algorithms need overhaul in wake of Southport riots, Ofcom says

Watchdog issues warning over misinformation after trouble that flared following killing of three girls on Merseyside

A police car on fire as far-right activists protest in Sunderland after the killings in Southport in August. Photograph: Drik/Getty Images

Social media algorithms must be adjusted to prevent misinformation from spreading, the chief executive of Ofcom has warned, responding to the rioting that broke out after the killing of three girls in Southport this summer.

Misinformation about the Southport killings proliferated despite tech firms and social media platforms’ “uneven” attempts to stop it, wrote the Ofcom chief executive, Melanie Dawes, in a letter to the secretary of state for science, innovation and technology, Peter Kyle.

“Posts about the Southport incident and subsequent events from high-profile accounts reached millions of users, demonstrating the role that virality and algorithmic recommendations can play in driving divisive narratives in a crisis period,” Dawes wrote.

“Most online services took rapid action in response to the situation, but responses were uneven.”

Dawes was responding to Kyle’s request for information about whether Ofcom would be targeting online misinformation in the next update of its illegal harms code of practice, which social media companies will be required to sign up to when the online safety bill comes into force.

Dawes said that current draft Ofcom proposals would require apps to change their algorithms so that content which is illegal or harmful, including abuse and incitement to violence or hate speech, is down-ranked for children’s accounts.

Some platforms told Ofcom that misinformation seeking to whip up hatred appeared online “almost immediately” after the attacks, with the result that platforms were “dealing with high volumes, reaching the tens of thousands of posts in some cases”, some of which came from accounts with more than 100,000 followers.

Some of these accounts “falsely stated that the attacker was a Muslim asylum seeker and shared unverified claims about his political and ideological views”, Dawes said, adding: “There was a clear connection between online activity and violent disorder seen on UK streets.”

Dawes also highlighted the role of “major” messaging services that hosted closed groups comprising thousands of users. In one example Ofcom saw, calls for demonstrations targeting a local mosque circulated in private groups online within two hours of the vigil for the victims of the attack, while other groups identified targets for damage or arson, such as asylum accommodation.

Her letter also sets out how Ofcom responded to the riots by reminding tech firms to protect users and asserting that they did not need to wait for the online safety bill to come into force to do so.

Although Ofcom was unable to investigate whether social media platforms’ responses were fit for purpose because the online safety bill has not yet been implemented, it plans to take tougher enforcement action in future.

“These events have clearly highlighted questions tech firms will need to address as the duties come into force. While some told us they took action to limit the spread of illegal content, we have seen evidence that it nonetheless proliferated,” she said.

“On some platforms, false information regarding the identity of the attacker continued to spread in the three days it took for his real identity to be made public, even when there was evidence of intent to stir up racial and religious hatred.”

This has been highlighted in the convictions that have followed, including of people found guilty of posting online threats of death or serious harm, stirring up racial hatred, or sending false information with intent to cause harm, Dawes noted.

Responses to Southport-related disinformation from tech companies included setting up monitoring groups looking for spikes in harmful content and taking down harmful material, including URLs leading to illegal and harmful content, as well as suspending or closing accounts and channels.

Once the online safety bill comes into force, Ofcom will expect platforms to make explicit how they protect users from illegal hateful content, to have processes that can swiftly take it down, especially viral content, and to have robust complaints mechanisms.

Ofcom said it would use the findings from the Southport case to identify gaps in the current legislation and guidance. It has already established that there is a need to strengthen requirements for social media platforms’ crisis response protocols.

Dawes added that the event further “highlight[s] the importance of promoting media literacy, to heighten public awareness and understanding of how people can protect themselves and others online”.
