Dan Milmo, global technology editor

Social media platforms have work to do to comply with Online Safety Act, says Ofcom

Regulator publishes codes of practice and warns that largest sites are not following many of its measures
  
  

The Online Safety Act was designed to protect children and adults from harmful online content. Photograph: Peter Byrne/PA

Social media platforms have a “job of work” to do in order to comply with the UK’s Online Safety Act and have yet to introduce all the measures needed to protect children and adults from harmful content, the communications regulator has said.

Ofcom on Monday published codes of practice and guidance that tech companies should follow to comply with the act, which carries the threat of significant fines and closure of sites if companies breach it.

The regulator said many of the measures it is recommending are not followed by the largest and riskiest platforms.

“We don’t think any of them are doing all of the measures,” said Jon Higham, Ofcom’s online safety policy director. “We think there is a job of work to do.”

Every site and app in scope of the act – from Facebook, Google and X to Reddit and OnlyFans – now has three months to assess the risk of illegal content appearing on its platform.

From 17 March they will have to start implementing safety measures to deal with those risks, with Ofcom monitoring their progress. Ofcom’s codes of practice and guidelines set out ways of addressing those risks, and sites and apps that adopt them will be deemed to be complying with the act.

The act applies to sites and apps that publish user-made content to other users, as well as large search engines – covering more than 100,000 online services. It lists 130 “priority offences” – covering a range of content types including child sexual abuse, terrorism and fraud – that tech companies will have to tackle proactively by gearing their moderation systems to detect and remove such material.

Writing in the Guardian, the technology secretary, Peter Kyle, said the codes and guidelines were “the biggest ever change to online safety policy”.

“No longer will internet terrorists and child abusers be able to behave with impunity,” he wrote. “Because for the first time, tech firms will be forced to proactively take down illegal content that plagues our internet. If they don’t, they will face enormous fines and, if necessary, Ofcom can ask the courts to block access to their platforms in Britain.”

The codes and guidance published by Ofcom include: nominating a senior executive to be accountable for complying with the act; having properly staffed and funded moderation teams that can swiftly remove illegal material such as extreme suicide-related content; better testing of algorithms – which curate what users see on their feeds – to make it harder for illegal material to spread; and removing accounts operated by, or on behalf of, terrorist organisations.

Tech platforms are also expected to operate “easy to find” tools for making content complaints, which acknowledge receipt of a complaint and indicate when it will be dealt with. The biggest platforms are expected to give users options to block and mute other accounts, along with the option to disable comments.

Ofcom also expects platforms to put in place automated systems to detect child sexual abuse material, including so-called “hash matching” measures that can match suspected material against known examples of such content. The new codes and guidelines will also apply to file-sharing services such as Dropbox and Mega, which are considered at “high risk” of being used to distribute abuse material.
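In broad terms, hash matching works by computing a digital fingerprint of an uploaded file and comparing it against a database of fingerprints of known abuse material supplied by bodies such as the Internet Watch Foundation. The minimal Python sketch below illustrates the idea using exact SHA-256 digests and a hypothetical KNOWN_HASHES set and file name; production systems typically use perceptual hashing (for example Microsoft’s PhotoDNA), which can also match images that have been resized or re-encoded.

```python
import hashlib
from pathlib import Path

# Hypothetical database of digests of known abuse material, as would be
# supplied by a body such as the Internet Watch Foundation.
# The value below is an illustrative placeholder only.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large files do not have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_known_match(path: Path) -> bool:
    """True if the file's digest appears in the known-content list."""
    return file_digest(path) in KNOWN_HASHES


if __name__ == "__main__":
    upload = Path("uploaded_image.jpg")  # hypothetical uploaded file
    if upload.exists() and is_known_match(upload):
        print("Match against known content: flag for review and removal.")
    else:
        print("No match found.")
```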

Child safety campaigners said the Ofcom announcement did not go far enough. The Molly Rose Foundation, which was set up by the family of Molly Russell, who took her own life in 2017 at the age of 14 after viewing suicide content on social media, said it was “astonished” there were no specific measures to deal with self-harm and suicide-related content that met the threshold of criminality. The NSPCC said it was “deeply concerned” that platforms such as WhatsApp would not be required to take down illegal content where doing so is not “technically feasible”.

Fraud, a widespread problem on social media, will be tackled with a requirement for platforms to establish dedicated reporting channels with bodies including the National Crime Agency and the National Cyber Security Centre, which can flag examples of fraud to the platforms.

Ofcom will also consult in the spring on creating a protocol for crisis events such as the riots that broke out in the summer in response to the Southport murders.

 
