Alex Hern 

Facebook and YouTube defend response to Christchurch videos

YouTube says spread of videos of attack was ‘unprecedented in scale and speed’
Facebook said the original stream of the attack was viewed fewer than 200 times while live, and by 4,000 people after the broadcast, before being removed from the site. Photograph: AP

YouTube and Facebook have defended themselves against accusations that they failed to act quickly enough in the wake of the Christchurch terror attack, arguing that their moderation was as effective as possible given the sheer volume of videos uploaded.

Facebook said on Tuesday that the original stream of the attack was viewed fewer than 200 times while it was live, and by 4,000 people after the broadcast had ended, before it was removed from the site.

Copies of it spread rapidly and by Saturday evening the company had removed 1.5m uploads. By Tuesday morning more than 800 distinct edits of the footage had been posted to the site.

YouTube said it had tried to keep on top of the unprecedented number of videos uploaded, eventually going so far as to remove human reviewers from the loop so that its automated systems could take down more videos instantly.

A spokesman told the Guardian: “The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed – at times as fast as a new upload every second.

“In response we took a number of steps, including automatically rejecting any footage of the violence, temporarily suspending the ability to sort or filter searches by upload date, and making sure searches on this event pulled up results from authoritative news sources.”

The data underlines the difficulty of keeping such content off social networks, particularly in the immediate aftermath of a global news event.

Facebook largely relies on viewers to flag problematic livestreams, and does not employ enough moderation staff to watch every live video as it is being aired.

Chris Sonderby, a Facebook vice-president, said that in the case of the Christchurch livestream the first user report came in 29 minutes after the broadcast started and 12 minutes after it ended.

He said Facebook created a digital fingerprint of the initial livestream, which powered the bulk of the automatic removals and enabled more than 80% of the videos to be blocked before they were publicly posted.

But the technology used to create the fingerprints is fragile and can be defeated by simple methods such as filming the screen and uploading the resulting video. One such recording, tracked by the Guardian over the course of Friday, was on Facebook for more than five hours before being taken down, despite being headlined “New Zealand mass shooting”.
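Facebook has not detailed how its fingerprinting works, but systems of this kind are generally understood to rely on perceptual hashing: a compact signature computed from the picture content itself, so that a re-encoded or recompressed copy still matches the original. The sketch below is a toy “difference hash” in Python, an illustration of the general idea rather than Facebook’s actual implementation; the function names and pixel values are invented for the example.

```python
# Illustrative sketch only: a toy "difference hash", one simple kind of
# perceptual fingerprint. Production systems are far more sophisticated.

def dhash(pixels):
    """Hash a grayscale frame given as a list of rows of 0-255 values.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, so the hash survives re-encoding and mild compression
    but shifts wholesale if the frame is cropped or re-filmed.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A tiny 4x5 "frame" and a re-encoded copy with slight pixel noise:
original  = [[10, 60, 20, 80, 30],
             [90, 40, 70, 20, 50],
             [15, 65, 25, 85, 35],
             [95, 45, 75, 25, 55]]
reencoded = [[12, 58, 22, 79, 31],
             [88, 41, 69, 22, 49],
             [17, 63, 27, 83, 36],
             [93, 46, 73, 27, 54]]

h1, h2 = dhash(original), dhash(reencoded)
print(hamming(h1, h2))  # 0 here: the noisy copy still matches the original
```

Because each bit records only whether one pixel is brighter than its neighbour, recompression barely changes the hash, so matching uploads can be blocked automatically at upload time. Filming a screen with a camera, by contrast, alters framing, perspective and lighting, flipping enough bits that the copy no longer matches, which is one reason such recordings slipped through.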

Facebook said by Monday it had recorded “more than 800 visually distinct videos” and shared them with an industry body, the Global Internet Forum to Counter Terrorism, to let other companies take them down as well.

Neal Mohan, YouTube’s chief product officer, told the Washington Post that as well as cutting human reviewers out of the loop, the site had turned off a feature late on Friday that allowed users to search for recent uploads, closing off a route by which new copies of the video could reach millions of viewers.

Both those decisions were still in effect and would be reversed once the crisis subsided, YouTube told the paper.

In the wake of the attack, policymakers worldwide said a failure on the part of social media and other companies to act quickly enough was more evidence that stricter regulation was needed.

Jacinda Ardern, New Zealand’s prime minister, called on social media platforms to do more to combat terrorism. “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published,” she said on Tuesday. “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.”

The UK home secretary, Sajid Javid, said: “Online platforms have a responsibility not to do the terrorists’ work for them. This terrorist filmed his shooting with the intention of spreading his ideology. Tech companies must do more to stop his messages being broadcast on their platforms.”

 
