The viral YouTube videos featured screaming children being tortured, conspiracy theorists taunting mass shooting victims and webcam footage of young girls in revealing clothing. The disturbing clips drew millions of views and, up until recently, were continuing to spread on the site.
Then, hours after reporters highlighted them to YouTube, asking for comment, the videos disappeared.
The removal of the harmful footage illustrated a dark pattern in Silicon Valley and a common feature of some of the biggest tech scandals of 2017: journalists have become the de facto moderators for the most powerful social media corporations, exposing offensive content that the companies themselves have failed to detect.
In the face of increasing recognition that Facebook, Google, Twitter and others have had detrimental impacts on society – whether enabling harassment and violence, spreading misinformation or threatening core functions of democracy – the companies have largely resisted fundamental changes that would reduce the damage.
On the contrary, Silicon Valley has stuck to its foundational belief that tech firms are ultimately not accountable for the content on their platforms. They have outsourced social responsibility to journalists, watchdogs and other citizens, who have increasingly taken on the role of unpaid moderators – flagging abusive, dangerous and illegal material, which the sites then remove in the face of bad publicity.
By many measures, it’s a system that is failing.
“They’ve been quite content to let the general public or conventional news media do their work for them,” said Jane Kirtley, professor of media ethics and law at the University of Minnesota, who has criticized Facebook’s impact on journalism. The philosophy, she said, was: “‘Maximize our presence and our money, and if something goes wrong, we will ask for forgiveness.’”
On a weekly basis, reporters discover glaring flaws and alarming content, launching a predictable cycle of takedowns, apologies and vague promises to re-evaluate. Often, the companies’ arbitrary enforcement efforts and inadequate algorithms have failed to block abusive material, even though it violated official policies and standards.
YouTube, for example, had allowed a wide range of videos and “verified” channels featuring child exploitation to flourish, in some cases enabling violent content to slip past the site’s YouTube Kids safety filter. Dozens of channels and thousands of videos were recently removed from the Google-owned site only because of revelations reported in BuzzFeed, the New York Times and a viral essay on Medium.
Some families of mass shooting victims have also spent countless hours trying to get YouTube to remove harassing videos from conspiracy theorists. They’ve described an emotionally taxing and often unsuccessful process of flagging the footage.
But when the Guardian recently sent YouTube 10 such videos, the site removed half of them for violating its “policy on harassment and bullying” within hours of an email inquiry.
YouTube also changed its search algorithm to promote more reputable news sources after a day of negative coverage of offensive videos attacking victims of the Las Vegas mass shooting.
On Monday, YouTube said it would hire thousands of new moderators next year in an effort to fight child abuse content in the same way the site has tackled violent extremist videos. Even with more than 10,000 moderators across Google, the company said it would continue to rely heavily on machine learning to identify problematic content.
Facebook, too, has repeatedly been forced to reverse course and publicly apologize after embarrassing or unethical decisions by algorithms or internal moderators (who have said they are overburdened and underpaid).
ProPublica reported this month that Facebook was allowing housing advertisements to illegally exclude users by race. That’s despite the fact that Facebook previously said it had instituted a “machine learning” system to spot and block discriminatory ads.
The tech company said in a statement that this was a “failure in our enforcement” and that it was, once again, adopting new safeguards and undergoing a policy review.
Facebook, Google and Twitter were all also forced to change policies this year after reporters revealed that users could buy ads targeted to offensive categories, such as people who consider themselves “Jew haters”.
“They are concerned about getting as many eyeballs on to as many advertisements as is humanly possible,” said Don Heider, founder of the Center for Digital Ethics and Policy. “That doesn’t always bode well for what the results are going to be.”
Facebook and Twitter have both faced intense backlash for allowing abuse and false news to flourish while shutting down accounts and posts that featured legitimate news or exposed wrongdoing. For affected users, sometimes the best way to get action is to tell a journalist.
Facebook reinstated a Chechen independence activist group this year after the Guardian inquired about its decision to shut down the page for “terrorist activity”. Reporters’ inquiries also forced the site to admit that it had erred when it censored posts from a Rohingya group opposing Myanmar’s military, which has been accused of ethnic cleansing.
There are no easy solutions to moderation given the scale and complexity of the content, but experts said the companies should invest significantly more resources into staff with journalism and ethics backgrounds.
Facebook’s fact-checking efforts – which have been unsuccessful, according to some insiders – should involve a large team of full-time journalists structured like a newsroom, said Kirtley: “They have to start acting like a news operation.”
Reem Suleiman, campaigner with SumOfUs, a not-for-profit organization that has criticized Facebook for having a “racially biased” moderation system, argued that the companies should be more transparent, release internal data and explain how their algorithms work: “Without the work of journalists chasing down some of this information, we’re just completely left in the dark.”
Claire Wardle, research fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard, said the platforms were starting to develop more robust specialist teams doing moderation work. Still, she said, it was “shocking” that Twitter would not have detected propaganda accounts that a group of BuzzFeed reporters without access to internal data was able to uncover.
“The cynical side of me thinks they are not looking, because they don’t want to find it,” she said.
A Twitter representative touted the company’s machine learning technology and said it detects and blocks 450,000 suspicious logins each day. The statement also praised the work of the media, saying: “We welcome reporting that identifies Twitter accounts that may violate our terms of service, and appreciate that many journalists themselves use Twitter to identify and expose such disinformation in real time.”
A Facebook representative noted that the company was hiring thousands of people to review content and would have 7,500 total by the end of the year. That team works to evaluate whether content violates “community standards”, the representative noted, adding: “We aren’t the arbiters of truth.”