Paul Lewis 

Senator warns YouTube algorithm may be open to manipulation by ‘bad actors’

Senator Mark Warner of Virginia warns of ‘optimising for outrageous, salacious, and often fraudulent content’ amid 2016 election concerns
  
  

Senator Mark Warner: ‘I’ve been increasingly concerned that the recommendation engine algorithms behind platforms like YouTube are, at best, intrinsically flawed in optimising for outrageous, salacious, and often fraudulent content.’ Photograph: Michael Reynolds/EPA

The top-ranking Democrat on the Senate intelligence committee has warned that YouTube’s powerful recommendation algorithm may be “optimising for outrageous, salacious and often fraudulent content” or susceptible to “manipulation by bad actors, including foreign intelligence entities”.

Senator Mark Warner, of Virginia, made the stark warning after an investigation by the Guardian found that the Google-owned video platform was systematically promoting divisive and conspiratorial videos that were damaging to Hillary Clinton’s campaign in the months leading up to the 2016 election.

“Companies like YouTube have immense power and influence in shaping the media and content that users see,” Warner said. “I’ve been increasingly concerned that the recommendation engine algorithms behind platforms like YouTube are, at best, intrinsically flawed in optimising for outrageous, salacious and often fraudulent content.”

He added: “At worst, they can be highly susceptible to gaming and manipulation by bad actors, including foreign intelligence entities.”

YouTube’s recommendation algorithm is a closely guarded formula that determines which videos are promoted in the “Up next” column beside the video player. It drives the bulk of traffic to many videos on YouTube, where over a billion hours of footage are watched each day.

However, critics have for months been warning that the complex recommendation algorithm has also been developing alarming biases or tendencies, pushing disturbing content directed at children or giving enormous oxygen to conspiracy theories about mass shootings.

The algorithm’s role in the 2016 election has, until now, largely gone unexplored.

The Guardian’s research was based on a previously unseen database of 8,000 videos recommended by the algorithm in the months leading up to the election. The database was collated at the time by Guillaume Chaslot, a former YouTube engineer who built a program to detect which videos the company recommends.

An analysis of the videos contained in the database suggests the algorithm was six times more likely to recommend videos that were damaging to Clinton than to Trump, and also tended to amplify wild conspiracy theories about the former secretary of state.

Videos that were given a huge boost by YouTube’s algorithm included dozens of clips that claimed Clinton had a mental breakdown or suffered from syphilis or Parkinson’s disease, and many others that fabricated the contents of WikiLeaks disclosures to make unfounded claims, accusing Clinton of involvement in murders or connecting her to satanic and paedophilic cults.

The videos in the database collated by Chaslot and shared with the Guardian were watched more than three billion times before the election. Many of the videos have since vanished from YouTube and the research prompted several experts to question whether the algorithm was manipulated or gamed by Russia.

The Alex Jones Channel, the broadcasting arm of the far-right conspiracy website InfoWars, was one of the most recommended channels in the database of videos.

In his statement, Warner added: “The [tech] platform companies have enormous influence over the news we see and the shape of our political discourse, and they have an obligation to act responsibly in wielding that power.”

Warner’s warning about potential foreign interference in YouTube’s recommendation algorithm is especially noteworthy given that Google repeatedly played down the extent of Russian involvement in its video platform during testimony to the Senate committee in December.

The committee’s investigation into Russian interference in the US presidential election is ongoing but has so far mostly focused on Facebook and Twitter.

The 8,000 YouTube-recommended videos were also analysed by Graphika, a commercial analytics firm that has been tracking political disinformation campaigns. It concluded many of the YouTube videos appeared to have been pushed by networks of Twitter sock puppets and bots controlled by pro-Trump digital consultants with “a presumably unsolicited assist” from Russia.

These and other techniques may have pushed YouTube’s recommendation algorithm into disseminating videos that were damaging to Clinton. Chaslot has said he is willing to cooperate with the Senate intelligence committee and share his database with investigators.

Correspondence made public just last week revealed that Warner had written to Google demanding more information about YouTube’s recommendation algorithm, which he warned could be manipulated by foreign actors.

The senator asked Google what it was doing to prevent a “malign incursion” of its video platform’s recommendation system. Google’s counsel, Kent Walker, offered few specifics in his written reply, but said YouTube had “a sophisticated spam and security breach detection system to identify anomalous behavior and malignant incursions”.

Google was initially critical of the Guardian’s research, saying it “strongly disagreed” with its methodology, data and conclusions. “It appears as if the Guardian is attempting to shoehorn research, data and their conclusions into a common narrative about the role of technology in last year’s election,” a company spokesperson said. “The reality of how our systems work, however, simply doesn’t support this premise.”

However, after that correspondence between the Senate intelligence committee and Google was made public last week, Google offered a new statement.

“We appreciate the Guardian’s work to shine a spotlight on this challenging issue,” the new statement said, pointing to changes made since the election to discourage algorithms from promoting problematic content. “We know there is more to do here and we’re looking forward to making more announcements in the months ahead.”

On Friday, after it was informed the Guardian would imminently publish its investigation, YouTube gave an interview to the Wall Street Journal in which it laid out plans to label state-sponsored content and tackle the proliferation of conspiracy theories on the platform.

The Journal reported the plan “was early in development, so it is unclear when it would take effect – or how the site would select conspiracy theories”.

 
