Twitter has denied using secret anti-spam tools to help an MP suffering anti-semitic abuse on the social network, after members of a white supremacist website complained that they were unable to send racial slurs.
Luciana Berger, the Labour MP for Liverpool Wavertree, has been the target of a long-running campaign of anti-semitic abuse, which led to the imprisonment of 21-year-old Garron Helm in October after he was found guilty of sending an offensive, indecent or obscene message.
Ten days into the campaign, Berger got in touch with Twitter, asking the service for help dealing with the waves of harassment. The firm promised to support the MP, and shortly after, the abuse died down.
But as The Verge reported, that’s not because Twitter was following its usual protocol of suspending accounts or deleting abusive tweets. Instead, it seemed to be preventing them from being sent in the first place.
“For a while, at least, Berger didn’t receive any tweets containing anti-Semitic slurs, including relatively innocuous words like ‘rat’,” reported Sarah Jeong. “If an account attempted to @-mention her in a tweet containing certain slurs, it would receive an error message, and the tweet would not be allowed to send. Frustrated by their inability to tweet at Berger, the harassers began to find novel ways to defeat the filter, like using dashes between the letters of slurs, or pictures, to evade the text filters.”
The Electronic Frontier Foundation said it was concerned by the apparent pre-emptive censorship, telling Jeong that “even white supremacists are entitled to free speech when it’s not in violation of the terms of service”.
Sources close to Berger have confirmed that the abuse dropped away noticeably in the weeks after the meeting with Twitter, at which the social network promised that it would permanently suspend users who were abusive or who serially created accounts.
But while Twitter cited a number of projects it had started, including mandatory phone number verification for accounts that look like they might be abusive, and a larger safety team responding rapidly to user reports, none of those measures included the system Jeong described.
The company says it has no new anti-abuse system in play, and suggests that whatever prevented Berger’s harassers from sending the abusive tweets may have been an anti-spam tool overstepping its bounds.
“We regularly refine and review our spam tools to identify serial accounts and reduce targeted abuse,” a Twitter spokesman told the Guardian. “Individual users and co-ordinated campaigns sometimes report abusive content as spam and accounts may be flagged mistakenly in those situations.”
The situation seems to have reverted to normal, and tweets sent to Berger using abusive language no longer trip the filter.
But sources close to the MP were also concerned at how readily the apparent filtering was seized on as a free speech issue. They told the Guardian that they felt strongly that the co-ordinated attacks, stemming from white-supremacist websites that issued daily updates on how to circumvent any attempts to block them, were a very serious matter that merited a serious response.