Leigh Alexander 

Online abuse: how women are fighting back

With the world only half paying attention to online threats, women are rising up to help each other, from strategy to support
  
  

The women leading the fight, clockwise from top: Jamia Wilson, Randi Lee Harper and Michelle Ferrier. Composite: The Guardian

For two years, Michelle Ferrier was the target of a campaign of intimidation and harassment. The only black female reporter on Florida’s Daytona Beach News-Journal, she began receiving a stream of abusive letters in 2007, threatening lynchings and a “race war” – all in the same handwriting, all apparently from the same, potentially dangerous, person.

But without a specific threat, police said, there was no chance of a criminal investigation. Afraid for her family, Ferrier left the paper and moved away.

Now an associate dean at Ohio University, Ferrier continues to notice the chilling effect that harassment and abuse can have on the voices of women and people of colour, particularly in the media. But today such campaigns of race- and gender-based hatred are launched online, often with frightening consequences for the freedom of speech and daily lives of reporters, activists and citizens.

On Facebook, Twitter, Reddit, YouTube and other sites you are never far from a violent, sexist, racist or homophobic meme, and this violence proliferates across the web. The writer Umair Haque recently described it as the “ceaseless flickering hum of low-level emotional violence”.

The world is only half paying attention to the threats so many face in their online lives, and many believe the tech industry is not doing enough. Now people targeted by online abuse are devising their own solutions, often backed by little more than public fundraising, grassroots effort and the lessons of their own, often traumatic, experiences.

It is common to see situations of online abuse dismissed – targets are supposedly just “too sensitive” and should “switch off the computer”. But these experiences go far beyond offensive words that might be ignored. Our digital footprints often reveal a great deal of personal information, including home addresses or bank details, and attackers expose and manipulate this information to create real-life consequences for their targets.

Take the case of Veerender Jubbal, who made regular online criticisms of video game culture. One of Jubbal’s selfies was picked out and photoshopped so it appeared he was wearing a bomb vest and holding a Qur’an. The day after the November 2015 terrorist attacks in Paris, this edited image was propagated across social media and presented as an image of one of the attackers.

Jubbal is Canadian, a Sikh, and has never been to Paris. But a network of unofficial Isis supporters used social media to spread the story, and the Spanish paper La Razon ran an image of Jubbal as a face of the attacks. In December someone posted a photo of a gun to Jubbal’s home address and police have now asked him to stay offline for his own safety while the case is investigated.

Anita Sarkeesian was targeted for her documentary series exploring how women are represented in video games. She suggests that in doing this type of work, she and others have been transformed into “folk demons” by a community of mostly young, mostly white men who fear progressivism in technology and politics alike – a community for whom the phrase “social justice” is pejorative.

Sarkeesian is targeted with grotesque hate speech on social media, has had public appearances disrupted with bomb threats and has been the subject of a crude video game about punching her in the face. The broader tactic, she explains, is to spread lies, conspiracies and misinformation to discredit her work, to paint her as an enemy of free speech or to spread the false belief she misuses the funds she raises from the public to create her videos. “These absurd characterisations, unquestioningly accepted as true, then serve as the justification for more extreme forms of harassment,” Sarkeesian says.

The law is not well suited to these situations. Often, police can’t investigate without a specifically worded threat, and even then targets can be expected to continue to engage with the harmful or frightening situation so that police can document it. Most law enforcement officers aren’t trained in the technical knowledge required to protect personal accounts or data, nor do they have a broad understanding of how online communities form and behave. And often, targets seek redress through the courts only to find the legal process requires them to be further silenced – while their abusers remain free to continue their smear campaigns online.

Independent game-maker Zoe Quinn recently dropped harassment charges against a vindictive ex-boyfriend who she says posted personal accusations about her online.

“One of the biggest myths is that your first response to being abused should be to go to the police and seek justice,” she wrote. “Leaving aside the fact that the police flat-out murder unarmed citizens for their race all the time, and that sex workers are likely to be incarcerated when reporting crime done to them … getting a restraining order against [him] was the least effective thing I could do in terms of getting him out of my life for good, and for protecting myself.”

Quinn founded the Crash Override network, a grassroots “crisis helpline, advocacy group and resource centre” for people experiencing abuse online. Thanks to private donors and financial support from Anita Sarkeesian’s Feminist Frequency, it is free for people seeking help.

Crash Override offers confidential support and advice on self-protection: how to behave if you are targeted, what to do if your home address is published, how to activate security tools on your accounts. It also offers advice on how to talk to your employer, as it’s common for abusers to bombard a target’s workplace with false accusations, hoax phone calls and other tactics designed to discredit them. Quinn says this resource is visited by over 6,000 people per month.

Ferrier is also fighting back with TrollBusters, a “rescue service for women journalists, bloggers and publishers”. Ferrier prototyped the site with four other women using a $3,000 grant for women in news entrepreneurship, and later received $35,000 from the Knight Foundation, which supports quality journalism and media innovation.

Twitter, Facebook and other social media tools do offer users the option to block, report or mute offenders, and sometimes people are banned or suspended. But this approach deals with abusers one at a time, and the target is still expected to do the labour of managing their own online experience even as they face threats.

Ferrier says that when targets see their online life drowned out by lies and smear messages, professional testimonials and endorsements can help support them. At the point of attack, TrollBusters sends supportive images and affirmations, like “I always take positive action in the face of fear”, into a targeted person’s social media feed. It offers what Ferrier calls a “hedge of protection”.

TrollBusters teams can also document and monitor an attack so that the target doesn’t have to sit online watching the abuse unfold. “Stories of or by people of colour tend to be targeted the most,” Ferrier says. “It never matters what the topic is – we got the worst comments on a story about an African American family whose house had just burned down.

“We’re talking about shutting down voices in media. Some journalists are not trained on these issues at all, and we don’t talk about the dangers of this job, both online and off,” she says. Media outlets currently don’t prepare reporters for the possibility that they might be targeted online, she says, nor do they sufficiently account for the need to protect sources, particularly on stories about race and gender.

Holly Brockwell is a UK-based journalist who founded the tech site Gadgette. She recently reported on an app called Stolen, which was built around the premise of “owning” Twitter users like trading cards, without asking them whether they wanted to opt in.

Brockwell received an unsolicited notice that she was “owned” by a stranger, and wrote a story pointing out that the app could encourage online abuse. The contrite developer volunteered to take Stolen down and revisit it, but even that civil exchange brought Brockwell a flood of hate; someone tweeted her an image of her own face with semen on it. Brockwell is used to attacks, and has a policy of publicly sharing the abusive messages and images she gets.

“I’m constantly told I’m making life worse for myself by calling out trolls and harassers,” she says. “I’m told I’m giving them what they want, nourishing their rotten souls with attention, giving their unhappy lives some meaning. Well, maybe I am. But that’s my decision.

“Watching how someone copes with a horrendous situation isn’t your opportunity to jump in and tell them how to do it better. I’ve tried every method for dealing with harassers: blocking, ignoring, responding, reporting, mocking, crying, everything. And do you know what worked? Nothing. So the only option left is the one that makes me feel better, that gives me back some control.”

Brockwell says that even with experience of harassment, it’s a challenge for any outlet, especially one led by women, to protect writers. “We had to remove our comment section very early on in the site’s existence after a series of horrific, threatening messages,” she says. But getting rid of comments doesn’t keep staff safe because “more motivated” harassers end up redoubling their efforts on things like “novel-length emails”.

Platforms like Twitter, Facebook or Reddit often talk a strong game about their concern for user safety, and occasionally they invite in those who have been working closely on abuse issues, as with Twitter’s recently announced Trust and Safety Council.

On Monday, the Guardian reported an initiative by Google, Facebook and Twitter to foster a global grassroots counter-speech movement. The tech firms are reaching out to women’s groups, NGOs and communities in Africa, America, India, Europe and the Middle East as the scale of online abuse continues to increase. But critics say the companies should be doing more to clean up abuse on their own sites.

The systems engineer Randi Lee Harper says she is sometimes frustrated by how little these companies seem to learn or change, and how rarely they offer to pay women in tech for the expertise they keep claiming to be “listening” to.

More than a year ago, Harper authored an auto-blocker for Twitter that lets users automatically, pre-emptively block accounts likely to be associated with harassment groups – accounts that follow more than one high-profile harasser, or that frequently use hashtags associated with abuse movements. This brought her into the crosshairs of the mob, where she remains; last year she was even “SWATted”: harassers call in false but serious threat reports on a target’s address to local police, and trained teams show up armed to the teeth, prepared to deal with a frightening and dangerous event – ultimately creating one instead.
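The blocking heuristic Harper describes is simple enough to sketch in code. Below is a minimal, hypothetical Python illustration of that logic only – it is not Harper’s actual tool, and the handles, hashtag list and threshold are invented for illustration; a real blocker would work against live follower data from Twitter’s API and would need to handle appeals and false positives.

```python
# A minimal, hypothetical sketch of the pre-emptive block-list heuristic
# described above. NOT Harper's actual tool: the handles, hashtags and
# threshold are invented for illustration, and a real blocker would pull
# live follower data from the Twitter API rather than in-memory sets.

HIGH_PROFILE_HARASSERS = {"harasser_a", "harasser_b", "harasser_c"}  # hypothetical handles
ABUSE_HASHTAGS = {"#examplehatetag"}                                 # hypothetical hashtags
FOLLOW_THRESHOLD = 2  # "more than one high-profile harasser"


def should_preblock(follows, recent_hashtags):
    """Flag an account if it follows two or more listed harassers,
    or has recently used a hashtag on the abuse list."""
    follows_harassers = len(set(follows) & HIGH_PROFILE_HARASSERS) >= FOLLOW_THRESHOLD
    uses_abuse_tags = any(tag.lower() in ABUSE_HASHTAGS for tag in recent_hashtags)
    return follows_harassers or uses_abuse_tags


# An account following two listed harassers is flagged; a bystander is not.
print(should_preblock({"harasser_a", "harasser_b", "friend"}, []))  # True
print(should_preblock({"friend"}, ["#photography"]))                # False
```

Framing the problem this way – as a small, testable piece of engineering – is precisely the approach Harper says has drawn other developers into the work.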

Since then, Harper has founded the Online Abuse Prevention Network, backed entirely by individual donors who have collectively raised around $4,700 per month via Patreon, though some of her donors have been targeted in turn. The funding allows her to work full-time on assisting targeted individuals, consulting with technology platforms, and helping targets secure their personal information at no charge.

Harper suggests solutions for fixing some of Twitter’s most obvious vulnerabilities – fixes she says would require very few engineering hours on the inside but that would offer significant and immediate benefits to users in terms of security, privacy and usability.

It’s a glacial task, trying to persuade privileged, male-orientated Silicon Valley culture that harassment is an issue, particularly when some threads of West Coast libertarianism react to moderation of offensive comments by claiming it is censorship. But Harper has found that presenting this issue as an engineering challenge has persuaded more professionals to come on board. “I’m trying to build an army of open source developers, and I work really hard to get engineers interested in these problems,” she says.

“My long-term goal is tracking the ways communities move. Harassment is complex; nobody wakes up in the morning saying ‘I’m going to be an asshole today.’ They believe they are the hero of their story, they’re going in with strong beliefs, and when these communities brush up against each other, there’s friction that can escalate to harassment.”

Companies such as Google are wary of appointing themselves arbiters of what is and isn’t harassment. Evaluating each case would be a monumental job, and the search giant seems leery of being drawn into the business of judging content rather than merely indexing it. Google does, however, already act to remove links to content piracy and “revenge porn” sites.

Nick Pickles, Twitter’s head of policy in the UK, says creating a technological solution to the problem of online abuse is extremely difficult. “No such magical algorithm exists and, if it did, it wouldn’t be that simple to implement because of the complexity of understanding sentiment and context.”

“Tech companies cannot simply delete misogyny from society,” he says. “The idea that abusive speech or behaviour didn’t exist before the internet is simply false. Intolerance, in all its forms, is a deeply rooted societal problem.”

A website plastered in trollish images and engaged in obvious slander might be subject to removal or a lower ranking in search results not because it can be determined to be “abusive”, but because it is poor-quality content. Making harassment into a “content quality” issue, or an “engineering solutions” issue, as Harper suggests, might well be the only way to get platform-holders, tech culture entrepreneurs and investors to join the victims in their work on this issue.

More research, and more funding for research, is a crucial next step, says Jamia Wilson, executive director of Women, Action and the Media (WAM). In May 2015 WAM published a significant research report on harassment, primarily aimed at illustrating the scope of the problem and the work that would be required to address it. Wilson also sits on the advisory board of Hollaback, an anti-harassment organisation which began by documenting street harassment and has now extended that study into the digital realm via Heartmob.

Wilson and her colleagues explored how harassment manifests differently depending on the target’s identity. Some black women experiment with changing their Twitter picture to that of a white man, or even a white woman, in order to demonstrate this difference. This is not an issue invented by the internet era, Wilson says. She refers to hate mail received by Martin Luther King Jr, the language of which is uncomfortably familiar.

And while Wilson and her team believe that data examining the intersection of race and gender online will provide a clearer picture of the problem, they say the media needs to do more than just focus on the highest profile cases.

When we don’t look at that intersectional data, Wilson says, we miss the harm done by focusing only on certain kinds of cases. When we emphasise that police need to understand the online landscape better or respond more adeptly to abuse cases, we need to acknowledge the additional complications people of colour may face in going to police in the first place. And in communities where racial tensions with law enforcement are high and black people are at greater risk of state violence, the “SWATting” tactic carries broader risks, she explains.

“When Apple decides it’s an ‘unnecessary burden’ to focus on diversifying its board, these staffing decisions have a critical impact on how these issues are prioritised and addressed within organisations,” adds Wilson.

“The tech industry has a role to play,” Ferrier agrees. “Our laws have given them the wiggle room to abdicate a lot of their responsibility back to individuals. But the platforms are coming to the conclusion that there’s an economic imperative to participate in this conversation. Other groups currently have to do cleanup on the bad design of their platforms, and the marketplace will walk if those solutions are not in place.”

“Sexual harassment in the workplace was the issue that radicalised my godmother, and this is connected,” says Wilson. “I think this all comes back to harassment and violence being about power and control. The internet means people can get [more] information about you and spread it more quickly, but I think it is based on the same core issues we’ve been dealing with for years.”

Those who have endured the worst of web culture say they should not have to create their own solutions without support in an industry flush with money: Facebook generated more than $5.8bn in revenue in the last quarter of 2015 and Twitter $710m, while Google’s parent Alphabet brought in $75bn over the full year.

The industry makes obeisances to the ideals of “diversity” and “representation”, but many at the sharp end of abuse argue that it has so far done little to help them or to learn from their experiences.

 
