Katharine Viner 

How do we make the Guardian a better place for conversation?

Online abuse pollutes the water in which we all swim. As the Guardian’s first female editor, I think it is important that we tackle it
  
  

Should commenting be for members only, or subject to stricter moderation? Photograph: Christian Sinibaldi/The Guardian

Last year, a few weeks before I started as the new editor-in-chief of the Guardian, I read a review in the New York Times of Jon Ronson’s So You’ve Been Publicly Shamed. The book looks at the emergence of public humiliations on social media, and the review ended by saying that “the actual problem is that none of the men running those bazillion-dollar internet companies can think of one single thing to do about all the men who send women death threats”. Since I was about to become the first woman to run the Guardian (not, sad to say, a bazillion-dollar internet company), I decided that I had a responsibility to try to do something about it.

That’s why, over the past two weeks, the Guardian has published a series of articles looking at online abuse, with more to follow in the coming months. You might have read our interview with Monica Lewinsky, in which she described the trauma of being subjected to what could be called the first great internet shaming, and how she still has to weigh the consequences of talking about her past – whether, by misspeaking, she could trigger a whole new round of abuse.

Lewinsky’s experience has prompted her to tackle online harassment head on: she is now a respected anti-bullying advocate. But as we’ve considered online abuse in all its forms – the rape and death threats, the sexist, racist and ad hominem attacks, the widespread lack of empathy – it has become clear that some of the institutions that most need to follow Lewinsky’s lead are not doing so: police and tech companies are failing to keep on top of the problem, and victims are being abandoned to their abusers.

We’ve called our series the Web We Want. It’s an attempt to imagine what the digital world could and should be: a public space that reflects our humanity, our civility and who we want to be. It asks big questions of all of us: as platform providers, as users and readers, as people who write things online that we would never say in real life. It also asks big questions of the Guardian.

Online abuse has been a problem since the earliest days of the web. Since the Guardian opened up its online articles to comments by readers in 2006, tens of thousands of conversations have taken place below the line between readers and journalists, and between readers and other readers. Many of these conversations have been excellent: thoughtful, engaged and rewarding. But some subjects – latterly Islam, refugees and immigration – have become magnets for racism and hate speech, while others – feminism, domestic violence and rape – can attract highly misogynistic responses.

Across the internet, we have now reached a tipping point. For women, the abuse is often violent and sexualised, with direct threats to rape and mutilate. For non-white people, the abuse is often racist; for Jews, it is antisemitic; for Muslims, it is Islamophobic. To some extent, everyone online is affected. To the extent that our lives are conducted online, this is the water in which we all swim: it’s horribly polluted and it’s making a lot of us sick.

The focus is beginning to shift towards what we can do to reduce abuse and who should be taking responsibility for ensuring that this happens. The Guardian’s political editor Anushka Asthana this week reported that the Labour MP Yvette Cooper has called on police and prosecutors to unmask the true extent of online harassment, which she says is “stifling debate and ruining lives”.

Sandra Laville, a senior reporter in our London newsroom, helped launch the Web We Want with her piece about attempts by Facebook, Google and Twitter to fight online abuse by fostering a “counter-speech” movement. Many people, including some of the communities that these tech companies are attempting to empower, have questioned whether the companies are dodging their responsibilities and providing too little support to victims of abuse.

I would argue that the big digital players still need to bear more of the burden of the social costs of what they do (they have the deep pockets to pay for that). But news organisations aren’t blameless. With this series, we are acknowledging that the Guardian has a problem with abuse and harassment. That is why we took the very unusual step of publishing research on our moderation data – we believe we are the first organisation in the media or technology industries to do so – and engaged readers in a discussion about how to have better conversations.

The editor of the Web We Want, Becky Gardiner, and Mahana Mansfield, the Guardian’s senior data scientist, examined the 70m comments left on the Guardian since 1999, particularly those comments blocked by our moderators for abuse or derailing the conversation, and reported on what they found. The stark results offer proof of what many have long suspected: of the 10 regular writers whose articles have had the most comments blocked, eight are women (four white and four non-white, one Muslim and one Jewish) and two are black men. Three of the 10 most abused writers are gay.

The response to this work has been fantastic – some commentators called it historic – although we also heard constructive criticism about how we communicate our moderation policy to readers, as well as about the role of headlines in steering conversations. We hope that others will follow our lead in looking at their own comments, because effective solutions will be hard to find without data and dialogue. We are now exploring the possibility of sharing our data with academics working in this area, and hope others will do the same.

The Web We Want is not just about identifying the problem: it is also about trying to work out what can be done to make things better. Extreme abuse is rare on the Guardian thanks to our highly skilled moderators, whose work ensures that comments abide by the community standards that are there to keep conversation respectful and constructive. But we need to maintain a supportive working environment for Guardian moderators and writers, because even low-level abuse can have a chilling effect on journalists and on participation in the comments.

As editor, I think we need to act more decisively on what kind of material appears on the Guardian. Those who argue that this is an affront to freedom of speech miss the point. That freedom counts for little if it is used to silence others. When women and minorities don’t feel able to speak their mind for fear of insult, threat or humiliation, no such freedom exists.

In a video we made for the series, the Guardian columnist Jessica Valenti described it this way: “Imagine going to work every day and walking through a gauntlet of 100 people saying ‘You’re stupid’, ‘You’re terrible’, ‘You suck’, ‘I can’t believe you get paid for this’. It’s a terrible way to go to work.”

Over the next few months, the Guardian will continue to explore, with our readers, the questions and challenges raised by these issues. Should we look at stricter moderation, or at more ways of rewarding positive contributions to our site? Should we limit the number of comments we host, or make them a privilege of membership? At a time when the business model of journalism is under challenge, moderation is not cheap.

In her book Hate Crimes in Cyberspace, Danielle Keats Citron compares contemporary attitudes to online abuse with attitudes to workplace sexual harassment in the 1970s. Then, it was normal to have your bottom pinched at work. It isn’t any more. Today, all kinds of bullying and aggression dominate much online conversation. Sadly, we can’t eliminate bigotry. But that doesn’t mean we have to tolerate it, much less give it a platform on which to thrive.

 
