Julia Carrie Wong in San Francisco 

‘They don’t think it’s important’: Ellen Pao on why Facebook can’t beat hate

The tech executive known for her work to detoxify Reddit says social media bosses know what’s right – they just need to act

Ellen Pao says the internet has only grown more toxic since she left Reddit five years ago. Photograph: Justin Sullivan/Getty Images

The United States’ reckoning with racism, set off by the police killing of George Floyd, has prompted a certain stiffening of the spine among Silicon Valley’s social media companies. Twitter hid one of Donald Trump’s tweets behind a label; YouTube banned some white nationalists; Twitch suspended Trump’s account for hate speech; and Reddit got rid of the largest pro-Trump message board, which had long been a fount of harassment. As for Facebook, well, Facebook maintained its practice of exempting Trump from many of its rules and was hit with an unprecedented advertiser boycott.

Finally, it seems, content moderation on social media is moving away from the free speech absolutism of the early internet and toward a more nuanced understanding of the rules necessary to enable free expression for the broadest possible range of people.

One of the first tech executives to take a stand for this more holistic view of protecting free expression on the web was Ellen Pao, who took the helm of Reddit in 2014, while she was also pursuing a groundbreaking gender discrimination suit against one of Silicon Valley’s most prominent venture capital firms.

Pao started the process of detoxifying Reddit with a ban on harassment and non-consensual intimate images – and was met with a user revolt and a harassment campaign. She resigned after 10 months, cautioning on her way out that “the trolls are winning” and that the behaviors social media platforms incentivized would have serious consequences for the offline world – a warning that now seems prescient.

Pao spoke to the Guardian about the lessons she learned, the mistakes she made, and how she knows that Facebook doesn’t actually take content moderation seriously.

This interview has been edited for length and clarity.

Looking back at your time at Reddit, what did you learn that might be applicable to other social platforms trying to get their houses in order?

It’s all about setting boundaries and being clear. Some people are just going to push the boundaries, no matter what. So it’s even more important to hold highly visible people to the rules because the exceptions are so visible and they drive the behavior of their followers.

I don’t understand why things are so hard for Facebook. They have very smart people! I just don’t think they’re paying any attention.

Take their internal rule book [for content moderators] that came out a couple of years ago. That was basically gobbledygook. It was almost unintelligible. Looking at that, you [conclude] that nobody at any high level actually looked at it, because if they had, they’d say: ‘No, this is not acceptable for the team to use.’

What mistakes did you make during your time at Reddit? What would you do differently?

We were slow to build relationships with the unpaid content moderators and that was a mistake. We should have been building closer ties and stronger communications earlier.

I think I would also probably re-examine the structure of the internal content moderation team. Some of them were hourly workers and weren’t really part of the overall team. It would have been better to have everybody treated the same, as employees, instead of having a second class of people who were kept separate.

At companies like Facebook and YouTube, moderators are not just in different offices or buildings but often in different countries, working for completely different companies. Do you think Facebook would have a better approach to dealing with hate speech if it directly employed its moderators?

It would make a huge difference. You outsource the stuff that’s not important. It’s the chicken and the egg, right? They don’t think it’s important, so they outsource it to this team, and that team doesn’t have any visibility or voice within the organization.

When a team is very distant, you don’t see it being harassed, and you don’t care about it that much because they’re not actually part of your team. It just becomes easier to dismiss problems.

Reddit’s head of content moderation was part of my staff, so she was in every meeting that we had. And when she implemented big changes, we would create a war room and I would be part of that. So, from top to bottom, we were all in a room together, driving the change.

You can pay people so much less if they’re in the Philippines than if they’re in Menlo Park. How do you make the business case to these companies to invest in content moderation instead of outsourcing it?

When a company appears to be OK with a content moderation team making mistakes or not being resourced to deal with problems, you end up with genocide in Myanmar, a changed election and all sorts of misinformation.

I would say, hey, put more money into it and you won’t have as many problems.

But it’s not clear that they care about that. They would invest more if they wanted to change the outcome. The hard part about the internet right now is that people are more interested in engagement than community, and that engagement grows with misinformation and toxicity.

Most of the CEOs of social media companies are white, and most of them are men. I believe you’re the only woman of color who has run a major platform. Do you think that informed the way you approached the job?

I think so. Pew Research has shown that the people on the internet who have faced the most harassment are women of color, and I received a lot of harassment when I was changing Reddit. Experiencing that harassment, and listening to people talk about their own, made it easier for me to understand it.

I also built a team that was very diverse. That helped a lot in being able to identify problems and solve them in ways that didn’t create more problems.

How would you advise Mark Zuckerberg, Jack Dorsey, Susan Wojcicki and other social media platform leaders in this moment?

Just do the right thing. Most of them know what the right thing to do is. Just have that conviction and push your way through.

I was asked to resign, so it’s not like I pushed for these changes, they were very successful and I ended up rewarded for it. No, I got pushed out. But I was able to remove revenge porn and unauthorized nude photos from Reddit, and all of the other platforms followed. We were the first of the major platforms to do that, and I’m very proud of that change. I have no regrets.

When you do the right thing you feel better about things and it’s very motivating.

One of the ideas for content moderation that you promoted was to focus on behavior, not ideas. Do you think it still makes sense to focus on behavior in a time when white nationalism and other extremist ideologies are gaining traction?

We are now seeing a direct tie between these ideas and behavior. People are running people over at these protests. People are idealizing mass shooters and copying their behavior. The line between behavior and ideas is super blurry, and I do think that you could go either way and it doesn’t make that much of a difference any more.

When you left Reddit, you wrote that some of what happened made you doubt humanity. I don’t necessarily question humanity, but I do question whether the internet can be saved from these tendencies towards hate, harassment and extremism. Do you feel any hope for the internet? Do you feel like we as humans can figure out how to interact with this technology in a way that doesn’t lead to harming other people?

I don’t know. Five years later, things are even worse. The level of toxicity is higher, the interactions are more negative. The people who are in the right are just as vitriolic as the people who are in the wrong. It’s hard to tell who has the better idea based on the types of conversations people are having. There’s no meeting in the middle any more. It’s really extremes, and it silences the people who have good ideas and want to engage in discussion.

In the beginning, we thought anonymity was part of the problem – that people who could hide behind their screens without being identified were willing to say more extreme things than people who were named. But now you see people don’t care about being named. They’re willing to go to a public white supremacist rally unmasked, with their full identity showing. They’re proud of it. It doesn’t make me believe more in humanity.

I do think the next generation is more oriented around inclusion. I think they have a stronger moral compass. They’re more oriented around values. And I hope that makes a difference. When you see some of these protest rallies being organized by high school students and college students, when you see what those kids from Parkland have done around gun control – that is what gives me faith in a subset of humanity.

I don’t know if they can fix the internet, but I do feel like there will be some better, more ethical leaders in the future.

If you look at where we are now, even if you make a few mistakes, is it possible to be in a worse place than we are now? I mean, there’s no place to fail to, right?

That makes me want to knock on wood.

I know – that’s what I thought five years ago, too. But we’ve got to move out of where we are now. You need to take some risks and do some work and be willing to make mistakes, because where we are now and where we’re headed is not sustainable.

