Mark Zuckerberg famously boasted that Facebook had a saying: “Move fast and break things”. His product has destroyed not just industries’ business models but also social mores and expectations about the reliability of information, and in the global south it has upended people’s lives. The technology writer Kara Swisher described him as “not just a technologist; he’s a social engineer”.
Meta’s Tuesday announcement shows that it still moves at speed. Having previously been pressured into insufficient improvements in moderation, it is abruptly scrapping factcheckers on its platforms, including Facebook and Instagram, starting in the US, and loosening other content restrictions. This purportedly addresses overreach by moderators. Other controls remain. But the signal is clear. Its new guidelines will allow users to call others mentally ill on the basis of their sexuality or gender identity. Meta will also “recommend more political content based on ... personalized signals” – which sounds a lot like buttressing echo chambers.
But Meta can no longer claim to act in haste and repent at leisure. It is retreating from harm prevention knowing what will result: Mr Zuckerberg acknowledged that “we’re going to catch less bad stuff”. An internal memo leaked in 2021 acknowledged that core parts of its platform appeared hardwired for spreading misinformation and divisive content. And 3.3 billion people now access one of its core products daily.
Mr Zuckerberg is ingratiating himself with Donald Trump – who complains of bias at Meta and has threatened the billionaire – while cutting costs by outsourcing moderation to users. Mr Trump will soon have the power to kill the federal antitrust case against Meta, ease regulatory pressure on big tech and offer a supportive environment for AI.
Mr Zuckerberg also pledged to work with him “to push back on governments around the world. They’re going after American companies and pushing to censor more”. To put it another way: democratically elected leaders are seeking to hold powerful businesses accountable and to protect their citizens and societies. The UK’s Online Safety Act and the EU’s Digital Services Act are imperfect but essential tools that must be used to their full effect.
Advertisers do not like their brands appearing next to hateful content. But Meta’s dominance means that few want to quit its platforms. In the US, accountability will probably fall to states, such as the more than 30 that are suing Meta on the grounds that Instagram’s addictive design has contributed to a youth mental health crisis.
There are even greater concerns abroad. Former Meta employee Frances Haugen said that she became a whistleblower “to save the lives of people, especially in the global south, who I think are being endangered by Facebook’s prioritisation of profits over people”.
In Myanmar, UN investigators found that hate speech spread on Facebook fuelled pogroms which killed tens of thousands of Rohingya Muslims. The platform acknowledged that it had been used to incite offline violence. In India, Meta has been accused of failing to stop the spread of Islamophobic hate speech, calls to violence and anti-Muslim conspiracy theories on its platforms.
As the Nobel peace laureate Maria Ressa warned, this is about safety. These changes will be damaging everywhere. But they threaten the greatest harm in countries where Meta has extraordinary market dominance, where governments themselves foment hate speech or disinformation, and where such material has already spread to devastating and sometimes deadly effect.