Marietje Schaake 

Algorithms have become so powerful we need a robust, Europe-wide response

Social media platforms claim they’ll improve their behaviour. But blind trust is no longer sustainable, says Dutch MEP Marietje Schaake
  
  

‘Whenever the nefarious consequences of their profit models are exposed, tech companies essentially reply, “don’t regulate us, we’ll improve our behaviour”.’ Photograph: franckreporter/Getty Images

After a debate in the European parliament, I usually upload my intervention to YouTube, as well as to my own website. It creates transparency and an online archive of my work as an MEP. About a year and a half ago, I was told a video I’d uploaded had been taken down. This was puzzling. There can hardly be a clearer example of political speech than speaking out in parliament.

Could the word “torture”, contained in the title of that particular debate (we were discussing how trade rules could be updated to end global trade in torture and death penalty equipment), have caused it to be flagged by an algorithm? I tweeted about the danger of having content taken down automatically. Promptly, Google called me. I’m not sure this is the treatment an average internet user gets. In fact, who do you call when you have a problem with a tech platform?

A few hours later my video was back online, and Google went on to apologise on Twitter. The company claimed that my video had been flagged as spam. That explanation is convenient, because most users are happy to have spam filtered out. Still, I had no way of checking it. I had never posted spam before, and my account is, if anything, boring and predictable, so I wasn’t convinced. Having received no reply about whether a person or an algorithm had taken the video down, or why, I was left with many unanswered questions.

This small episode said a lot about the challenges of assessing how algorithmic choices can affect online content. We’ve seen how, in the wake of discussions about “fake news”, Facebook, Google and other tech platforms often resort to promises that they’ll tweak their algorithms. Now the Cambridge Analytica revelations have raised yet more acute concerns.

Whenever the nefarious consequences of their profit models are exposed, tech companies essentially reply, “don’t regulate us, we’ll improve our behaviour”. But self-regulation is simply not working well enough, especially when we have no way of knowing whether tweaking algorithms makes matters better or worse.

Opaque algorithms in effect challenge the checks and balances essential for liberal democracies and market economies to function. As the EU builds a digital single market, it needs to ensure that market is anchored in democratic principles. Yet the software code that determines which link shows up first, second, third and so on remains protected by intellectual property rights as a “trade secret”. It is treated like the Coca-Cola recipe. Companies argue that if their algorithms were made public, they’d lose their competitive edge.

It’s time we looked into how the curating of information, as it’s presented to users, can affect the rule of law online across Europe. Think about how, whenever fair economic or industrial competition is at stake, anti-trust authorities can demand that companies provide sensitive information. That information is then investigated – it isn’t released to the public. Why not apply the same logic to online activity?

We know much is at stake. Think about how algorithms can add to discrimination rather than combat it: is it really as easy for a black person to rent or post a room on Airbnb as it is for a white person? How do we prevent bogus conspiracy theories going viral? Is political content, whatever its leanings, treated equally on Facebook? Are users informed about who pays for political ads? Coca-Cola’s recipe may be a secret, but it can be tested for compliance with health requirements. And if hundreds of millions of people suddenly drank nothing but soft drinks, surely public authorities would start raising concerns and work out policies to address them.

None of this means we need new EU regulations or new regulators to oversee technology platforms. The notion that laws should apply online as they do offline generally holds. But we do have to make sure there is accountability beyond mere promises of better behaviour.

For oversight to be possible, regulators need to be able to assess the workings of algorithms. This can be done in a confidential manner, by empowering telecommunications and competition regulators. The code itself wouldn’t need to be published, but its workings could be scrutinised. The impact of algorithms could be tested through a form of sampling, to assess what they are designed to do and whether they promote some kinds of content while downplaying others.

There was a time when social media was heralded as the “online public square”. But we now know that most social media companies operate, quite literally, like a marketplace: whoever pays the most gets the most reach. The advertisement-selling mechanisms that boost profits were not built to foster democracy, and it was naive to think they were.

Facebook’s work with Cambridge Analytica has served as another wake-up call. Platforms try to reassure us. “We are on to this,” they say. But blind trust is no longer sustainable. We need to know the extent to which profit-driven social media platforms truly respect the principles that protect fair competition, privacy rights, access to information and freedom of expression.

What the Cambridge Analytica scandal has taught us is that there is still much we don’t know. For a long time, European efforts to introduce regulation of the digital economy were ridiculed, especially in Silicon Valley – and often in Washington too.

That taboo now seems to be broken. And the EU has not shied away from setting norms in the digital economy, from defending net neutrality to strengthening data protection. Of course, we need to make sure that any attempt to rein in the anti-democratic fallout of tech companies’ actions does nothing to stifle free speech or innovation.

But by now we’re well aware that online platforms aren’t necessarily good for democracy. They weren’t designed to be. The best we can do now in Europe is create mechanisms for oversight and accountability. We cannot simply trust – we need to verify.

• Marietje Schaake is a Dutch MEP and a member of the Democrats 66 political party

 
