Ellen Broad 

Are Cambridge Analytica’s insights even that insightful?

Facebook thinks I like snow and a small dog I’ve never heard of. What else has the data got wrong?
  
  

‘So here’s what worries me, even more than who has access to our Facebook data: I’m worried that the organisations who do have it don’t understand what it means.’ Photograph: Facebook

I want to be worried that my Facebook data is being used to wage psychological warfare. I want to be terrified that businesses like Cambridge Analytica are able to build a sophisticated psychological profile of me based on little more than my Facebook likes. But right now Facebook thinks my interests include snow and miniature pinschers and pre-school. After googling “miniature pinscher”, something tells me my likes aren’t quite the window into my psyche they’re cracked up to be.

Revelations that Cambridge Analytica harvested 50 million Facebook profiles – and used them to develop “psychographic” profiling tools to help swing elections around the world – have sparked widespread concern about how Facebook handles our data. A former Facebook manager responsible for investigating data breaches says covert data harvesting has been a known issue inside the company for years. Christopher Wylie, the whistleblower who helped the Observer and the Guardian expose Cambridge Analytica’s Facebook data access, said the company “broke” Facebook with political ads supposedly tuned to individual psychological profiles.

Facebook’s own chequered history of pushing the boundaries of online privacy (while conducting psychological experiments on users without their consent) means this is a problem entirely of its own making. Facebook has built its advertising business on offering targeted, “accurate” insights into the interests and motivations of its 2 billion users. Meanwhile, it has only reluctantly given users more control over their privacy and has resisted calls to make its own advertising operations more transparent.

This makes it hard for people to understand who has access to their Facebook profiles and what insights they’re trying to come up with. Or figure out whether these insights are “accurate” at all.

Cambridge Analytica is far from the only third party with access to an abundance of Facebook user data. They’re one of thousands of businesses around the world offering a mix of data mining, behavioural profiling and predictive analytics based on big pools of personal data. They trade on the idea that bigger data means better insights, and typically advertise themselves as delivering “highly accurate”, deeply revealing results. Bigger data definitely does not always mean better insights. Humans still have to make assumptions about what the data tells them. Humans can be really bad at that.

Cambridge Analytica’s psychographic profiling, for example, is based on research arguing that Facebook likes reveal genuine insights into things like a person’s intelligence. But Facebook hasn’t engineered likes to be “genuine”. Likes are visible and quantifiable, playing on our desire for social approval and support. Likes from Facebook friends shape the kinds of posts that show up in our news feeds. Facebook is making its own correlations, based not exactly on what we like, but on what’s popular among our friends.

So here’s what worries me, even more than who has access to our Facebook data: I’m worried that the organisations who do have it don’t understand what it means.

These days getting access to data is reasonably cheap and easy. Making bad predictions is also easy. Creating accurate, targeted predictions about what an individual is like and their “inner demons”, in a context where Facebook is constantly acting on that data too, remains hard. It’s particularly hard when an organisation’s whole business model is based on keeping those predictions – and how they’re made – secret from the subject of the prediction. We’ve slid so easily into a world of secretive inference that we’ve forgotten transparency and trust can sometimes get better, more accurate results.

Have you ever opened your ad settings on Facebook? Facebook does show you, in a vague and fairly unhelpful way, what its algorithms have guessed you’re interested in based on your likes and clicks. This is where I found out that Facebook has guessed my interests include snow and pre-school and a tiny dog I’ve never heard of.

Reading between the lines, it looks like Facebook’s algorithms struggle to deal with the context they’ve engineered. When I like a friend’s regular posts about their art practice involving snow? That doesn’t mean I’m interested in any content about snow. Some of my “interests” are based on the times when I seem to have clicked on advertisements that were actually disguised as vaguely related news stories. At best, this is circular logic; at worst, it works to reinforce extreme views and worsens political divides.

I don’t care that Facebook struggles to figure out what I like so it can advertise more stuff to me. But I do care, deeply, about the same kind of flimsy data analysis being used to make more serious predictions – like the job I should do or my suitability for a home loan – without any scrutiny.

At the moment, too much of how organisations make these “highly accurate” predictions about us is hidden behind smoke and mirrors. Most of the time, a business selling “accuracy” doesn’t have to say what data they’re using for their predictions, or what they think that data tells them, or whether they’ve thought about the limitations. They don’t have to prove that their systems are accurate, or explain how they test for accuracy at all.

The winds are shifting. In the European Union, the General Data Protection Regulation, due to come into force in May, sets an expectation that people should be able to understand how predictions and decisions are made about them. In the US, the former Silicon Valley triumph Theranos and its CEO Elizabeth Holmes have just been charged with fraud by the SEC for making baseless claims about what its blood-testing technology could do. Australia is examining a new consumer right to data.

Maybe Cambridge Analytica’s dirty tricks and slick data analytics do swing elections. But maybe, just maybe, they don’t. Maybe it’s all smoke and mirrors. How can we tell, if they don’t have to show us?

  • Ellen Broad is an independent consultant and associate with the Open Data Institute. Her first book, Made by Humans: The AI Condition, is being published by Melbourne University Publishing in August 2018
 
