The gigantically popular messaging app WhatsApp has announced that from 25 May it is raising the minimum age for users to 16 – but only in the European Union. It’s a remarkable ambition, one that will amaze every government and online service around the world if it actually succeeds. You know “on the internet, nobody knows you’re a dog”? Nobody knows how old (or young) a dog you are, either.
It’s a safe bet that the Facebook-owned company, which itself has more than 1.3 billion monthly users worldwide, won’t be able to stop European under-16s using its service. An Ofcom report in 2017 found that 83% of British children aged between 12 and 15 have their own smartphone (it’s 39% for those aged between eight and 11), and 74% have a social media profile (23% for the eight-11s, who aren’t officially allowed on any of the major social networks until 13). That’s a lot of children in the UK alone who are surely already using WhatsApp.
If all they have to do is tick a box saying they’re over 16, and WhatsApp does no further check, they will. After all, they already do so on Facebook, which gathers data even about people who aren’t on its network. Cornershop retailers, risking prosecution for selling cigarettes to under-18s, or pub operators who can’t serve alcohol to under-18s, or cinema owners showing 18-certificate films, must look at social networks with envy. If only they could just ask customers to tick a box to verify their age. No need to check: the customer said so!
The obvious riposte is that alcohol and tobacco are definitely bad for children, and 18-certificate films are shocking – well, apart from the, um, arty ones. But we’re starting to think that social media isn’t always good for children either. That Ofcom report noted that 45% of 12-15s had seen hateful content online in the past year, and 10% had seen something “online or on their phone of a sexual nature that had made them feel uncomfortable”. Jeremy Hunt, who wrote angrily to the big tech companies this week complaining that their lack of safeguards on age verification “is not good enough”, must be feeling pleased.
WhatsApp didn’t explain in its blogpost how it will enforce its policy. No social network – in fact, no ad-supported service – is keen on truly finding out the age of its users, because then it might have to turn them away, which means less chance to show them adverts, collect data about them, and tie them into longer-term use. Teenagers are the ideal customer: they have long lives ahead of them, can be brand-loyal, and are very susceptible to peer pressure to join. And online age verification really is difficult. Phone companies such as O2 demand photo ID at a shop (effective, but no use for online-only services) or a credit card (easy to defeat by stealing details from parents). Quizzes can be beaten by a search. Perhaps playing the Mosquito alarm through the speakers constantly? Or maybe a question about a new meme: get the answer right and you’re too young.
Nobody knows how to do it, as the government’s 2016 consultation on age verification demonstrated. Plans to enforce it for sex sites starting this month were quietly buried in March. Though the government insists the scheme will be introduced this year, don’t hold your breath. The subtler, but bigger, problem is that the big social media companies (which include YouTube) are flattening out the teenage years. To Facebook, YouTube, Google and so on, you’re either over 13 or over 18; no other user ages exist.
Yet that period between 13 and 18 is one of enormous intellectual and social development, and you wouldn’t throw such disparate ages together in a youth club and just leave them to it. But in their indifferent rush to corral the world, social networks do, with the results that Ofcom noted: young children coming online are confronted by much older ones who already own the space.
So is the answer for parents to spend more time monitoring their children? As the parent of two teenagers and one ex-teen, I’d have to say it’s almost impossible, unless you’re prepared to be the most unpopular (and nosy) parent in the street. And this is a new problem; previous generations of parents didn’t worry about how much of the world’s population their kids were talking to on the phone at night, or whether they were watching a video banned in seven countries while sitting in the same room as their Gogglebox-watching parents. This is terra incognita in so many ways.
The panacea, as unworkable as panaceas always are, would be for the tech companies to truly know people’s ages, at least up to 18, so they could properly segment content in an age-appropriate way. But at a time when we’re suspicious of what those companies gather about us, are we willing to contemplate that? Would we let them see a child’s birth certificate, or pass them our passport or driving licence so we could vouch for someone’s age? I’m dubious that WhatsApp is about to do this, but I’m willing to be surprised. Even so, I expect that if it’s criminal to be an under-16 user of WhatsApp in the EU, we’re about to mint a lot of new criminals after 25 May.
- Charles Arthur is the author of Cyber Wars: Hacks That Shocked The Business World, published by Kogan Page on 3 May