Chris Stokel-Walker 

Social media bans for teens: Australia has passed one, should other countries follow suit?

A block for under-16s would soothe many parents’ concerns, but experts are divided over the evidence in support of it, and how it might work in practice
  
  

[Illustration: smartphones with childishly painted portraits on each screen. Alamy/Getty]

Social media has transformed our relationships with our friends and family, brought unfiltered news from around the world to our handsets and introduced us to an unending supply of cat memes. Some of this has been positive, some negative and, for much of it, the jury is still out. But as the first generation of social media natives start to have children of their own, there is increasing unease about tech’s impact on children. These concerns prompted Australia to pass legislation last November banning access to social media for under-16s.

“So many things are happening at once,” says Sonia Livingstone, professor of social psychology at the London School of Economics and a specialist in children and social media. “We clearly have a silent problem of parents at home struggling with social media and feeling unsupported. We have a small number of parents whose children have come to serious harm, or died, who have become mobilised. We have politicians worried about complaints in their constituencies and also looking for a good news story in gloomy times. And we have big tech outrunning regulation in all directions.” It is a perfect storm, she says, into which discussion of an outright ban on social media for under-16s has come as a supposed saviour.

The UK government has twisted itself into a tortuous position: Peter Kyle, the technology secretary, said last November that a ban was “on the table”, before then telling the Guardian it was “not on the cards” for now. In January, he said: “I don’t have any plans to ban social media for under-16s.”

While the UK government seems to be deciding that a ban is not for them, some big names have signalled their support. Microsoft co-founder Bill Gates recently said of Australia’s ban: “There’s a good chance that that’s a smart thing.” The UK’s head of counter-terrorism policing said a ban “warrants serious attention”. Chris Philp, the shadow home secretary, has said he is “broadly in favour” of a ban, though he suggested the age limit could be lower than 16.

“There’s a huge amount of conflict and uncertainty in the world,” says Livingstone. “And social media seems the fixable problem.” But is banning access the answer?

How might social media bans work?
Under the new Australian law, which comes into force in December this year, social media networks will have to take “reasonable steps” to prevent under-16s from having an account.

What this means in practice is not fully fleshed out, but an explanatory memorandum suggested that, at a minimum, platforms should put in place “age assurance” technology, which might include facial recognition and age estimation. Such technology is often offered as the solution to identifying someone’s age, but it produces an estimate – and can be wrong. The average gap between what one of these systems believes someone’s age to be and their actual age can vary between one and three years. That may be a small margin of error for a 45-year-old, but if you are an 18-year-old student and the computer says you are 15, so you can’t join social media with your university friends, that is frustrating.
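To see why that margin matters, here is a toy sketch in Python. Every name, number and rule below is an illustrative assumption, not any vendor’s actual system: it simply shows how an estimator that reads an 18-year-old as three years younger would wrongly block them.

```python
# Toy sketch of an age-assurance gate. The estimator, error margin
# and cut-off are illustrative assumptions, not a real system.

MIN_AGE = 16          # Australia's cut-off for holding an account
TYPICAL_ERROR = 3.0   # years; reported average gaps run from one to three

def estimate_age_from_face(image) -> float:
    """Stand-in for a facial age-estimation model: here we simply
    simulate an 18-year-old being read as three years younger."""
    true_age = 18.0
    return true_age - TYPICAL_ERROR

def may_sign_up(estimated_age: float) -> bool:
    # A naive gate compares the estimate to the cut-off, so anyone
    # under-estimated past the threshold is wrongly blocked.
    return estimated_age >= MIN_AGE

est = estimate_age_from_face(image=None)
print(f"estimated age: {est:.0f} ->", "allowed" if may_sign_up(est) else "blocked")
# prints: estimated age: 15 -> blocked, despite the user being 18
```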

Would a ban actually work?
A recent More In Common poll found that three-quarters of the public would support a ban on social media for under-16s, up from the current minimum age of 13 at which children can legally access platforms. Many will be parents at their wit’s end as they struggle to keep their children safe online. “Social media has no place for children under 16,” says Vicky Borman, a mother of three children, one of whom is under the age of 16. “It exposes them to a myriad of unacceptable content, including pornography, nudity, bullying and harassment, that they simply aren’t equipped to handle.”

Typical of many parents, Borman is in favour of a ban. “It’s time for us to reclaim childhood for our kids, ensuring they have the opportunity to create lasting memories away from screens,” she says.

Yet even those pushing most publicly for something to be done do not believe an outright ban on children accessing social media is the answer. Andy Burrows is the CEO of the Molly Rose Foundation, set up by the family of Molly Russell, the 14-year-old who took her own life after being bombarded with negativity on social media. “The reality is that if we pull up the drawbridge on social media platforms, those bad actors won’t disappear,” he says. “They will simply migrate to gaming and messaging services, and the risk would be that the volume of harm on those platforms then becomes unmanageable.”

Sonia Livingstone also has doubts. “A ban makes a great headline and seems straightforward, but it isn’t,” she says. “A ban is meant to be a ban on technology companies making problematic products available to children, and it very quickly becomes a ban on children accessing technology.”

What protections exist at the moment, and how effective are they judged to be?
There are currently protections for child users of social media – many put in place and managed by the platforms themselves – for example, the requirement that users be over 13. “But they’re not very transparent or stable,” says Livingstone. Most companies tag accounts they suspect are run by children younger than 13 and apply child-safety features to them, such as limits on who can message them or the type of content they can encounter. But it is not clear these features work, says Livingstone, who regularly talks to children as part of her research: they say they still receive message requests from adult users.
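As a rough illustration of the kind of rule Livingstone describes, here is a minimal Python sketch, with hypothetical field names and logic rather than any platform’s actual code: a tagged account only accepts message requests from existing contacts.

```python
# Hypothetical sketch of a child-safety rule: tag accounts suspected
# of being under 13 and restrict who can message them. Field names
# and logic are illustrative assumptions, not any platform's code.

from dataclasses import dataclass, field

@dataclass
class Account:
    user_id: str
    suspected_child: bool = False          # e.g. flagged by behavioural signals
    contacts: set = field(default_factory=set)

def can_message(sender: Account, recipient: Account) -> bool:
    """Block a message request if the recipient is a tagged child
    account and the sender is not an existing contact."""
    if recipient.suspected_child and sender.user_id not in recipient.contacts:
        return False
    return True

child = Account("kid01", suspected_child=True, contacts={"friend07"})
stranger = Account("adult99")
friend = Account("friend07")

print(can_message(stranger, child))  # False: blocked by the child tag
print(can_message(friend, child))    # True: existing contact
```

The children Livingstone speaks to suggest that, in practice, filters like this either are not applied consistently or are easy to get around.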

“There are some protections, but absolutely not enough,” says Livingstone. “And until the [UK’s] Online Safety Act and the [EU’s] Digital Services Act kick in, we’re a long way from getting those algorithmic protections people really want.” (While the laws have been passed, enforcement by regulators, such as Ofcom in the UK, is still months away.) Burrows agrees on the UK front. “The prime minister should be urgently prioritising, strengthening and fixing the Online Safety Act, so it works much more effectively for children,” he says.

What is the evidence that under-16 social media use is harmful?
If you read US social scientist Jonathan Haidt’s book The Anxious Generation – which has been on the New York Times bestseller list for 46 weeks – there is a lot of evidence that it is harmful. The book is a compelling manifesto warning about the polluting impact of social media and tech on our teenagers’ minds.

Yet one statistician argues that a good number of the studies Haidt relies on are misrepresented, and that some even contradict his reasoning. Haidt admits to two minor errors on his website. A psychology professor, meanwhile, has accused him of “making up stories by simply looking at trend lines”, adding that his conclusions were “not supported by science”. Haidt says that his critics have misinterpreted his claims, including by using the wrong standard of proof.

Among the criticisms of the book was that Haidt confuses correlation with causation. But his central argument seems to fit with the concerns and experiences of many parents. Few people doubt there is a teenage mental health crisis. And adults can feel the addictive nature of their own smartphones. Debates about causation and correlation can feel abstract when parents face daily dilemmas about how to manage their children’s access to smartphones and social media.

What constitutes social media?
This is the big question that vexes those studying the issue. “We don’t really have any clear definitions of what legislators mean by social media at the moment,” says Pete Etchells, professor of psychology at Bath Spa University and the author of Unlocked. Does a conversation between two friends on WhatsApp count as social media? What happens when you add a third person? And does using WhatsApp’s status update function make it social?

A definition has not been settled on, even by Australia. When it passed its legislation in November, it failed to detail which companies would be affected, although the country’s communications minister Michelle Rowland said Snapchat, TikTok, X, Instagram, Reddit and Facebook would probably come under the rules.

What is the evidence so far from Australia and other places that have passed bans?
Australia is the highest-profile country to take action, but its ban has not yet come into force. In the absence of evidence from a total ban, we have to rely on data from partial or scenario-specific bans, such as limiting access to tech or phones in schools or at certain hours of the day. A recent study published in the Lancet of more than 1,200 secondary school pupils found little difference in mental wellbeing between those attending schools with restrictive phone bans and those at schools without them. The authors’ explanation was that school bans did not affect total phone use. However, according to the study: “We observed that increased time spent on phones/social media is significantly associated with worsened outcomes for mental health and wellbeing, physical activity and sleep, and attainment and disruptive behaviour.”

“Anecdotally, we know that overly restrictive, blanket bans tend not to work, tend to be circumvented by teens, but feel like they’re the right thing to do,” says Etchells. “The South Korea shutdown law is a good example of this.” In 2011, the country banned children under the age of 16 from playing online video games between midnight and 6am, in an attempt to head off concerns about video game addiction. The law was repealed a decade later after it became clear it was not having the intended effect, with identity theft rising as children found ways to circumvent it.

Is big tech’s cosying up to Donald Trump in the US – from downsizing moderation teams to cancelling factchecking initiatives – fuelling the calls for bans?
During Joe Biden’s presidency, says Livingstone, “there was a sense that trust and safety teams were building up. The regulation was coming, being consulted on and under way”. But recent attacks by the Trump administration on NCMEC, the National Center for Missing & Exploited Children, a government-funded US nonprofit, worry experts. NCMEC works to stop the spread of child abuse images, and has had its funding threatened over accusations relating to “gender ideology”. Some fear it all adds up to a bleak picture that could prompt more calls for blunt tools such as bans, rather than more nuanced measures that can make a real difference. “The child online safety experts are really worried about whether regulators are positioned to stand up to big tech,” says Livingstone. “Right now, it’s hard to reassure children, parents and the public that social media will get safer in the coming year.”

 
