The federal government’s proposed social media ban for under-16s has been unveiled in parliament, with $50m in fines for big tech companies that don’t comply – and an admission from Labor that the new rules may require all Australians to hand over more of their personal data.
But while the government has promised “robust” privacy protections for that extra information, and bans on social media giants using that data for other purposes, we still don’t know exactly what kind of data people will have to hand over. The government is instead kicking that can to the big tech firms themselves, essentially asking Facebook, Snapchat and X to come up with their own systems.
Here’s what we know so far – and what questions still need to be answered.
Which platforms will be targeted?
The communications minister, Michelle Rowland, introduced the legislation on Thursday. She said the laws aren’t meant to apply to messaging services, such as Facebook Messenger or WhatsApp, or online services like Kids Helpline. The laws will also give some exemptions to products used for educational purposes, such as YouTube or Google Classroom.
The bill is introducing a new term – “age-restricted social media platforms” – into the Online Safety Act. This will apply to platforms such as TikTok, Facebook, Snapchat, Instagram and X, Rowland said – as well as message board Reddit.
Specifically, the bill seeks to stop under-16s being “logged in” or having accounts on those services. Young people would still be able to view YouTube videos while logged out of the platform, or view some Facebook pages used for a business in a logged-out state. For instance, the explanatory memorandum concedes some Facebook pages, “such as the landing page of a business or service that uses social media as their business host platform”, could still be viewed.
Why is it needed?
The government has raised major concerns about the mental health effects of social media on young people, including issues around body image, bullying, abuse and harmful messages such as misogynistic content.
Introducing the bill, Rowland said social media could be “a source of entertainment, education and connection with the world and each other”.
“But for too many young Australians, social media can be harmful. Almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online, including drug abuse, suicide or self-harm, as well as violent material,” she said.
The minister went on to detail research from the eSafety Commissioner saying parents found online safety to be one of their biggest challenges.
“The Albanese government has heard the concerns of parents of young people and experts. Social media has a social responsibility. We know they can and should do better to address harms on their platforms.”
The memorandum, citing worldwide research and consultation, concedes there is “currently no clear and agreed age at which children can safely use social media” – but says the Australian age of 16 was agreed after wide local consultation.
How will it work?
In short – we don’t know exactly.
The changes will introduce “an obligation on providers of an age-restricted social media platform to take reasonable steps to prevent age-restricted users from having an account with the platform”, according to the bill’s explanatory memorandum.
The onus will be on platforms to “introduce systems and processes that can be demonstrated to ensure that people under the minimum age cannot create and hold a social media account”. Individual users won’t be punished for managing to open an account, and platforms won’t be penalised for “individual instances” of getting around their tools – but the new fines, up to $50m, would apply for “systematic” issues where many users are managing to skirt the rules.
However, the bill itself is intentionally vague on how that should be achieved.
“The Bill does not dictate how platforms must comply with the minimum age obligation,” the memorandum states.
“However, it is expected that, at a minimum, the obligation will require platforms to implement some form of age assurance, as a means of identifying whether a prospective or existing account holder is an Australian child under the age of 16 years.”
The “reasonable steps” test will be assessed on a series of factors like the costs of implementing the technology, the availability of technology and the data implications for users. The government’s age assurance trial, a program meant to look into possible options for age assurance technology, hasn’t even begun yet – so we don’t know what the platforms might do, or require from their users.
The government said the reasonable steps test may “evolve over time” due to technology change.
What data will people have to hand over?
We don’t know yet, and probably won’t until the platforms are forced to build these systems.
The government concedes that complying with the age assurance framework “may require the collection, use and disclosure of additional personal information”.
In a Senate estimates hearing earlier this month, the Greens senator David Shoebridge asked James Chisholm, the deputy secretary of the communications department, if “everybody [would] have to go through an age-verification process”.
“Yes,” Chisholm replied.
The memorandum goes on to stress there are “robust” privacy protections for any extra data needed, “including prohibiting platforms from using information collected for age assurance purposes for any other purpose, unless explicitly agreed to by the individual”.
“Once the information has been used for age assurance or any other agreed purpose, it must be destroyed by the platform (or any third party contracted by the platform).”
It also states that “serious and repeated breaches of these privacy provisions” could be met by fines of up to $50m under the Privacy Act.
Won’t young people be able to get around the laws?
In a word – yes.
Indeed the memorandum says “Australia should be prepared for the reality that some people will break the rules, or slip through the cracks” – but, in comparing it to how young people manage to obtain alcohol or cigarettes, the bill is said to “set clear parameters for our society and assist in ensuring the right outcomes”.
The government has said that giving parents the power to tell their children that social media is illegal for them, even if the enforcement itself is not perfect, would be a powerful tool.
Rowland said the plan “will not be met with universal acceptance”, but governments should take such steps.
“There is wide acknowledgment that something must be done in the immediate term to help prevent young teens and children from being exposed to streams of content, unfiltered and infinite,” she said.
• In Australia, support is available at Beyond Blue on 1300 22 4636, Lifeline on 13 11 14, and at MensLine on 1300 789 978. In the UK, the charity Mind is available on 0300 123 3393 and Childline on 0800 1111. In the US, call or text Mental Health America at 988 or chat 988lifeline.org