The Morrison government’s anti-trolling bill purports to help the victims of online abuse by improving pathways to complain about defamation and identify anonymous commenters.
But a close analysis of the bill, released on Wednesday alongside the announcement of a parliamentary inquiry into social media, suggests the biggest beneficiaries are media companies and other operators of social media pages, including community groups and businesses.
Ordinary social media users will still face a tricky process to identify commenters, with alleged trolls able to refuse to take posts down or to consent to being identified.
So how will the proposed changes work and will they really help ordinary people pursue a defamation claim?
What is the problem?
The bill counteracts the high court’s decision in Dylan Voller’s defamation case, in which the court ruled that media companies were liable as publishers of comments other users made on their social media posts.
In a paper on the changes, the government said this could have a “chilling effect on free speech” and see Australians and businesses with ordinary social media pages held liable. It gave the example of a busy cafe owner who doesn’t have the capacity to moderate comments on her page promoting her cafe.
The bill also covers cases where a complainant does not know where a commenter is located, and does not know their identity or contact details, which prevents the complainant from starting a defamation case against the commenter.
How does the bill fix these?
The bill deals with the liability issue by deeming that an Australian person or company with a social media page is not the publisher of third-party comments made by other users. Instead, it deems the social media company the publisher.
The bill deals with anonymous trolling by providing social media companies with a defence to defamation claims if they operate a complaints procedure that meets certain requirements and helps identify commenters.
Who will the new publisher rule benefit?
Prof David Rolph, a defamation expert at the University of Sydney, told Guardian Australia that deeming any organisation, irrespective of its size, not liable for third-party comments on its social media pages “has the effect of alleviating any media organisation from an obligation of content moderation”.
“Why would they moderate comments on their social media pages … if they’re never going to be found to be a publisher?”
Social media page operators may still moderate for other reasons, including to protect their own reputations, but removing defamation liability decreases the financial incentive to do so.
Which social media companies will it apply to?
The bill requires social media companies with more than 250,000 Australian users to have a “nominated entity” in Australia to comply with the obligations, and also allows the attorney general to set rules applying the law to other social media companies.
There are fines of up to $550,000 for companies that refuse to comply, with a further penalty of up to $27,750 for every day they refuse.
Rolph said that online defamation cases were difficult to pursue and enforce against international social media companies that do not necessarily have a presence in Australia.
“A reform that seeks to make social media companies subject to Australian jurisdiction and make judgments enforceable against them is an advance,” he said.
How will complainants unmask trolls?
There are two pathways for complainants to seek to identify anonymous commenters.
A person can complain to a social media company, which then has 72 hours to take the complaint to the commenter.
Social media companies can remove the comment and identify the commenter, but only with the commenter’s consent. The companies must inform the complainant of the outcome.
The second avenue is to apply to a court for an end-user disclosure order. If the complainant can’t get contact details or can’t determine whether the commenter is in Australia, and the court is satisfied the complainant could have a claim for defamation, the court may require disclosure of the commenter’s country and contact details, including name, email and phone number.
Rolph noted the new bill comes after the first stage of defamation law reforms added the threshold of “serious harm” before an applicant can make a defamation claim. This area of the law is still “untested” and it “may not be as easy to get these types of orders any more” as a result of needing to prove serious harm, he said.
Rolph noted there is already a regime for defamation applicants to apply for “preliminary discovery”, and applicants may continue to use this method to identify commenters.
Will the government help complainants?
The bill allows the attorney general to intervene if a complainant starts a defamation case against the social media company, or in cases seeking an end-user information disclosure order.
The attorney general can then authorise the commonwealth to pay the applicants’ costs if the case settles an uncertain area of law or assists a section of the public that is “socially or economically disadvantaged”.
The attorney general’s discretion to pay costs appears to apply only in favour of the applicant, not the commenter being sued, although a court may order costs against the commonwealth once it has intervened.
Are there safeguards?
To prevent abuse of the system, social media companies are only required to act if the defamation claim appears genuine.
Courts may refuse an end-user disclosure order if “disclosure is likely to present a risk to the commenter’s safety”.
Rolph said that safeguard is “limited” to the safety of the commenter, but in some circumstances identifying an anonymous person making allegations, for example of family violence, might put others, such as family members, at risk.
How do social media companies currently deal with complaints?
Facebook has a form for people to fill out when they believe they have been defamed, so users can ask for the allegedly defamatory comments to be removed, although lawyers describe the process as cumbersome.
Twitter’s process requires law enforcement or other authorities to request the removal of allegedly defamatory content, or to seek user information, and such requests must be accompanied by a court order.
Twitter argues its own processes, including removing accounts found to have engaged in abuse or harassment and introducing features that limit who can reply to tweets, are improving the overall experience on the platform.
Google has recently found itself party to multiple defamation proceedings in Australian courts brought by applicants seeking to force it to reveal the account information or IP addresses of people who leave allegedly defamatory reviews of businesses on its maps and search platforms.
After a court makes such orders, a letter is sent to Google in the United States, which then releases the account information or IP addresses. Internet service providers can also be required to identify who is behind a particular IP address.
What is the social media inquiry?
On Wednesday, the Morrison government set up a select committee to inquire into the harms of social media, to report back by 15 February.
The inquiry will consider: “the potential impacts of online harms on the mental health and wellbeing of Australians”; age and identity verification; child safety and parental tools; transparency and accountability; and safe collection of data.