The Australian government is considering a European Union-style artificial intelligence act to regulate minimum standards on high-risk AI across the whole economy.
On Wednesday the industry and science minister, Ed Husic, released a discussion paper proposing 10 “mandatory guardrails” for high-risk AI including human oversight and the ability to challenge the use of AI or outcomes of automated decision-making.
In a statement, Husic said Australians “know AI can do great things” but they also wanted to know protections are in place “if things go off the rails”.
“We need more people to use AI and to do that we need to build trust” by creating a regulatory framework, he said.
Long the stuff of science fiction, generative artificial intelligence – which can create new content such as text, images, audio and code – has experienced a surge in uptake through “large language model” programs such as ChatGPT, Google’s chatbot Bard and Microsoft Bing’s chat feature.
AI is already throwing up challenges for lawmakers and consumers, from misleading people on prices and alleged defamation arising from false claims served up by chat programs, to warnings from the eSafety commissioner that generative AI could automate child grooming by predators.
The paper notes that while AI can “improve wellbeing and quality of life”, its potential harms include creating and amplifying bias; physical or psychological injury; breaches of privacy; and threats to national security through information manipulation and malicious cyber activity.
The paper proposes defining high-risk AI to cover both “intended and foreseeable uses” and the “unforeseeable risks” created by general-purpose AI models, which “can be applied in contexts they were not originally designed for”.
Under the 10 guardrails, organisations developing or deploying high-risk AI would need to establish risk management processes; test AI and monitor once deployed; enable human control or intervention to achieve meaningful human oversight; inform end-users of AI decisions, interactions and content; and establish processes for people affected by AI systems to challenge use or outcomes.
Organisations should also be required to “keep and maintain records to allow third parties to assess compliance with guardrails” and to undertake tests certifying their own compliance, the paper said.
Once settled, the guardrails could be applied within existing regulations, through amendments to legislation in separate domains or through “a new cross-economy AI act”, as the EU moved to do in June.
That option would enable the creation of an independent AI regulator, although this would “take resources and time”, the paper said.
Husic said Australians want stronger protections on AI and businesses had also been calling for greater clarity around using AI safely.
“From today, we’re starting to put those protections in place,” he said.
The paper noted the government plans to “strengthen privacy protections, transparency and accountability”, a reference to the privacy law reform package expected to be unveiled as early as next week.
The attorney general’s department is also working on a “whole of government legal framework to support use of automated decision-making systems”, which was a recommendation from the robodebt royal commission.