Artificial intelligence systems capable of feelings or self-awareness are at risk of being harmed if the technology is developed irresponsibly, according to an open letter signed by AI practitioners and thinkers including Sir Stephen Fry.
More than 100 experts have put forward five principles for conducting responsible research into AI consciousness, as rapid advances raise concerns that such systems could be considered sentient.
The principles include prioritising research on understanding and assessing consciousness in AIs, to prevent “mistreatment and suffering”.
The other principles are: setting constraints on developing conscious AI systems; taking a phased approach to developing such systems; sharing findings with the public; and refraining from making misleading or overconfident statements about creating conscious AI.
The letter’s signatories include academics such as Sir Anthony Finkelstein at the University of London and AI professionals at companies including Amazon and the advertising group WPP.
It has been published alongside a new research paper that outlines the principles. The paper argues that conscious AI systems could be built in the near future – or at least ones that give the impression of being conscious.
“It may be the case that large numbers of conscious systems could be created and caused to suffer,” the researchers say, adding that if powerful AI systems were able to reproduce themselves, it could lead to the creation of “large numbers of new beings deserving moral consideration”.
The paper, written by Oxford University’s Patrick Butlin and Theodoros Lappas of the Athens University of Economics and Business, adds that even companies not intending to create conscious systems will need guidelines in case of “inadvertently creating conscious entities”.
It acknowledges that there is widespread uncertainty and disagreement over defining consciousness in AI systems and whether it is even possible, but says it is an issue that “we must not ignore”.
Other questions raised by the paper focus on what to do with an AI system if it is defined as a “moral patient” – an entity that matters morally “in its own right, for its own sake”. In that scenario, it asks whether destroying the AI would be comparable to killing an animal.
The paper, published in the Journal of Artificial Intelligence Research, also warns that a mistaken belief that AI systems are already conscious could lead to a waste of political energy as misguided efforts are made to promote their welfare.
The paper and letter were organised by Conscium, a research organisation part-funded by WPP and co-founded by WPP’s chief AI officer, Daniel Hulme.
Last year a group of senior academics argued there was a “realistic possibility” that some AI systems would be conscious and “morally significant” by 2035.
In 2023, Sir Demis Hassabis, the head of Google’s AI programme and a Nobel prize winner, said AI systems were “definitely” not sentient currently but could be in the future.
“Philosophers haven’t really settled on a definition of consciousness yet but if we mean sort of self-awareness, these kinds of things, I think there’s a possibility AI one day could be,” he said in an interview with US broadcaster CBS.