Alexander Hurst 

I met the ‘godfathers of AI’ in Paris – here’s what they told me to really worry about

Experts are split between concerns about future threats and present dangers. Both camps issued dire warnings, says Guardian Europe columnist Alexander Hurst
  
  

Co-director of the International Association for Safe and Ethical AI, Stuart Russell, in Paris, 6 February 2025. Photograph: Sameer Al-Doumy/AFP/Getty Images

I was a technophile in my early teenage days, sometimes wishing that I had been born in 2090, rather than 1990, so that I could see all the incredible technology of the future. Lately, though, I’ve become far more sceptical about whether the technology that we interact with most is really serving us – or whether we are serving it.

So when I got an invitation to attend a conference on developing safe and ethical AI in the lead-up to the Paris AI summit, I was fully prepared to hear Maria Ressa, the Filipino journalist and 2021 Nobel peace prize laureate, talk about how big tech has, with impunity, allowed its networks to be flooded with disinformation, hate and manipulation in ways that have had a very real, negative impact on elections.

But I wasn’t prepared to hear some of the “godfathers of AI”, such as Yoshua Bengio, Geoffrey Hinton, Stuart Russell and Max Tegmark, talk about how things might go much further off the rails. At the centre of their concerns was the race towards AGI (artificial general intelligence, though Tegmark believes the “A” should refer to “autonomous”), which would mean that, for the first time in the history of life on Earth, there would be an entity other than human beings simultaneously possessing high autonomy, high generality and high intelligence, and one that might develop objectives that are “misaligned” with human wellbeing. Perhaps it will come about as the result of a nation state’s security strategy, or the search for corporate profits at all costs, or perhaps all on its own.

“It’s not today’s AI we need to worry about, it’s next year’s,” Tegmark told me. “It’s like if you were interviewing me in 1942, and you asked me: ‘Why aren’t people worried about a nuclear arms race?’ Except they think they are in an arms race, but it’s actually a suicide race.”

It brought to mind Ronald D Moore’s 2003 reimagining of Battlestar Galactica, in which a public relations official shows journalists “things that look odd, or even antiquated, to modern eyes, like phones with cords, awkward manual valves, computers that barely deserve the name”. “It was all designed to operate against an enemy that could infiltrate and disrupt all but the most basic computer systems … we were so frightened by our enemies that we literally looked backwards for protection.”

Perhaps we need a new acronym, I thought. Instead of mutually assured destruction, we should be talking about “self-assured destruction” with an extra emphasis: SAD! An acronym that might even break through to Donald Trump.

The idea that we, on Earth, might lose control of an AGI that then turns on us sounds like science fiction – but is it really so far-fetched considering the exponential growth of AI development? As Bengio pointed out, some of the most advanced AI models have already attempted to deceive human programmers during testing, both in pursuit of their designated objectives and to escape being deleted or replaced with an update.

When breakthroughs in human cloning were within scientists’ reach, biologists came together and agreed not to pursue it, says Stuart Russell, who literally wrote the textbook on AI. Similarly, both Tegmark and Russell favour a moratorium on the pursuit of AGI, and a tiered risk approach – stricter than the EU’s AI Act – where, just as with the drug approval process, AI systems in the higher-risk tiers would have to demonstrate to a regulator that they don’t cross certain red lines, such as being able to copy themselves on to other computers.

But even if the conference seemed weighted towards these future-driven fears, there was a fairly evident split among the leading AI safety and ethics experts from industry, academia and government in attendance. If the “godfathers” were worried about AGI, a younger and more diverse demographic were pushing to put an equivalent focus on the dangers that AIs already pose to climate and democracy.

We don’t have to wait for an AGI to decide, on its own, to flood the world with datacentres to evolve itself more quickly – Microsoft, Meta, Alphabet, OpenAI and their Chinese counterparts are already doing it. Or for an AGI to decide, on its own, to manipulate voters en masse in order to put politicians with a deregulation agenda into office – which, again, Donald Trump and Elon Musk are already pursuing. And even in AI’s current, early stages, its energy use is catastrophic: according to Kate Crawford, visiting chair of AI and justice at the École Normale Supérieure, datacentres already account for more than 6% of all electricity consumption in the US and China, and demand is only going to keep surging.

“Rather than treating the topics as mutually exclusive, we need policymakers and governments to account for both,” Sacha Alanoca, a PhD researcher in AI governance at Stanford, told me. “And we should give priority to empirically driven issues like environmental harms, which already have tangible solutions.”

To that end, Sasha Luccioni, AI and climate lead at Hugging Face – a collaborative platform for open source AI models – announced this week that the startup has rolled out an AI energy score, ranking 166 models on their energy consumption when completing different tasks. It will also offer a one- to five-star rating system, comparable with the EU’s energy label for household appliances, to guide users towards sustainable choices.

“There’s the science budget of the world, and there’s the money we’re spending on AI,” says Russell. “We could have done something useful, and instead we’re pouring resources into this race to go off the edge of a cliff.” He didn’t specify what those alternatives might have been, but just two months into the year, roughly $1tn in AI investments has been announced, all while the world is still falling far short of what is needed to stay even within 2C of heating, much less 1.5C.

It seems as if we have a shrinking opportunity to lay down the incentives for companies to create the kind of AI that actually benefits our individual and collective lives: sustainable, inclusive, democracy-compatible, controlled. And beyond regulation, “to make sure there is a culture of participation embedded in AI development in general”, as Eloïse Gabadou, a consultant to the OECD on technology and democracy, put it.

At the close of the conference, I said to Russell that we seemed to be using an incredible amount of energy and other natural resources to race headlong into something we probably shouldn’t be creating in the first place, and whose relatively benign current versions are already, in many ways, misaligned with the kinds of societies that we actually want to live in.

“Yup,” he replied.

  • Alexander Hurst is a Guardian Europe columnist

 
