In the 1983 movie WarGames, the US defence department runs a superintelligent central computer that is hacked into by a teenager, who unwittingly almost causes a nuclear Armageddon. The end of the world is averted when the computer, known as Joshua, learns, after the teenager prompts it to play tic-tac-toe against itself, that nuclear war cannot have a winner. The insight causes it to rescind missile launch orders with the comment: “A strange game. The only winning move is not to play.”
Joshua embodied the idea that a superintelligent AI would have an anthropomorphic mindset. Yet it was a human who saved the world that year. In September 1983, when the Soviet Union’s automated early warning system falsely indicated an American nuclear attack, Lt Col Stanislav Petrov judged it a false alarm, defying protocol and averting a catastrophic retaliatory strike. Supersmart machines cannot just be left to their own devices. They – and their development – need to be properly handled.
That has been made harder by the corporate chaos at OpenAI, the maker of the most famous electronic human impersonator, ChatGPT. The firing and rehiring of Sam Altman, the company’s chief executive, raises questions over whether OpenAI will become just another profit-driven business. That would be a loss. OpenAI was an innovative attempt to facilitate large-scale cooperation to reduce AI risks. Its governing charter, however imperfectly observed, committed the firm to assist rivals rather than compete with them.
Mr Altman, along with Elon Musk, set up OpenAI in 2015 as a nonprofit research lab aimed at safely achieving the holy grail of computing: an artificial general intelligence (AGI) that can match or surpass human thinking. But this turned out to be an expensive venture, and a profit-making subsidiary was set up to fund the work by commercialising OpenAI’s breakthroughs. Microsoft first invested in 2019; by 2023 it had put up $13bn for a 49% stake in the for-profit arm, which OpenAI characterised as a “donation” to its noble cause rather than an “investment”.
On paper, OpenAI is a for-profit business with capped returns, controlled by a nonprofit tasked with developing AI in the “best interests of humanity”. Mr Altman was fired because he “was not consistently candid in his communications with the [nonprofit’s] board”. When staff warned that his departure could trigger OpenAI’s collapse, the board reportedly replied that this would “be consistent with the mission”.
All those who voted for Mr Altman’s removal have been replaced by a more business-friendly team, reportedly including a seat for Microsoft, which was key to Mr Altman’s return. Much has been made of a possible breakthrough in superintelligence as the trigger for his initial fall, but that seems fanciful. Current AI systems seem less like nascent minds than digital slaves, albeit slaves with superhuman abilities.
In WarGames, Joshua asks “Shall we play a game?” before almost triggering a nuclear war. The autonomous power to hurt, destroy or deceive humans should never be vested in AI, and governments need to regulate this area more carefully. What if the pursuit of technological supremacy means the tug of conscience is resisted and laws are dodged? OpenAI’s bust-up may be a sign of maturity: hasty choices were made and unmade. But cash has had the last word. The history of corporations suggests many have the goals, and the financial power, to seek to evade oversight from the rest of society. Putting that ethos in charge of yet another AI firm would be a step back.
Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words to be considered for publication, email it to us at observer.letters@observer.co.uk