Editorial 

The Guardian view on a global AI race: geopolitics, innovation and the rise of chaos

Editorial: China’s tech leap challenges US dominance through innovation. But unregulated competition increases the risk of catastrophe
  
  

President Trump signs an executive order on AI. Photograph: Kevin Lamarque/Reuters

Eight years ago, Vladimir Putin proclaimed that mastering artificial intelligence (AI) would make a nation the “ruler of the world”. Western tech sanctions after Russia’s invasion of Ukraine should have dashed his ambitions to lead in AI by 2030. But that might be too hasty a judgment. Last week, the Chinese lab DeepSeek unveiled R1, an AI that analysts say rivals OpenAI’s top reasoning model, o1. Astonishingly, it matches o1’s capabilities while using a fraction of the computing power – and at a tenth of the cost. Predictably, one of Mr Putin’s first moves in 2025 was to align with China on AI development. R1’s launch seems no coincidence, coming just as Donald Trump backed OpenAI’s $500bn Stargate plan to outpace its peers. OpenAI has singled out DeepSeek’s parent, the hedge fund High-Flyer, as a potential threat. But at least three Chinese labs claim to rival or surpass OpenAI’s achievements.

Anticipating tighter US chip sanctions, Chinese companies stockpiled critical processors to ensure their AI models could advance despite restricted access to hardware. DeepSeek’s success underscores the ingenuity born of necessity: lacking massive datacentres or powerful specialised chips, it achieved breakthroughs through better data curation and optimisation of its model. Unlike proprietary systems, R1’s source code is public, allowing anyone competent to modify it. Yet its openness has limits: overseen by China’s internet regulator, R1 conforms to “core socialist values”. Type in Tiananmen Square or Taiwan, and the model reportedly shuts down the conversation.

DeepSeek’s R1 highlights a broader debate over the future of AI: should it remain locked behind proprietary walls, controlled by a few big corporations, or be “open sourced” to foster global innovation? One of the Biden administration’s final acts was to clamp down on open-source AI for national security reasons. Freely accessible, highly capable AI could empower bad actors. Interestingly, Mr Trump later rescinded the order, arguing that stifling open-source development harms innovation. Open-source advocates, like Meta, have a point when crediting recent AI breakthroughs to a decade of freely sharing code. Yet the risks are undeniable: in February, OpenAI shut down accounts linked to state-backed hackers from China, Iran, Russia and North Korea who used its tools for phishing and malware campaigns. By summer, OpenAI had halted services in those nations.

In future, superior US control over critical AI hardware may leave rivals little room to compete. OpenAI offers “structured access”, controlling how users can interact with its models. But DeepSeek’s success suggests that open-source AI can drive innovation through creativity rather than brute processing power. The contradiction is clear: open-source AI democratises technology and fuels progress, but it also enables exploitation by malefactors. Resolving this tension between innovation and security demands an international framework to prevent misuse.

The AI race is as much about global influence as technological dominance. Mr Putin urges developing nations to unite to challenge US tech leadership, but without global regulation, there are immense risks in a frantic push for AI supremacy. It would be wise to pay heed to Geoffrey Hinton, the AI pioneer and Nobel laureate. He warns that the breakneck pace of progress shortens the odds of catastrophe. In the race to dominate this technology, the greatest risk isn’t falling behind. It’s losing control entirely.

 
