Just before midnight on New Year’s Eve, 25 years ago, Queen Elizabeth II stepped off a private barge to arrive at London’s Millennium Dome for its grand opening ceremony. Dressed in a pumpkin-orange coat, she entered the venue with Prince Philip, taking her place alongside Tony and Cherie Blair and 12,000 guests to celebrate the dawn of a new millennium. At the stroke of midnight, Big Ben began to chime and 40 tonnes of fireworks were launched from 16 barges lined along the river. The crowd joined hands, preparing to sing Auld Lang Syne. For a few long moments, the Queen was neglected – she flapped her arms out like a toddler wanting to be lifted up, before Blair and Philip noticed her, took a hand each, and the singing began. A new century was born.
One politician who wasn’t in attendance at the glitzy celebration was Paddy Tipping, a Labour MP who spent the night in the Cabinet Office. Tipping was minister for the millennium bug. After 25 years, it might be hard to recall just how big a deal the bug – now more commonly called Y2K – felt then. But for the last few years of the 90s, the idea that computers would fail catastrophically as the clock ticked over into the year 2000 was near the top of the political agenda in the UK and the US. Here was a hi-tech threat people feared might topple social order, underlining humanity’s new dependence on technological systems most of us did not understand. Though there are no precise figures, it’s estimated that the cost of the global effort to prevent Y2K exceeded £300bn (£633bn today, accounting for inflation).
So Tipping spent the night at 70 Whitehall, among boxy grey computers and a small group of civil servants in communication with other world governments. “We watched the sun rise across the world,” he recalls, “first in Australia, New Zealand, right through Asia. There were no reports of any real problems. Come midnight, I was actually quite relaxed about things.” After a few hours, when it became clear disaster would not strike, he walked back across the bridge to his home in Lambeth. The streets were full of drunken revellers, the mood was joyous, and the world was resolutely not ending.
Y2K went down in history as a millennial damp squib, much like the dome itself, which is largely remembered for brazen corporate sponsorship, broken attractions and hour-long queues to spend a few minutes walking inside a giant human body. Curiously enough, to this day experts disagree over why nothing happened: did the world’s IT professionals unite to successfully avert an impending disaster? Or was it all a pointless panic and a colossal waste of money? And given that we live today in a society more reliant on complex technology than ever before, could something like this happen again?
Though the Y2K threat was voiced publicly as early as 1958, it became a common concern only after the publication of a 1993 article in Computerworld magazine by Canadian engineer Peter de Jager, apocalyptically headlined “Doomsday 2000”. The problem was simple. Most computers at the time stored dates as six digits – two each for the day, month and year – so 30 August 1991 became 30/08/91. Using two digits for the year caused no trouble during the 20th century, but on the first day of the new millennium the date would read 01/01/00, and IT professionals were concerned that computers would take it for 1 January 1900, rather than 2000, causing errors throughout their systems.
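The failure mode is easy to demonstrate. Below is a minimal sketch in Python – a modern stand-in for the COBOL-era systems actually affected, with hypothetical names and values – showing how arithmetic on two-digit years, harmless all century, turns nonsensical at the rollover:

    # A minimal sketch of the Y2K failure, using Python as a stand-in
    # for the COBOL-era systems actually affected. With only two digits,
    # "00" is indistinguishable from 1900, so any arithmetic that spans
    # the rollover goes wrong.
    def age(current_yy: int, birth_yy: int) -> int:
        # The 20th-century assumption: both years belong to the 1900s.
        return current_yy - birth_yy

    print(age(99, 25))  # 74: correct in 1999 for someone born in 1925
    print(age(0, 25))   # -25: at "00", the same person's age turns negative

The real-world glitches recounted later in this piece – the 104-year-old invited to preschool, the century-late video rental – were failures of exactly this kind of date arithmetic.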
A common misconception is that the problem was a coding mistake, perhaps encouraged by widespread use of the word “bug”. In fact, storing years as two digits was a deliberate design compromise made by coders trying to save space in the days when every byte of memory and storage cost serious money. Much of this early code had been written decades earlier by programmers who never expected their software to still be in use in 2000.
The concern was that, as a result, computers might get the date wrong, leading to failures in everything from personal computers to the systems controlling financial markets, hospitals, air travel, military equipment and social infrastructure such as traffic lights or ventilation systems. People were afraid of “cascading faults”, where one failing system would knock out another like dominoes, perhaps compromising essential services such as the electricity grid or running water.
After a slow start, the UK began to take the threat seriously. In 1998, Blair warned in The Independent, “Ticking away inside many of our computers is a potential technical time bomb ... unless we act, the consequences of the Millennium Bug could be severe.” Others raised awareness in different ways: the leader of the House of Commons, Margaret Beckett, was photographed cutting a cake decorated with the government’s official bug mascot – a microchip with 10 legs and malevolent eyes – which was plastered across Y2K compliance brochures, newspaper pull-outs and billboards that asked in bold type: “Are you sure you’ve done enough?”
Much of the messaging came from the government-funded groups Taskforce 2000 and Action 2000. Robin Guenier, previously chief executive of the government’s Central Computer and Telecommunications Agency, led the former and was a prominent voice warning of the dangers. At the start, he says, it was tough to get people interested: “I remember one early morning TV interview when, just before I came on, the interviewer looked at his schedule and said, ‘Oh no, it’s about computers – everyone’s going to pull the duvet over their heads.’”
The remediation work was not sexy. Guenier called the job of scouring raw code for dates that might be problematic an “exceptionally boring and unglamorous undertaking” that involved repeated rounds of testing, because changing code could cause issues elsewhere in the system. It was an enormous job. Martyn Thomas, who ran Y2K remediation efforts internationally for Deloitte, recalled the work for the finance division of General Motors in Europe: “We put together a small army of people, rented an aircraft hangar and bought three or four hundred PCs so we could run the scanning and repair equipment needed. And that was just one example of what was happening all over the world.”
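To give a flavour of that scanning work, here is a toy sketch in Python; the real tools ran over COBOL and assembler source and were far more sophisticated, and the pattern and sample lines below are purely illustrative:

    # A toy illustration of Y2K remediation scanning: flag source lines
    # that look like they handle two-digit years, for a human to review.
    # Real tools were far more sophisticated than this single pattern.
    import re

    SUSPECT = re.compile(r"\b\d{2}/\d{2}/\d{2}\b|\bYY\b")

    sample_source = [
        'due_date = "01/01/00"',     # two-digit date literal
        "MOVE YY TO EXPIRY-YEAR",    # COBOL-style two-digit year field
        "total = price * quantity",  # harmless line
    ]

    for lineno, line in enumerate(sample_source, start=1):
        if SUSPECT.search(line):
            print(f"line {lineno}: possible two-digit date: {line}")

Multiply that inspect-fix-retest loop across millions of lines of code, and the aircraft hangars full of PCs start to make sense.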
By late 1999, most UK organisations felt their systems were prepared. But the global media had other ideas and revelled in doomsday fantasies. Articles in Time magazine and Vanity Fair painted a picture of a Y2K midnight moment when planes would fall out of the sky, people’s savings would be wiped out in the blink of a cursor, home appliances would explode and nuclear reactors would go into meltdown. It didn’t matter that few experts expected problems of this severity. In the words of Anthony Finkelstein, then a professor of software systems engineering at University College London, for many journalists at the time the Y2K doomsday story was “simply too good to check”.
“This was a beautiful, perfect story for the popular press,” says Zachary Loeb, a historian at Purdue University, Indiana, who is writing a book about Y2K. “There’s a race against time to fix the computers or something terrible is going to happen, with an immovable deadline set in stone … if you wrote this in a movie script, it would be too on-the-nose.”
Y2K seized the cultural zeitgeist. You could buy A Christian’s Guide to the Millennium Bug or Y2K for Women. Episodes of Family Guy and The Simpsons riffed on the Y2K apocalypse. There was the action movie Y2K: Year to Kill. Leonard Nimoy hosted a video called Y2K Family Survival Guide in which he asked, in front of scrolling images of satellites and computer chips, “How could the omission of two simple digits affect the destiny of all mankind?” Even Raid bug spray jumped on the bandwagon with TV ads that boasted, “Raid, the official killer of the millennium bug.”
While some opportunists saw a chance to cash in, others took the threat more seriously. Survivalist and fundamentalist religious groups in the US co-opted Y2K into their apocalyptic ideologies. Reverend Jerry Falwell advised his flock to stock up on food and guns, calling Y2K “God’s instrument to shake this nation”. With hindsight, it looks like a precursor to the conspiracy theorists, from QAnon to Covid sceptics, who today adopt crises to add urgency to their arguments.
It wasn’t just in the US. In 1998, publisher Angela Perron and her computer programmer husband hired a lorry and drove from their Wiltshire home to a remote cottage in Moray, Scotland, where they could isolate themselves from the social collapse they anticipated around Y2K. They lived there totally off-grid, with no plumbing or mains electricity, instead relying on a generator, water from a local stream, vegetables in the garden and hens for eggs. Perron’s husband learned to shoot in case they needed to kill rabbits for food.
* * *
What was it about the Y2K bug that created such panic? In part, there was already something witchy about the idea of a new millennium, even before the warnings of technological collapse. Some historians believe humans also had a collective wobble about the apocalypse at the turn of the year 1000. The 1999/2000 date change intersected with another long-held fear: that our grand scientific inventions are going to lead to disaster, an idea stretching back at least as far as Mary Shelley’s 1818 book Frankenstein.
It’s worth remembering that the 20th century had seen dramatic social changes resulting from new technologies, as we evolved from horse and cart to landing on the moon in the space of a single lifetime. Even in the 90s, technology’s infiltration into daily life still felt novel for many. “Doing an email was a struggle,” Tipping says. “We’ve all changed over the years.”
The Y2K threat made people suddenly aware of how deeply dependent modern society was on technology. Simultaneously, they realised computers were not the sleek, perfect devices they might have assumed – in fact they were fragile patchworks of design compromises and lazy code, liable to fail at any moment. It was as if you thought you’d been standing on a sturdy suspension bridge, then looked down to realise it was a rickety platform of rotting planks held together by glue, duct tape and hope. As one computer science adage had it, “If we built houses the way we build software, the first woodpecker to come along would destroy civilisation.”
“It’s frightening when you tell people they are reliant on opaque technical systems, that these are vulnerable, and all they can do is hope the nerds are going to fix it in time,” Loeb says. “Amid the hype and excitement around computers in the 90s, suddenly this is the dark side.”
* * *
Midnight. 1 January 2000. 01/01/00. As the Queen and Blair stiffly held hands for Auld Lang Syne, as Paddy Tipping waited in the Cabinet Office and Angela Perron holed up in Scotland with her family, most IT professionals were quietly confident that nothing bad was going to happen. In any case, there was little evidence to suggest that the problems would all strike simultaneously at the stroke of midnight.
Martyn Thomas reasoned that, had the remediation efforts been insufficient, faults would already have been occurring en masse throughout 1999 as systems looked ahead to future dates. “That wasn’t happening, so it was clear we’d cracked it,” he says. “We managed to fix enough of the problems that it wasn’t going to be a major disaster.” Just to be safe, though, he still brought in a small stock of food and filled all the bathtubs in his house with water, so he’d have a supply of the essentials in case something went terribly wrong. “But I emptied the baths as soon as the millennium was over,” he says.
There were two main news stories as the BBC reported on the new year: Russian president Boris Yeltsin had suddenly resigned and ceded power to prime minister Vladimir Putin; and the UK was celebrating the start of a new millennium. “The first babies of the century have been born,” broadcaster Michael Buerk said, “and there’s no real sign yet of the millennium bug.”
That’s not to say nothing went wrong. Already in the years building up to Y2K, there had been some date-related glitches. A 104-year-old American woman received an invitation to join preschool, because the system thought she was just four. Five hundred Philadelphia residents received jury summonses in 1999 to appear in court in the year 1900. A New York video rental store gave a bill for $91,250 to a customer who appeared to have returned the John Travolta military mystery The General’s Daughter 100 years late.
On the millennium itself, there were many small failures around the world, mainly due to a lack of preventive action, but most were quickly remedied: police breathalysers in Hong Kong, traffic lights in Jamaica, slot machines in Delaware. Some issues were more serious: 10,000 HSBC card machines in the UK stopped working for three days. Bedfordshire social services were unable to find anyone in their care aged older than 100. The monitoring equipment in a Japanese nuclear power plant briefly shut down, though it caused no risk to the public. Some medical equipment failed, including a few dialysis machines in Egypt and equipment to measure bone marrow in South Korea.
Most seriously, 154 women in South Yorkshire and the east Midlands were given incorrect test results regarding their risk level for giving birth to a child with Down’s syndrome, because the system had calculated their ages incorrectly. This directly resulted in two pregnancies being terminated, while four babies were born with Down’s to mothers who had been incorrectly told they were at low risk. The NHS issued an apology and updated its systems after the error was recognised.
Though these were significant issues, there was no series of cascading faults leading to infrastructural collapse as the doomsayers had warned. President Bill Clinton called it “the first challenge of the 21st century successfully met”. Yet almost overnight, the tenor of media coverage changed. The bug became a punchline. On 2 January, the Guardian wrote, “The much-hyped Y2K disaster fizzled out like a damp firework,” asking, “Has the world been caught by a massive Y2K bug con?” In Scotland, Perron held on for a few months, then left her isolated cottage, and she and her husband divorced, blaming the strain of their millennium move. A friend reportedly said, “It is ironic that the only people who seem to have been bitten by the millennium bug are the very people who gave up everything to be safe from it.”
The idea that Y2K was a hoax began to take hold in both the media and public memory. People wondered if they’d been wrong to trust the experts. Some historians believe this change of perspective was a reaction to the hyperbolic warnings in the press, which had painted a far more cataclysmic picture than experts actually anticipated, coupled with the fact that some opportunists did exploit Y2K fears to turn a quick buck. These ranged from tech companies that pushed clients into expensive, unnecessary upgrades to fake Y2K remedy discs that claimed to protect home computers but in reality did nothing except trigger a pop-up window telling users their PCs were now safe. “People assumed it was all a big scam,” Thomas says. “If you insure your house against it burning down and it doesn’t burn down, you’ve wasted your money, haven’t you?”
* * *
Why did nothing terrible happen? Was it because we were so well prepared, or because there was nothing to worry about in the first place? Even 25 years later, the question remains unanswered. On one side of the debate are the sceptics, including Finkelstein, who had criticised the Y2K fears since the 90s for what he saw as a lack of scientific evidence. He calls the British government’s response “an overreaction”, believing there was no major threat and that any issues could have been addressed easily as they appeared. A common argument is that other tech-dependent countries where there was little Y2K remediation effort, such as South Korea and Italy, did not suffer major problems compared with those that poured money into fixes, such as the UK. “Perhaps the most obvious explanation – that they got it right and we did not – is too difficult to accept,” Finkelstein wrote in 2000.
On the other side of the debate are IT professionals who feel they worked hard to fix a very real threat, then became victims of their own success when the crisis was averted. “It outrages all the people working long and hard to get the job done that everybody thinks it was all a scam,” Thomas says. “People who win a war like that never get the credit they really deserve.”
The problem is, it’s impossible to prove why something didn’t happen. Both sides can claim that they were proved correct by the lack of a major meltdown. “I don’t think we can settle it,” says Dylan Mulvin, a historian at the London School of Economics who has been studying Y2K for 15 years. “It represents two different ways of approaching human relationships to technology: one that emphasises a mastery over it, the other that defaults to catastrophic thinking. The only way to have sorted it out would have been to do nothing, fixed no code, and seen what happened.”
The most common criticism from the sceptics is that too much money was spent on Y2K remediation. But much of this investment had long-term benefits beyond 2000, pushing companies and governments to upgrade dated IT systems. “Spending money on Y2K didn’t mean pouring money into a volcano, it meant pouring money into IT,” Loeb says. There have even been suggestions that the fact US infrastructure was barely interrupted by the 9/11 attacks the following year was thanks to reinforcements made to computer systems for Y2K.
Looking back, it’s possible to see Y2K as a model for collective action, a rare moment when international governments and the private sector cooperated to head off a global threat. With other existential threats facing the world today, ranging from AI to climate change, we might wonder if, in today’s polarised age, we will be able to meet such challenges with the same unity. “There’s actually something quite inspiring about the idea of spending too much money and making too much effort to fix a problem,” Mulvin says.
Still, when asked whether we learned our lessons from Y2K, every person interviewed for this piece gave the same answer: no. While our IT systems may be more robust today (and even this is a point of contention), we have not learned how to communicate more judiciously about technology. “Every new thing is hailed as if it’s going to either save the world or destroy it,” Loeb says. “What gets lost is the complexity of what’s happening in between.” So while the media frets over AI one day achieving human levels of intelligence, less grabby topics with more immediate impact are neglected, such as how unaccountable algorithms are used to evaluate people’s eligibility for jobs, healthcare or loans.
* * *
This past year, Y2K has been making a comeback. Not the millennium bug, but rather #Y2K the aesthetic, the fashion trend filling the TikToks and moodboards of kids who weren’t even born then. “Now is the perfect time to be nostalgic about Y2K,” says Mulvin, pointing out that people who remember it from their childhoods are now setting trends and directing the cultural conversation. This month sees the release of comedy horror film Y2K, in which household electronics come to life at a teen party on New Year’s Eve 1999 and try to destroy humanity – starting with a killer Tamagotchi.
But could a real Y2K-like bug threaten our infrastructure today the way it was prophesied in 1999? There are possibilities of other date-related errors. On the scintillating Wikipedia page for “Time formatting and storage bugs”, Y2K is just one of 44 entries regarding error-causing dates, which range from 1975 to the year 275,760 and even 292,277,026,596. Already coders are talking about the 2038 problem: at 3.14am and seven seconds (UTC) on 19 January 2038, computers that store time as a signed 32-bit count of the seconds elapsed since 1 January 1970 – as older Unix systems do – will run out of room and no longer be able to represent the time. Thomas isn’t worried about Y2K38, though: “Everybody knows about that one … I don’t expect it to be a problem.” He adds that most systems have already moved to 64-bit time, or will in the near future, meaning they can keep counting for roughly another 292bn years.
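That boundary can be reproduced in a few lines of Python – a sketch only, since real Y2K38 failures would surface in software built on a 32-bit time_t rather than in Python itself:

    # A sketch of the Y2K38 boundary: signed 32-bit Unix time counts the
    # seconds elapsed since 1 January 1970 and tops out at 2**31 - 1.
    from datetime import datetime, timedelta, timezone

    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    print(epoch + timedelta(seconds=2**31 - 1))  # 2038-01-19 03:14:07+00:00

    # One second later, a 32-bit counter wraps around to -2**31, which
    # decodes as a date in 1901: the misreading that 64-bit time avoids.
    print(epoch + timedelta(seconds=-(2**31)))   # 1901-12-13 20:45:52+00:00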
The closest parallel to Y2K that actually occurred was the software outage of July 2024, caused by a defective content update from the cybersecurity company CrowdStrike, which grounded planes on runways, disrupted the London Stock Exchange and left NHS surgeries unable to access patient records – many of the same failures predicted for Y2K. Though affected companies lost an estimated $5.4bn, the outage was resolved in a matter of days. The world kept turning.
“IT problems happen, and most of the time, those things get fixed without any of us realising,” Loeb says. “To the extent that people learned anything from Y2K, it’s that they don’t have to worry about the computers, because the people in IT will always fix it in time.”