Hello, and welcome to TechScape. It’s been another wild few days in Elon Musk news. Stay tuned for our coverage. In personal news, I deleted Instagram from my phone to try a month without it. Instead of scrolling, I’ve been listening to Shygirl and Lady Gaga’s new music.
American AI superiority?
DeepSeek roiled the US stock market last week by suggesting that AI shouldn’t really be all that expensive. The suggestion was so stunning it wiped about $600bn off Nvidia’s market cap in one day. DeepSeek says it trained its flagship AI model, which topped US app stores and nearly matches the performance of the US’s top models, for just $5.6m. (How accurate that figure is has been disputed.) For a moment, it seemed like the joint announcement of Stargate, the US’s $500bn AI infrastructure project joining Oracle, SoftBank and OpenAI, might be an enormous over-commitment by people who didn’t know what they were talking about. Same with Meta and Microsoft’s enormous earmarks. Hey, big spender: investors want to see this cashflow turn the other way.
Amid the mania, Meta and Microsoft, two tech giants that have staked their futures on their artificial intelligence products, reported their quarterly earnings. Each has already lavished tens of billions of dollars on AI infrastructure, and each has committed tens of billions more to build it out next year: Meta has promised $60bn, Microsoft $80bn.
Asked about DeepSeek on a call with analysts, Mark Zuckerberg rebuffed such suspicions: “I continue to think that investing very heavily in CapEx and infra is going to be a strategic advantage over time.”
Satya Nadella said: “As AI becomes more efficient and accessible, we will see exponentially more demand.” Microsoft has embraced DeepSeek and made it available to Azure customers.
One person’s entire fortune will live or die on the superiority of American AI: Sam Altman’s. He reacted to the DeepSeek mania by announcing that OpenAI would release a new version of ChatGPT with greater capabilities for free. Previously, the chatbot’s paid users, some of whom pay $200 a month, received access to its cutting-edge capabilities first. What Altman didn’t say was just as notable: he did not announce that OpenAI would reduce its enormous spending, nor did he say Stargate would need less cash. He’s just as committed as Zuckerberg and Nadella to the big-bucks game.
I’ll be watching Google’s earnings tonight for Sundar Pichai’s opinion on what DeepSeek means for his company and its enormous spending.
AI philosophy and corporate governance take the stage
On Thursday of last week, I attended the premiere of Doomers, a new play set in the offices of OpenAI the weekend that Sam Altman was fired as CEO. I found it thought-provoking and funny, if imperfect and frustrating, and I would recommend seeing it if you can.
The play unfolds in two acts. In the first, Seth, the Altman analogue, sits at a long table with other company executives while they discuss what has happened: the board has given the CEO the axe. As they talk, Seth and the company’s head of safety and alignment, Alina, grow testier with one another. Alina fears his megalomania has derailed any possibility of creating a safe AI; Seth finds her anxiety juvenile and obstructive. Myra, the stand-in for Mira Murati, who briefly took over as CEO in Altman’s stead, wants everyone to get along so the company can function. In the second act, equal in length but less interesting than the first, board members think aloud, with increasing regret, about what they have done.
The play is a dramatized thought experiment that sets an unstoppable force – the barreling, multibillion-dollar progress of the AI industry – against an immovable object: the belief that AI could grow so powerful and smart that it concludes humans are no longer necessary and wipes us out. Such fears may seem like the province of a fanciful novelist, but in an open letter co-signed by dozens of AI scientists and businesspeople, Altman himself has said AI could end human civilization.
The play succeeds in personifying the opposing schools of thought, with each character representing a viewpoint in the debate. Seth wants to accelerate AI’s development as much as possible, and he wants to be the one to do it. Alina wants to slow things down enough for careful consideration of the dangers. Other characters throw punches from different angles and ally themselves with one pole of the argument or the other in configurations that may leave you wondering which side of the debate you really stand on. The ethical lines may be blurry, but playwright Matthew Gasda brings plenty of sharp zingers to this C-suite knife fight.
Gasda said he interviewed AI industry players for the work, and the effort is evident in how the ideological contours of the play’s conversations mirror AI’s real rival factions. The characters’ exchanges echo real ones reported to have taken place in the OpenAI offices, with most of the same players. Doomers may be the nearest we get to transcripts of their squabbles. One tech reporter who covered the saga and attended the premiere told me it hewed closely to the real drama.
Where the play fails, however, is in plot. It never moves beyond staged philosophical colloquy. The characters do little but pace to and fro, especially in the second act, leaving the conflict in nearly the same position as where it began. In the first half, Seth and Alina’s conflict does come to a head within their office, but the confrontation produces no discernible change in either character, nor do we see what Seth does after he and the others leave. I sat through intermission with a feeling of unresolved tension that never dissipated in the second act, in which the board members of MindMesh, the play’s stand-in for OpenAI, debate taking steps to circumvent Seth’s next possible moves. They fail to act. The audience sees no result from either act’s lengthy arguments, and the repetition of that stasis left me doubly frustrated. We are left to revisit the news stories that came after that weekend, which makes for an unsatisfying denouement. Altman was reinstated as CEO. He won. Helen Toner, the OpenAI board member most associated with AI safety and Alina’s analogue, was ousted.
The play’s program lists its dramaturges as ChatGPT and Claude, so it may be no surprise that it feels abstracted and mechanical. Though the characters show emotion when they make their moral points, the show as a whole lacks the human element of action based on belief. The characters argue about faith, but the holy war has already happened before the play begins. Gasda wrote that he “saw new versions of Frankenstein” in the AI startup’s boardroom coup. The themes of hubris, fear and technical achievement at all costs are apparent and well-crafted in Doomers. The ideas are all there. What the play lacks, however, is the dire drama and movement of the global chase between Victor Frankenstein and his creation.
RFK Jr, Sweetgreen, seed oils and the health conspiracy pipeline
My colleague Johana Bhuiyan writes:
Robert F Kennedy Jr has waged a war on quite a few things. But among his chief enemies are seed oils. According to RFK Jr, who was nominated to lead the US Department of Health and Human Services and faced bruising confirmation hearings last week, seed oils such as canola, soybean and sunflower oils are poisoning Americans. He recommends using beef tallow instead.
Nutritionists say these oils are not only safe but could also have cardiovascular benefits; nonetheless, the demonization of seed oils has caught on. Health and wellness influencers on social media have touted the benefits of beef tallow, blamed seed oils for inflammation, and promoted the anti-ageing and mental health benefits of quitting them. The battle against seed oils wasn’t started by RFK Jr – Eater traced it back to a study published seven or eight years ago – but he has given it renewed steam.
Posts about seed oils are hitting the mainstream without any hint of conspiracy. If you follow anything related to health or fitness, or just spend some time on Instagram, YouTube or TikTok, you will come across a seed oil post that makes the science behind the claims seem all but definitive. It has even affected business decisions: the salad chain Sweetgreen started marketing its seed-oil-free dressings two weeks ago.
Given the conspiracy-peddling of current Trump administration nominees and social platforms’ move away from factchecking, it’s likely conspiracy theories will seep into the mainstream much more often. By the time those ideas hit your feeds, they’ll be stripped of all the markings of conspiracy and fed to you alongside comforting but easily debunkable science.
The wider TechScape
AI tools used for child sexual abuse images targeted in Home Office crackdown
A man stalked a professor for six years. Then he used AI chatbots to lure strangers to her home
The Guardian view on AI and copyright law: big tech must pay
WhatsApp says journalists and civil society members were targets of Israeli spyware
Meta agrees to pay Trump $25m for suspending accounts over Capitol riots
Tesla sees disappointing fourth-quarter earnings amid declining car deliveries
White House says New Jersey drones ‘authorized to be flown by FAA’
OpenAI launches ‘deep research’ tool that it says can match research analyst
Elon Musk’s Doge team granted ‘full access’ to federal payment system