Dan Milmo and Alex Hern 

Can AI boom drive Nvidia to a $4tn valuation despite investor doubt?

Powerful new chips are on the way but there are questions over whether tech firm’s growth can be sustained
  
  

The Nvidia CEO, Jensen Huang, has described robotic factories orchestrating robots that ‘build products that are robotic’. Photograph: Sam Yeh/AFP/Getty Images

When Jensen Huang spoke at the Nvidia annual general meeting last week, he made no mention of a share price slide.

The US chipmaker, buoyed up by its key role in the artificial intelligence boom, had briefly become the world’s most valuable company on 18 June but the crown slipped quickly. Nvidia shed about $550bn (£434bn) from the $3.4tn (£2.68tn) peak market value it had reached that week, as tech investors, combining profit-taking with doubts about the sustainability of its rocketing growth, applied the brakes.

Huang, however, spoke like the CEO of a business that took 30 days this year to go from a valuation of $2tn to $3tn – and sees $4tn coming into view.

He described a forthcoming group of powerful new chips, called Blackwell, as potentially “the most successful product in our history” and perhaps in the entire history of the computer. He added that the new wave for AI would be automating $50tn of heavy industry, and described what sounded like an endless loop of robotic factories orchestrating robots that “build products that are robotic”.

Wrapping up, he said: “We’ve reinvented Nvidia, the computer industry and very likely the world.”

These are the kinds of words on which a $4tn valuation, and the AI hype cycle, are built. Nvidia shares are inching back, lifting its market value above $3tn this week, because the stock remains the most direct way to buy into the AI boom. Is that enough to propel it to $4tn despite the emergence of investor doubt?

Alvin Nguyen, a senior analyst at the research company Forrester, said “only a collapse of the genAI market” would prevent Nvidia from reaching $4tn at some point – but whether it got there first, ahead of tech rivals, was another matter. Currently, Microsoft – another big player in AI – and Apple rank first and second respectively by market value, with Nvidia third.

If OpenAI’s next big AI model, GPT-5, and other new models were astonishing, the share price would stay buoyant and the company could reach $4tn by the end of 2025, said Nguyen. But if they underwhelmed, the share price could suffer, given Nvidia’s status as a flag-carrier for the technology. A technological breakthrough could also result in less computing power being needed to train models, he added, or interest from businesses and consumers in generative AI tools could prove less robust than hoped.

“There is a lot that is unknown and out of Nvidia’s control that could impact their path to $4tn,” said Nguyen. “Such as disappointment with new models that come out, model improvements that reduce the computational needs, and weaker than expected demand from enterprises and consumers for genAI products.”

Private AI research labs such as OpenAI and Anthropic – the entities behind the ChatGPT and Claude chatbots – aren’t traded on public markets, leaving vast sums of money floating around in investor accounts with no way to access some of the big hitters in the generative AI frenzy.

Buying shares in multinationals such as Microsoft or Google is already expensive, and only a fraction of an investment is related to the hot new thing. There could be a vast AI boom but if, for example, Google’s search ads business faltered as a result, then the company wouldn’t necessarily be a net winner.

Nvidia, by contrast, is selling spades in a gold rush. Despite years of investment in capacity, it continues to sell its top-end chips faster than it can make them. A huge proportion of the investment in frontier AI research flows straight out of the labs and into Nvidia’s coffers, with companies such as Meta committing billions of dollars to secure hundreds of thousands of Nvidia GPUs (graphics processing units).


That type of chip, the company’s specialty, was once sold to enable gamers to experience crisp and smooth graphics in 3D games – and through a monumental stroke of good luck, turned out to be exactly what cutting-edge researchers needed to build massive AI systems such as GPT-4 or Claude 3.5.

GPUs are able to carry out, at great volume and speed, the complicated calculations that underpin the training and operation of AI tools such as chatbots. So any company wanting to build or operate a generative AI product, such as ChatGPT or Google’s Gemini, needs GPUs. The same goes for freely available AI models such as Meta’s Llama, which also require vast numbers of chips during their training phase. In the case of systems known as large language models (LLMs), training involves crunching through huge blocks of data. This teaches the LLM to recognise patterns in language and gauge what the next word or sentence should be in response to a chatbot query.
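
For illustration, here is a minimal sketch of that next-word step – a toy Python example, not any lab’s actual training code – showing why the work boils down to the kind of large matrix arithmetic that GPUs execute in parallel at enormous scale.

```python
# Toy, illustrative sketch of next-word prediction (not any lab's real code).
# A language model turns a context vector into scores ("logits") for every word
# in its vocabulary -- essentially one big matrix multiplication per prediction,
# which is exactly the arithmetic GPUs are built to run in parallel.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]              # toy vocabulary
rng = np.random.default_rng(0)

hidden = rng.standard_normal(8)                         # toy "context" vector for the prompt so far
output_weights = rng.standard_normal((8, len(vocab)))   # weights learned during training

logits = hidden @ output_weights                        # one matrix multiply -> a score per word
probs = np.exp(logits) / np.exp(logits).sum()           # softmax: scores -> probabilities

next_word = vocab[int(np.argmax(probs))]
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```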

Nvidia has never quite cornered the AI chip market, though. Google has always relied on its own chips, which it calls TPUs (tensor processing units, named after the tensor, a core building block of AI models), and others want to join it. Meta has developed its Meta Training and Inference Accelerator, Amazon offers its Trainium2 chips to companies using AWS (Amazon Web Services), and Intel has produced the Gaudi 3.

None of the big rivals compete with Nvidia – yet – at the absolute top end. But that is not the only place where competition is happening. A report from the Information, a tech news site, highlighted the rise of “batch processing”, which offers businesses cheaper access to AI models if they are willing to wait for their queries to be run during periods of low demand. That, in turn, allows providers such as OpenAI to buy cheaper, more efficient chips for their datacentres rather than focus all their spending on the fastest possible hardware.
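
A rough sketch of the batch-processing idea – hypothetical Python, not any provider’s real API – is below: queries are queued rather than answered instantly, then run together when demand is low, in exchange for a discounted price.

```python
# Hypothetical sketch of "batch processing": callers accept a delay, the provider
# runs everything together off-peak on cheaper, idle hardware, and bills at a discount.
from dataclasses import dataclass, field

DISCOUNT = 0.5  # assumed: batched queries billed at half the real-time rate


@dataclass
class BatchQueue:
    pending: list = field(default_factory=list)

    def submit(self, prompt: str) -> None:
        # No immediate answer -- the query simply joins the queue.
        self.pending.append(prompt)

    def run_off_peak(self, price_per_query: float) -> float:
        # Process the whole backlog in one go, then clear it and return the bill.
        cost = len(self.pending) * price_per_query * DISCOUNT
        self.pending.clear()
        return cost


queue = BatchQueue()
queue.submit("Summarise this contract")
queue.submit("Translate this report")
print(f"Off-peak bill: ${queue.run_off_peak(price_per_query=0.02):.3f}")
```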

At the other end, smaller businesses are starting to offer increasingly specialised products that beat what Nvidia can provide in a head-to-head race. Groq (not to be confused with Elon Musk’s similarly named Grok AI, the launch of which sparked an ongoing trademark dispute) makes chips which can’t be used to train AI at all – but which run the resulting models blazingly fast. Not to be outdone, the startup Etched, which has just raised $120m, is building a chip that only runs one type of AI model: a “transformer”, the T in GPT (generative pre-trained transformer).

Nvidia doesn’t just need to hold its own in the face of competition, big and small. To hit the next milestone, it needs to thrive. Market fundamentals are out of fashion, but if the company was valued like a traditional, low-growth enterprise, even a $3tn market cap would require it to sell a trillion dollars worth of its top-end GPUs a year, at a 30% profit margin, forever, one expert noted.
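
For a sense of that arithmetic, here is a back-of-the-envelope version in Python, treating the company as a zero-growth perpetuity (value = annual profit divided by the required return); the 10% discount rate is an assumption for illustration, not a figure from the expert.

```python
# Back-of-the-envelope check of the valuation point above, assuming a simple
# zero-growth perpetuity and a 10% required annual return (our assumption).
annual_gpu_sales = 1_000_000_000_000   # $1tn of top-end GPUs sold each year
profit_margin = 0.30                   # the 30% margin cited above
discount_rate = 0.10                   # assumed required annual return

annual_profit = annual_gpu_sales * profit_margin      # $300bn a year
implied_value = annual_profit / discount_rate         # perpetuity: profit / rate

print(f"Implied market value: ${implied_value / 1e12:.1f}tn")   # -> $3.0tn
```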

Even if the AI industry grows enough to justify that, Nvidia’s own profit margin may be harder to defend. The company has the chip designs to hold the lead, but the real bottlenecks in its supply chain are the same as for much of the rest of the industry: at the advanced semiconductor foundries, of the sort operated by Taiwan’s TSMC, America’s Intel, China’s SMIC and precious few others around the world. Notably not on that list is Nvidia itself, which is a customer of TSMC. No matter how advanced Nvidia’s chipsets are, if it needs to eat into the rest of TSMC’s order book to match demand, then the profit will inevitably flow that way too.

Neil Wilson, the chief analyst at the brokerage firm Finalto, said the bear case against Nvidia – market jargon for the view that its share price faces a sustained fall – rested on the argument that once the company worked through its order book, demand would return to less frenetic levels.

“All their customers have been rushing to order the GPUs but they won’t be doing that forever,” said Wilson. “Customers over-order and then start cancelling. It’s a sweet spot now but it cannot be sustained.” He could see Nvidia getting to $4tn and beyond, but “maybe not at the current pace”.

Jim Reid, Deutsche Bank’s head of global economics and thematic research, published a note this week asking whether Nvidia was “the fastest growing large company of all time”. Pointing out that Nvidia went from $2tn to $3tn in 30 days, Reid noted that, by contrast, it had taken Warren Buffett nearly 60 years to get Berkshire Hathaway close to $1tn.

Nonetheless, in a world of low productivity – a measure of economic efficiency – declining working-age populations and rising government debts, the economic promise of AI was welcome, said Reid.

“If AI is the catalyst for a fourth Industrial Revolution, that would be very good news,” he wrote. “If not then markets will ultimately have a big problem.”

More is at stake than winning a race to $4tn.

• This article was amended on 3 July 2024. An earlier version said that if Nvidia was valued like a traditional, low-growth enterprise, even a $3tn market cap would require it to sell a “trillion of its top-end GPUs a year”; this should have said a trillion dollars worth of its top-end GPUs a year.

 
