AI supply is way ahead of AI demand

Everyone wants in on the AI boom. For now, however, you can probably count on one hand the number of vendors cashing in.

The most obvious one is Nvidia, of course. Nvidia has earned nation-state levels of cash for its GPUs ($26 billion in the first quarter of 2024 alone). Beyond Nvidia are the big three cloud vendors and OpenAI. Beyond that cast of five, however, it’s pretty hard to find many—yet.

That “yet” is the key here. We are absolutely in a frothy period for AI, where vendors are selling “hopium” and enterprises are buying just enough to fuel proofs of concept, without much production usage. That will change, especially as we move beyond today’s amazement (“Wow, look at how a few lines of text can create a visually impressive but practically useless video!”).

We are not yet into real use cases that mainstream enterprises are willing to spend on. It’s coming though, and that’s one reason vendors keep spending big on AI even though it’s not paying off (yet). But for now, someone needs to answer Sequoia’s $200 billion question.

Spending AI money to make AI money

As Sequoia Capital partner David Cahn argues, Nvidia sold roughly $50 billion in GPUs last year, and running those GPUs implies roughly another $50 billion in energy costs, for a total of $100 billion in data center costs. Because the end users of the GPUs must earn something too, add another $100 billion in margin (at 50%) for those companies (e.g., X, Tesla, OpenAI, GitHub Copilot, AI startups). That adds up to $200 billion in revenue that needs to be generated just to break even on those Nvidia GPUs (i.e., zero margin for the cloud providers). However, as Cahn shows, even the most generous math gets us to only $75 billion in industry revenue (of which just $3 billion or so goes to the AI startups, as The Wall Street Journal points out).
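Cahn's back-of-the-envelope math can be laid out in a few lines. This is only a sketch of the figures cited above (all amounts in billions of US dollars), not Cahn's own model:

```python
# Figures cited from Cahn's argument, in billions of USD.
gpu_spend = 50        # approximate Nvidia GPU revenue for the year
energy_cost = 50      # roughly matching energy cost to run those GPUs
data_center_cost = gpu_spend + energy_cost  # total data center cost

# At a 50% gross margin for GPU end users, revenue must be
# double their cost just to hit that margin.
end_user_margin = 0.50
breakeven_revenue = data_center_cost / (1 - end_user_margin)

estimated_revenue = 75  # Cahn's most generous industry revenue estimate
shortfall = breakeven_revenue - estimated_revenue

print(f"Break-even revenue needed: ${breakeven_revenue:.0f}B")
print(f"Estimated industry revenue: ${estimated_revenue}B")
print(f"Shortfall: ${shortfall:.0f}B")
```

The gap between the $200 billion needed and the $75 billion in sight is the $125 billion question behind the headline.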

Cahn asks, “How much of this capex buildout is linked to true end-customer demand, and how much of it is being built in anticipation of future end-customer demand?” He doesn’t answer directly, but the clear implication is that this immoderate overbuilding of infrastructure may be good for some, but all that AI money right now is sloshing around in the coffers of a small handful of companies, with the real beneficiaries of AI yet to emerge.

Before that happens, we may well see an AI bust. As The Economist observes, “If the past is any guide, a bust is coming and the firms carry such weight in the stock market that, should their overexcitement lead to overcapacity, the consequences would be huge.” That’s the glass-half-empty analysis. Cahn, the VC, gives the glass-half-full view, arguing that in past boom cycles, “overbuilding of infrastructure has often incinerated capital, while at the same time unleashing future innovation by bringing down the marginal cost of new product development.”

In other words, the big infrastructure companies’ overspending on AI may eventually shred their balance sheets, but it will lead to lower-cost development of real, customer-focused innovation down the line. This is already starting to happen, if slowly.

Meanwhile, back in the real world

I’m starting to see enterprises consider AI for boring workloads, which is perhaps the ultimate sign that AI is about to be real. These aren’t the “Gee whiz! These LLMs are amazing!” apps that make for great show-and-tell online but have limited real-world applicability. These are instead retrieval-augmented generation (RAG) apps that use corporate data to improve things like search. Think of media companies building tools to allow their journalists to search the totality of their historical coverage, or health care providers improving search for patient-related data coming from multiple sources, or law firms vectorizing contact, contract, and other data to improve search.
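The core of these RAG search apps is simple: embed documents as vectors, retrieve the ones most similar to a query, and feed them to a model as context. The following is a minimal sketch of that retrieval step, using a toy bag-of-words "embedding" and cosine similarity in place of the learned dense embeddings a production system would use; the corpus and query are hypothetical:

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector over lowercase words.
    Real RAG systems use learned dense embeddings instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Return the k corpus documents most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

# Hypothetical corporate documents, e.g. a law firm's contract summaries.
corpus = [
    "2021 supply agreement with Acme includes a termination clause",
    "Quarterly earnings rose on strong cloud revenue",
    "Employee handbook covers remote work policy",
]

# Retrieved text would then be passed to an LLM as grounding context.
context = retrieve("What does the Acme contract say about termination?", corpus)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
```

The point of the pattern is that the model answers from the enterprise's own data rather than from whatever it memorized in training, which is exactly why these unglamorous search workloads are where AI budgets are landing first.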

None of these would light up social media networks. However, each one helps enterprises run more effectively, and hence they are more likely to get budget approval.

We’ve been in a weird wait-and-see moment for AI in the enterprise, but I believe we’re nearing the end of that period. Surely the boom-and-bust economics that Cahn highlights will help make AI more cost-effective, but ironically, the bigger driver may be lowered expectations. Once enterprises can get past the wishful thinking that AI will magically transform the way they do everything at some indeterminate future date, and instead find practical ways to put it to work right now, they’ll start to invest. No, they’re not going to write $200 billion checks, but that investment should pad the spending they’re already doing with their preferred, trusted vendors. The winners will be established vendors that already have solid relationships with customers, not point-solution aspirants.

Like others, The Information’s Anita Ramaswamy suggests that “companies [may be] holding off on big software commitments given the possibility that AI will make that software less necessary in the next couple of years.” This seems unlikely. More probably, as Jamin Ball posits, we’re in a murky economic period and AI has yet to turn into a tailwind. That tailwind is coming, but it’s starting with a gentle, growing breeze of low-key, unsexy enterprise RAG applications, not as-seen-on-Twitter LLM demos.

Copyright © 2024 IDG Communications, Inc.