
YouTube

We Asked a $30 Billion Manager Where AI Profits Will Actually Go

Excess Returns published 2026-04-09 added 2026-04-10
investing AI quality-investing tech-stocks GMO value-chain
watch on youtube → view transcript

ELI5/TLDR

A portfolio manager at GMO (a firm that manages $30 billion) breaks the AI investment world into four layers — applications, hyperscalers, LLMs, and chipmakers — and says most investors are making the mistake of treating them all the same. His core argument: the safest money sits near the top (hyperscalers like Microsoft and Alphabet, who have cash and visibility) and at the very bottom (semiconductor equipment companies like ASML, who are basically irreplaceable). The middle layers — especially LLMs — are where the risk lives, because nobody knows which ones will matter in five years.

Summary

Tom Hancock, head of focused equity at GMO, walks through the firm’s recent paper “Hype vs. High Conviction” on how to think about AI investing through the lens of quality. He maps the AI ecosystem into four layers (applications, hyperscalers, LLMs, and infrastructure/suppliers), then traces how money flows down through those layers — each company’s revenue being the capex of the layer above it. The conversation covers why this boom is more stable than the dot-com era (funded by cash, not debt), why software companies are getting unfairly punished, why Oracle got sold from the portfolio, and how GMO defines “quality” as a company’s ability to earn above-average returns on capital over time while surviving the inevitable rough patches. His bottom line: invest where the competitive advantages are durable and the balance sheet can take a punch.

Key Takeaways

  • The AI value chain is a waterfall. Applications pay hyperscalers, who pay LLMs, who pay Nvidia, who pays TSMC, who pays equipment makers. Every company’s revenue depends on the spending of whoever is above them.
  • The further down the stack, the more volatile your revenue. Think of a whip — the handle barely moves, the tip whips around wildly. Nvidia is closer to the tip than Microsoft is.
  • Most of what hyperscalers spend right now is growth capex, not maintenance. If they even slightly slow their growth rate, the impact on companies further down the chain gets amplified dramatically.
  • This is not the dot-com bubble. The big spenders (Microsoft, Alphabet) are using their own cash, not borrowed money. That makes the system much more resilient to macro shocks.
  • But their pockets are deep, not infinite. Some of these companies are approaching break-even on cash flow. Debt or slower growth is coming.
  • The LLM layer is the riskiest bet. Too many companies are trying to build LLMs. Differentiation will come from proprietary data (which favors Alphabet), not from fancier algorithms or more GPUs.
  • Software companies are being unfairly hammered. The market is treating every software company as if AI will replace it tomorrow. Most have proprietary data, regulatory lock-in, and workflow entrenchment that AI cannot easily replicate.
  • The real threat test for software: How bad is it if the AI gets it wrong, and how easy is it to check? Low-stakes, easy-to-verify software (like data visualization) is at risk. High-stakes software (financial systems, supply chain logistics) is not.
  • Quality = the ability to earn above-average returns on capital, consistently, with a balance sheet strong enough to survive bad years. That is the whole definition. Everything else is decoration.
  • GMO sold Oracle not because the stock was expensive, but because the balance sheet got too risky. Too much debt, too much customer concentration (OpenAI), too much dependence on things outside their control.
  • The incumbents will likely win this round. Unlike past tech waves where new companies displaced old ones, the current winners are big enough to fund the next generation of innovation themselves.

Detailed Notes

The Four-Layer AI Stack

The AI ecosystem, as GMO sees it, has four distinct layers. Each one has different economics, different risks, and different investment implications.

Layer 1: Applications. This is where humans actually interact with AI. ChatGPT, Copilot, Cursor, autonomous vehicles. If AI becomes a big commercial success, it gets monetized here — end users paying for products that do something useful for them. The problem: we do not yet know what most of the killer applications will be. It was easy to see that smartphones were going to be huge; it was not easy to predict Uber. Code generation and chatbots are the two things clearly making money right now. Everything else is still a question mark.

Layer 2: Hyperscalers. Microsoft (Azure), Alphabet (Google Cloud), Amazon (AWS). They provide the compute that runs the applications. These companies have cash, diversified businesses, and direct visibility into what customers actually want. They are the safest place in the AI stack. They are also not trading at crazy multiples — Microsoft never got above 30x earnings in this cycle, versus 50x+ in 1999.

Layer 3: LLMs. GPT, Gemini, Claude, Llama. This is where the technological innovation lives, but also where the competitive moat is thinnest. There are more LLMs being built than the world needs. The key differentiator will be proprietary data, not compute power or clever algorithms. Alphabet has the best data. Companies trying to build LLMs with just capital and public data are going to have a hard time.

Layer 4: Infrastructure/Suppliers. Nvidia, TSMC, ASML, Applied Materials. The picks-and-shovels layer. Nvidia gets all the attention, but it sits further down the whip than people realize — its revenue is almost entirely growth capex from the layers above. If spending even slows slightly, Nvidia feels it disproportionately. Semiconductor equipment companies like ASML are in a different position: there is essentially no alternative to their tools. In any scenario where AI succeeds, they win.

Follow the Cash

One of the paper’s core ideas: trace how money flows through the four layers. An application earns revenue from end users and pays it to a hyperscaler. The hyperscaler takes that money and pays LLM providers and buys GPUs. The GPU maker pays TSMC. TSMC pays equipment companies.

Right now, the system is being bootstrapped by outside investors — venture capital, sovereign wealth funds, the hyperscalers spending ahead of revenue. But in the long run, all of this has to be funded by actual application revenue from actual end users. If those applications do not materialize at the scale the market expects, every layer below feels the pain.

The further down the stack you go, the less visibility you have into whether the layers above you are going to keep spending. Nvidia cannot really know whether OpenAI’s business model will work. TSMC cannot really know what Nvidia’s order book will look like in three years. Each layer is making a bet on the layer above it.

Growth Capex vs. Maintenance Capex

This is one of the most important and underappreciated dynamics. Right now, the hyperscalers are growing their capex at roughly 60% per year. Almost all of that is growth capex — building new capacity, not replacing old stuff.

Here is why that matters: chips last a long time. People are still using Ampere-generation GPUs. The useful life of this hardware extends well beyond its depreciation schedule. So maintenance capex — the amount you would need to spend just to keep current capacity running — is a relatively small fraction of total capex.

Nvidia receives mostly growth capex. If Microsoft decided to grow its capacity 40% a year instead of 60%, that is a small change for Microsoft's trajectory. For Nvidia, it could mean an outright revenue decline, because the maintenance piece stays flat while the growth piece — most of Nvidia's revenue — shrinks by a third.
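The arithmetic behind that whip effect fits in a few lines. This is a minimal sketch with illustrative numbers, not figures from the talk: model a supplier's sales as the buyer's capacity growth plus a small replacement slice.

```python
# Sketch of the capex "accelerator" dynamic described above.
# Assumption (illustrative, not from the talk): a hyperscaler's GPU purchases
# in a year = new capacity added + replacement of a small worn-out fraction.

def gpu_purchases(capacity, growth_rate, replacement_rate=0.05):
    """Units of GPU capacity bought this year."""
    return capacity * (growth_rate + replacement_rate)

base = 100.0  # installed capacity, arbitrary units

fast = gpu_purchases(base, 0.60)  # hyperscaler grows capacity 60%
slow = gpu_purchases(base, 0.40)  # hyperscaler grows "only" 40%

print(f"purchases at 60% growth: {fast:.0f}")
print(f"purchases at 40% growth: {slow:.0f}")
print(f"supplier revenue change: {slow / fast - 1:.0%}")
```

Under these assumptions, a slowdown from 60% to 40% capacity growth — the hyperscaler still expanding briskly — cuts the supplier's sales by roughly 31%, because the flat replacement slice absorbs none of the swing.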

Why This Is Not the Dot-Com Bubble

Three big differences:

  1. Funded by cash, not debt. Microsoft and Alphabet are not going to stop investing because the Fed raises rates. They are playing a long game with their own money.
  2. The companies are actually high quality. The dot-com era was full of telecom companies laying fiber — capital-intensive, undifferentiated businesses. Today’s tech leaders do genuinely hard, differentiated things.
  3. Valuations are not as extreme. Microsoft at 30x earnings is expensive but not absurd. Microsoft at 50x earnings in 1999 was a different animal.

The caveat: these companies are approaching the limits of what they can self-fund. Some may need to take on debt or slow their spending. And there is still outside capital risk — Middle Eastern sovereign funds backing Stargate and OpenAI could pull back.

The Software Panic Is Overdone

Three or four years ago, software was “the best business in the world” — recurring revenue, asset-light, near-100% gross margins. Then AI showed up and suddenly being asset-light became a liability. If your product is just code, and AI can write code, what is your moat?

The market’s answer has been to sell first and ask questions later. But there are several reasons most software companies are safer than the market thinks:

  • Proprietary data. Many software companies sit on data that AI cannot replicate.
  • Regulatory lock-in. Financial systems, healthcare platforms, compliance tools — switching costs are enormous and failures are catastrophic.
  • Workflow entrenchment. When an entire industry standardizes on a tool, that creates network effects that go beyond the quality of the code.
  • The cost of the software is small relative to the cost of the people using it. A cheaper Salesforce does not save much money. Replacing the salesperson using Salesforce would — but that is a much harder problem.

The software most at risk: tools where the data is not proprietary, mistakes are immediately obvious, and the cost of being wrong is low. Data visualization, for instance. The software least at risk: anything where failure is expensive and hard to detect, like supply chain logistics or financial compliance.

The Oracle Case Study

GMO held Oracle for a long time. It was boring in the best way — strong balance sheet, low growth, low multiple. Then Oracle pivoted to AI cloud infrastructure, took on a lot of debt, and concentrated its customer base around companies like OpenAI.

GMO sold. Not because the stock was expensive (it had gone up nicely), but because the balance sheet no longer met their quality standard. Oracle’s debt is only serviceable if its customers keep paying. Its biggest customers are in the most uncertain, competitive part of the AI stack. If OpenAI hits a rough patch, Oracle’s ironclad contracts are only as good as OpenAI’s solvency.

How GMO Defines Quality

Quality, as GMO uses the term, means one thing: a company that can consistently earn above-average returns on the capital it deploys. In a competitive market, that implies a moat — something competitors cannot easily duplicate.

The backward-looking signals: high profitability, high return on equity, stability across economic cycles, strong balance sheet.

The real work: figuring out whether those things will still be true in five years.

Balance sheet strength is not just a safety feature. It is what lets a company survive the inevitable bad years and invest when competitors are retrenching. If you are generating tons of cash and you still need to borrow money, something is wrong.

GMO targets 40-50 names in the portfolio. They invest at the hyperscaler and infrastructure layers of AI. They avoid LLMs (too uncertain) and are cautious on pure applications (too early).

Apple’s Quiet Strategy

Apple mostly sat out the AI spending race. At the time, this was widely criticized. In retrospect, it looks smart. They can license LLM technology from others the way they licensed Google Search, while controlling the most valuable consumer platform on earth. They have everyone’s personal data through the phone. The bet is that they will eventually ship a breakthrough product built on top of other people’s AI infrastructure.

Meta’s Interesting Position

Meta is closer than Microsoft or Alphabet to the point where its cash flow can no longer fund its AI spending. The open question: does Meta actually need its own LLM (Llama)? It is not clear. But Meta has one advantage nobody else does — it is its own customer. It uses AI to direct content and ads, so it has the best visibility into whether its own AI investments are paying off.

Quotes/Notable Moments

“Nvidia’s revenues are OpenAI’s capex, and OpenAI has the capex to spend because they’re getting money from Microsoft and also because they’re spending in advance of that revenue.”

“I used to joke that tech investing was IBM and a bunch of crappy companies. Now it’s a bunch of great companies.”

“Software was ‘the best business in the world’ three years ago. Suddenly not having tangible assets worked against you.”

“A lot of being quality is just being able to keep going through the tough patches.”

“I can’t really be convinced that OpenAI will be the best LLM — or even a relevant LLM — in five years from now.”

“I can’t see a world in which AI plays out as successful and it’s different companies building the tools to build the chips.”

“If you’re a company trying to cherry-pick your stock price with buybacks, you’re basically trading on insider information.”

“A year from now, is what’s happening today going to matter? If the answer is no, you shouldn’t be trading on it.”

Claude’s Take

What is solid:

The four-layer framework is genuinely useful. It is not revolutionary — anyone who has looked at a supply chain understands how money flows through layers — but applying it systematically to AI investing, and especially the growth-vs-maintenance-capex distinction, is clear-headed thinking. The whip analogy is exactly right: a small deceleration at the top becomes a violent swing at the bottom. This is the kind of structural insight that ages well.

The argument that software companies are being unfairly punished is well-supported. The “how bad is it if the AI is wrong, and how easy is it to check” framework is a practical, concrete way to think about AI disruption risk. Most market commentary on this topic is either “AI will replace everything” or “nothing will change.” This sits in the useful middle.

The Oracle analysis is the most valuable part of the conversation. It is a real, specific example of how quality investing works in practice — not “we thought the stock was expensive” but “the balance sheet changed and we could not underwrite the risk anymore.” That is an honest, process-driven answer.

What is less convincing:

The “this is not the dot-com bubble” argument is true but incomplete. The dot-com companies were mostly garbage. The current companies are real businesses with real cash flows. But the question is not whether Microsoft is a good company — it is whether the market is correctly pricing the future returns on $60-80 billion per year in AI capex. A good company at the wrong price is still a bad investment. He acknowledges this but does not dwell on it.

The LLM skepticism feels a bit too neat. “There are too many LLMs” and “differentiation will come from data” are reasonable positions, but he does not seriously engage with the possibility that the LLM layer consolidates into 2-3 winners who capture enormous value — which is exactly what happened with search engines, social networks, and cloud providers. History is full of “too many competitors” situations that resolved into oligopolies worth trillions.

What is missing:

No discussion of AI agents and their potential to change the application layer faster than he seems to expect. No mention of open-source models potentially undermining the entire LLM business model. No real engagement with the possibility that AI progress keeps accelerating rather than plateauing — he flags plateau risk but otherwise treats steady continuation as the default assumption, which it may not be.

The conversation is also very US-centric. The brief mention of international quality investing feels like an afterthought. Chinese AI companies, European regulation, and the geopolitics of semiconductor supply chains are all absent.

Bottom line: This is a sober, well-structured conversation that gives you a useful mental model for thinking about where AI profits will actually land. It is better than 90% of what you will find on the topic. The four-layer framework and the growth-vs-maintenance-capex point alone are worth the time. Just do not mistake sobriety for completeness — there are big questions he does not touch.