
Class #1 | MS&E435: Economics of the AI Supercycle - Stanford University Spring '26

MS&E 435: Economics of AI published 2026-04-09 added 2026-04-10
ai investing economics stanford supercycle infrastructure nvidia capex


ELI5/TLDR

The AI industry has a money problem shaped like an upside-down triangle: almost all the revenue sits at the bottom with chipmakers (mostly Nvidia), while the apps people actually use barely make anything. In every previous tech wave — internet, mobile, cloud — the triangle eventually flipped so that apps earned more than infrastructure. But the AI triangle has barely budged in two years, and it might take a decade or longer to flip, because unlike traditional software, every new AI user costs real money to serve. The big open question: where does the actual profit show up, and when?

The Full Story

The Instructor and the Course

Apoorv Agrawal leads investing at Altimeter Capital, a concentrated investment firm with public and private arms. He started at Palantir, did grad school at Stanford, and now lives across the street from campus. The course runs nine weeks, features guest speakers from Nvidia, OpenAI, Anthropic, and others, and operates under the Chatham House Rule — speakers will apparently overshare, so no recording. Grading is 50% attendance, 50% a final assignment. The pitch to students is straightforward: half of you will start AI companies, the other half will fund them, so you should probably understand the economics first.

The Upside-Down Triangle

The central image of the lecture — and arguably the central question of the entire AI industry — is a revenue chart comparing the AI ecosystem to the cloud ecosystem.

In cloud, the shape is a normal triangle: a wide top (applications earning the most revenue), a middle layer (infrastructure), and a narrow bottom (semiconductors). In AI, it is the exact opposite. Semiconductors dominate. Apps are a sliver.

“We are investing so much into the capex… energy, chips, power, interconnects, memory, all that to give you a data center that you can either rent by the hour or by the token… And then the question is, hey, these models that you’ve built, are they creating economic value?”

The students offered three explanations for why the AI triangle looks so different: it is still early; Nvidia has a near-monopoly on chips and can charge accordingly; and the fundamental economics of AI are just different from traditional software.

Agrawal confirmed all three, but lingered on the third. Traditional software had near-zero marginal cost per user. You build it once, ship it to millions, and run at 80-90% gross margins. AI does not work that way. Every additional user burns GPUs. Some AI companies running billions in revenue are still not profitable. The incremental user is expensive in a way that would make a SaaS founder weep.

The Railroad Analogy

The students asked how long this lopsided shape will persist. Agrawal compared the current moment to laying down railroads — massive upfront capital that takes years to pay off.

He walked through the history of AWS. Amazon Web Services broke ground in 2004. Its first major customer, Netflix, arrived in 2010. Amazon itself fully migrated to AWS in 2012. Eight years from first investment to maturity. And during that build-out, the dominant question on Wall Street was whether Amazon would go bankrupt.

“Thankfully nobody at least yet is on the verge of bankruptcy, but these are large numbers.”

The AI ecosystem has grown fivefold in two years, adding roughly $350 billion in revenue. About 75% of that went straight to semiconductors. Apps grew more than 10x and still barely registered on the chart. The shape, stubbornly, did not change.
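The arithmetic above can be sketched in a few lines. The $350 billion increment and the 75% semiconductor share are the figures quoted in the lecture; the app-layer base used below is a hypothetical round number purely to illustrate why even a 10x jump at the top of the stack barely moves the shape:

```python
# Back-of-envelope on the lecture's figures: ~$350B of revenue added to the
# AI ecosystem in two years, ~75% of it flowing to semiconductors.
added_revenue_b = 350           # $B added across the AI stack
semi_share_of_growth = 0.75     # share of the increment captured by chips

semi_growth_b = added_revenue_b * semi_share_of_growth
other_growth_b = added_revenue_b - semi_growth_b

print(f"Semiconductors: ~${semi_growth_b:.0f}B of the increment")
print(f"Infra + apps:   ~${other_growth_b:.0f}B of the increment")

# A 10x jump at the app layer barely registers if the starting base is
# small. $3B is a hypothetical base, not a disclosed figure:
app_base_b = 3
app_now_b = app_base_b * 10
print(f"Apps at 10x a ${app_base_b}B base: ~${app_now_b}B "
      f"({app_now_b / added_revenue_b:.0%} of the $350B increment)")
```

Run the numbers and the stubbornness of the shape is obvious: the chip layer's share of the growth is an order of magnitude larger than a 10x-grown app layer.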

Agrawal’s estimate: the triangle could take a decade or longer to flip. Possibly longer than cloud, because the underlying hardware problem — getting the “substrate” right — is harder. Two things could accelerate the flip: a breakout success from one of the custom chip programs (Google’s TPUs, Meta’s MTIA, or one of the many secret ASIC efforts), or the hyperscalers simply stopping their massive capex spending, which would signal the current model is broken.

Training vs. Inference

A student asked about the split between training and inference workloads. Agrawal noted that Nvidia’s earnings calls are the most closely watched source for this data. Last disclosed: about 40% of GPUs are used for inference, 60% for training. He expects inference to grow over time, but could not say when. Training workloads are predictable and high-utilization. Inference is bursty — it follows human sleep cycles, dips at Christmas and Thanksgiving, and will only become 24/7 when agents take over.

Where the Profit Lives

The most profitable layer is semiconductors. By a long shot. Nvidia’s data center business earns roughly 75% gross margins. Application-layer companies earn somewhere between 0% and 30%, depending on who you ask. When you look at profitability instead of just revenue, the triangle becomes even more concentrated at the bottom. One company runs the table.
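A quick sketch shows why the profit triangle is even more bottom-heavy than the revenue triangle. The ~75% semiconductor margin and the 0–30% app-layer range come from the lecture; the revenue shares and the infrastructure margin below are assumed round numbers for illustration only:

```python
# Weighting assumed revenue shares by gross margin. Only the 75% chip
# margin and the 0-30% app range are from the lecture; the rest are
# illustrative assumptions.
layers = {
    # layer: (assumed revenue share, gross margin)
    "semiconductors": (0.75, 0.75),   # ~75% margin (Nvidia data center)
    "infrastructure": (0.20, 0.50),   # assumed midpoint margin
    "applications":   (0.05, 0.15),   # midpoint of the 0-30% range
}

gross_profit = {k: share * margin for k, (share, margin) in layers.items()}
total = sum(gross_profit.values())

for layer, gp in gross_profit.items():
    print(f"{layer:>15}: {gp / total:.0%} of ecosystem gross profit")
```

Under these assumptions, semiconductors take well over 80% of the stack's gross profit even though they hold 75% of its revenue — the margin differential compounds the concentration.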

The Vertical Integration Question

A student asked whether past tech cycles had been won by vertically integrated players. Agrawal ran through the list with a quiet precision that suggested he had thought about this before:

  • Internet: Google. ~$3 trillion market cap. 99% search market share. Runs its own servers all the way up to the user interface. Fully integrated.
  • Mobile: Apple. ~$2.5 trillion. Designs its own chips, builds the hardware, controls the OS and the app store. Fully integrated.
  • Social: Meta. ~$2 trillion. Dominant, but did not go all the way down to the servers. Perhaps lost a trillion dollars of value for that gap.
  • Cloud: No single winner. AWS, GCP, Azure share the market. None fully integrated.

The implication hung in the air: if the pattern holds, whoever manages to integrate vertically in AI — from chips to models to applications — stands to capture the most value.

The Consumer AI Ceiling

Agrawal presented a framework for thinking about how big consumer AI can get by comparing it to other consumer products at scale:

  • Mandatory apps (3 billion users): WhatsApp, Chrome, YouTube. You cannot function without them.
  • Social apps (1.5-2 billion users): Instagram, TikTok, Facebook. Not mandatory, but strong network effects pull you in.
  • Niche apps (~500M-1B users): Spotify, Twitter, Amazon. Useful for specific tasks.

ChatGPT, with about a billion users, has just crossed into niche-app territory. Gemini has not reached it yet. Neither is close to social-app scale, let alone mandatory.

The reason, Agrawal suggested, is that ChatGPT requires active work. You have to go ask it something. It is not a passive feed, not a messaging app, not a dopamine machine. The number of people who actively ask questions of technology is simply smaller than the number who scroll.

“Is knowledge work work that everybody does?”

The current economics: Alphabet monetizes 4 billion users at ~$100/year each. Meta monetizes 3.5 billion at ~$70/year. ChatGPT monetizes about 1 billion users at ~$10/year. Getting from $10 to $100 per user likely requires ads. Agrawal drew the parallel to Facebook’s IPO in 2012, when short-sellers argued that ads working on desktop computers would never work on phone screens — there was simply no space. The industry found the space.
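The per-user figures above imply the revenue gap directly. The users and ARPU numbers are the ones quoted in the lecture; the total-revenue products are implied arithmetic, not disclosed figures:

```python
# ARPU (average revenue per user) comparison from the lecture's figures.
platforms = {
    # name: (users in billions, $/user/year)
    "Alphabet": (4.0, 100),
    "Meta":     (3.5, 70),
    "ChatGPT":  (1.0, 10),
}

for name, (users_b, arpu) in platforms.items():
    print(f"{name:>8}: {users_b}B users x ${arpu}/yr ~ ${users_b * arpu:.0f}B")

# The multiple ChatGPT's ARPU would need to close to reach Alphabet's:
gap = platforms["Alphabet"][1] / platforms["ChatGPT"][1]
print(f"ChatGPT ARPU gap: ~{gap:.0f}x (the lecture's case for ads)")
```

The 10x ARPU gap is the whole argument for ads: user growth alone cannot close it, because ChatGPT would need roughly four times Alphabet's entire user base at $10/year to match its revenue.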

“The same thing’s going on right now, which is while I’m having this conversation, it is a very personal conversation. I don’t want to be interrupted by advertisements. That’s the bear debate.”

He was optimistic the industry would solve this too. But the how remains open.

Claude’s Take

This is a genuinely useful lecture, mostly because Agrawal is doing something rare: he is an investor who actually shows his math. The upside-down triangle framework is simple, but the data behind it — mapping real revenue across the AI stack, comparing it to historical cloud/mobile/internet data — gives it weight. It is not vibes-based analysis.

The strongest point is the marginal-cost argument. Traditional software’s near-zero marginal cost per user was the engine of the entire SaaS era. AI fundamentally breaks that model. Every inference costs money. This is not a minor detail; it is the central economic fact of the industry, and Agrawal is right to hammer it.

The AWS historical comparison is well-chosen but worth scrutinizing. AWS took eight years to mature, yes. But AWS was building something genuinely new — on-demand cloud computing did not exist before. The AI application layer is building on top of existing cloud infrastructure. The comparison might overstate how long the triangle takes to flip, because the substrate already exists. On the other hand, the chip-level concentration (Nvidia’s dominance) is more extreme than anything AWS faced, which could slow things down.

The consumer AI ceiling analysis is the most thought-provoking section. The observation that ChatGPT requires active effort — that it is a “knowledge work” tool, not a passive consumption platform — is a genuine insight into why AI apps may never reach WhatsApp-scale ubiquity through their current form factor. The implicit suggestion is that AI needs to become ambient (embedded in everything, running in the background) rather than something you deliberately open and query.

The weakest part is the vertical integration history. It is a neat pattern (Google won internet, Apple won mobile) but the sample size is four, and the cloud example — where no integrated player won — somewhat undermines the thesis. Drawing investment conclusions from n=4 is entertaining over dinner but thin as predictive analysis.

One thing Agrawal did not address: the possibility that the triangle never flips because AI is simply a different kind of technology. Not every industry follows the same value-migration pattern. Oil and gas has been infrastructure-heavy for a century. The assumption that “AI will eventually look like cloud” is itself a hypothesis worth testing, not a given.

Overall, this is a solid opening lecture. The framework is clear, the data is real, and the questions are the right ones. The course lineup — Nvidia, OpenAI, Anthropic executives under Chatham House rules — suggests the subsequent sessions will be where the real signal lives.