

CS 153 '26: Frontier Systems - Anjney Midha, AMP PBC

CS153: Frontier Systems published 2026-04-04 added 2026-04-10
ai-infrastructure compute scaling-laws reinforcement-learning stanford sovereign-ai venture-capital context-feedback-loops

CS 153 Frontier Systems - Anjney Midha on Context Wars and Compute Scarcity

ELI5/TLDR

Anjney Midha, co-instructor of Stanford’s CS 153 and founding investor at Amp, argues that the AI race will not be won by whoever builds the smartest model. It will be won by whoever controls the best training environment — what he calls “context.” Meanwhile, the GPUs everyone needs to train these models are getting more expensive, not cheaper, which breaks a fundamental assumption the whole tech industry has been running on for 15 years. His prescription: we need to treat compute like we eventually treated electricity — standardize it, build institutions to keep it accessible, and stop letting a handful of companies hoard it all.

The Full Story

The Class and the Man Behind It

CS 153 started four years ago as “Security at Scale,” a class Midha and co-instructor Mike created because Stanford had no course covering the real-world frontier problems of systems engineering. Midha was running platform at Discord; Mike was running infrastructure at Apple. It began with 50 students. It now has 500, another 50 waitlisted, and thousands following online. The speaker list for this quarter reads like an AI industry yearbook: Jensen Huang, Lisa Su, Satya Nadella, Sam Altman, the co-creator of ChatGPT, the creator of Stable Diffusion.

Midha himself is a Stanford undergrad and grad school alum — born in India, high school in Singapore, degrees in economics, math, computational science, and bioinformatics. He has been involved in the early days of over ten AI labs, including Anthropic, Mistral, and Black Forest Labs. His current company is Amp (previously called Periodic Labs). His role, as he describes it, is neither traditional VC nor CEO — he co-founds companies on day one alongside scientists, one at a time.

Before diving into technical material, he gets visibly emotional telling students that the most important people in the room are each other. He met his wife as a sophomore at Stanford. He started both his companies with former Stanford roommates. His life advice is disarmingly simple:

“Just have fun. With people you enjoy hanging out with. That’s pretty much it.”

He calls this an empirical finding, not a prediction. The scaling laws of life, apparently, are also discovered by running the experiment.

The Recipe for Manufacturing Intelligence

The lecture’s technical backbone is a framework Midha has been refining through years of co-founding AI labs. The recipe: raise money, buy compute, add data, pre-train a model, ship it, run inference, and collect two things — revenue (to buy more compute) and context feedback (to improve the model via reinforcement learning).
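The recipe is a feedback loop, and it can be made concrete with a toy simulation. Every constant and function below is invented purely for illustration — nothing here comes from the lecture — but the structure mirrors the loop Midha describes: funding buys compute, compute trains a model, the deployed model returns revenue and context, and both feed the next generation.

```python
# Toy model of the scaling flywheel: capital -> compute -> model
# quality -> revenue + context feedback -> more capital.
# All constants are made up for illustration.

def flywheel(capital: float, generations: int = 4) -> list[dict]:
    history = []
    context = 1.0  # accumulated feedback from deployed models
    for gen in range(generations):
        compute = capital * 0.8                # most funding goes to GPUs
        quality = (compute * context) ** 0.5   # diminishing returns on raw compute
        revenue = quality * 25.0               # better models earn more
        context *= 1.2                         # deployment yields more RL feedback
        capital = revenue                      # revenue buys next round's compute
        history.append({"gen": gen, "quality": round(quality, 1),
                        "revenue": round(revenue, 1)})
    return history

for row in flywheel(100.0):
    print(row)
```

The point of the sketch is the compounding: each generation's revenue and context make the next generation's model better, which is why twenty-one investors demanding proof up front were asking the wrong question.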

Four years ago, when Dario and Daniela Amodei left OpenAI to start Anthropic, Midha made 22 introductions to Sand Hill Road investors. Twenty-one said no. They wanted empirical proof that scaling capabilities would translate into a business.

Four years later, Anthropic has gone from $9 billion to $20 billion in revenue. The proof arrived.

Context: The Real Moat

The question Midha keeps hearing is straightforward: if the scaling recipe is so simple and repeatable, who wins?

His answer: whoever controls the context. Context, in his framework, means the environment in which an AI agent learns and gets verified. Think of training a dog to fetch in a park. The park — the grass, the weather, the kids running around — that is the context. It shapes what the dog can learn.

“Where will frontier progress continue most rapidly? Wherever in life we have verifiability.”

Code is verifiable. You write a unit test; it passes or it does not. Materials science is verifiable — a company called PI Labs is using RL with physical verification to discover new superconductors, with robots in a 30,000 square foot facility in Menlo Park.
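Verifiability is what makes code such a good RL environment: the reward signal is mechanical, not a matter of taste. A minimal sketch of the idea — the `square` candidates and the test cases are invented examples, not anything from the lecture:

```python
# A verifiable reward: run candidate code against unit tests and
# score pass/fail. This binary signal is what RL can optimize;
# "is this essay beautiful?" has no equivalent checker.

def reward(candidate_src: str, tests: list[tuple[int, int]]) -> float:
    """Return 1.0 if the candidate passes every test, else 0.0."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # load the model's proposed code
        fn = namespace["square"]
        return 1.0 if all(fn(x) == y for x, y in tests) else 0.0
    except Exception:
        return 0.0                      # crashes earn no reward

tests = [(2, 4), (3, 9), (-1, 1)]
good = "def square(x):\n    return x * x"
bad = "def square(x):\n    return x + x"
print(reward(good, tests), reward(bad, tests))  # 1.0 0.0
```

Note the asymmetry: the checker is trivial to build for code, and impossible to build for long-form prose — which is exactly Midha's point about where frontier progress will concentrate.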

But beauty? Love? Aesthetics? Those are not easily verifiable. And this, Midha argues, is why AI is terrible at long-form creative writing. He tells the story of sending a blog post — outlined by him, fleshed out by Claude — to a founder friend. The friend responded in 30 seconds: “Did you use Claude for this?” Busted. Amp now has an internal rule: no AI-generated documents sent between team members. They sit. They write. They share it raw.

The strategic implication cuts deep. Teams that control unique, defensible context will capture the most value. Teams that get locked out of essential contexts will not have a chance.

The Windsurf Incident

Midha offers a pointed case study. About a year ago, OpenAI moved to acquire Windsurf, a coding IDE. Days later, Anthropic cut off model access to Windsurf users — no warning.

In most of the tech industry, you do not just shut off an API. But the logic was clean: if your competitor acquires a tool that uses your model, they can observe how your model helps customers and distill that knowledge. Context leakage. Anthropic sealed the leak.

“That was one assumption that stopped scaling.”

The assumption being: if you are an application company, you can always rely on your model provider to keep giving you intelligence. Not anymore. The context wars have begun, and they are playing out across consumers, creators, companies, and countries.

Sovereign AI and the CLOUD Act

Mistral, founded by the co-creators of Llama (Guillaume Lample) and the Chinchilla scaling laws (Arthur Mensch), was built on a specific bet: closed-source cloud models work fine for everyday software engineering, but mission-critical government workloads cannot live on someone else’s servers.

The CLOUD Act — which Midha notes zero students in the room had heard of — lets the US government compel US-based providers to hand over data stored on their servers anywhere in the world. For many governments, that is a non-starter. This is why President Macron stood on stage in Paris next to a 33-year-old scientist (Mensch) and Jensen Huang and declared it the future of Europe.

AI workloads have graduated from chatbot assistants to mission-critical systems. RL now works with enough precision for sensitive contexts. The result: a global reshuffling of cloud infrastructure, the rise of “sovereign AI,” and startups getting a rare chance to unbundle the cloud oligopolies that have consolidated power for 15 years.

Compute Is Not a Commodity

This is where Midha’s infrastructure obsession takes center stage. For years, the conventional wisdom has been that chips are a commodity. They depreciate. Publicly traded companies are built on this assumption.

The data says otherwise. Amp maintains an internal system called the Amp Grid that tracks GPU rental prices. H100s — a chip over two years old — are not getting cheaper. They are getting more expensive. Average hourly rental has climbed well past the $1.73 it was two years ago. That morning, a founder who had raised roughly a billion dollars messaged Midha in a panic: need H100s, any quantity, right now, price not a problem.

“It’s a good time to be a drug dealer.”

Two problems make compute fundamentally unlike electricity today. First, it is not fungible — an H100 is different from a GB200 is different from a B300, even though they come from the same manufacturer. Second, demand is nearly impossible to forecast. Training is spiky (you experiment small, then spike for hero runs). Inference is cyclical (everyone uses the chatbot during the day, nobody at night). The result is hoarding: the five largest tech companies spent $300 billion on infrastructure last year, plan to spend $600 billion this year, and have announced $1.2 trillion for next year.
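The forecasting problem can be made concrete with a toy demand series: a smooth daily inference cycle plus rare, large training spikes. The shapes and numbers below are invented for illustration, but they show why the sum is so hard to plan capacity against.

```python
# Toy compute-demand series: cyclical inference (daytime peaks)
# plus spiky training ("hero runs" landing at random hours).
# Constants are illustrative, not measured.
import math
import random

random.seed(0)

def demand(hour: int) -> float:
    # Inference follows the workday: peak mid-day, trough overnight.
    inference = 50 + 40 * math.sin(2 * math.pi * (hour % 24) / 24)
    # Training arrives as rare, huge spikes that dwarf the baseline.
    training = 500 if random.random() < 0.02 else 0
    return inference + training

series = [demand(h) for h in range(24 * 30)]  # one month, hourly
print(f"peak/base ratio: {max(series) / min(series):.1f}")
```

A provider sized for the baseline gets crushed by the spikes; a provider sized for the spikes sits idle most of the month. Hoarding is the rational response — hence the $300B / $600B / $1.2T escalation.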

History Rhymes

Midha walks through the price histories of steel, fiber optics, DRAM, shipping (the Baltic Dry Index), and uranium. The pattern repeats: new general-purpose technology emerges, prices spike as players hoard, a panic triggers a crash, and eventually society figures out how to stabilize the resource — sometimes through industry self-regulation, sometimes through government intervention.

The typical cycle in digital infrastructure takes about 2.8 years from boom to stabilization. For physical infrastructure, 6.3 years. AI is an unusual hybrid — producing bits (software revenue, intelligence) from atoms (land, power, chips). Those worlds, Midha notes, do not like colliding.

The Path to Commodity Compute

History suggests two things are needed to turn a scarce, hoarded resource into a stable commodity. First: standards. AC/DC for electricity. TCP/IP for the internet. A common unit, a standard delivery interface, interconnection and pooling, metering and settlement, and the ability for buyers to substitute one supplier’s unit for another. None of that exists for compute today.

Second: institutions to enforce those standards. Because humans, left to their own devices, hoard.
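What a "common unit" and "standard delivery interface" for compute might look like is an open question — Midha's point is that none of it exists yet. Purely as a thought experiment, with every type and method name hypothetical, the checklist could be sketched as an interface:

```python
# Hypothetical sketch of what the standardization checklist implies.
# None of these types exist anywhere; they only make the list concrete.
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class ComputeUnit:
    """A common unit: chip-agnostic, so an H100-hour and a GB200 slice
    can both be quoted in it (the fungibility compute lacks today)."""
    flops: float            # delivered FLOP/s, not nameplate
    memory_gb: float
    interconnect_gbps: float

class ComputeProvider(Protocol):
    """Standard delivery interface: any supplier's unit substitutes
    for any other's, the way kilowatt-hours do for electricity."""
    def quote(self, unit: ComputeUnit, hours: float) -> float: ...
    def deliver(self, unit: ComputeUnit, hours: float) -> str: ...   # returns a lease id
    def meter(self, lease_id: str) -> float: ...                     # consumption, for settlement
```

Metering and settlement live in `meter`; pooling and substitution fall out of any provider satisfying the same `Protocol`. The hard part, as the next paragraph argues, is not the interface — it is the institution that makes anyone honor it.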

“We are in the pre-standardization of compute era.”

Midha’s parting assignment to students: What will it take to ensure a peaceful transition on compute? And what is your part in it?

The Limits of RL

On the philosophical question of whether RL will generalize across all domains, Midha is skeptical. The optimistic view says: once a coding agent gets good enough, just tell it to build itself a materials science environment and run RL there. And so on, recursively, forever.

Midha’s empirical view: RL is producing relentless, seemingly unbounded progress within narrow, verifiable domains like coding. But it is not clear that it generalizes across domains. Coding to materials science to biology — that transfer is not happening yet. The domains where progress will be fastest are the ones where you can check the answer. Everything else will be slower, messier, and more dependent on human taste.

He thinks about recursive self-improvement at the systems level — a team that keeps getting better at executing — rather than at the individual model level. Which is a notably grounded take from someone who has co-founded ten AI labs.

Claude’s Take

Midha is doing something unusual for a venture investor: giving away his framework in a university lecture, transparently disclosing his biases, and being honest about what he does not know. That alone is worth noting.

His “context is the moat” thesis is genuinely insightful and holds up well. The Windsurf example is a clean illustration of how the AI stack is rewiring itself in real time — model providers and application companies are no longer in a simple vendor-customer relationship. They are in a strategic context war. This is not widely understood outside the industry and is worth paying attention to.

The compute-is-not-a-commodity argument is well-supported by his data. GPU prices rising for a two-year-old chip does break the standard depreciation model. The historical analogies to steel, DRAM, and uranium cycles are directionally correct, though every infrastructure cycle has its own wrinkles. The specific claim that big tech will spend $1.2 trillion on CapEx next year is worth tracking — these numbers have been announced in earnings reports but could be revised.

Where I would push back: the “pre-standardization” framing is elegant but somewhat self-serving. Midha runs a company (Amp) that appears to be positioning itself as exactly the kind of institution that would help standardize and allocate compute. He discloses his biases, which is good, but the prescription (“we need institutions to manage compute allocation”) happens to describe his own business model. That does not make it wrong. It does mean you should notice it.

His skepticism about RL generalizing across domains is one of the more honest takes you will hear from someone in his position. Most people with his portfolio would be incentivized to hype the “recursive self-improvement leads to AGI” narrative. He is not doing that. He thinks progress will be fast in verifiable domains and slow everywhere else. That is a measured, defensible position.

The emotional moments — getting choked up telling students to invest in relationships, the wife in the audience, the two companies started with roommates — are genuine and land well. But there is an irony in someone who runs an AI infrastructure company telling Stanford students that the most important thing is human connection. He is right, of course. It is just that the industry he is building will make that advice harder to follow.