YouTube

Demis Hassabis: Why AGI is Bigger than the Industrial Revolution & Where Are The Bottlenecks in AI

20VC with Harry Stebbings published 2026-04-07 added 2026-04-10
ai agi deepmind google demis-hassabis ai-safety drug-discovery scaling-laws europe-tech

ELI5 / TLDR

Demis Hassabis — the guy who built DeepMind and won a Nobel Prize for protein folding — thinks AGI is coming within five years and will be roughly ten times the impact of the Industrial Revolution, happening ten times faster. The biggest bottleneck is compute, not ideas. He’s not worried about scaling laws hitting a wall, but he is worried that the world needs international AI safety coordination at exactly the moment we’re worst at international coordination. His side quest: curing cancer through his drug discovery company Isomorphic Labs.

Summary

Harry Stebbings interviews Demis Hassabis at DeepMind’s London headquarters. The conversation covers what AGI actually means (a system with all human cognitive capabilities), when it’s arriving (likely within five years, roughly on the timeline DeepMind predicted back in 2010), and what’s still missing (continuous learning, better memory systems, long-term planning, and consistency — Hassabis calls current AI “jagged intelligences” because they’re brilliant one moment and bafflingly stupid the next).

Hassabis makes a strong case that the leading AI labs are pulling away from the pack — and that the advantage will increasingly go to those who can invent new algorithmic ideas, not just scale existing ones. He’s bullish on AI solving drug discovery, energy, and materials science, but candid that the labor displacement will be real and the philosophical questions (meaning, purpose, consciousness) are the ones nobody’s talking about enough. He also pitches London as an underrated AI hub and floats the idea that being far from Silicon Valley is actually an advantage for deep thinking.

Key Takeaways

  • AGI timeline: Very good chance within 5 years. DeepMind predicted ~20 years from their 2010 founding, and they’re roughly on track.
  • Scaling laws aren’t dead. The returns are diminishing compared to the early doublings, but they’re still substantial. The “scaling hit a wall” narrative is too simplistic.
  • Compute is the biggest bottleneck — and not just for training. You need massive compute just to test new ideas at meaningful scale. The cloud is the lab bench.
  • What’s still missing for AGI: Continuous learning (AI can’t learn after training ends), better memory architectures (long context windows are “brute force”), long-term planning, and consistency.
  • “Jagged intelligences”: Current AI systems are brilliant at certain tasks when posed a certain way, then fail at elementary ones when the question is slightly rephrased. True general intelligence shouldn’t have these holes.
  • The top 3-4 labs are pulling away. Their AI tools help build the next generation of AI, creating a compounding advantage. Labs that can invent new algorithmic ideas will win; those just squeezing existing ideas will fall behind.
  • Open source stays one step behind the frontier, roughly 6 months back. Google is pushing Gemma as best-in-class for smaller models.
  • LLMs won’t be replaced — they’ll be built on top of. The question is whether AGI needs additional components beyond foundation models, not whether foundation models go away.
  • Drug discovery: Isomorphic Labs (spun out of DeepMind on the back of AlphaFold) is building a general-purpose drug design platform. Full drug design engine expected in 5-10 years. The harder problem is speeding up clinical trials, which might eventually be partially replaced by AI simulation.
  • AI safety: Hassabis wants an international body like the atomic energy agency, with standardized benchmarks testing for things like deception. AI systems should never output tokens in non-human-readable languages.
  • AGI = 10x the Industrial Revolution at 10x the speed. A decade instead of a century. Child mortality was 40% pre-Industrial Revolution — you wouldn’t want it not to have happened, but you’d want to handle the transition better.
  • AI is overhyped on the 1-year timescale and still underappreciated on the 10-year timescale. Both things are true simultaneously.
  • Energy: AI will more than pay for its own energy costs by optimizing power grids (30-40% efficiency gains), improving weather modeling, and accelerating breakthroughs in fusion, batteries, and superconductors.
  • Europe’s missing piece: Great at starting companies, bad at funding the billion-dollar growth rounds needed to create trillion-dollar companies. Pension fund reform could help.
  • The question nobody’s asking: Once we solve the technical and economic problems of AGI, what happens to meaning, purpose, and consciousness? Hassabis thinks we’ll need great new philosophers.

Detailed Notes

Defining AGI

Hassabis defines AGI as a system that has all the cognitive capabilities of the human mind. Not some of them. All of them. His reasoning is simple: the human brain is the only proof we have that general intelligence is even possible. So that’s the bar. It’s a higher bar than most definitions floating around, which tend to focus on task performance or economic value.

The Timeline

DeepMind’s co-founder Shane Legg used to write blog posts in 2010 predicting when AGI would arrive. Back then, almost nobody was working on AI and it was considered a dead end. They extrapolated compute growth and algorithmic progress and predicted about 20 years. Hassabis says they’re pretty much on track. He puts it at a “very good chance within 5 years.”

Worth noting: this was the guy who predicted it when nobody believed him, and he’s not accelerating his timeline now that everyone else is. That’s either confidence in the original analysis or stubbornness. Probably the former.

Compute: The Real Bottleneck

Compute matters for two reasons people don’t always separate. First, the obvious one: you need it to train bigger models (scaling laws). Second, the less obvious one: you need it to run experiments. Every new algorithmic idea needs to be tested at meaningful scale or you can’t tell if it actually works. If you have hundreds of researchers with new ideas, you need an enormous amount of compute just to be a good laboratory. The cloud is the workbench.

Scaling Laws: Not Dead, Just Maturing

The early days of large language models saw near-doublings with each generation. That couldn’t last forever. The exponential growth has slowed — but that doesn’t mean returns are gone. Hassabis says they’re “still very substantial, although a bit less than at the start.” It’s the difference between a gold rush and a productive mine. The gold rush phase is over; the mine still produces plenty.

What’s Missing: The Gap List

Hassabis identifies several capabilities current AI systems lack:

  1. Continuous learning. Once you finish training a model, it stops learning. It can’t incorporate new information the way your brain does overnight while you sleep. The brain uses a process called consolidation — memories from the day get replayed during sleep and woven into your existing knowledge. Nobody’s figured out how to do this for AI yet.

  2. Better memory systems. Long context windows (where you stuff all the relevant information into the prompt) are brute force. Hassabis thinks there are smarter memory architectures waiting to be invented.

  3. Long-term planning. Current systems can’t plan over long time horizons — years into the future, the kind of thing human minds do routinely.

  4. Consistency. This is the “jagged intelligence” problem. An AI can ace a complex reasoning task, then fail at something a child could do when the question is phrased slightly differently. A general intelligence shouldn’t have these random holes in its abilities.

The Lab Race: Divergence, Not Commoditization

Everyone talks about models becoming commoditized. Hassabis disagrees. He thinks the top 3-4 labs are pulling away from the pack, not converging. The reason is compounding: AI coding tools and math tools help you build the next generation of AI. Labs that can invent new algorithmic breakthroughs will have an accelerating advantage. Labs that are just squeezing the last juice from existing ideas will fall behind as those ideas get fully exploited.

Hassabis claims that 90% of the breakthroughs underpinning modern AI — AlphaGo, reinforcement learning, transformers — came from Google Brain, Google Research, or DeepMind, and he’s betting that track record continues.

DeepMind’s Acceleration

Why has DeepMind gotten noticeably better in the last 2-3 years? Organizational changes. They consolidated talent and compute that had been spread across multiple Google groups, pointed everyone in one direction, and started operating like a startup. Less bureaucracy, more focus, all the resources in one pot instead of three. The ingredients were always there; they just needed to be assembled.

Open Source: One Step Behind, By Design

Open source models will stay roughly 6 months behind the frontier. It takes the community that long to reverse-engineer and reimplement new ideas. Google supports this with Gemma, their suite of smaller open models aimed at developers, academics, startups, and edge computing. But the absolute cutting edge will remain proprietary.

LLMs: Not Going Away

Hassabis disagrees with Yann LeCun’s view that LLMs are a dead end. He thinks foundation models are here to stay. The open question isn’t whether LLMs get replaced — it’s whether AGI needs additional components built on top of them. He gives it 50/50 odds that we still need some breakthroughs (possibly in world models), but the foundation model layer isn’t going anywhere.

Drug Discovery and the Regulatory Bottleneck

After AlphaFold solved protein folding, Hassabis spun out Isomorphic Labs to tackle the rest of drug discovery: designing compounds, checking toxicity, optimizing all the properties a drug needs. He expects the full AI drug design engine in 5-10 years.

But then there’s the other problem: clinical trials still take years, and that’s regulatory, not technical. Hassabis’s plan is a two-step process. Step one: get a dozen or so AI-designed drugs through the full pipeline. Step two: once regulators have enough data to trust the AI’s predictions, start shortening the process — maybe skip animal testing, maybe accelerate dosage testing. This second step is probably 10+ years further out. AI can also help by simulating human metabolism and matching patients to the right drugs based on their genomic profile.

AI Safety: The Timing Problem

Hassabis identifies two safety risks. First, misuse by bad actors — these are dual-use technologies. Second, the technical challenge of keeping increasingly autonomous, agentic systems on the rails. His wish-list: an international body similar to the atomic energy agency, standardized benchmarks (especially testing for deception), a certification “kite mark” for safe models, and AI safety institutes in every leading country staffed with high-quality researchers.

He also flags one specific red line: AI systems should never output tokens in non-human-readable formats. If AI starts communicating in a machine language humans can’t understand, that’s a new vulnerability. Most leading labs would agree on this, he says.

The uncomfortable truth: this kind of international coordination is needed at precisely the moment the world is getting worse at international coordination. Hassabis acknowledges this directly.

AGI and the Economy

Hassabis frames AGI as “10 times the Industrial Revolution at 10 times the speed” — a decade instead of a century. The Industrial Revolution brought modern medicine and drove child mortality down from roughly 40%, but it also caused enormous upheaval. He hopes we do better this time.

On jobs: yes, displacement will happen. History says new, higher-paying jobs emerge to replace old ones. But Hassabis is careful not to claim this time is just like every other time. It’s bigger. Marc Andreessen called Stebbings a Marxist for worrying about labor displacement. Hassabis is more measured.

On inequality: pension funds and sovereign wealth funds should be investing in AI companies so everyone gets a piece. The productivity gains need redistribution mechanisms. But he also sees a scenario where AI solves energy (fusion, better batteries, superconductors) and materials science so fundamentally that the nature of the economy changes entirely.

On the hype cycle: AI is overhyped in the short term (1 year) and underappreciated in the long term (10 years). Both simultaneously.

Energy

AI will more than pay for its own energy consumption. Three ways: optimizing existing infrastructure (30-40% more efficiency from national power grids), better climate and weather modeling, and accelerating breakthrough energy technologies like fusion, new batteries, and superconductors. DeepMind works with Commonwealth Fusion. If fusion works, it means effectively unlimited clean energy — the fuel can be extracted from seawater.

Why London

Hassabis stayed in London because of the talent pipeline (3-4 of the world’s top 10 universities are British), the deep scientific heritage (Turing, Hawking, Darwin, Newton), and less competition for top European talent. Being far from Silicon Valley meant less distraction from short-term trends and more space for the deep, patient thinking a 20-year mission requires. Palmer Luckey at Anduril talks about the same advantage of being 400 miles from the Valley.

Europe’s Trillion-Dollar Problem

Europe is great at starting companies. It’s bad at the growth-stage funding needed to turn them into global giants. The billion-dollar rounds required to take on American incumbents simply don’t exist here. Hassabis thinks unlocking pension fund investment could help. He’s personally trying to make Isomorphic Labs Europe’s first trillion-dollar company.

The Philosophical Horizon

The question Hassabis thinks about that nobody else is discussing: once we solve the technical and economic challenges of AGI, what happens to human meaning, purpose, and consciousness? We’ll need a new generation of great philosophers to navigate that. He frames this as coming after the technical and economic problems, which are hard enough on their own.

Quotes / Notable Moments

“I sometimes quantify the coming of AGI as 10 times the Industrial Revolution at 10 times the speed.”

“I sometimes call these systems jagged intelligences because they’re really amazing at certain things when you pose the question in a certain way, but if you pose a question in a slightly different way they can actually still fail at quite elementary things.”

“Those labs that have the capability to invent new algorithmic ideas are going to start having a bigger advantage over the next few years, as the last set of ideas — all the juice is being wrung out of them.”

“I do think like literally today, as of today and in the next year, things are a bit overhyped in AI. But on the other hand, I still think it’s very underappreciated how revolutionary this is going to be in the time scale of about 10 years.”

“Child mortality was at 40% back pre-Industrial Revolution. You wouldn’t want it not to have happened. But ideally this time around we mitigate some of the downsides a bit better.”

“I worry a lot about the philosophical questions — what is meaning, what is purpose, what consciousness is, what does it mean to be human. I think we need some great new philosophers.”

On meeting Elon Musk: They ran into each other at a Founders Fund portfolio conference around 2011-2012. Both were in Peter Thiel’s portfolio (SpaceX and DeepMind). Hassabis describes them as “people that were almost too ambitious in their thinking.” He mainly wanted an invite to the SpaceX rocket factory.

Claude’s Take

What’s solid: Hassabis is one of the few people whose AGI predictions deserve weight, because he and Shane Legg made their predictions in 2010 when AI was a backwater — and those predictions have roughly held up. When he says “within 5 years,” that’s not hype-cycle surfing. It’s an update from someone with a 15-year track record of being approximately right. His breakdown of what’s missing (continuous learning, memory, planning, consistency) is technically precise and matches what anyone working in the field would tell you privately.

The “jagged intelligences” framing is genuinely useful. It captures exactly what’s frustrating about current AI in a way most descriptions don’t.

What’s a bit convenient: The claim that 90% of AI breakthroughs came from Google/DeepMind is doing a lot of heavy lifting. It depends heavily on how you count. OpenAI’s work on RLHF, scaling laws themselves, and the GPT line of research were pivotal. Anthropic’s work on constitutional AI and interpretability. Meta’s contributions to open models. Hassabis is right that Google invented transformers and DeepMind did AlphaGo/AlphaFold, but “90%” is a number that flatters the home team.

What’s optimistic: The drug discovery timeline is long. “5-10 years for the drug design engine, then maybe 10 more years to reform the regulatory process” means we’re talking 15-20 years before AI-designed drugs are actually reaching patients at scale. That’s honest, but it’s a lot longer than the breathless “AI will cure cancer” headlines imply.

The energy argument — that AI will pay for its own energy consumption by optimizing grids and discovering fusion — is a promissory note. It might be right, but it’s being used to wave away a very real near-term problem with a medium-term hope. Grid optimization and fusion breakthroughs are on very different timelines.

What’s genuinely interesting: The philosophical question at the end. Most AI leaders talk about safety in technical terms or economic terms. Hassabis is the rare one who goes further: once we solve those problems, we still have to deal with what human existence means when machines can do everything we can. That he considers this the hardest question of all is telling.

What’s missing: No mention of AI’s current limitations in scientific reasoning — pattern matching is not the same as understanding causal mechanisms, and that distinction matters enormously for drug discovery and materials science. Also no discussion of the geopolitical dimension beyond “international coordination is hard.” The US-China AI race is the elephant in the room that never gets addressed.

Overall, this is a measured, thoughtful interview from someone who has earned the right to make big predictions. Hassabis is neither the most bullish nor the most bearish voice on AGI. He’s the one with the longest track record of being roughly correct, which makes his “5 years” estimate worth taking seriously — while remembering that even the best forecasters are guessing.