
YouTube

A Push-Up Contest with Pat Gelsinger (2026) // Ian Interviews #49

TechTechPotato published 2026-04-09 added 2026-04-12 score 7/10
semiconductors venture-capital computing AI hardware moore-law quantum-computing energy manufacturing

ELI5/TLDR

Pat Gelsinger left Intel and landed at Playground Global, a hard-tech VC firm, where he now sits on the boards of about 10 companies doing things like superconducting logic, next-gen lithography, nuclear energy, and programmable data flow chips. His big thesis: the future of computing is heterogeneous — classical CPUs, AI accelerators, and quantum machines all working together — and making inference 10,000x cheaper is the central engineering challenge of the decade. He also thinks Moore’s Law is just napping, not dead, and bet Ian a bottle of wine that sub-13.5nm lithography wavelengths will hit production within ten years.

The Full Story

Life After Intel

Gelsinger’s wife told him the day after he left Intel: “You’re not done yet.” He took that as instructions not to be home too much. A hundred meetings in a hundred days followed — VC firms, private equity, CEO gigs, government roles, university roles. Playground Global won out, alongside a faith-tech company called Gloo. He’s now a rookie in venture capital at 65, stretching his brain into superconducting Josephson junctions, quantum qubits, bioengineering, and nuclear operations.

“The only disappointment I have at this phase of my career is I’m not 35 years younger. This is the greatest time to be a technologist in human history.”

His learning style is telling. He’d rather spend one hour with a company than ten hours reading papers. The interaction, the questioning, the way founders respond under pressure — that’s his real due diligence instrument.

The Trinity of Computing

Gelsinger’s framework for where computing goes: classical (CPUs for control flow, if-then-else, operating systems), AI accelerators (data-centric, matrix-heavy workloads), and quantum (problems that are effectively uncomputable on the other two unless you have a billion years to spare). He calls it “the trinity of computing.”

His jab at Jensen Huang’s GTC keynote was sharp: Jensen spent the first half throwing CPUs under the bus, then announced Nvidia’s own CPU.

“First we’re going to spend half the keynote throwing it under the bus. And then we’re going to announce that we now have the best one because we need it.”

The answer, Gelsinger says, is that workloads need both. GPUs are terrible at if-then-else. CPUs are terrible at massive parallel matrix math. The workload should define the architecture, not the other way around.
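
The workload-defines-the-architecture point can be sketched in a few lines: branch-heavy code resists lockstep parallel execution, while a dense matrix multiply is embarrassingly parallel. A toy illustration (the functions and sizes below are my own, not from the interview):

```python
import numpy as np

# Branch-heavy "CPU-shaped" work: a data-dependent decision on every
# element, which defeats lockstep SIMD/GPU execution.
def branchy_sum(xs):
    total = 0.0
    for x in xs:
        if x > 0.5:
            total += x * 2.0
        elif x > 0.25:
            total += x
        else:
            total -= x
    return total

# Matrix-heavy "accelerator-shaped" work: one dense op, no branches,
# trivially spread across thousands of parallel lanes.
def dense_work(a, b):
    return a @ b

rng = np.random.default_rng(0)
xs = rng.random(10_000)
a = rng.random((256, 256))
b = rng.random((256, 256))

print(branchy_sum(xs))
print(dense_work(a, b).shape)
```

Neither function is well served by the other's hardware, which is the whole argument for keeping both in the stack.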

The 10,000x Inference Problem

Making inference 10,000x better — not 10x, not 100x — is the number Gelsinger keeps returning to. His math comes from comparing where AI inference stands versus the energy and compute profiles of traditional search. Nvidia’s Groq acquisition validated the thesis that GPUs alone aren’t enough for optimized inference. Ian says he’s tracking 150 companies in this space. SRAM-based, data flow, HBM, high-bandwidth flash — everyone’s taking a shot.
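
The 10,000x figure is easier to grasp as arithmetic. A quick sketch of what the target implies over a decade — the breakdown factors are illustrative placeholders, not numbers from the interview:

```python
import math

# Required compound annual improvement to reach 10,000x in a decade:
target = 10_000
years = 10
annual = target ** (1 / years)
print(f"{annual:.2f}x per year")  # ~2.51x/year, faster than classic Moore's-law doubling

# One hypothetical way the gains could compound -- placeholder factors
# for illustration only:
factors = {
    "specialized silicon (dataflow, SRAM, flash)": 20,
    "model efficiency (distillation, sparsity)": 25,
    "precision + software stack": 20,
}
combined = math.prod(factors.values())
print(combined)  # 10000
```

However the factors are sliced, no single lever gets there alone — which is presumably why 150 companies are attacking different layers of the stack.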

One startup called Talis baked a model directly into the metal communication layers of a structured ASIC and hit 10,000-14,000 tokens per second. Pure hardware commitment to one model. Gelsinger’s portfolio company Next Silicon is building a programmable data flow machine that can dynamically reconfigure its network topology and compute resources as the workload changes phases — prefill, decode, etc.
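
Why reconfiguring across phases matters can be seen in a rough arithmetic-intensity calculation: prefill streams many tokens through each weight matrix at once, while decode streams one. The layer size and dtype below are illustrative assumptions, not figures from the interview:

```python
# Arithmetic intensity (FLOPs per weight byte read) for one d x d
# linear layer, assuming fp16 weights streamed from memory once.
def intensity(tokens: int, d: int = 4096, bytes_per_weight: int = 2) -> float:
    flops = 2 * tokens * d * d               # one multiply-accumulate per weight per token
    weight_bytes = bytes_per_weight * d * d  # weights read once per pass
    return flops / weight_bytes

prefill = intensity(tokens=2048)  # whole prompt in one pass -> compute-bound
decode = intensity(tokens=1)      # one token at a time -> memory-bound
print(prefill, decode)            # 2048.0 vs 1.0 FLOPs per byte
```

A machine tuned for one phase sits idle in the other, which is the opening for hardware that reshapes its topology and compute mix on the fly.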

The DeepSeek Lesson

DeepSeek’s moment, about a year before this interview, demonstrated what happens when engineers are constrained. They tunneled through the software stack, understood exactly what the hardware was doing, and aligned their algorithms to the available silicon. Gelsinger sees this as a model.

“What does an engineer do? He produces great results in the constraints that he has.”

The Revenge of HPC

High-performance computing people are frustrated. Precision is getting slashed — from 64-bit to 16-bit to 8-bit to 4-bit — and the HPC community cares deeply about accuracy. Gelsinger’s prediction: as AI moves from language modeling into science modeling (CFD, molecular biology, hypersonics), precision comes roaring back. The LLM becomes a leaf node, not the core computational node. Real science needs real floating-point precision.

“I think there will be over the next couple of years the revenge of the HPC guys.”
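
The precision worry is easy to demonstrate. A minimal sketch, assuming naive low-precision accumulation (real kernels typically accumulate in higher precision than they store):

```python
import numpy as np

xs = np.full(100_000, 0.1)

# Naive running sum in fp16: once the total grows large enough that 0.1
# is smaller than half the gap between adjacent fp16 values, additions
# round away to nothing and the sum stalls.
lo = np.float16(0.0)
for x in xs.astype(np.float16):
    lo = np.float16(lo + x)

hi = float(np.sum(xs.astype(np.float64)))
print(hi, float(lo))  # true sum ~10000 vs fp16 sum stalled around 256
```

A 4-bit format fails the same way, only sooner — hence the HPC community’s insistence that science workloads keep real floating-point precision.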

Waking Moore’s Law from Its Nap

Gelsinger is not a Moore’s Law funeral attendee. Transistors aren’t dead — they just stopped getting cheaper. His company Xite is working on free electron lasers to produce next-generation EUV light at wavelengths below 13.5nm. More photons, better spectral purity, less stochastic noise. He claims 2,000-4,000 watts of delivered power versus today’s 500-600 watts.
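
The wavelength argument follows from the Rayleigh criterion, CD = k1 · λ / NA: shrink the wavelength and the smallest printable feature shrinks with it. A quick sketch — the k1 and NA values are typical published figures, and 6.7nm stands in for a hypothetical sub-13.5nm source:

```python
# Rayleigh criterion for the critical dimension (smallest printable
# feature) of an optical lithography system.
def critical_dimension(wavelength_nm: float, na: float, k1: float = 0.33) -> float:
    return k1 * wavelength_nm / na

euv = critical_dimension(13.5, na=0.33)     # today's low-NA EUV scanners
shorter = critical_dimension(6.7, na=0.33)  # hypothetical shorter-wavelength source
print(f"{euv:.1f} nm -> {shorter:.1f} nm")  # feature size halves with the wavelength
```

The scaling is the easy part; as Ian’s objections below make clear, building mirrors and resists that work at those wavelengths is not.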

Ian is skeptical: the materials science challenges, the difficulty of building mirrors that reflect shorter wavelengths, the need to scan wafers vertically. A formal bet was made: sub-13.5nm wavelengths in mass production within a decade.

The business model twist: “photons as a service.” Free electron lasers as external utilities piped into fabs, like chemical supply or power substations. Different capital pools, upgradable infrastructure, attachable to multiple equipment types.

Energy Is Economic Capacity

Gelsinger frames it simply: in the AI age, energy capacity equals economic capacity. China is building 39 nuclear reactors. The US is building zero. His portfolio company Alva is a nuclear operating company focused on squeezing more from the existing fleet and restarting the industry.

“A brownout in Taiwan has an economic impact that is twice as great as the Great Depression.”

Europe’s Two Problems

Europe produces great engineers and great early-stage startups, but has two structural weaknesses: no mid-stage capital formation (the jump from tens of millions to hundreds of millions), and suffocating regulation. Companies keep their European footprint but domicile in the US for speed and capital. When a UK minister showed Gelsinger their quantum investment proposal, he told them to 10x it.

Hardware Is Cool Again

The software-ate-the-world era produced a generation of AI dating apps. Now hardware is having its moment. A VC friend confided to Gelsinger: “I think we forgot how to do hard.” Universities like Purdue, Stanford, Berkeley, and MIT are seeing a resurgence in materials science, physics, chemistry, and chip design. Gelsinger is watching it with the enthusiasm of someone who never left.

Claude’s Take

This is a solid, wide-ranging conversation between two people who genuinely understand semiconductor technology. Ian Cutress asks sharp questions and pushes back where needed — the sub-13.5nm bet was a highlight precisely because Cutress didn’t just nod along.

Gelsinger is in full elder-statesman mode, and it suits him. The “trinity of computing” framework is genuinely useful shorthand. The 10,000x inference number is provocative but grounded in real math about energy and cost. The Nvidia keynote jab landed perfectly.

Where it gets a bit thin: the venture capital stuff is more CEO-on-tour than deep insight. “Can we make the tech work, can we get it to market, do we have the team” — that’s every VC’s framework. The faith-tech references are left completely unexplored, which is probably for the best in this context but leaves a dangling thread.

The strongest material is on lithography futures, heterogeneous computing, and the HPC revenge thesis. The weakest is the hand-wavy “hardware is cool again” section at the end — true, but not adding much.

Score: 7. Good conversation with genuine technical depth and a few memorable moments. Not quite excellent — it covers too much ground at medium depth rather than going truly deep on any one topic. But Gelsinger’s perspective as someone who ran Intel and now evaluates the next generation of compute companies is legitimately valuable.

Further Reading

  • Next Silicon — programmable data flow architecture company in Playground’s portfolio. Worth tracking for the dynamic reconfiguration approach to AI workloads.
  • Snowcap — superconducting logic company targeting near-absolute-zero computing. The satellite ambient-temperature angle is clever.
  • Xite — Gelsinger’s free electron laser company for next-gen semiconductor lithography. The “photons as a service” model is the interesting part.
  • Alva — nuclear operating company focused on restarting US nuclear capacity. Recently came out of stealth.
  • DeepSeek — the constrained-engineering approach to AI model training that tunneled through the full stack. The paper and its aftermath are worth understanding.
  • SPEC Benchmark — Gelsinger claims to have code still in it from the earliest days. A piece of computing history worth knowing about.