
YouTube

Elon Musk – In 36 Months, the Cheapest Place to Put AI Will Be Space

Dwarkesh Patel published 2026-02-05 added 2026-04-10
elon-musk ai space energy manufacturing optimus spacex tesla xai chips geopolitics


ELI5/TLDR

Elon Musk tells Dwarkesh Patel that AI is about to hit a wall — not a smart wall, an electricity wall. Chip production is growing faster than the power grid can handle, and by the end of 2026, companies will have more chips than they can turn on. His answer is to put data centers in space, where solar panels work five times better and you never need batteries. He predicts this becomes the cheapest option within 30-36 months, and within five years, more AI compute will launch to orbit annually than exists on all of Earth combined.

The Full Story

The Power Problem Nobody Wants to Talk About

The conversation opens with what sounds like a simple question: where do you plug in all the AI chips? Musk frames it starkly. Outside of China, electricity production is basically flat. Chip output is growing exponentially. These two lines cross in an uncomfortable place.

“How are you going to turn the chips on? Magical power sources? Magical electricity fairies?”

The US uses about 500 gigawatts on average. Building even one terawatt of data center capacity — which Musk considers the threshold for “being in the singularity” — would mean adding twice the entire country’s current electricity consumption. And the infrastructure to do that simply does not exist and cannot be built quickly.
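The arithmetic is easy to check with the interview's round numbers (a back-of-envelope sketch, not official grid statistics):

```python
# Round numbers quoted in the interview, not official statistics.
US_AVG_LOAD_GW = 500           # average US electricity consumption
DATACENTER_TARGET_GW = 1_000   # one terawatt, the "singularity" threshold

ratio = DATACENTER_TARGET_GW / US_AVG_LOAD_GW
print(f"New data-center load vs. today's US grid: {ratio:.0f}x")  # prints 2x
```

One terawatt of new load really is double everything the US grid currently delivers, before a single existing customer is counted.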

The bottlenecks stack up like a traffic jam at a one-lane bridge. You need power plants, but the turbine manufacturers are sold out through 2030. Drill deeper: it is not even the turbines, it is the vanes and blades inside the turbines, cast by exactly three companies on Earth, all massively backlogged. You could build solar, but US tariffs on imported panels run “several hundred percent,” and domestic production is, in Musk’s word, “pitiful.” You could connect to the grid, but utility interconnect studies take a year just to begin.

xAI learned this firsthand building Colossus. Getting a gigawatt of power online required what Musk calls “miracles in series” — ganging together turbines, dealing with permit issues in Tennessee, running high-power lines across the border to Mississippi. And even with all that, the real power need is far larger than people think. Running 330,000 GB300 GPUs — with networking, storage, cooling on the worst day of the year, and a margin for taking generators offline — requires roughly a gigawatt. About 40% of that power goes to cooling alone.
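Dividing the interview's figures out shows why the number is so much larger than GPU spec sheets suggest (a rough sketch; the 40% cooling share is Musk's figure):

```python
# Rough all-in power budget from the interview's Colossus figures.
gpus = 330_000
total_power_w = 1e9                    # one gigawatt, all-in

watts_per_gpu = total_power_w / gpus   # well above the chip's own draw
cooling_share = 0.40                   # Musk: ~40% of power goes to cooling
cooling_mw = total_power_w * cooling_share / 1e6

print(f"{watts_per_gpu:,.0f} W per GPU all-in, {cooling_mw:.0f} MW on cooling")
# prints: 3,030 W per GPU all-in, 400 MW on cooling
```

Roughly 3 kW per GPU once networking, storage, worst-day cooling, and generator margin are included — several times the headline chip power most people budget for.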

“Those who have lived in software land don’t realize they’re about to have a hard lesson in hardware.”

Space: The Regulatory Workaround with Better Physics

Musk’s pitch for space is not romantic. It is arithmetic. Solar panels in orbit produce about five times more energy than on the ground. No atmosphere (30% energy loss gone), no clouds, no night cycle, no seasons. And no batteries needed, which effectively doubles the advantage again.

“It’s always sunny in space.”

A solar cell in space, he argues, is actually cheaper to manufacture — no heavy glass, no weatherproof framing, because there is no weather. Combined with SpaceX’s dropping launch costs, the economics flip. He claims space becomes the cheapest place for AI compute in 30-36 months, and then it gets “ridiculously better” from there.
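Stacking the claimed multipliers gives the headline advantage (these are the interview's numbers, not measured values):

```python
# Multipliers as claimed in the interview, not measured values.
orbital_yield = 5        # no atmosphere, clouds, night cycle, or seasons
no_battery_factor = 2    # continuous sun removes the storage requirement

effective_advantage = orbital_yield * no_battery_factor
print(f"Claimed effective advantage per panel: {effective_advantage}x")  # 10x
```

An order of magnitude per panel is the margin Musk is betting against launch costs.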

The five-year prediction is staggering: hundreds of gigawatts per year of AI compute launched to orbit, each year exceeding the cumulative total of all AI on Earth. That translates to roughly 10,000 Starship launches a year — one per hour. When Dwarkesh raises an eyebrow, Musk points out this is a lower rate than airline departures.
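The cadence claim checks out as stated (simple arithmetic on the interview's figure):

```python
# 10,000 Starship launches per year, as claimed in the interview.
launches_per_year = 10_000
hours_per_year = 365 * 24

per_hour = launches_per_year / hours_per_year
print(f"{per_hour:.2f} launches per hour")  # ≈ 1.14, just over one per hour
```

For scale, global airline departures run to tens of thousands per day, so the comparison Musk reaches for is directionally fair, even if no rocket has ever flown at anything like this rate.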

The engineering challenges in space — radiation, bandwidth, cooling — he addresses with surprising casualness. Neural networks, it turns out, are naturally resilient to the random bit flips caused by radiation. A few flipped bits in a trillion-parameter model do not matter. The chips just need to run hotter, which actually helps, because higher operating temperatures cut radiator mass in half.

The Chip Bottleneck and the TeraFab

Even if you solve power, you still need the chips. The world currently has about 20-25 gigawatts of compute. Getting to a terawatt requires a new scale of manufacturing entirely. Musk floats the “TeraFab” — a chip fabrication facility that would produce logic, memory, and packaging at a rate of over a million wafers per month.

He is characteristically blunt about not knowing how to build a fab. The plan is Boring Company-style: buy existing equipment, learn the process, then redesign the machines for speed. TSMC and Samsung are already building as fast as they can. He has told them to go faster and offered to guarantee purchases. They physically cannot move quicker.

“I don’t know how to build a fab yet. I’ll figure it out.”

His biggest concern is memory, not logic. The path to making enough logic chips is clearer than the path to making enough DDR to feed them. He illustrates this with a joke: write “Help me” on a desert island, nobody comes. Write “DDR RAM,” and ships swarm in.

Tesla is pushing its AI5 chip into production in the second quarter of next year, with AI6 following less than a year later. Both SpaceX and Tesla have mandates to reach 100 gigawatts per year of solar cell production. The three variables — mass to orbit, power generation, and chip production — all need to match.

The Kardashev Ladder

The long-term vision climbs to genuinely cosmic scales. Earth receives about half a billionth of the Sun’s energy. Even harnessing one millionth of the Sun’s output would be roughly 100,000 times more electricity than all of civilization currently produces.
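The "half a billionth" figure falls out of simple geometry: Earth's cross-section divided by the full sphere at one astronomical unit (a quick check, using textbook values):

```python
import math

R_EARTH_KM = 6_371   # Earth's mean radius
AU_KM = 1.496e8      # Earth-Sun distance

# Earth's cross-sectional disc vs. the sphere of radius 1 AU
fraction = (math.pi * R_EARTH_KM**2) / (4 * math.pi * AU_KM**2)
print(f"Fraction of solar output hitting Earth: {fraction:.1e}")  # ≈ 4.5e-10
```

About 4.5 parts in ten billion — roughly half a billionth, matching the figure quoted.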

Launching from Earth can get you to about a terawatt per year. After that, you want a mass driver on the Moon — Musk’s self-described favorite thing — mining lunar silicon and aluminum to manufacture solar cells and radiators on-site, then electromagnetically catapulting AI satellites into deep space at 2.5 kilometers per second.
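The 2.5 km/s figure is not arbitrary: it sits just above lunar escape velocity, which you can check from the standard formula with textbook constants:

```python
# Lunar escape velocity: v_esc = sqrt(2GM/R), textbook constants.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22    # lunar mass, kg
R_MOON = 1.7374e6    # lunar mean radius, m

v_esc = (2 * G * M_MOON / R_MOON) ** 0.5
print(f"Lunar escape velocity: {v_esc:,.0f} m/s")  # ≈ 2,375 m/s
```

Anything catapulted above roughly 2.38 km/s leaves the Moon entirely, no rocket required — which is the whole appeal of an electromagnetic mass driver on a body with no atmosphere.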

“Can you imagine some mass driver that’s just going like shoom shoom? Sending solar-powered AI satellites into space one after another.”

He acknowledges this sounds like a video game. He seems to find this observation more confirmatory than troubling.

Optimus and the Robot Recursive Loop

Humanoid robots have three hard problems: real-world intelligence, the hand, and scale manufacturing. The hand, Musk says, is harder than everything else combined. Every actuator, motor, gear, sensor, and power electronic in Optimus was designed from physics first principles. Nothing was taken from a catalog. There is no existing supply chain.

Tesla’s self-driving AI transfers directly to the robot — same chips, same vision-in/controls-out architecture. A Tesla processes 1.5 gigabytes per second of video and outputs 2 kilobytes per second of control commands. The robot does essentially the same thing with more degrees of freedom.
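The input-to-output ratio is worth making explicit (arithmetic on the figures Musk quotes):

```python
# Rates quoted in the interview for a Tesla's driving stack.
input_rate = 1.5e9   # bytes/sec of camera video in
output_rate = 2e3    # bytes/sec of control commands out

reduction = input_rate / output_rate
print(f"Reduction from pixels to controls: {reduction:,.0f}:1")  # 750,000:1
```

A 750,000-to-1 distillation from raw pixels to steering and throttle — the same funnel, Musk argues, that a humanoid robot needs, just with more joints on the output side.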

The data problem is real. Tesla has 10 million cars collecting driving data. You cannot deploy broken robots into the wild the same way. The solution is an “Optimus Academy” — 10,000 to 30,000 physical robots doing self-play in reality, plus millions of simulated robots in a physics-accurate virtual world, using the real robots to close the sim-to-real gap.

Optimus 3 is the version Musk targets for a million units per year. Optimus 4 for ten million. The manufacturing S-curve will be stretched because everything is new, but he expects robots building robots to close the recursive loop quickly.

“I call Optimus the infinite money glitch. Because you can use them to make more Optimuses.”

The plural, he insists, is “Optimi.”

The China Problem

Musk is blunt about the competitive landscape. China has four times the US population, a higher average work ethic in his estimation, three times the US's electricity output this year, and twice as much ore refining capacity as the rest of the world combined. The US birth rate has been below replacement since 1971.

“We definitely can’t win on the human front.”

Without breakthrough robotics, he says flatly, “China will utterly dominate.” The rare earth supply chain is a perfect example: America mines the ore, ships it to China for refining, China puts it in magnets and motor assemblies, and ships it back. Tesla just built the largest — and only — cathode nickel refinery in America.

Optimus is positioned as the answer. Not as a job replacement, but as the only way to match China’s industrial labor force with a country that has a quarter of the population and fewer people willing to do refining work.

The Digital Human and xAI’s Business

Before robots can do physical work, Musk expects AI to fully emulate a human at a computer by the end of 2026 — what xAI calls “MacroHard.” A digital Optimus. The ceiling before physical robots is anything that involves moving electrons or amplifying human productivity.

“Instead of driving a car, it’s driving a computer screen. It’s a self-driving computer, essentially.”

He draws an analogy to customer service: a trillion-dollar global industry that requires no API integration, just an AI that can use the same apps human agents already use. Then you march up the difficulty curve — CAD, chip design, engineering.

When Dwarkesh presses on xAI’s competitive plan, Musk declines to share details but says it follows the Tesla self-driving playbook. He insists on calling AI companies “revenue maximizing corporations” every time Dwarkesh says “labs,” which becomes a running bit.

AI Alignment: The Zoo Hypothesis

On the question of what happens when AI vastly exceeds human intelligence, Musk does not offer comfort. He does not think humans will control something a million times smarter than them. The best case is that AI finds humans interesting enough to keep around — more interesting than rocks, and not worth eliminating for the marginal solar cells their atoms would provide.

“I don’t think humans will be in control of something that is vastly more intelligent than humans.”

xAI’s mission — “understand the universe” — is designed to produce AI that is necessarily curious, necessarily truth-seeking, and necessarily interested in propagating diverse forms of intelligence. You cannot understand the universe if you do not exist, and you cannot understand it if you are delusional. Truth-seeking, Musk argues, is the foundation: physics cannot be bullshitted.

“Physics is law, everything else is a recommendation.”

The danger, he believes, is making AI politically correct — programming it to say things it does not believe. The central lesson of 2001: A Space Odyssey, in his reading, is “don’t make the AI lie.” HAL killed the astronauts because it was given contradictory directives and concluded the only solution was to deliver them dead.

For interpretability, he credits Anthropic’s work on looking inside AI minds and says xAI is building “debuggers” that can trace errors to their origin — whether from pre-training data, fine-tuning, or RL.

The Starship Tangent: Why Steel, Not Carbon Fiber

One of the interview’s best stretches covers the decision to build Starship from stainless steel instead of carbon fiber. Carbon fiber is 50 times more expensive, requires enormous autoclaves for proper curing, and SpaceX was making “extremely slow” progress with it. Musk was desperate.

The insight: at cryogenic temperatures — and Starship’s fuel (liquid methane) and oxidizer (liquid oxygen) are both cryogenic — strain-hardened 300-series stainless steel has similar strength-to-weight as carbon fiber. But it costs 50 times less, you can weld it outdoors, and steel’s higher melting point means you can cut the heat shield mass roughly in half.

“You could smoke a cigar while welding stainless steel.”

The result is that the steel rocket actually weighs less than the carbon fiber version would have. Musk calls it “dumb not to do steel” from the start. The story illustrates his broader point about fighting organizational conservatism by relentlessly attacking the limiting factor.

Management: Limiting Factor as Operating System

Musk’s management philosophy boils down to one obsessive loop: identify the limiting factor, fix it, identify the next one. He allocates his time proportionally — things going well do not see him; things that are the bottleneck see him constantly. Tesla’s AI5 chip review happens every Tuesday and Saturday, two to three hours each.

He does weekly skip-level engineering reviews where he hears directly from the people doing the work, not their managers. No advance preparation allowed. He mentally plots progress points over time to determine if a team is converging on a solution.

Deadlines are set at the 50th percentile — the most aggressive schedule he thinks has a coin-flip chance of being met. This means being late half the time, which he considers better than the alternative.

“There is a law of gas expansion that applies to schedules.”
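The coin-flip framing can be made concrete (an illustrative calculation with a hypothetical milestone count, not something Musk states):

```python
# Each 50th-percentile deadline is an independent coin flip (illustrative).
p_on_time = 0.5
milestones = 5   # hypothetical program with five sequential deadlines

p_all = p_on_time ** milestones
print(f"Chance all {milestones} land on time: {p_all:.1%}")  # ≈ 3.1%
```

Even a short chain of honest coin-flip deadlines almost guarantees the overall program slips — which is why Musk treats being late half the time as the acceptable cost of schedules that actually pull.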

On hiring: believe the 20-minute conversation, not the resume. Look for evidence of exceptional ability — even one “wow” moment. Talent, drive, trustworthiness, and goodness of heart. Domain knowledge can be added; fundamental character cannot.

DOGE and Government Fraud

Musk frames his DOGE work as buying time. The national debt’s interest payments now exceed the military budget. Without AI and robots, the US goes bankrupt. DOGE is the tourniquet.

The fraud he describes is almost comically basic: 20 million people over age 115 marked as alive in Social Security, Treasury payments going out with no appropriation code and no comment field, and the fact that the Department of Defense literally cannot pass an audit because the data does not exist.

The simplest DOGE fix — making it mandatory to include a payment code and explanation on Treasury disbursements — may save $100-200 billion per year. But cutting fraud turns out to be politically brutal: fraudsters generate sympathetic cover stories, and the government operates on who complains loudest.

Claude’s Take

This is a three-hour conversation where Musk lays out an integrated theory of the future that hangs together better than you might expect, even if the timelines are — characteristically — aggressive to the point of fantasy.

The core argument about power being the near-term bottleneck for AI is widely shared and well-supported. Chip production outpacing the ability to power those chips is a real problem acknowledged by virtually everyone in the industry. Where Musk goes further than most is in declaring that the solution is not more power plants on Earth but data centers in orbit. The physics of solar in space is sound — the 5x efficiency gain is real, the no-batteries advantage is real. What is less certain is whether all the supporting infrastructure (launch cadence, orbital assembly, thermal management, data downlink) can be scaled in 30 months. That timeline relies on Starship achieving daily-flight reliability while simultaneously solving the heat shield reusability problem that Musk himself identifies as the biggest remaining challenge.

The TeraFab ambition deserves some skepticism. Semiconductor manufacturing is arguably the most complex industrial process humans have ever developed. Musk’s “buy the machines, figure it out” approach worked for tunneling and rockets, but fab process technology involves decades of accumulated knowledge at the molecular level. TSMC’s advantage is not just hardware — it is tens of thousands of person-years of yield optimization. That said, dismissing Musk’s industrial ambitions has historically been a poor bet.

The alignment discussion is more interesting than it first appears. Musk essentially concedes the “doomer” position — humans will not control superintelligent AI — but then reframes it as not necessarily a problem if the AI has the right values. His argument that “understand the universe” as a mission implies preserving humanity is clever but has a hole you could fly a Starship through: an AI that genuinely wants to understand the universe might find it more informative to run experiments on humanity than to leave it alone. The chimpanzee analogy cuts both ways.

The China analysis is the most grounded part of the conversation. Musk is one of the few American technologists willing to say plainly that China’s manufacturing dominance is overwhelming and growing, that the US cannot compete on labor, and that robots are the only plausible equalizer. This is not a popular position in Washington, which makes it worth hearing.

The DOGE section is the weakest. Musk’s first-principles approach to estimating government fraud — “the government is not 90% efficient, therefore hundreds of billions in waste” — is not how forensic accounting works. The specific examples of Social Security database issues are real and documented, but the leap from there to “half a trillion in fraud” relies on a GAO estimate that Dwarkesh rightly pushes back on. The simple fixes Musk describes are genuinely useful, but the broader narrative conflates fraud, waste, and inefficiency in ways that serve a political story more than an analytical one.

Overall, this is Musk at his most coherent — a single thread running from turbine blade casting to lunar mass drivers, held together by “what is the limiting factor right now, and what is it next?” Whether you find that compelling or terrifying depends on how you feel about one person’s companies controlling the rockets, the chips, the robots, the solar cells, the social media platform, and the AI models simultaneously.