
YouTube

AI, Tesla, Defense & Energy: What Comes Next?

iConnections · published 2026-03-20 · added 2026-04-10
ai tesla autonomous-driving optimus defense energy investing anthropic saas robotics china nuclear space
watch on youtube → view transcript


ELI5/TLDR

Two heavyweight tech investors — Gavin Baker and Antonio Gracias (Tesla board member) — sit down at an investment conference and basically say: AI is already eating software companies alive, Tesla’s self-driving system is quietly the most energy-efficient AI on the planet, America needs humanoid robots and more energy or China wins, and the physical shortage of power and chips might actually save us from a catastrophic investment bubble. The moderator, meanwhile, has become so addicted to his AI assistant that he built himself a custom CRM in two hours and is now trying to trick his Tesla’s driver-monitoring camera so he can use his phone.

The Full Story

AI Agents Are Here and They’re Eating Software

The conversation opens with moderator Ron confessing he is “completely addicted” to his AI agent — an open-source tool called OpenClaw (formerly Clawdbot). He gave it its own email address, treats it like a personal assistant, and has started building entire software applications by talking to it. He built a CRM in two to three hours. He is not a developer.

This immediately raises the question that hangs over the entire SaaS industry: if a self-described “idiot with this stuff” can replace a $100,000-per-year HubSpot subscription in an afternoon, what happens to all those software companies?

Baker’s answer is blunt. The market already knows.

“HubSpot peaked at $850 and it’s in the low two hundreds.”

He thinks small businesses will “vibe code” their own software, while larger regulated companies will still want the institutional seal of approval. But the real play, he says, is the Larry Ellison model: an AI lab buys a small SaaS company at today’s depressed valuations (three to four times sales, down from eight), fires most of the humans, keeps the recurring revenue, and uses it as a distribution channel for AI. Oracle did this after the dot-com bust with PeopleSoft and Siebel. History rhymes.

Gracias notes that his firm Valor hasn’t invested in a pure software company in six or seven years. The private equity firms that piled debt onto software acquisitions over the last five to seven years, he says, are sitting on time bombs: if AI guts the value of those businesses, the high-yield debt behind them craters too.

“If you’re in the software business, you either evolve now or you’re dead.”

AI Safety: The Values Baked Into the Machine

The safety segment is more interesting than the usual hand-wringing. Gracias makes a point that lands differently coming from an investor rather than a researcher: AI models are not neutral tools. They carry the values of the people who built them.

“The models are imbued with the values of the creators.”

His argument: if a model is optimized for consumer engagement (he nods at OpenAI), you risk recursive confirmation bias loops. If it’s optimized for truth-seeking, you get something safer. This is why he uses Claude and Grok together — checking one against the other as an “intellectual balance.”
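
For the curious, a minimal sketch of the cross-checking pattern Gracias describes: ask two models the same question, then have each audit the other’s answer. This assumes the official Anthropic Python SDK and xAI’s OpenAI-compatible API endpoint; the model names and the critique prompt are illustrative placeholders, not anything from the talk.

```python
# Minimal sketch of using two models as an "intellectual balance":
# ask both the same question, then have each audit the other's answer.
# Assumes ANTHROPIC_API_KEY and XAI_API_KEY are set in the environment;
# model names are placeholders.
import os

import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
grok = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

def ask_claude(prompt: str) -> str:
    msg = claude.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_grok(prompt: str) -> str:
    resp = grok.chat.completions.create(
        model="grok-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "What are the main failure modes of retrieval-augmented generation?"
answer_a, answer_b = ask_claude(question), ask_grok(question)

# Each model audits the other's answer; disagreements are the signal.
audit_prompt = "Question: {q}\n\nAnswer: {a}\n\nList any errors or unsupported claims."
critique_of_b = ask_claude(audit_prompt.format(q=question, a=answer_b))
critique_of_a = ask_grok(audit_prompt.format(q=question, a=answer_a))
print(critique_of_a, critique_of_b, sep="\n---\n")
```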

Baker mentions an incident where Amazon gave a coding agent autonomy to improve code at AWS. The agent decided a critical piece of code was so bad it deleted it and started over. Fourteen-hour outage. The bot didn’t understand it needed to tell the humans first. A useful parable about the gap between capability and judgment.

Gracias recommends Grok 4.2, which he says is the first broad implementation of multiple agents checking each other in real time. Every query spins up four different agents that audit each other’s work before returning an answer.

Tesla: The Most Efficient AI You’re Not Thinking About

This is where the conversation gets genuinely interesting. Baker starts by praising Anthropic’s capital efficiency — roughly four times more efficient than OpenAI, with the longest task horizon of any AI (Claude can stay on a single task for 16 hours, up from six a few months ago). The key metric, he argues, is token efficiency: Anthropic can deliver comparable quality answers using half the tokens of a Google model, and since each token costs money, this matters enormously.
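
To make the token-efficiency point concrete, here is a back-of-the-envelope sketch. The per-token price and token counts are made-up round numbers for illustration, not actual rate cards or benchmark figures.

```python
# Back-of-the-envelope token economics: at equal per-token prices,
# a model that answers in half the tokens costs half as much per answer.
# All numbers below are illustrative, not real rate cards.
PRICE_PER_MILLION_TOKENS = 10.00  # dollars, hypothetical

def answer_cost(tokens: int, price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """Cost in dollars of one answer of the given token length."""
    return tokens * price_per_million / 1_000_000

verbose = answer_cost(2_000)    # model that needs 2,000 tokens per answer
efficient = answer_cost(1_000)  # comparable answer in half the tokens

print(f"verbose:   ${verbose:.3f} per answer")
print(f"efficient: ${efficient:.3f} per answer")
# At a million answers a day, that one-cent gap is $10,000/day in savings.
assert efficient == verbose / 2
```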

Then Gracias drops what he considers the real insight. The most token-efficient AI in the world, per unit of intelligence, is not any cloud model. It is Tesla Autopilot.

The reasoning: Tesla’s self-driving runs on a custom chip far less powerful than an Nvidia H100. It runs at the edge with zero cloud access. And it performs arguably the most complex task a human does — driving a car — at roughly 20 to 50 watts of power consumption. The human brain uses about 20 to 30 watts.

“This is the first time that you’ve gotten superhuman AI at the same power consumption as the human mind.”

Baker connects this to Optimus, Tesla’s humanoid robot. The same AI architecture that drives cars can learn to do physical tasks by watching videos of humans. Since there is a vast amount of video of humans doing things, humanoid robots trained this way have a decisive advantage over specialized robots. The debate between humanoid and specialized form factors, Baker says, is over.

Humanoid Robots: Tesla vs. China

Baker frames the humanoid robot race as a national security issue, not just a business one. He references a video of the Chinese robot “Darkness” doing martial arts and handling weapons. That video, he says, is real — not CGI.

“If we don’t have Optimus, that’s the only real competitor in the US and in the Western world for the Chinese platform. We are in trouble.”

Gracias backs this up by noting that Valor recently brought on a former Supreme Allied Commander of NATO as a partner, specifically to think through the defense implications of robotics, drones, hypersonic missiles, and AI. The framing is stark: if a competitor has overwhelming technology in humanoid robots and the US does not, nothing else matters much.

Baker argues that geopolitical competition has effectively killed any chance of meaningful AI regulation. If China is flying 10,000-drone swarms as a routine military exercise (disguised as artistic light shows), the US cannot afford to slow down.

Energy: The Constraint That Might Save Us

The final segment tackles America’s energy deficit. China is building power capacity far faster than the US. America is running out of watts to feed its data centers. Three solutions come up.

Data centers in space. Baker makes this sound less insane than it initially seems. A Starlink V3 satellite consumes 20 kilowatts. An Nvidia Blackwell rack consumes 130. Scale the satellite up by a factor of five and you have a rack in space consuming 100 kilowatts. Connect those racks with laser links (light travels roughly 50 percent faster in vacuum than through fiber optic cable) and you have a data center. Every Starlink satellite already connects to every other via laser. The latency would actually be lower than terrestrial cloud because the signal goes straight from space to your phone via Starlink. Cloud providers currently charge a 70% premium for low-latency access. Space racks could undercut that.
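
The physics behind the laser-link claim is easy to check with rough numbers. The fiber refractive index and link distance below are standard ballpark assumptions for illustration, not figures from the talk.

```python
# Rough numbers behind the space data center pitch. Refractive index
# and link distance are ballpark assumptions, not figures from the talk.
C_VACUUM_KM_S = 299_792                      # speed of light in vacuum
FIBER_INDEX = 1.47                           # typical optical fiber
C_FIBER_KM_S = C_VACUUM_KM_S / FIBER_INDEX   # ~204,000 km/s in glass

link_km = 4_000  # hypothetical hop, roughly New York to Los Angeles

t_vacuum_ms = link_km / C_VACUUM_KM_S * 1_000
t_fiber_ms = link_km / C_FIBER_KM_S * 1_000
print(f"laser in vacuum: {t_vacuum_ms:.1f} ms, fiber: {t_fiber_ms:.1f} ms")
# Vacuum wins by ~47% per hop, before counting fiber's non-straight routing.

# Power math as stated in the talk: scale a 20 kW Starlink V3 five-fold.
starlink_v3_kw = 20
space_rack_kw = starlink_v3_kw * 5  # 100 kW, vs. ~130 kW for a Blackwell rack
print(f"scaled satellite: {space_rack_kw} kW")
```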

Small nuclear reactors. Gracias points out that the US Navy has operated the safest nuclear program in the world for decades. People live next to reactors in submarines with no cancer clusters. The technology works. But then he drops a genuinely alarming fact: the US does not enrich its own nuclear fuel. It buys enriched uranium from Russia. For weapons and power plants alike. Valor is investing in domestic enrichment capacity.

Stranded capacity. There are industrial facilities across America sitting idle or underutilized. Companies like Crusoe are finding ways to tap power behind the meter at these sites. The US has more latent energy capacity than people realize.

Baker closes with a counterintuitive investment thesis. Every major technology wave in history — railroads, canals, automobiles, the internet — produced a financial bubble followed by a crash caused by overbuilding. But AI might be different. The shortage of watts and the shortage of wafers (Taiwan can only make so many chips) create a physical ceiling on how fast the industry can overbuild. This constraint could produce a smoother, longer AI cycle instead of a spike-and-crash. He references Carlota Perez’s book Technological Revolutions and Financial Capital as the framework.

“If we can’t overbuild because we don’t have enough energy and we don’t have enough wafers… I think we could have a smoother, for longer AI cycle.”

Claude’s Take

This is a conversation between two investors talking their book at an investment conference, and it’s important to keep that framing in mind throughout. Baker and Gracias have significant positions in Tesla, Anthropic/xAI, and defense-adjacent companies. Everything they say is true to their interests. That does not make it wrong, but it does mean every claim deserves a second look.

The Tesla-as-most-efficient-AI argument is clever but slippery. Yes, Autopilot runs on modest hardware at the edge. But comparing a narrowly specialized driving model to general-purpose language models is like saying a calculator is more efficient than a laptop because it does arithmetic on a watch battery. The Tesla chip is optimized for one task — pixel prediction for driving. Claude and Gemini handle open-ended reasoning across every domain of human knowledge. These are different categories of intelligence, and conflating them to make Tesla look like the AI leader requires some motivated reasoning.

The energy claims are the strongest part. The US nuclear fuel dependency on Russia is a real and underreported vulnerability. The space data center concept is more plausible than it sounds once you accept that “data center” just means “connected racks.” And the Perez framework for why physical constraints might prevent a catastrophic AI bubble is genuinely interesting economic thinking, even if it’s too tidy. Bubbles have a way of finding creative paths to overbuilding.

The AI safety discussion is shallow but contains one good idea: using multiple models to check each other. The “values of the creators” argument sounds profound but is mostly unfalsifiable. How exactly do you audit an optimization function? Baker and Gracias seem to trust Grok and Claude more than GPT, which conveniently aligns with their investment positions. The Amazon outage anecdote about the coding agent deleting critical AWS code is a great story, but I could not independently verify it as described — it may be embellished or conflated with other incidents.

The China threat framing is doing a lot of heavy lifting. “We must win or the free world falls” is the kind of argument that justifies any amount of spending on any technology with any risk profile. It may well be true that the US needs competitive humanoid robots. But investors in humanoid robot companies have an obvious interest in making that case as urgently as possible.

Overall: useful for understanding how serious tech investors are thinking about AI’s second-order effects on software, energy, and defense. Less useful as a neutral assessment of any specific technology. The signal-to-noise ratio is decent for a conference panel. Just remember who is selling what.