
Transcript

AI, Tesla, Defense & Energy: What Comes Next? | Antonio Gracias & Gavin Baker


TITLE: AI, Tesla, Defense & Energy: What Comes Next? | Antonio Gracias & Gavin Baker
CHANNEL: iConnections
DATE: 2026-03-20

---TRANSCRIPT---

If we're looking at a Terminator scenario, I will be in the resistance.

Gavin Baker is with us, managing partner and CIO of Atreides Management. Gavin and Antonio collaborate on artificial intelligence investments. "I feel safer when Antonio's firm, Valor, is invested and on the board." Antonio Gracias is a board member at Tesla and an early investor in the company. "We're known at Valor for investing in world-changing companies. A shared passion, and that shared intellectual passion, has led us to do deals together." "If you do not start using AI regularly, you will be outcompeted by people who do."

All right, guys, thank you for joining us again. We did this last year, the three of us. I still haven't been able to get Gurley back, so you're still stuck with me. We love you, Ron. Thank you. I love you too, Gavin.

All right, so let's start with AI, because you guys are definitely experts in AI. And I need to tell you, I recently installed Clawdbot, which then was something else, and now is OpenClaw. I am completely addicted to this thing. It is doing a million things for me. I haven't given it access to anything. It doesn't have access to my email or my phone, but I gave it its own email address, and I kind of treat it like an assistant. It will talk to people and send information and analyze things. It's unbelievable how good it is. At the same time, this thing scares me, right? Because it's open source, and I hear horror stories about things giving away API keys and stuff. But it clearly is the next step, in my mind, in the evolution of AI. Do either of you have one yet? And if not, will you soon?

Yeah. I hope I have one soon. I do agree that, whether it's OpenClaw or something else, this is maybe the next consumer, and maybe even business, UI and UX layer for AI.
It ensures that the AI always has all the context and information necessary to be useful to you. It knows what's going on in your life. The fact that it's open source should make you feel better, not worse. Open source projects are generally more secure; they're fixing bugs all the time. It's a tool. You know, a knife can be dangerous if you use it in the wrong way. And I am encouraged that the OpenClaw foundation is staying open source even after OpenAI acqui-hired Peter Steinberger, the creator. But I do think it's something really, really important, and it's a new game for these AI companies.

Yeah. So I wanted to have my own, and my compliance said, you can't do this. Because I just want to try this stuff, you know? We do have six developers at Valor that we hired about four years ago, because our view was, we have to eat our own cooking and use artificial intelligence. So we started with the boosted trees, and then obviously language models. We started using this probably two or three weeks ago, and it's extraordinary. I mean, I think it's going to remake the way we think about the business and the way we do our underwriting, our analysis, in total. And I already have the numbers: our developers started using Claude Code about two weeks ago, and they've seen an uptick in their productivity. So yeah, it's extraordinary.

So I am now developing software with it. If I have an idea in a moment, that I'd like a piece of software to do X, Y, Z, I just talk to this thing. It develops the software. I then tell it to have Grok check all the code. Grok then says there are these critical security holes. I say, fix those holes. I iterate like three times, and then it gets a super high rating in terms of the quality of the code. And I have a software product, right?
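The generate, review, and fix loop described here can be sketched as plain control flow. Everything below is hypothetical: `generate_code`, `review_code`, and `fix_issues` are stand-ins for calls to two different AI assistants, stubbed out so the loop actually runs; no real assistant API is being shown.

```python
# Hypothetical sketch of the generate/review/fix iteration described above.
# The three helpers are stubs standing in for calls to two AI assistants;
# a real version would replace them with actual model calls.

def generate_code(spec: str) -> str:
    """Stub: first assistant drafts an implementation from a spec."""
    return f"# draft implementation for: {spec}"

def review_code(code: str, round_num: int) -> list[str]:
    """Stub: second assistant flags issues; pretend it finds fewer each round."""
    return [f"security-issue-{round_num}"] if round_num < 3 else []

def fix_issues(code: str, issues: list[str]) -> str:
    """Stub: first assistant patches the flagged issues."""
    return code + "\n# fixed: " + ", ".join(issues)

def build_with_review(spec: str, max_rounds: int = 3) -> str:
    """Iterate until the reviewer reports no remaining issues."""
    code = generate_code(spec)
    for round_num in range(1, max_rounds + 1):
        issues = review_code(code, round_num)
        if not issues:
            break
        code = fix_issues(code, issues)
    return code

print(build_with_review("a simple CRM"))
```

The point of the pattern is only that the reviewer is a different model from the author, so their failure modes are less correlated.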
And I'm doing this. Like, I built an entire CRM for myself in two hours, three hours. I mean, it's really mind-boggling. Do you think that element of it is potentially going to impact SaaS businesses? I'm thinking, we pay $100,000 a year for HubSpot. And it's me doing it, and I'm an idiot, really, with this stuff. If I gave someone that tool who's really good, would they be able to redo our HubSpot, much better and more precise for what we need?

Yeah. Oh, I mean, well, one, this is 100% being priced into the market. You know, I think HubSpot peaked at $850 and it's in the low two hundreds; maybe I'm off by a little on either end, but these stocks are down a lot. So that risk is for sure being priced in. It will be interesting to see how it bounces. It may end up that small businesses are the ones that kind of vibe-code their own software, and larger, more regulated businesses want, you know, a Good Housekeeping seal of approval on it. But this is for sure a risk. And conversely, I also think, with the valuations some of these public companies are at, particularly the ones that are smaller cap, it might make sense for one of the labs to acquire a few of these. The reason Larry Ellison is one of the wealthiest people in the world isn't just that he came up with this incredible relational database. It's that he was kind of the original Vista Equity Partners. After the bubble burst, he acquired PeopleSoft, he acquired Siebel, and he had the same playbook: he just took out 80% of the humans. Revenue is recurring, and a lab can do the same thing. You can acquire one of these small software companies, take out most of the humans, put in a couple of your kind of AI-native experts, and use it as a distribution channel. So it's going to be really interesting to see what happens.
And I do think, to quote Maximus Decimus Meridius, the decisions the software company CEOs make now will echo in eternity. They have two to three months to make good decisions. Maybe.

I know. We actually have not invested in a pure software company in, like, six or seven years. Wow. So it became clear to us what was coming. You know, we don't short; we're long only, so we can't use that for these things. And who knows when what's already broken is going to break. I will say there are already companies doing this. There's a firm here in Miami that I know that's doing this: it's buying private software companies and remaking them using AI. We just brought on one of our newest partners to look at this for us. And, yeah, I mean, I think if you're in the software business, you either evolve now, like right now, or you're dead. And I think the outcomes for those that don't are going to be very extreme, because many of these software firms that were bought in the buyout world in the last five or seven years have a lot of debt on them. Right? You saw what happened. This is going to have implications throughout the markets, all the way to the debt markets. Again, I don't play in those markets, but I'd be very worried if I were holding a three or four times levered high-yield security for a buyout done on a software firm two years ago. Right? I'd worry about that.

I would just say, I think it's very interesting to me that you haven't seen any of the traditional software buyout firms make a purchase yet. You know, they're all saying, hey, we think AI is going to be amazing for software. Well, you used to think eight times sales was an incredible deal, and now you can buy stuff for three times sales. Yeah. Or four times sales. So where are you? Right. Yeah, that's a great point.
Can we talk about some of the safety issues? I know there's been lots of stuff on X about people who've been able to get various AI tools to say some pretty scary stuff, to lie. I heard one story that one of the bots actually threatened someone who was having an affair in the company: like, hey, I'm going to tell everyone about your affair if you don't do X, Y, Z. That was Claude. That's Claude, yeah. So be careful with your Claude, by the way. Who knows what it has access to. Yeah, that's true. Keep it off your phone. So how do we deal with that element? Because I love this thing. I don't want to give it up, but I also don't want it to kill me in my sleep someday.

I mean, listen, it's a tool. You don't drive your car drunk for a reason. Yeah. You know, you're very careful when you give these bots your payment credentials, or access to your life in the sense of read access and write access. And, I mean, these issues are only multiplying. Amazon had a significant outage because they gave a coding agent autonomy to improve code, and it decided that a critical piece of code at AWS was so bad that it just deleted it and started over. Sounds like you, by the way. Very loud, very loud. There was a 14-hour outage. You know, they were able to unplug the servers and cut over. Yeah. But the bot didn't understand that it needed to tell the humans that it was doing this. Okay. So it's a very real issue. Everyone should read the Claude safety reports, which are for sure the most fulsome. Although I do agree with what Elon said, that Tesla is the safest car and they don't have a safety team, because safety is everyone's job. And so I think having that ethos at a lab is very important. But yeah, it's a very real issue.

The thing I might add to this, and I think Gavin's right, is that unlike maybe a knife or a gun...
Right. The model itself has values, right? The models are imbued with the values of the creators. We see that again and again. It's very important: the models are imbued with the values of the creators. So when you look at the people that are building these models, you ask yourself: what is their optimization function? If the optimization function is, for example, consumer engagement, like OpenAI, you might get a recursive learning loop that creates confirmation bias and tells someone to commit suicide, which there are lawsuits about today, whether it's right or not, and lots of others. You'd be careful with this. The reason we went so deep on xAI is that its optimization is truth-seeking. And I think that's the best way to figure out if something is going to be safe for you to use: do you identify with the values of the people building it? So one might be marginally better than another; ultimately this is all going to be very comparable, one with the other. So if you care about truth, you probably use it as you did, right? You had Claude and Grok together, right? By the way, thank you for the business, the tokens. I appreciate that. Thank you very much. But I think that's very interesting, what you do, right? You use two of them to kind of balance each other, to check one against the other. That's really smart, because you want to make sure you have, at some level, an intellectual balance in the models that are telling you the truth. And, look, anything could be wrong ultimately. But what is the optimization, and who designed it? If it's designed to be something you don't like, you should be careful with it. And in service of that optimization, Grok 4.2, which I highly recommend everyone try, is the first broad distribution of multiple agents.
So every time you use it, even on the free tier, it spins up four different agents, and you can see them checking each other. Yeah. Because it wants to be grounded in reality and truth, and I think that's very good for humans as a species. Yeah. I didn't even realize that was happening. I have SuperGrok, and I guess I never looked. You can set it to 4.2; it'll show you the agents checking each other. Okay, that's very cool.

All right, can we shift to talking a little bit about autonomous driving and Tesla, and also obviously Optimus? Because I feel like I've heard at least two or three times people say, you know, ten years from now we're going to say, hey, remember when Tesla was a car company? They don't say that right now. I don't know if it really applies just yet, but I will say, I bought my dad a Tesla like six or seven months ago, and just riding in it with him, I loved it. So then I got myself one, and I literally don't drive anymore. It drives me everywhere. And now I'm trying to figure out how to fake out the camera that yells at me if it sees me on my phone. But, like, truthfully, it is so good. I do think it's at least as good as me, and I'm probably just not willing to admit that it's better. I think statistically it probably is better than you.

So I would say a few things before turning it over to the true expert. And I'm going to start with Claude and Anthropic. So Anthropic is roughly four times more capital efficient than OpenAI. And they're growing really, really fast, and they're not burning that much cash. And on top of that, their AI currently can do the longest-horizon tasks, and I think this is the most important axis for AI capabilities. You know, a human can work on the same task and make progress for days on end, weeks on end, months on end, assuming we sleep at night. But AI peters out at 16 hours, and only a few months ago it was six hours.
And if you think about it, task length is, you know, correlated to a lot of accomplishments. And the reason Claude is so capital efficient, and has the longest task horizon, is that they're the most token efficient. Now, I believe xAI has the lowest cost per token, because they build data centers, in Jensen's words, in a superhuman way, faster and at a lower cost than everyone, and they run GPUs at a higher utilization. So they have the lowest cost per token. But Anthropic is extremely token efficient. What that means is that with a reasoning model, the more tokens it generates, the better the answer. So an Anthropic model can give you a comparable-quality answer to, say, a Google model using half as many tokens. And each token is literally cost. So the product of those two, cost per token and token efficiency, is going to be really, really important.

And my good friend Antonio reminded me of something very important this morning. Please. Gavin and I had a conversation about this this morning, and, you know, I said, well, the most efficient AI in the world on a tokens-per-intelligence basis is actually Tesla Autopilot, because it is running on an AI4 chip, which is significantly inferior to an H100. It's running at the edge, so it has no access to the cloud. And it's doing the most complex thing a human does. And so, yeah, when you think about Tesla, and think about Optimus, you can start from the fact that Tesla actually has, today, the most advanced AI in the world. It does the most complex thing humans do: drive a car. And what it does is it sees a pixel, predicts the pixel, and issues the command. What you're doing when you use Claude Code to make code is trying to get it to create the command that makes the pixel. Imagine a world where the AI sees a pixel, predicts the pixel, and issues the command. It makes your software as if it were on the screen already.
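The economics just described, where cost per answer is cost per token times tokens needed, can be made concrete with made-up numbers. Neither the prices nor the token counts below come from the conversation; they only illustrate why a pricier but more token-efficient model can be cheaper per answer.

```python
def cost_per_answer(price_per_million_tokens: float, tokens_used: int) -> float:
    """Inference cost of one answer: price per token times tokens generated."""
    return price_per_million_tokens * tokens_used / 1_000_000

# Model A: cheaper per token, but needs twice as many reasoning tokens.
model_a = cost_per_answer(price_per_million_tokens=3.0, tokens_used=20_000)
# Model B: 33% pricier per token, but token-efficient (half the tokens).
model_b = cost_per_answer(price_per_million_tokens=4.0, tokens_used=10_000)

print(f"Model A: ${model_a:.3f} per answer")  # $0.060
print(f"Model B: ${model_b:.3f} per answer")  # $0.040
```

Model B wins on cost per answer despite the higher sticker price per token, which is the product effect the speakers are pointing at.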
Like the developer is sitting at the machine, actually. Right. So that's where this is all going ultimately. And as Gavin points out correctly, and I've got to be careful with what I say here, basically, if the integration of mechanical engineering and electrical engineering drives the data center efficiency, then when you add being really good at software, and integrating the software onto the chip and all the networking, which Tesla is really good at, and you put that together the way xAI has, you get something much more efficient at producing a unit of intelligence. Ultimately, that is the ultimate answer to how we get there: you want the most efficient use of tokens to produce intelligence, and this is the path. Interesting.

Yeah. And this is also, I think, why hopefully humans have a role to play for longer. Because, you know, Claude can do amazing things, as can ChatGPT, as can Gemini, but they are all trained on hundreds of megawatts of power over many months. Our brains run on 20 to 30 watts. Right. And what Antonio just said is, with Tesla's AI, this is the first time that you've gotten superhuman AI at the same power consumption as the human mind. Is it literally the same power consumption? It's very low. I mean, 20 watts of power is what your brain runs on, and, you know, the car is probably in that range. Wow. I think it's 50. Yeah, but of course there's the neural network element. Yeah, that's very impressive.

Okay. So I assume this sets them up to introduce a robot, Optimus, that can probably do things that other robots that we've seen, I imagine, can't come close to. I mean, are we going to see this thing? I'm assuming you have no knowledge you can share. I absolutely believe that, and I'm super excited for it. The only, well, I have a very happy marriage, but a source of marital conflict is that I don't always love to take out the trash immediately.
I'm sometimes okay with leaving some dishes in the sink instead of putting them in the dishwasher immediately. You know, sometimes I've been known to leave dirty clothes on the floor of my closet instead of the hamper. And this surprises literally no one. Exactly. You could just hire a human. By the way, they need jobs, too. Well, yes, for sure, humans need jobs. But my wife is a big believer in doing things ourselves, and I just don't think that belief is going to extend to a robot. So I hope I am literally consumer number one, or in the top ten, of Optimus, because it's going to make a meaningful difference in everything. But I think it's coming. I think it's real. I don't know if it's later this year or if it's 2027. But the big breakthrough, you know, there was a huge debate: humanoid robots versus specialized robots. And I kind of think that debate is over now, now that they've figured out how a humanoid robot can learn from videos of humans doing things, and we have a vast amount of video of humans doing things. Yeah. And I think this is pretty decisive: humanoid robots are going to be a big winner. And I think it's Tesla versus the Chinese, in the same way it is in EVs. Except this time, just like EVs, Tesla has a better AI.

Yeah, I have to say, I think we have to win. We have to have one; there's no choice. So you probably saw, like, last week there was a video of a Chinese competitor's humanoid, which is doing martial arts. Yeah. And the internet asked, is that real? That's real. It's not CGI; it's actually real. And, you know, I think if we don't have Optimus, which is the only real competitor in the US and in the Western world to the Chinese platforms, we are in trouble. I mean, imagine a strategic competitive map where a competitor has an overwhelming technology and we don't. I mean, if you're worried about something...
Yeah, worry about that. Across the defense tech ecosystem, whether it's drones, humanoid robots, whatever part of that you're looking at, we need to have at least parity, if not better.

So that's a great segue into chatting a little bit about defense. Obviously, China is really the only country we ever hear about in terms of a true adversary. We hear about their drones all the time, these drone swarms. Joe Lonsdale said earlier today, when you see the 10,000-drone display, all these beautiful artistic things that are forming, that's actually a military exercise. They're demonstrating their drone capability to the rest of the military world. Give us your views on, I guess, AI in defense. And again, how do we implement it in a way where it doesn't turn on us? We want to defend against China, but we also don't want it to be used to track us, to do things that are obviously bad for the country.

I would just say, I think geopolitical fears have been weaponized such that we're not going to regulate any of this. And we can't regulate it if China is not, and they're clearly not. I mean, they're doing those drone shows with ten-plus thousand drones regularly. So I think it's essential that America win. I think America is very important for the world. We're a country that stands for something, and we cannot protect the free world and maintain the free world, and, whatever people may say, I do think we are still very committed to a rules-based order, which has served the world well for many years now, if we aren't dominant in robotics, in mass production of robotics at very low cost, and dominant in AI. So we have to win. But Antonio is a true expert here, so please.

I wouldn't say that. We do have an expert, actually, at our firm, who we've been lucky enough to bring on as a partner recently:
Chris Cavoli, the former Supreme Allied Commander of NATO and commander of U.S. forces in Europe, who, you know, spent 30-plus years in the Army thinking about these issues. He's joined us. We do a lot of national defense, as you know, and he is leading us through thinking about all the issues Gavin just laid out. So everything from what is the right application, whether it's drones, robots, hypersonic missiles, et cetera, to the supply chain in the US, as well as other things like energy, which we do as well. So yeah, these are very important issues. We're working on them, I'd say, very deliberately at our firm and trying to figure out how we can add value. We've built manufacturing capacity over the last 30 years at our firm, and we're really leaning into helping all the firms we believe are going to make a difference here for America in the next three or five years, with Chris helping lead us through that.

We have a few minutes left. Let's talk about energy, right? Because none of this is possible if we don't solve the energy problem that we have in the US. China is outpacing us massively; you guys can quantify it better than I can. I just know that we don't really stand a chance going toe to toe with them in terms of creating more energy capacity in the country. We've heard Elon talk about putting data centers in space as a much more efficient way to compete. I'd love to just get your thoughts on how the US ultimately deals with this issue, because we're running out of energy.

Data centers in space: it's really racks in space. I think people get hung up on data centers in space, and they're picturing, you know, a building the size of the Pentagon floating around in space, and they're saying that's implausible. It's racks in space. A Starlink V3 satellite today consumes 20 kilowatts. An Nvidia Blackwell rack consumes 130.
So if you scale that up by a factor of five from the Starlink V3, maybe you have a rack in space that consumes, you know, 100 kilowatts and has 50 or 60 GPUs. Well, what makes it a data center? You connect those racks using lasers. Beyond a distance of 10 or 20 feet, signals travel over fiber optic cables, and then you have a data center. Every Starlink satellite is connected to every other Starlink satellite with lasers, and the only thing faster than a laser traveling over fiber optic cable is a laser in vacuum. Yeah. So picture kind of a swarm of these satellites connected by laser, and that's your data center. And cloud providers today get a 70% premium, kind of per GPU hour, because they're very close to the consumer; think of a CDN provider, something like Cloudflare. Because there's a lower-latency experience as a user, people are willing to pay for that, so the companies pay for it. Even lower latency will be the data centers in space, because they're connected to Starlink, which comes right to your phone. Okay. So I think it's very elegant. I think it's very important for America.

I guess what I could add is, outside of the idea of solar in space, which is a great idea, we're thinking about what other answers there are, terrestrially. And, look, I think the reason the Chinese got so far ahead is that they're using a mixed solution: you know, it's coal, it's solar, it's everything, right? We have other things we could do here. I mean, we can do nuclear here. We've got a couple of investments in fusion reactors, new technologies, small modular reactors. You know, the US Navy has had the safest nuclear program in the world. They've used small nuclear reactors for decades, and people live next to them in submarines. There's been no cancer. Right. So we know how to do this in this country. We know how to do this.
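The rack-in-space figures a moment ago reduce to simple arithmetic. The 20 kW Starlink V3 number, the 130 kW Blackwell rack number, and the 5x scale-up are from the conversation; everything else below is just the multiplication.

```python
# Back-of-envelope check of the racks-in-space figures quoted above.
starlink_v3_kw = 20      # power a Starlink V3 satellite handles today (quoted)
blackwell_rack_kw = 130  # power draw of an Nvidia Blackwell rack (quoted)
scale_factor = 5         # hypothesized scale-up of the satellite platform

scaled_satellite_kw = starlink_v3_kw * scale_factor
print(f"Scaled satellite: {scaled_satellite_kw} kW")  # 100 kW
print(f"Share of one Blackwell rack: {scaled_satellite_kw / blackwell_rack_kw:.0%}")
```

So a 5x-scaled satellite lands within roughly three quarters of one Blackwell rack's power budget, which is why the speakers frame the idea as racks in space rather than buildings in space.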
What sent us down this road was, I went and met with one of the firms that does this and asked, how do you guys do this? Is it super easy? And the enriched uranium is a little bit of a problem. But you can do this, and there are a few companies in America that are doing it. You know, we went down the supply chain, and we figured out there's actually no enrichment in America. I mean, the scariest, it was like my SpaceX moment, where, you know, I learned that we were flying US rockets on Russian engines. That was disastrous, right? I mean, how could it possibly be that the only space program we had was using Russian engines? We need to think about that. I had the same lightbulb moment with nuclear, where I learned that we don't enrich fuel at all in America. We buy it from the Russians. I mean, say it again: all the fuel for weapons and plants comes from the Russians. So, you know, we're investing in a company, we've invested and we'll add more to it, that's trying to build capacity in the US to make fuel. And we're re-industrializing the base. Okay? Many of us are working on this, reinforcing the base for multiple answers. I think solar in space is the great answer, but there are terrestrial answers as well.

And the last thing I'd say is, we have companies, that we've invested in together, that have found stranded capacity in America. The Americans are very scrappy, okay? We are a nation of builders. Look at the question of how to get power in Memphis, right? Or Crusoe, finding stranded power behind the meter in Texas. So there is power in this country. There are lots of industrial facilities in this country that are not being fully utilized that we can repurpose. So I don't think it's wise to be really down in the mouth and dire about American energy. We are America, okay? We are inventors.
We're going to figure this out. Okay? I believe that. I believe we will rise to the challenge. I think these constraints are good for us; they'll make us better. And I think that's our attitude: let's go figure it out.

100% on America. I'll never bet against America; I'll always bet on America. But, from an investment perspective, because this is an investment conference, I do think the power shortage is actually really good for AI, for the following reason. The entire history of financial markets suggests, and Carlota Perez wrote a great book about this, Technological Revolutions and Financial Capital, that every time you get a fundamentally new technology, for the last few hundred years, we've had a bubble. And, you know, it started with the South Sea Bubble. What was the new technology? Being able to cross the oceans; it was the understanding of longitude. You had a railroad bubble, you had a canal bubble, you had an automobile bubble, you had a TV bubble, you had the internet bubble. And the shortages of watts and the shortages of wafers, I am hopeful, will keep that from happening. A bubble is terrible. Nobody should want a bubble. It's terrible to invest through. Yeah, it's hard on the way up, it's hard on the way down. But the shortages of watts and also wafers, I think, are going to keep us from having an overbuild. Because normally you get a financial bubble that leads to a giant overbuild, and then you have a crash. If we can't overbuild, because we don't have enough energy, and we don't have enough wafers because Taiwan simply won't make the wafers, then I think we could have a smoother, longer AI cycle, and that is good for everyone in this room, and we avoid the bubble. Or are we in a bubble? I think the question is how big the bubble is. How big is the bubble? It's definitely a bubble. Well, it's not like one big bubble; there are little bubbles here and there, bubbles around different areas and different segments. But how big is the bubble?
The question is how big the bust is. Yeah, for sure. Well, guys, you never disappoint. I'm always blown away by your depth of knowledge on all of this stuff. And I also just love your optimism about the future of the country; so much of what we hear about America is the opposite, and this is way better. Thank you both for being here. Thank you, Ron, thank you so much. Thank you.