
Transcript

Demis Hassabis: Why AGI Is Bigger Than the Industrial Revolution


TITLE: Demis Hassabis: Why AGI is Bigger than the Industrial Revolution & Where Are The Bottlenecks in AI
CHANNEL: 20VC with Harry Stebbings
URL: https://youtu.be/SSya123u9Yk


I would say about 90% of the breakthroughs that underpin the modern AI industry were done by either Google Brain, Google Research, or DeepMind, so by one of our groups. The returns are still very substantial, although they're a bit less than they were, obviously, at the start of all of this scaling. We have amazing guests on the show, but very few, honestly, would be considered in the same realm as Newton, Turing, or Einstein. Our guest today is one of the greatest minds on the planet, and I consider myself incredibly lucky to have had the chance to sit down with him. Those labs that have the capability to invent new algorithmic ideas are going to start having a bigger advantage over the next few years, as all the juice gets wrung out of the last set of ideas.

This is a truly special one, and one that I'll remember for a very long time. I think we could probably get 30 to 40% more efficiency out of our national grids. Enjoy the episode, and I so appreciate the time we had with a very special human being. I sometimes quantify the coming of AGI as 10 times the Industrial Revolution at 10 times the speed. Thrilled to welcome Demis Hassabis of DeepMind.

Ready to go. Demis, I'm so excited to be doing this. Thank you so much for joining me today.

Great to be here.

Now, there are many places we could have started, but I was actually watching the documentary that you did, which was fantastic, and I wanted to start on AGI. Definitions vary widely, and you've been very thoughtful about what it means to you. So can you explain how you think about it today, so we have that as a kind of ground center?

Yeah, well, we've always been very consistent in how we define AGI: basically, a system that exhibits all the cognitive capabilities the human mind has. And that's important because the brain is the only existence proof we have, maybe in the universe, that general intelligence is possible. So that, for me, is the bar for what AGI should be.

It's the worst question, but how close are we? Everyone says different things, and it's very difficult when you have very prominent figures saying it could be as early as 2026.

Yeah, look, I've got a probability distribution around the timings, but I'd say there's a very good chance of it being within the next five years. So that's not long at all.

Is that closer than you thought? Has that changed over time?

Not really. Actually, it's funny: my co-founder Shane Legg, who's chief scientist here, used to write blog posts predicting when AGI would happen, back when we started DeepMind in 2010. And bear in mind that in 2010, when we started, almost nobody was working on AI and everyone thought it was a dead end. But those posts are still there on the internet for people to check. We used to do this extrapolation of compute and algorithmic progress, and basically we predicted it would take around 20 years from when we started out, and I think we're pretty much on track.

What are the biggest bottlenecks when you look at where we are today? In the documentary you said you just never have enough compute.

I think compute is the big one. Not just for the obvious reason of scaling up your ideas and your systems, the scaling laws as they're called, building bigger and bigger architectures with more and more parameters, which gives you more intelligent systems. The other thing you need a lot of compute for is doing experiments. The cloud is our workbench, basically. If you have a new algorithmic idea and you want to test it, you've got to test it at a reasonable scale, otherwise the result won't hold when you actually put it into the main system. So you need quite a lot of compute if you have a lot of researchers with lots of new ideas.

You mentioned scaling laws. A lot of people suggest we're hitting the limits of scaling and starting to see a plateauing effect.
Do you think that's true?

No, I don't think so. I think it's a bit more nuanced than that. Of course, when the leading companies all started building these large language models, you were getting enormous jumps with each generation of new system, maybe almost doubling in performance. At some point that had to slow down; it's not continuing to be exponential. But that doesn't mean there aren't still great returns from scaling the existing systems up further. We and the other frontier labs are getting a lot of great returns on that kind of compute expansion. So I would say the returns are still very substantial, although they're a bit less than they were, obviously, at the start of all of this scaling.

Where are we behind where you thought we would be?

I think actually in most areas we are ahead of where I thought we would be. Think about the video models, or even now our newest systems like Genie, which are interactive world models. That's kind of incredible if you step back and think about it. If you'd shown me that 5 or 10 years ago, I would have been pretty amazed. So I think in most domains we are ahead of where the field thought we'd be. There are still some big things missing, though, like continual learning. These systems don't learn after you finish training them and put them out into the world; they're not very good at learning further things. And I think that's a critical capability.

Sorry to ask blunt and basic questions, but why do we not have continual learning today?

Well, people haven't quite figured it out yet, and all the leading labs are working on this: how to integrate new learning into the existing systems that you spent months training. The brain, of course, does this very elegantly, probably through things like sleep and reinforcement learning.
You get what's called consolidation in the brain, where your memories from the day are replayed and some of that information is elegantly incorporated into your existing knowledge base. I've thought for a while that maybe we need something like that to incorporate new information alongside the existing information base.

You mentioned video models, and media and image generally. It seems that DeepMind has progressed very quickly and caught up with, or overtaken, other providers. I tweeted, and I think you liked it, about what I use and how it's changed over time, and DeepMind is now my number one for research for new shows. It wasn't that way before. What has led to the acceleration and progression of DeepMind in a way that maybe wasn't there two to three years ago?

Yeah, well, we made some organizational changes. I think we've always had the deepest and broadest research bench at Google and at DeepMind. If you look at the last decade or more, 15 years, I would say about 90% of the breakthroughs that underpin the modern AI industry were done by either Google Brain, Google Research, or DeepMind, one of our groups. Think of AlphaGo and reinforcement learning, and of course Transformers; these are all the key breakthroughs. So I would back us to make those breakthroughs in the future, if there are any missing ones. And I think we've basically pulled together all the talent from around the company, pushing in one direction. Then, as we talked about earlier with compute resources, it was also about combining all of our resources so we could build the biggest models, rather than having two or three versions around the company.
So I think a lot of it was assembling all the ingredients we already had and then pushing with relentless focus and pace, acting almost like a startup, really, to get back to the frontier and be ahead in many areas.

You say if anyone's going to make the breakthrough, it could and should be us. When you think about that, is continual learning the next breakthrough that you're most excited by?

I think there are quite a few things that are missing. There's continual learning. I think there's a lot of mileage in looking at different memory systems. At the moment we have these long context windows, which are a bit brute force: you just put everything in them. I think there are a lot of interesting architectures still to be invented there. And then there's stuff like long-term planning, hierarchical planning. These systems are not very good at planning over long time horizons, many years into the future, which we can do with our minds. So there are quite a lot of problems, I think, still left to overcome. Maybe one of the biggest is consistency. I sometimes call these systems jagged intelligences, because they're really amazing at certain things when you pose the question in a certain way, but if you pose the question in a slightly different way, they can still fail at quite elementary things. A general intelligence shouldn't be that jagged.

When you move files around and you've set up agents to perform in certain ways, and then the whole configuration completely falls over.

Exactly. 100%.

That's a disaster.

Yeah. Well, a general intelligence, if you think about how our minds work, shouldn't have those kinds of holes in it.

We talked about a plateauing of scaling; everyone also talks about a commoditization of models in terms of capabilities.
Do you think we see that, or do you think we see one or two continuously accelerating ahead of the others?

Yeah, I feel like the three or four leading labs now, of which we're one, are starting to pull away, because a lot of these tools, of course, help you build the next generation: things like coding tools and math tools. And it's getting harder and harder, I would say, to eke out the same gains from just the same ideas. So I think those labs that have the capability to invent new algorithmic ideas are going to start having a bigger advantage over the next few years, as all the juice gets wrung out of the last set of ideas.

You were very open with a lot of your research for years, and we see many very good quality open models. How do you think about the future of open? I have many portfolio companies that use frontier models to set a benchmark, and then use open models to get as close as possible to that, but with more cost-effectiveness. What does that future look like?

Yeah, I think it's probably similar to what we're seeing today. We're big supporters of open science and open models, and we've done many things, obviously, from the original Transformers to AlphaFold. These are all things we've given out into the world to help the research community, and we plan to continue to do that, especially in applied domains, scientific domains, applying AI to science, which is obviously my passion. But I think, increasingly, what you're going to see is the open-source models sitting probably one step back from the absolute frontier. It usually takes about six months for the open-source community to reimplement and figure out what those ideas are.
But we are also pushing hard on a suite of open models called Gemma, which we're determined to make best-in-class for their sizes. So for small developers, academics, or the beginnings of a startup, I think they're perfect, and also for edge computing. So we're very interested in open models for certain types of applications.

How do you think about a world post-LLMs? Different people have different views; Yann LeCun, for instance, has very different views.

For me, I kind of disagree with Yann on a few things. I think there's a 50/50 chance there are some things missing that we still need to make breakthroughs in, perhaps world models, these kinds of approaches. But my bet is pretty strongly that, given how successful these foundation models have been, and they can do incredibly impressive things, that's not going to go away. We're still seeing gains from the returns from the scaling laws. So I think the only real question, when you think about a future AGI system, is whether an LLM foundation model is going to be the key component only, or the total system. It's a question of whether anything else is needed, not whether it gets replaced. I don't think it's going to get replaced; I think things are going to get built on top of these foundation models, just like the way we do with our world models.

When we think about that future five years out, as you said, potentially with AGI, what does that world look like? Many people have different concerns. If we just start generally, what does that world look like to you?

I think, on the positive side, and obviously I've spent my whole career and life building towards AGI, it will be the ultimate tool for science and medicine.
In terms of advancing scientific discovery and finding cures to diseases, I think we need that kind of technology. So I'm hoping that in five-plus years' time we'll be entering a new golden era, a golden age, of scientific discovery.

My mother's got multiple sclerosis, so this is the thing I'm always most excited about. The thing I worry about is actually the drug discovery side: the process of getting a drug through all the trials, knowing that it takes a decade before my mother will actually get any benefit from it. How do we solve that?

I think we'll get to that point soon. First of all, after we did the AlphaFold project on protein folding, we spun out a company called Isomorphic Labs, which is doing extremely well. The idea there is that we're focusing on solving the rest of the drug discovery process, which is a lot of chemistry: designing the compounds, checking they're not toxic, and all the different properties you need for drugs to be safe. I think we'll have that whole drug design engine ready in the next five to ten years. Then you're right, the next problem is that clinical trials still take many, many years. But I think AI can help there too, in terms of maybe simulating parts of the human metabolism, and also stratifying patients to make sure that patients get exactly the right type of drug for their genomic makeup. But I think the real revolution will come when a few, maybe a dozen or so, AI-designed drugs get through the whole process.
Then the governments and regulatory bodies will see that, and they'll have enough data to back-test the predictions of those models. And then, maybe ten further years into the future, we can really just trust the predictions the models are making and actually skip some steps: perhaps animal testing is no longer needed, or maybe we can go up the dosage ladder quicker, because you can rely on these models. So I think we've got to do it in two steps: solve the drug design problem first, and then look at the regulatory length of time it takes.

Speaking of regulation, AI safety is a big topic and a big concern. I watched the documentary again last night over dinner, which was a great watch, and I think it was Stephen Hawking who said we must get it right because we might not get another chance. Do you think that's right?

Yeah, I do think that's right. Those are the stakes we have to deal with, and there are two things I worry about. One is the misuse of these systems by bad actors. These are dual-purpose technologies: they can be used for incredible good in science and health, as we've just discussed, but they can also be repurposed for harmful ends by a bad actor. So that's one issue. The second issue is a technical one: making sure these systems, as they get more powerful, not today's systems, but maybe in a year or two's time when they become more agentic, more autonomous, as we get towards AGI, can be kept on the guardrails that we want. And I think the right kind of regulation could help here, in terms of making sure there are at least minimum standards from all of the leading providers, but ideally it needs to be an international standard.

What is the right kind of regulation?
And again, I'm quoting you back from the documentary. You said, I think we need more global coordination, which worries me because we're getting worse at it.

Yes, for sure. It's sort of crazy timing that we're in, right, with this maybe most consequential technology the world has ever seen arriving at the same time as a very fragmented international system. It's not ideal, but I think we're going to have to try to do the best we can, to at least come up with a set of minimum standards, some benchmarks that test for undesirable properties. For example, deception: nobody should be building systems that are capable of deception, because then they could get around other safeguards. And then I imagine, if things go well, some kind of certification process, almost like a Kitemark of quality, saying that this model has certain safeguards and certain guarantees, so that consumers and companies can safely build on top of it. That is how it should go, ideally. But it does have to be international, because of course these systems are cross-border, cross-territory.

Who is that ultimate verification system? You obviously started with Theme Park.

Yes. Brilliant. Don't put the burger stands down too close to the roller coaster.

But, you know, obviously as a media company I go through any media platform asking, I don't know what's real or fake here. I'm always having to ask what's real or fake. Who is that arbiter of verification?
Well, I think ultimately it's got to be government, but the kind of technical bodies that would be able to do the technical work would be the AI safety institutes. There's a very good one in the UK that was set up under Prime Minister Sunak, which I think is doing great work, and there's one in the US. Maybe some of the other leading countries that have the best research should also have an equivalent body, staffed with high-quality researchers, that can actually evaluate and audit these kinds of systems against certain benchmarks and independently check whether they are meeting the right standards.

If I could give you a magic wand that was only applicable to AI safety, what would you do? What would be the implementation, the idea, the program that you would put in place with this magic wand?

Yeah, I think we need some kind of international body, maybe similar to the atomic energy agency, something like that, that perhaps the AI safety institutes feed into. And the research community has to be involved too, in deciding the right set of benchmarks to check for certain types of traits, certain types of capabilities. Maybe there are other safeguards too. For example, it wouldn't be desirable to have AI systems output tokens that are not human-readable, in some kind of machine language we couldn't understand; I think that would introduce a new vulnerability. So there are quite a few things like that which I think most of the leading labs would agree are probably best not to do.
And then these bodies, these institutions, would test against those things. I think that would give the public confidence, and academia could be involved, as well as civil society, so that these systems, which are going to get incredibly powerful, have been independently checked and audited.

That's it, your magic wand's done now. That was the one.

Maybe I used it on the wrong thing, but time will tell.

Yes, exactly. You said that science is one of the most exciting areas in five years' time. I have to ask about one of the biggest concerns: the labor displacement problem. I just had Marc Andreessen on the show, actually, and he said I was a Marxist for bringing it up. Marc's wonderful, so I'm not blaming him, but he was like, it's completely rubbish, I don't agree with it at all, we've always overcome it. How do you think about the labor displacement problem when you look at how truly capable these systems are, and what that does to labor markets?

Well, certainly in the past, with every new revolutionary technology, there's been a lot of job disruption. That's for sure, and I think that's definitely going to happen: a lot of old jobs go away or are no longer viable, but then the history of it is that a whole set of new jobs arrives, jobs that maybe one couldn't even imagine before, and those are higher quality and higher paying. That's the normal course. Of course, you have to be very careful saying this time is different, and I guess that's what people like Marc are claiming: that it's the same as the last ten massive breakthroughs, like the internet, mobile, and so on. I do think this is going to be bigger than all of those previous technological breakthroughs.
I sometimes quantify the coming of AGI as ten times the Industrial Revolution at ten times the speed, so unfolding over a decade instead of a century. I've been reading a lot about the Industrial Revolution; there are a lot of great books about it. It caused a huge amount of upheaval as well as a lot of advances. We wouldn't have modern medicine today without it; child mortality was around 40% pre-Industrial Revolution. So you wouldn't want it not to have happened, but ideally this time around we mitigate some of the downsides a bit better than we did during the Industrial Revolution.

I often listen to amazing voices like yours and get very excited by how fast it's coming. Then I try to stop myself from being too excitable and think, I should be wiser, and I'm told that we always overestimate what can be done in a year and underestimate what can be done in ten. Is that the truth here, or is it actually coming faster?

No, I think that's still the truth, though maybe both time scales, short-term and long-term, are nearer than with other technologies. I do think that, literally today and in the next year, things are a bit overhyped in AI; it couldn't be more hyped in some ways. On the other hand, interestingly, I still think it's very underappreciated how revolutionary this is going to be on the time scale of about ten years, which we could call the long term. So there's still that dichotomy, even today, with AI.

Alongside the concern around labor markets, there's also a concern around income inequality and the concentration of wealth in a few players. How do you see that shaping out, with your comment on the Industrial Revolution, and what happens there?

Well, I think there are different ways that could play out.
Maybe pension funds should be buying into all the big AI companies, making sure everyone has a piece of that, or sovereign funds; maybe every country should have a sovereign wealth fund that does that. That would be the investment way of doing it. There also needs to be thought about, if there is this massive but narrowly concentrated productivity gain, how we redistribute it so that everyone benefits from these huge gains. I can see all sorts of ways that could be done, including providing infrastructure and other things with that additional productivity gain. There could be unbelievable things happening on the five-to-ten-year time scale, including a breakthrough in some kind of renewable, nearly free energy. Maybe we solve fusion; we're working on that with our partners at Commonwealth Fusion. I think AI is going to usher in, maybe, amazing new superconductors, better batteries, materials science. There are all sorts of ways I could see that completely changing the nature of the economy.

How do we solve the energy crisis that comes with an AI revolution? What it means in terms of energy requirements is unprecedented. I know it's an incredibly hard question, but how do we solve that unprecedented need for new energy?

Well, I think AI will, in the medium to long run, more than pay for itself in terms of energy costs. We work on all these projects optimizing existing infrastructure, like optimizing the grid; I think we could probably get 30 to 40% more efficiency out of our national grids. Then there's modeling the climate and weather, and we have some of the best weather modeling systems in the world.
That helps us work out where the effects are really happening, so we can mitigate them. And then finally, maybe most exciting, are these new breakthrough technologies, like fusion, new batteries, superconductors, that I think AI will be essential in helping us reach. Then we'll be in a completely new energy situation as humanity, which will of course help with things like the climate and the environment, and eventually also help us get into space much more cheaply, because if you have an incredible energy source like fusion, then you have effectively unlimited rocket fuel, because you can just distill seawater.

I'm not going to ask you to solve space, don't worry. My question was on being in the UK. You're in London, I'm in London, and I'm very proud to be in the UK. You have been, I'm sure, pushed or prodded at every turn to move to the US. Why have you stayed?

Well, I should ask you that question too. But when we started DeepMind, I saw London, and the UK in general, and Europe to some degree, as a place with incredible talent. We've always had, what is it, three or four of the top ten universities in the world: Cambridge, Oxford, Imperial, UCL, these kinds of universities. So we're producing these amazing graduates and PhD students, the envy of the world, really. We have incredible scientists here; we've got a rich heritage of that, all the way from Turing and Hawking to Darwin and Newton. So we have this incredible history of scientific breakthroughs and great thinkers. I felt we had all the ingredients, the talent and the great engineers, here, but it just hadn't been galvanized into an ambitious deep tech startup idea.
And I felt it was possible, and that there was actually less competition here for that sort of talent, and we could even draw in the best talent from the top European universities. That's what it was like in the early days of DeepMind, so I think it was a huge structural advantage for us. And the final thing is maybe being a bit away from the Valley. There is some disadvantage in that you're not plugged into the network and the gossip and the latest trends and vibes and all these things; we're a little bit out of it here. But I think it's very conducive to thinking deeply about things, being more original in how you think. And that's great for deep tech, where you don't want to be distracted by the latest fad. You know it's going to be a 20-year mission, which is what we knew at the beginning of DeepMind. So being a little bit away from that maelstrom is quite good.

Palmer Luckey at Anduril often talks about being 400 miles away from the Valley as core to his innovative thinking. We're a few thousand miles away, but yeah. Terrible question: will Europe have a trillion-dollar company? The Americans always bash us for our lack of large companies. I ping Daniel Ek and say, come on, dude. But we don't have a trillion-dollar company.

Not yet. Daniel may well get there with one of his companies; Spotify and Helsing, I think those are two good options. There's no reason why we can't have that. I'm going to try to do it with Isomorphic, which is headquartered here and I think has the potential to be that. But one of the disadvantages of Europe is that, obviously, we're a combination of smaller markets. That's one thing we have to overcome. Maybe this EU Inc idea could be a good innovation.

I'm pulling out the magic wand again.
You've got the magic wand again, but this time applied to European technology. What would you do to implement a growth mindset, an ability to build that trillion-dollar company that we don't have today?

I think in the UK, and this may apply to other European countries too, unlocking what pension funds can invest in, particularly for the growth stage. We're brilliant at doing the startup part and getting it to a certain level, like we did with DeepMind. But if you really want to cross that chasm into being a trillion-dollar global player, where are the billion-dollar rounds going to come from, so you can really take on the existing incumbents? That was certainly missing ten years ago when I was fundraising for DeepMind, and I think it's still somewhat missing today: that level of ambition and the amount the capital markets can support.

I read about some of your early fundraising rounds.

Exactly.

Okay, we're going to do a quick-fire round. Meeting Elon for the first time: how was that?

Oh yeah, it was amazing. It was at a Founders Fund event, because both SpaceX and DeepMind were part of the same portfolio, a kind of amazing portfolio that Peter Thiel had at Founders Fund. I think I was invited to my first portfolio conference, and it must have been back in 2011 or 2012, very early days. We were the small little up-and-coming thing, and I had a small speaking slot, while Elon was the big thing in that portfolio.
So he had the keynote, but then we met afterwards. Elon says we were passing each other in the bathroom or something, and we said hi and hit it off immediately, as people who were almost too ambitious in their thinking, perhaps, and who love sci-fi. I really wanted to visit his rocket factory, so I was trying to angle an invite to SpaceX in LA, and he invited me at the end of that meeting. That was our second meeting, in the SpaceX factory.

I love it. Not even speaking slots as big as his. Healthcare revolution: which disease eradication are you most excited about? For me, specifically, it's multiple sclerosis.

Well, look, I want to literally cure cancer. I know people say that's a cliche, but what we're building at Isomorphic is general purpose. We're trying to build a drug design platform that will be applicable to any therapeutic area. So ideally it will help with everything from neurodegeneration to cardiovascular, immunology, and cancer. Those are the ones we're focusing on first, but eventually it should be applicable to every disease area.

What are you thinking about that you're not reading about or seeing anyone talk about?

I think a lot of people are worrying about the economic questions around AGI that we talked about earlier, but I worry a lot about the philosophical questions around it. Let's assume we get the technical part right, and let's assume we get the economics part right; both of those are hard. Then there's the philosophical question of what is meaning, what is purpose. We'll find out what consciousness is, what it means to be human. That's what's coming down the road, and I think we need some great new philosophers to help us navigate it.

Hard final question.
There are many different ways you could describe what you do. What would you most like to be remembered for? What would you like your legacy to be?

I would like my legacy to be advancing science and building technologies that bring incredible benefits to the world, like curing terrible diseases.

Demis, thank you so much for putting up with my meandering conversation. You've been fantastic. I really appreciate it.

Thank you very much.