Date of conversation: May 9, 2023. https://www.youtube.com/watch?v=1egAKCKPKCk

Graham
Patrick, over to you.

Patrick Collison 0:00
Thank you, Graham. And thank you, Sam, for being with us. Last year, I actually interviewed Sam Bankman-Fried, which was clearly the wrong Sam to be interviewing. So it's good to correct it this year with the right Sam. So we'll start out with the topic on everyone's mind: when will we all get our Worldcoin?

Sam Altman 0:29
I think if you're not in the US, you can get one in a few weeks. If you're in the US, maybe never. I don't know; it depends how truly dedicated the US government is to banning crypto.

Patrick Collison 0:38
So Worldcoin launched around a year ago or so?

Sam Altman 0:42
It actually has not. It's been in beta for maybe a year, but it will go live relatively soon outside of the US. And in the US, you just won't be able to do it, maybe ever. I don't know.

Patrick Collison 0:52
All right.

Sam Altman 0:56
Um, it's just a crazy thing to think about. You know, think whatever you want about crypto and the ups and downs, but the fact that the US is the worst country in the world to have a crypto company in, or you just can't offer it at all, is sort of a big statement. Like, historically, a big statement.

Patrick Collison 1:16
Yes. Yes, it is. Yeah, it's hard to think of the last technology for which that was the case.

Sam Altman 1:20
Maybe the Europeans are supposed to do it, but it's not us.

Patrick Collison 1:25
Yes, supersonic air travel or something, yeah. All right. So I presume almost everyone in the audience is a ChatGPT user. What is your most common ChatGPT use case now? Not when you're testing something, but when you actually want to get something done, when ChatGPT is purely an instrumental tool for you.

Sam Altman 1:45
Summarization, by far. I don't know how I would still keep up with email and Slack without it. You know, pasting a bunch of email or Slack messages into it; hopefully we'll build some better plugins for this over time, but even doing it the manual way, it works pretty well.

Patrick Collison 2:04
Have any plugins become part of your workflow yet?

Sam Altman 2:07
Browsing and the code interpreter once in a while, but honestly, they have not. For me, personally, they have not yet tipped into a daily habit.

Patrick Collison 2:18
And so, obviously, it seems very plausible that we're on a trend of superlinear realized returns in terms of the capabilities of these models. But who knows; maybe we'll asymptote soon. Not saying that's likely, but it's at least a possibility. If we end up in a world where we asymptote soon, what do you think, ex post, we will look back on as the reason? Too little data? Not enough compute? What's the most likely bottleneck?

Sam Altman 2:51
So, yeah, look, I really don't think it's going to happen. But if it does, I think it would be that there's something fundamental about our current architectures that limits us in a way that is not obvious today. So, like, you know, maybe we can never get the systems to be very robust, and thus we can never get them to reliably stay on track and reason and understand when they're making mistakes, and thus they can't really figure out new knowledge very well at scale. But I don't have any reason to believe that's the case.
Patrick Collison 3:24
And some people have made the case that we're now training on kind of the order of all of the internet's tokens, and you can't grow that, you know, another two orders of magnitude. I guess you could, kind of, if you have synthetic data generation. Do you think data bottlenecks matter at all?

Sam Altman 3:42
I think you just touched on it. As long as you can get over this synthetic data event horizon, where the model is smart enough to make good synthetic data, I think it should be all right. We will need new techniques, for sure; I don't want to pretend otherwise in any way. The naive plan of just scaling up a transformer with pretraining tokens from the internet, that will run out. But that's not the plan.

Patrick Collison 4:10
So one of the big breakthroughs in, I guess, GPT-3.5 and 4 is RLHF. If you, Sam, personally sat down and did all of the RLHF, would the model be significantly smarter? Like, does it matter who's giving the feedback?

Sam Altman 4:32
I think we are getting to the phase where you really do want smart experts giving the feedback in certain areas to get the model to be as generally smart as possible.

Patrick Collison 4:45
So will this create, like, a crazy battle for the smartest grad students?

Sam Altman 4:51
I think so. I don't know how crazy of a battle it'll be, because there are a lot of smart grad students in the world, but smart grad students, I think, will be very important.

Patrick Collison 5:00
And how many? Like, how should one think about the question of how many smart grad students one would need? Is one enough, or do you need, like, 10,000?

Sam Altman 5:09
It's worth studying. Right now, we really don't know how much leverage you can get out of one really smart person, where the model can kind of help, and the model can, like, do some of its own RL. We're deeply interested in this, but it's a very open question.

Patrick Collison 5:27
Should nuclear secrets be classified?

Sam Altman 5:31
Um, probably, yes. I don't know how effective we've been there. I think the reason that we have avoided nuclear disaster is not solely attributable to the fact that we classified the secrets; we did a number of smart things, and we got lucky. You know, the amount of energy needed, at least for a long time, was huge and sort of required the power of nations. And we made the IAEA, which I think was a good decision on the whole, and a whole bunch of other things, too. So, yeah, I think probably anything you can do there to increase the probability of a good outcome is worth doing. Classification of nuclear secrets probably helps; it doesn't seem to make a lot of sense not to classify them. On the other hand, I don't think it'd be a complete solution.

Patrick Collison 6:22
What's the biggest lesson we should take from our experience with nuclear nonproliferation, in the broader sense, as we think about all the AI safety considerations that are now central?

Sam Altman 6:30
So, first of all, I think it is always a mistake to draw too much inspiration from a previous technology. Everybody wants the analogy; everybody wants to say, oh, it's like this, or it's like that, or we did it like this, so we're going to do it like that again. And the shape of every technology is just different. However, I think nuclear materials and AI supercomputers do have some similarities.
And this is a place where we can draw more-than-usual parallels and inspiration. But I would caution people not to over-learn the lessons of the last thing. I think something like an IAEA for AI, and I realize how naive this sounds and how difficult it is to do, but getting a global regulatory agency that everybody signs up for, for extremely powerful AI training systems, seems to me like a very important thing to do. So I think that's one lesson we could learn.

Patrick Collison 7:26
And if it's established, if it exists tomorrow, what's the first thing that you'd do?

Sam Altman 7:32
The easiest way to implement this would be a compute threshold; the best way to implement this would be a capability threshold, but that's harder to measure. Any system that came in over that threshold, I think, should submit to audits, with full visibility to that organization, and be required to pass certain safety evals before releasing systems. That would be the first thing.

Patrick Collison 7:59
And some people on the, I don't know how I would characterize the side, but let's say the more pugilistic side, would say that all sounds great, but China's not going to do that, and therefore we'll just be handicapping ourselves, and consequently it's a less good idea than it seems on the surface.

Sam Altman 8:22
There are a lot of people who make incredibly strong statements about what China will or won't do, that have, like, never been to China, never spoken to someone who has worked on diplomacy with China in the past, and really kind of know nothing about complex, high-stakes international relations. I think it is obviously super hard, but also I think no one wants to destroy the whole world, and there is reason to at least try here. So I think there are a bunch of unusual things about this; it's part of why it's dangerous to learn from any technological analogy of the past. There's, of course, the energy signature and the amount of energy needed. But there aren't that many people making the most capable GPUs, and, you know, you could require them all to put in some sort of monitoring thing that says if you're talking to more than 10,000 other GPUs, you know, whatever. There's options.

Patrick Collison 9:19
So one of the big surprises for me this year has been the progress in the open source models, and it's been this kind of frenzied pace for the last 60 days or something. How good do you think the open source models will be in a year, say? I'll just ask that first.

Sam Altman 9:37
Yeah, good. I mean, I think there are going to be two thrusts to development here. There will be the hyperscalers' best closed source models, and there will be the progress that the open source community makes, and it'll be, you know, a few years behind or whatever, a couple years behind, maybe. But I think we're going to be in a world where there are very capable open source models, and people use them for all sorts of things, and the creative power of the whole community is going to impress all of us. And then there will be the frontier of what people with the giant clusters can do, and that will be fairly far ahead. And I think that's good, because we get more time to figure out how to deal with some of the scarier things.
Patrick Collison 10:21
David Luan made the case to me that the set of economically useful activities is a curious subset of all possible activities, and that pretty good models might be sufficient to address most of that first set. And so maybe the super large models will be very scientifically interesting, and maybe you'll need them to do things like generate further AI progress or something, but for most, like, practical day-to-day cases, maybe an open source model will be sufficient. How likely do you think that future is?

Sam Altman 11:01
I think, for many super economically valuable things, yes, the smaller open source model will be sufficient. But you actually just touched on the one thing I would say, which is: help us invent superintelligence. That's a pretty economically valuable activity. Same with cure all cancer, or discover new physics, or whatever else. And that will happen with the biggest models first.

Patrick Collison 11:20
Should Facebook open source LLaMA?

Sam Altman 11:23
At this point? Probably.

Patrick Collison 11:27
Should they adopt a strategy of open sourcing their foundation models slash LLMs, or just LLaMA in particular?

Sam Altman 11:35
I think Facebook's AI strategy has been, like, confused at best for some time, but I think they're now getting really serious, and they have extremely competent people, and I expect a more cohesive strategy from them soon. I think there'll be a surprising new real player here.

Patrick Collison 11:57
Is there any new discovery that could be made that would meaningfully change your p(doom) probability, either by elevating it or by decreasing it?

Sam Altman 12:10
Um, yeah, I mean, a lot. Like, I think most of the new work between here and superintelligence will move that probability up or down.

Patrick Collison 12:19
Okay, is there anything you're particularly paying attention to, any kind of contingent fact you'd love to know?

Sam Altman 12:25
So, first of all, I don't think RLHF is the right long-term solution. I don't think we can, like, rely on that. I think it's helpful; it certainly makes these models easier to use. But what you really want is to understand what's happening in the internals of the models and be able to align that. You know, say, exactly here is the circuit or the set of artificial neurons where something is happening, and tweak that in a way that then gives a robust change to the performance of the model. And if we can get that...

Patrick Collison 12:59
Through the mechanistic interpretability stuff?

Sam Altman 13:03
Yeah, well, that, and then beyond; there's a whole bunch of things beyond that. But that direction, if we can get that to reliably work, I think everybody's p(doom) would go down a lot.

Patrick Collison 13:12
And do you think sufficient interpretability work is happening?

Sam Altman
No.

Patrick Collison
Why not? You know, a lot of people say the words "AI safety," so it seems, you know, superficially surprising.

Sam Altman 13:25
Most of the people who say they're really worried about AI safety just seem to spend their days on Twitter saying they're really worried about AI safety, or, you know, any number of other things. There are people who are very worried about AI safety and doing great technical work there, but we need a lot more of them. We're certainly shifting a lot more effort, a lot more technical people, inside OpenAI to work on that.
But what the world needs is not more AI safety people who, like, post on Twitter and write long philosophical diatribes; it needs more people who are going to do the technical work to make these systems safe and reliably aligned. And I think that's happening. It'll be a combination of good ML researchers shifting their focus and new people coming into the field.

Patrick Collison 14:17
A lot of people on this call are active philanthropists, and most of them don't post very much on Twitter. Say they hear this exchange and think, oh, maybe I should help fund something in the interpretability space. If they're having that thought, you know, what's the next step?

Sam Altman 14:33
One strategy that I think has not happened enough is grants: grants to single people or small groups of people that are very technical, that want to push for the technical solution, and are, you know, maybe in grad school, or just out, or an undergrad, or whatever. I think that is well worth trying. They need access to fairly powerful models, and OpenAI is trying to figure out programs to support independent alignment researchers. But I think giving those people financial support is a very good step.

Patrick Collison 15:06
To what degree, in addition to being somewhat capital-bottlenecked, is the field skill-bottlenecked, where there are people who maybe have the intrinsic characteristics required but don't have the four years of learning, or something like that, that are also a prerequisite for being effective?

Sam Altman 15:22
I think if you have a smart person who has learned to do good research and has the right sort of mindset, it only takes about six months to, you know, take a smart physics researcher and make them into a productive AI researcher. So we don't have enough talent in the field yet, but it's coming soon. We have a program at OpenAI that does exactly this, and I'm astonished how well it works.

Patrick Collison 15:44
It seems that pretty soon we'll have agents that you can converse with in very natural form: low latency, full duplex, you can interrupt them, like, the whole thing. And obviously we're already seeing with things like Character and Replika that, you know, even nascent products in this direction are getting pretty remarkable traction. It seems to me that these are likely to be a huge deal, and maybe we're substantially underestimating it, again, especially once you can converse through voice. So, (a), do you think that's right? And then, (b), if that's right, what do you think the likely consequences are?

Sam Altman 16:26
Yeah, I do think it's right, for sure. Like, you know, a thing someone said to me recently that has stuck with me is that they're pretty sure their kids are going to have more AI friends than human friends. And I don't know what the consequences are going to be. One thing that I think is important is that we establish a societal norm soon that you know if you're talking to an AI or a human, or a sort of, like, weird AI-assisted-human situation. But people seem to have a hard time kind of differentiating in their head, even with these very early, weak systems, like, you know, Replika, that you mentioned. Whatever the circuits in our brain are that crave social interaction seem satisfiable, for some people, in some cases, with an AI friend. And so how we're going to handle that, I think, is tricky.
Patrick Collison 17:19
Someone recently told me that a frequent topic of discussion on the Replika subreddit is how to handle the emotional challenges of upgrades to the Replika models, because suddenly your friend becomes somewhat lobotomized, or at least a somewhat different person. And, you know, presumably these interlocutors all know that Replika is, in fact, an AI. But somehow, to your point, our emotional response doesn't necessarily seem all that different.

Sam Altman 17:47
I think what most people assume is that we're heading to a society with one sort of supreme-being superintelligence, you know, floating in the sky, or whatever. And I think where we're actually heading, which is sort of less scary, but in some senses still as weird, is a society that just has a lot of AIs integrated along with humans. We've even had movies about this for a long time. There's, you know, C-3PO, or whoever you want in Star Wars; people know it's an AI, it's still useful, they still interact with it, it's kind of cute and person-like, although, you know, it's not a person. And in that world, where we just have a lot of AIs that are contributing to the societal infrastructure we all build up together, that feels manageable, and less scary to me than the sort of single big superintelligence.

Patrick Collison 18:43
Yeah. Well, this is a financial event, so: there's this kind of debate in economics as to whether changes in the working-age population push real interest rates up or down, because, you know, you have a whole bunch of countervailing effects: there are more productive workers, but you also need capital investment to kind of make them productive, and so forth. How will AI change real interest rates?

Sam Altman 19:17
I try not to make macro predictions, I'll say.

Patrick Collison 19:23
Okay, well, then how will it change measured economic growth?

Sam Altman 19:36
I think it should lead to a massive increase in real economic growth, and I presume we'll be able to measure that reasonably well.

Patrick Collison 19:45
And will at least the early stages of that be an incredibly capital-intensive period? Because, you know, we now know which cancer-curing factories or, you know, pharma companies we should build, and what exactly the right reactor designs are, and so forth.

Sam Altman 20:01
I would take the other side of that. Again, we don't know. But I would say that human capital allocation is so horrible that if we know exactly what to do, even if it's expensive...

Patrick Collison 20:14
Even, like, the present-day capital allocation done by humans? Or do you mean the allocation of the actual people themselves across society into different roles?

Sam Altman 20:25
No, I meant the way that we allocate capital.

Patrick Collison 20:28
Done by humans? Yeah.

Sam Altman 20:30
Done by humans. How much do you think we spend on cancer research a year?

Patrick Collison 20:37
I don't remember. Well, it depends if you count the pharma companies, but it's probably about eight-ish, nine billion from the NIH, and then the drug companies spend, I don't know, probably some small multiple of that again.
But it's, like, under 50 billion.

Sam Altman 20:51
Okay. I was going to guess, totally a guess, between 50 and 100 billion per year. And if an AI could tell us exactly what to do, and we spent, like, $500 million a year on one single project, which would be huge for a single project, but it was the right answer, that would be a great efficiency gain.

Patrick Collison 21:10
Yep. Okay, so we will actually become significantly more capital efficient once we have this technology.

Sam Altman 21:16
That's my guess.

Patrick Collison 21:20
For OpenAI: you know, obviously you guys want to be, and are, a preeminent research organization. But with respect to commercialization, is it more important to be a consumer company or an infrastructure company?

Sam Altman 21:35
Oh, I am a believer, as a business strategy, in platform plus killer app. I think that's worked for a bunch of businesses over time, for good reason. I think the fact that we're doing a consumer product is helping us make our platform much better, and I hope over time that we figure out how to have the platform make the consumer app much better, too. So I think it's a good, cohesive strategy to do them together. But as you pointed out, really what we're about is we'd like to be the best research org in the world, and that is more important to us than any productization. And building the org that can make these repeated breakthroughs, it doesn't all work, you know; we've gone down some bad paths. But we have figured out more than our fair share of the paradigm shifts, and I think the next big ones will come from here, too. And that's really kind of what is important to us to build.

Patrick Collison 22:33
Which breakthrough are you most proud of OpenAI having made?

Sam Altman 22:39
The whole GPT paradigm, I think. That was the kind of thing that has been transformative and an important contribution back to the world, and it comes from the sort of work, the multiple kinds of work, that OpenAI is good at combining.

Patrick Collison 22:59
Google I/O is tomorrow; it kicks off tomorrow. If you were the CEO of Google, what would you do?

Sam Altman 23:10
I think Google is doing a good job. I think they have had quite a lot of focus and intensity recently and are really trying to figure out how they can move to remake a lot of the company for this new technology. So I've been impressed.

Patrick Collison 23:33
Are these models, and their attendant capabilities, actually a threat to search? Or is that just a sort of superficial response that is a bit too hasty?

Sam Altman 23:44
Um, I suspect that they mean search is going to change in some big ways, but they're not a threat to the existence of search. So I think it would be, like, a threat to Google if Google did nothing, but Google is clearly not going to do nothing.

Patrick Collison 24:05
How much important ML research comes out of China?

Sam Altman 24:10
I would love to know the answer to that question. Like, how much of what comes out of China do we get to see?

Patrick Collison 24:20
Yes. I mean, from the published literature.

Sam Altman 24:25
Nonzero, but not a giant amount.

Patrick Collison 24:29
Do you have any sense of why? Because, you know, the number of published papers is very large, and there are a lot of Chinese researchers in the US who do fantastic work.
And so why is the kind of per-paper impact of the Chinese stuff relatively low?

Sam Altman 24:49
I mean, what a lot of people suspect is that they're just not publishing the stuff that is most important.

Patrick Collison
Do you think that's likely to be true?

Sam Altman
I don't trust my intuitions here; I just feel confused.

Patrick Collison 25:05
Would you prefer OpenAI to figure out a 10x improvement to training efficiency or to inference efficiency?

Sam Altman 25:14
It's a good question. It sort of depends on how important synthetic data turns out to be. I mean, I guess, if forced to choose, I would choose inference efficiency. But I think the right metric is to think about all the compute that will ever be spent on a model, training plus all inference, and try to optimize that.

Patrick Collison 25:47
Right. And you say inference efficiency because that is likely the dominant term in that equation?

Sam Altman 25:53
Probably. I mean, if we're doing our jobs right.

Patrick Collison 25:57
When GPT-2 came out, only a very small number of people noticed that that had happened and, you know, really understood what it signified, to your point about the importance of the breakthrough. Is there a GPT-2 moment happening now?

Sam Altman 26:16
There are a lot of things we're working on that I think will be GPT-2-like moments if they come together, but there's nothing released that I could point to yet and, with high confidence, say this is the GPT-2 of 2023. But I hope by the end of this year, or by next year, that will change.

Patrick Collison 26:36
What's the best non-OpenAI AI product that you use?

Sam Altman 26:51
Ah, honestly, I don't use a lot of things; I kind of have a very narrow view of the world. But ChatGPT is the only AI product I use daily.

Patrick Collison 27:07
Is there any AI product that you wish existed, that you think current capabilities make possible, or will very soon make possible, that you're looking forward to?

Sam Altman 27:17
I would like a Copilot-like product that controls my entire computer, so it can, like, look at my Slack and my email and Zoom and iMessages and my, like, massive to-do-list documents and just kind of do most of my work.

Patrick Collison 27:31
Some kind of Siri-plus-plus sort of thing?

Sam Altman 27:33
Sort of thing, yeah.

Patrick Collison 27:36
You mentioned, you know, curing cancer. Is there an obvious application of these techniques and technologies to science that, again, you think we already have the capabilities for, that you don't see people obviously pursuing today?

Sam Altman 27:52
There's a boring one and an exciting one. The boring answer is that if you can just make really good tools, like that one I just mentioned, and accelerate individual scientists, each by a factor of three or five or ten or whatever, that probably increases the rate of scientific discovery by a lot, even though it's not directly doing science itself. The more exciting one is, I do think that a similar system could go off and start to read all of the literature, think of new ideas, do some limited tests in simulation, email a scientist and say, hey, can you run this for me in the wet lab, and probably make real progress.

Patrick Collison 28:36
I'm not sure exactly how the ontology works here.
But you can imagine building these better sort of general-purpose models that are, you know, kind of like a human: they'll go read a lot of literature, etc., and maybe be smarter than the human, with better memory, you know, who knows. And then you can imagine models trained on certain datasets that are doing something nothing like what a human does; you're mapping, I don't know, CRISPR edits to, you know, edit accuracies or something like that, and it really is a special-purpose model for some particular domain. Do you think the scientifically useful application of these models will come more from the first category, where we're kind of creating better humans, or from the second category, where we're recruiting these predictive architectures for problem domains that are not currently easy to work with?

Sam Altman 29:34
I really don't know. In most areas, I am willing to, like, kind of give some rough opinion; in this one, I don't feel like I have a deep enough understanding of the process of science and how great scientists actually work. I guess I would say, if we can figure out someday how to build models that are really great at reasoning, then I think they should be able to make some scientific leaps by themselves. But that requires more work.

Patrick Collison 30:14
OpenAI has done a super impressive job of fundraising and has a very unusual capital structure, with the nonprofit and Microsoft and the other things. Are weird capital structures underrated? Like, should organizations and companies and founders be thinking more expansively, where people have historically defaulted to, all right, we're just a Delaware C corp? OpenAI, as you pointed out, broke all the rules. Should people be breaking more corporate structure rules?

Sam Altman 30:43
I suspect not. I suspect it's, like, a horrible thing to innovate on. You should innovate on, like, products and science, not corporate structures. The shape of our problem is just so weird that, despite our best efforts, we had to do something strange, but it's been an unpleasant experience and a time suck on the whole. The other efforts I'm involved in have always had normal capital structures, and I think that's better.

Patrick Collison 31:19
Do we underestimate the extent... so a lot of companies you're involved with are very capital intensive, and maybe OpenAI is perhaps the most capital intensive, although, who knows, maybe Helion, on one particular measure. Do we underestimate the extent to which capital is the bottleneck on unrealized innovation? Like, is that some kind of common theme running through the various efforts you're involved with?

Sam Altman 31:46
Yes. I mean, there are basically, like, four companies that I'm involved with, other than just having written a check as an investor. OpenAI and Helion are the things I spend the most time on, and then also Retro and Worldcoin. But, you know, all of them raised a minimum of nine digits before any product at all.
And, you know, in OpenAI's case, much more than that.

Patrick Collison 32:20
And all have raised in the nine digits...

Sam Altman 32:23
...before, as either a first round or before releasing a product. And they all take a long time, you know, many years, to get to a release of a product. And I think there's a lot of value in being willing to do stuff like this, and it fell out of favor in Silicon Valley at some point. And I understand why; it's also great when companies only ever have to raise a few hundred thousand or a few million dollars and get to profitability. But I think we over-pivoted in that direction, and we have forgotten, collectively, how to do the high-risk, high-reward, hugely capital- and time-intensive bets. And those are also valuable; we should be able to support both.

Patrick Collison 33:08
And this touches on the question of: why aren't there more Elons? The, I guess, two most successful hardware companies, in the broadest sense, started in the last 20 years were both started by the same person. You know, that seems like a pretty surprising fact, and obviously Elon is, you know, singular in many respects. But what's your answer to that question? You know, do we lack people with his particular set of characteristics? He's had, actually, a capital story along the lines of what you're saying. If it was your job to cause there to be more, you know, SpaceXes and Teslas in the world, and maybe you're trying to do some of that yourself, but if you had to kind of push in that direction systematically, what would you be trying to change?

Sam Altman 33:57
I have never met another Elon. I have never met another person that I think can be developed easily into another Elon. He is sort of this strange, n-of-1 character. I'm happy he exists in the world, of course, but, you know, he's also a complex person. I don't know how you get more people like that. I don't know. I don't know what you think about how to make more; I'm curious.

Patrick Collison 34:32
I don't know. I suspect, though, there's something in the culture on both the founder and the capital side: the kinds of companies the founders wanted to create, and then the disposition, and, maybe to a lesser extent, the fund structure of the sources of capital. A surprise for me, as I've learned more about the space over the last 15 years, is the extent to which there's a finite, or essentially finite, set of funding models in the world. And each has a particular set of incentives, and, for the most part, a particular sociology that evolves over time. Like, venture capital itself was an invention; PE, in its modern form, was essentially an invention. And so I doubt we're done with that process of funding-model invention, and I suspect there are models that are at least somewhat different to those that prevail today that are somewhat more amenable to this kind of innovation.
Sam Altman 35:51
Okay, so one thing I'm excited about, and you're a great example of this: I think most of the people who became tech billionaires in the last cycle are pretty interested in putting serious capital into long-term projects, and the availability of significant blocks of capital upfront for high-risk, high-reward, long-duration projects that rely on fundamental science and innovation is going to, or already has, dramatically changed. So I think there's going to be a lot more capital available for this. You still need the Elon-like people to do it. But one project I've always been tempted to do is to say, okay, we're going to identify the, let's say, 100 most talented people we can find that want to work on these sorts of projects, and we're going to give them, like, $250k a year, so, like, enough money, for 10 years or something. So it's like, you know, giving a 20-year-old tenure, or something that feels like tenure. Let them go off, without the kind of pressure that most people feel, with the certainty to go off and explore for a long period of time, and, you know, not feel that very understandable pressure to make a ton of money first. And put them together with great mentors and a great peer group. And then the financial model would be: if you start a company, the vehicle gets to invest, like, on predefined terms; if not, that's fine, go be a writer, a politician, a thinker, whatever. I think that would pay off, and someone should do it.

Patrick Collison 37:41
That's kind of the university model, I guess. And I don't mean, like, this already exists, you know, you're just reinventing the bus or something; I mean that it suggests evidence that it can work. Universities are usually not that good at supporting their spinouts, but it happens to at least some extent. And, yes, one of the theses for Arc is that by, you know, maybe formalizing this somewhat more than it is, by encouraging it somewhat more than it tends to be, that that actually might be a pretty effective model. So Silvana, you know, my co-founder, I've known her since we were teenagers, you know, more than half our lives. And Patrick Hsu, the other co-founder, you know, she did her PhD with him, and so she's known him for a long time. And so, to your point about the long-term investment, part of how I was comfortable with it is, you know, I'd known this person for, again, a really extended period. For something like, say, Retro, or some of these other companies where you didn't, how do you decide whether the person is the kind of person you can, you know, undertake this super-long-term expedition with?

Sam Altman 38:48
Actually, I had known Joe for a long time. He was... I mean, that's a bad example.

Patrick Collison 38:53
All right. Well, I guess that's the question: do you, in fact, need to have known the person for a long time?

Sam Altman 38:57
It's super important. It doesn't always work, but I try to work with people that I've known for, like, a decade-plus at this point. You know, you don't want to only do that; you want some new energy and volatility in the mix. But having a significant proportion of people that you've known for a long time, worked with for a long time, I think that's really valuable. Like, in the case of OpenAI, I had known Greg Brockman for a long time.
I met Ilya maybe only, like, a year, or a little bit less, before we started the company, but spent a lot of time together with him, and that was, like, a really good combination. But I derive great pleasure from having working relationships with people over decades, through multiple projects; it's a lot of fun to feel like you're building together towards something that has a very long arc.

Patrick Collison 39:50
Agreed. Which company that is not thought of as an AI company will benefit the most from AI over the next five years?

Sam Altman 40:03
I think some sort of investing vehicle is going to figure out how to use AI to be, like, an unbelievable investor and just have crazy outperformance.

Patrick Collison 40:11
Like RenTech, with these new technologies. Is there, like, an operating company that you look at?

Sam Altman 40:23
Hmm. What do you think of Microsoft as an AI company?

Patrick Collison 40:27
Let's say no, for the purposes of this question.

Sam Altman 40:29
Okay. I think Microsoft will transform themselves across almost every axis with AI.

Patrick Collison 40:34
And is that because they're just taking it more seriously, or because there's something about the nature of Microsoft that makes them particularly, you know, suited to this?

Sam Altman 40:42
They understood it sooner than others and have been taking it more seriously than others.

Patrick Collison 40:52
What do you think the likelihood is that we will come to realize that GPT-4 is somehow significantly overfit on the problems, you know, on the domains it was trained on? How would we know if it was? Or do you even think about overfitting as a kind of concern? Again, the Codeforces problems, you know, from before 2021 versus after 2021; it does better on the earlier ones, etc.

Sam Altman 41:20
I think the base model is not significantly overfit. But we don't understand the RLHF process as well, and we may be doing more, like, brain damage to the model in that than we even realize.

Patrick Collison 41:40
You know, do you think that g, like, the generalized measure of intelligence, exists in humans as anything other than a statistical artifact? And if the answer to that is yes, do you think there exists an analogous sort of common factor in models?

Sam Altman 42:04
I think it's a very imprecise notion, but there's clearly something real that it's getting at, in humans, and for models as well. There are, like, way too many significant figures when people try to talk about it. But, you know, it's definitely my experience that very smart people can learn, I won't say arbitrary things, but a lot of things very quickly. There are also some people who are just much better at one kind of thing than another. And, you know, I don't want to debate the details too much here, but I'll say, as a general thing, I believe that model intelligence will also be somewhat fungible.

Patrick Collison 42:55
And based on your experience with all this AI safety stuff, how, if at all, do you think synthetic biology should be regulated?

Sam Altman 43:08
I mean, I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn't a great experience. Well, it wasn't that bad compared to what it could have been, but I'm surprised there has not been more global coordination after that, and I think we should have more.
Patrick Collison 43:27
So what should we actually do? Because I think some of the same challenges apply as in AI: the production apparatus for synthetic pathogens is not necessarily that large, and the observability and telemetry are difficult.

Sam Altman 43:42
No, I think this one's a lot harder than the AI challenge, where we do have some of these characteristics, like tremendous amounts of energy and, you know, lots of GPUs. I haven't thought about this as much. I would ask you what we should do. Like, if someone told me, you know, this is a problem, what do we do, I would call you. So what should we do?

Patrick Collison 44:04
I don't know that I have the prescription we need. I mean, the easy thing to say, though I'm not sure how much it helps, is that we need a lot more generalized wastewater sequencing. I think we should do that regardless, though it doesn't help against synthetic biology attacks. And the fact that we don't have a giant correlational dataset of the pathogens that people are infected with, and then sort of longitudinal, you know, health outcomes, is just, like, a crazy fact, in general. And then, obviously, there's a somewhat enumerable set of infectious diseases, like classes of infectious diseases, that, you know, people tend to be most susceptible to, with COVID itself obviously being an example of this. And so I think we could make a lot more progress, in advance, on both treatments and vaccines than we do and than we have. And so, for that particular thing: if it is true that COVID was engineered, then for instances of that set, slight modifications to already-existing infectious diseases, I think we can probably significantly improve our protections. Obviously, the concerning category would be, you know, completely novel pathogens, and that's presumably a sort of infinite search space. Then, you know, you get into, I mean, there's, again, a finite set of, you know, ways to enter cells, and receptors, and so forth, so maybe you can use that to kind of tile the space of possible treatments. And you need to invest in a lot more surplus manufacturing capacity than we have for novel vaccines, and hopefully mRNA platforms and similar make it easier to have general-purpose manufacturing capabilities there. But, as you can tell from this kind of long answer, I don't think there's a silver bullet, and it's, I think, plausible that even if you did everything that I just said well, you still would not have enough.

Sam Altman 46:04
So I think it's hard. Getting way better at rapid response, treatment, vaccination, whatever, that all seems like just an obvious thing to do that I would have, again, hoped for more progress on by now.

Patrick Collison 46:15
Yeah, I very much agree with that. And clinical trials. Like, that was the limiting step with COVID. And I think it's, you know, at this point been widely reported and remarked upon that we had the vaccine candidate in January, and, you know, everything after that... I mean, some of what happened after that was obviously manufacturing scale-up, but much of what happened after that was just, like, how long it took us to tell that, you know, this actually works, and this is sufficiently safe. And that seems among the lowest-hanging fruit in the entire biomedical ecosystem to me.

Sam Altman
A hundred percent.

Patrick Collison
But I guess your investment in TrialSpark is consistent with that observation.
So, Ezra Klein and Derek Thompson are writing a book about sort of the idea of an abundance agenda: that, you know, so much of the left, of the liberal sensibility, is about sort of forbearance and, you know, some kind of quasi-neo-Puritanism, etc. And they believe, and I guess they will have been making the case in some of their respective public writings thus far, but for the purpose of this book, the argument that actually, for society to be equal and prosperous and environmentally friendly and, you know, so forth, to actually realize these values we care about, there will need to be just, like, a lot more stuff in many, many different domains; more of the Henry Adams curve realized. And they frequently observe that permitting, in the broadest sense, all sorts of well-intentioned but self-imposed restrictions, is the rate-limiting factor in making this happen, maybe most obviously with the energy transition. Across all the different things that you're involved with, to what degree do you think this dynamic of self-imposed restrictions and strictures is the relevant variable in the progress that actually exists?

Sam Altman 48:42
It definitely seems huge. But I think also there are a lot of people who like to say, well, this is the only problem, and if we could just resolve, like, permitting writ large, we'd all be happy, and I don't think it's quite that simple, either. So I totally agree that we need much more abundance. And, you know, my personal beliefs are that abundant energy and abundant intelligence are going to be two super important factors there, but there are many others. Certainly, as we start to get closer on being able to deliver a lot of fusion to the world, understanding just how painful the process of going to get these things out is, is disheartening, to say the least. And it's pushing us to look at all sorts of, like, very strange things that we can do sooner, rather than wait for all of the permitting processes that will need to happen to connect these to the grid. It's, like, much easier to go desalinate water in some country that, you know, just has their nuclear authority, or whatever. I think it is a real problem, and I think we don't have that much societal will to fix it, which makes it even worse. But I don't think it's the only problem.

Patrick Collison 50:00
Um, if Ezra and Derek interviewed you, and I guess they should, for this book, and asked you for your number-one diagnosis as to, you know, that which is limiting the abundance agenda, what would you nominate?

Sam Altman 50:18
I'm, like: societal collective belief that we can actually make the future better, and the level of effort we put on it. Every additional sort of gate you put on something, when these things are, like, fragile anyway, I think makes them tremendously less likely to happen. And so, you know, it's, like, really hard to start a new company; it's really hard to convince people it's a good thing to do. Like, right now in particular, there's just, like, a lot of skepticism of that. Then you have this, like, regulatory thing; it's going to take a long time, so maybe don't even try that. And then, you know, it's going to be way more expensive. So there's just too much friction and doubt at every stage of the process from idea to mass deployment in the world.
And I think it makes people just try less than they used to, or believe less.

Patrick Collison 51:10
When we first met, what was it, 15 or so years ago, Mark Zuckerberg was preeminent in the technology industry and in his 20s; and, you know, not that long before then, you know, Marc Andreessen was preeminent in the industry and in his 20s; and not long before then, you know, Bill Gates and Steve Jobs, and so forth. And, like, generally speaking, for most of the history of the software sector, one of the top three people has been in their 20s. And it doesn't seem that that's true today. I mean, there are some great people in their 20s, but I'm not sure that...

Sam Altman 51:52
Yeah, this problem. It's not good. Something has really gone wrong. There's a lot of discussion about what this is, but, like, where are the great founders in their 20s? It's not so obvious. There are, you know, definitely some, and I hope we'll see a bunch, and I hope this was just, like, a weird accident of history. But maybe something's really gone wrong in our educational system, or our society, or just, like, how we think about companies and what people aspire to. But I think it is worth significant concern and study.

Patrick Collison 52:34
On that note, I think we're out of time. Thank you so much for doing this interview.

Sam Altman
Thank you very much.

Patrick Collison
And thank you to the folks at Sohn and to Graham for hosting.