Date of conversation: Nov 18, 2022. https://www.youtube.com/watch?v=DEvbDq6BOVM

Speaker 1 0:03
So, Sam, thank you for taking the time. As you know, a core focus of the event this year is big trends over the next decade, so we can inspire the next generation of founders. Looking back over the last decade, it's pretty clear that the iPhone and mobile were the big paradigm shift of the 2010s, and in a recent interview you mentioned AI as being the paradigm shift of this next decade. Can you elaborate on that? And if you're a student or founder getting started today, what would your advice be for how to prepare for this paradigm shift?

Sam Altman 0:42
Yeah, look, paradigm shifts are always hard to call, so I could be wrong here. But at this point, if you just look at the Cambrian explosion of new things being built that are really delighting people and getting significant usage, it seems to me like AI is the platform the industry has been waiting for, in that sense. And what people are building on top of generative models, in all these different modalities, is really quite amazing, and I'm excited to see it.

Speaker 1 1:17
Yeah. You know, the mobile shift was easy for people to wrap their heads around, because you have a mobile phone in your hand. I can't remember who said it, but it's kind of a remote control for your life.

Sam Altman 1:32
It was a great quote. It really kind of captured that.

Speaker 1 1:36
Yeah, it's like you press a button and something happens in the world. What is that pithy statement for AI as a paradigm shift?

Sam Altman 1:49
The way that I talk about it is just on-demand intelligence that, for these narrow areas, is way less expensive and way faster than what you could get from a human. A category people are quite familiar with now is these image generators. You could definitely go off and have an artistically talented person generate images for you, but it took a while, and it was not close to free. One of the things we're seeing, which is true for much of technology, is that when you make something way faster and way less expensive, it's a qualitative shift in what's possible. When the iPhone, as a remote control for the world, came along, people said, "Well, why do I need this thing called Uber? I can just call a taxi." And when the very first image generators launched, it was, "Well, why do I need this? I can just pay an illustrator to do it." But it turns out the world doesn't quite work that way, and when these things change by orders of magnitude on those two axes, it's really different.

Speaker 1 3:00
Yeah, yeah. I've heard you say before that the cost of labor will go to zero, and what's left is just the energy, the electricity, whatever. Right?

Sam Altman 3:03
Right. And I really mean that for the cost of cognitive labor; physical labor, I think, is further off.

Speaker 1 3:20
Right.
So if the cost of cognitive labor is going to near zero, is it the cost of any type of cognitive labor? Do you think any type of work that requires cognitive function and isn't tied to physical labor is effectively fair game for AI to rewrite and to change how we do work?

Sam Altman 3:45
Well, there will be categories where we want things to be expensive. There's a name for these kinds of goods, Veblen goods, maybe, but people want wine to be more expensive. People like handmade things; they want certain art to be really expensive. So there will be plenty of things where it's related to status or exclusivity or something, and we want that to be expensive. There will also be categories where we really want a human involved and don't just want to be talking to a computer, and there will be a lot of things there too. And then I assume, although I don't know exactly what, that for quite some time there will be certain categories where humans are just much better than AI. What we're building right now is a very alien intelligence, in the same way that birds sort of inspired us to build airplanes, but airplanes work super differently than birds or insects or whatever. The intelligence we're building now is somewhat inspired by how we believe it works in humans, but it's a very, very different approach. So I suspect there will be surprising strengths and weaknesses to both kinds, and thus there will be plenty of roles for humans. I also think that whenever you're on the left side of a major technological revolution in human history and you try to predict the jobs on the right side, that's always really hard to do. But there's always something.

Speaker 1 5:21
Yes. Speaking of predictions, one thing that's been on my mind is the role of AI in increasing the productivity per engineer, or per scientist, or per designer. I was actually just using Replit and their AI assistant; I was programming something, and I got it done in a fraction of the time it would have taken me even a few months ago. It got me thinking about what the role of AI is going to look like for a programmer in the next five years. Is it one where it's an assistant and you still need to curate the code? Or is it truly going to turn into, you just speak to the AI in terms of what you're trying to get done, and it does all of the code for you?

Sam Altman 6:13
It's literally a question of timing. What I expect to happen is that the assistant just makes you more and more efficient and learns to do more and more things with less and less supervision from you, and eventually it gets really good at the whole thing. But that will take a while. Does that happen in five years? Maybe for some tasks; I sort of doubt it for all of them. But five years, at the rate the field is moving, is a very long time, and so I wouldn't want to make any strong bets about what it won't be able to do.

Speaker 1 6:46
What's the limiting factor that would prevent it from doing all of it? What is the change that's required?

Sam Altman 6:56
Well, I don't think we know yet where it's going to break down.
But there are many places where I can imagine our current techniques hitting some sort of bottleneck. I don't think it'll be anything fatal, but we'll have to go off and do some new research to figure out something medium-sized, or maybe even large; no idea. The way we have made progress at OpenAI is that we just knock down whatever's in front of us, and when we get stuck on something new, we go do more research and figure that out. When we started OpenAI, we felt like we were more on our own in this way of thinking, but now a lot of the world is operating in this direction, with a lot of excitement and energy. So I think things can happen even faster now.

Speaker 1 7:41
Yeah. You also had a tweet recently that piqued my curiosity, and which also led to this ask. It said that AI is a rare example of an extremely hyped thing that almost everyone still underestimates the impact of, even in the medium term. I'm curious: what are the high-probability, non-consensus examples of this in the medium term?

Sam Altman 8:12
I think everyone is like, you know, AI is the new hot thing, I get it, it's cool, it generates these images for me. And people even say, okay, maybe it's going to do all cognitive labor, or 90% of cognitive labor, at 1/1000th of the cost, or whatever. But if you actually stop and think about what that means, and then think even further than that: okay, once AI learns how to do science, and the rate of scientific discovery, of scientific progress, goes up by a factor of 1,000, what does that mean for the world? People may say the words, but they certainly don't seem to be acting like we're heading towards that kind of world very quickly.

Speaker 1 9:03
That's actually fascinating. On the one hand, AI today is an amplifier for humans and can assist humans in adding to the wealth of scientific knowledge. But there's another world that you're describing where AI is adding to scientific knowledge on its own. Do I understand that right?

Sam Altman 9:25
I actually don't think it matters whether it's fully autonomous or whether it's helping humans. What matters is whether the pace of scientific discovery is 1,000 times faster than in the world today. If it is, that has breathtaking consequences for society, whether it's fully autonomous or not.

Speaker 1 9:46
Yeah. And what do you think the limiters would be in that world? Is the rate of adding to scientific knowledge then really going to be limited by physical things? Or do you think...

Sam Altman 9:58
I should add, by the way: we said earlier on this call that the cost of intelligence trends toward near zero. That's what I hope happens. But maybe there are a bunch of other reasons why electricity gets super expensive, GPUs get super expensive, and there's just a market price for intelligence, and it doesn't fall as fast as we'd hoped because the inputs just rise in price like crazy. I don't think that's what's going to happen, but it could. But assuming it doesn't, think about how much society has changed in the last few hundred years.
Now, there are just some limits to how fast I think society can change, and to how quickly people and institutions update and do things in new ways. You see that people get pretty set in their ways, not always in negative or bad ways, but that could happen. But imagine that all of the scientific progress from the beginning of the Enlightenment until now happened in a one-year period. And it's an exponential curve, so it's really steep over the next 500 years or whatever. What happens if you compress everything that we're doing on the current trajectory over the next 500 years into one? Chaotic, for sure. Society has a hard time adapting, for sure. But we can cure all disease, we can travel to the stars, we have unlimited power. Who knows? But that's something that I think, yeah, is in the underhyped category.

Speaker 1 11:47
Yeah. I know that OpenAI does a lot of work around societal structures as well, right? How does society need to evolve? What are the top three things, or even the top thing, you're thinking of to help society adapt to this rate of change? Because what you're talking about is the rate of change increasing year by year, and society, even as it is today, has had a hard time adapting to the rate of change of the last decade. Theoretically, if you end up adding all of the scientific knowledge of the past several hundred years in one year, that changes things, right? So I'm curious.

Sam Altman 12:26
There are a lot of things people have been talking about a lot that I'm not going to rehash: how do we reskill people for new jobs, how do we build resilience, how do we build better judgment so people know whether they're looking at fake, generated content or not, all of the stuff that people have already discussed. Rather than rehash those, even though I think they're important and huge, I'll talk about three other things that I think are under-discussed. Number one: I think we're really just going to have to think about how we share wealth in a very different way than we have in the past. The fundamental forces of capitalism, and what makes it work, I think break down a little bit. So is it some version of a basic income, or basic wealth redistribution, or something like that? We're trying to study that; I think it's collectively under-explored. And again, I think people just haven't internalized what happens if the playing field shifts this much, so they assume a small tweak on capitalism will work. Number two is how access to these systems works. The resource that I think will matter most in a world with AGI is who gets time slices to use the AGI, and what we decide it gets used for, stuff like that. I think that's just going to require thinking about access to that limited resource in a very different way. And the third one is governance: who decides what you get to use the systems for, what the rules are, what they will and won't do. How do we get to global agreement, treaties, whatever it's going to take, on that topic? How do we agree on what the set of values is going to be? So those are three things that I think are going to be really important and that we're not well suited for.
Speaker 1 14:26
Yeah. That last point is really interesting, on the governance piece. How does it not turn into "whoever has the biggest guns gets to control how this type of system is governed"?

Sam Altman 14:42
I think there are a bunch of examples of why it's not going to go that way. I think the nuclear mutually-assured-destruction rollout was bad for a bunch of reasons, but it's still interesting to study, and it's even more interesting to study the paths we didn't take there that were also considered. But when you have something that threatens to upset that global equilibrium, there's just a lot of possibility there.

Speaker 1 15:16
And you said something else that was really interesting, around how the basic assumptions of capitalism, of a capitalist society, start breaking down, especially as you have this AGI that takes off. Can you expand on that a little bit more? Is it that you think most of the problems will effectively be solved, or that if there is a problem, the AGI can solve it, and the cost of intelligence is getting to near zero? So the assumptions we have had about a past society of entrepreneurs and capitalists creating value, all of that changes?

Sam Altman 15:50
It is that weird. I mean, if you let yourself think out to a really far sci-fi dream world, you can be like, okay: AGI, start a new trillion-dollar company. And it can be the CEO, it can figure out how to go raise capital, it can do all of the engineering, it can do the marketing, it can do the distribution, whatever. You type this one thing into a prompt, press enter, and it goes off and does that. That's weird. And how we think about a world like that is not super obvious.

Speaker 1 16:23
Yeah. I mean, the theme of the event is the next decade, but I had a question about the next century. What does this look like?

Sam Altman 16:30
A century is too far. I don't know how to predict that; it's in the fog of war.

Speaker 1 16:35
Yeah. But digging into a bit of what you're saying, a thought I had was: what if you had an AI that can actually, from the prompts and frustrations people are sending it, determine what product a company needs to design, write the code, and ship it? Where are people actually needed in that value chain? You just have this system that keeps feeding on itself and goes, "I am here to serve humanity, and I will solve any and all problems for humanity, and I'll capture some value to pay for my energy."

Sam Altman 17:13
Fine, but then who gets the excess value created? Who gets that, exactly? Under our current system, I don't think it would quite work.

Speaker 1 17:23
Yeah. Whoever puts in that first line that says "build a trillion-dollar company." And related to this, what are the quality-of-life impacts that you think we're going to see? Assuming we figure out the governance mechanisms, and we figure out a lot of the hairier questions around who gets the end value, putting that aside: I've heard you say we're going to have the ability to have incredible educators,
a very personalized education system for each person, and medical care. What are some of the other things that you think will be significantly better in a world with AGI?

Sam Altman 18:11
I think there's probably nothing I can say here that is new and insightful. I kind of believe it's all going to get a lot better, really quickly, and it won't always be in the ways that we think. If you had asked people what they wanted from an AI a year ago, even the people who now spend all day every day generating images and finding all these great use cases for it probably wouldn't have said that. They might have said, "I want a better search engine," or something. But also, before we launched Copilot, no one believed models could write code this well. So I kind of think the answer is: when we see something useful that we think the technology can do, we're just going to try to put it out there in the world and hope it keeps getting picked up.

Speaker 1 19:00
Makes sense. So it's a little bit more emergent than anything else, right? We're essentially discovering new ways of using the technology, rather than imagining some far-out future and saying this is exactly what it's going to turn into. That makes sense. And then how do you think about the value chain? Getting a little bit more into startup creation and value creation around AI, how do you think about what that value chain ends up looking like? Is it that there's one company, like OpenAI, that everyone feeds into for the model? Is there a middle layer where you have more specific models? And then what happens at the tail end of that; is it really just a consumer interface that sits on the model? How do you think about the value chain evolving?

Sam Altman 19:46
My guess is, well, there will be areas where you can go far with small models, image generation for example, and those will just widely proliferate. My guess is that the most powerful models will be quite large, and there will be a relatively small number of companies in the world that can train them. But then the value that is built on top of those, with fine-tuning or whatever else, will just be absolutely tremendous.

Speaker 1 20:16
And what are some examples of the fine-tuning? Why is that at a different layer than the bigger model?

Sam Altman 20:22
Once you have trained a pretty good base model, let's say you now have a pretty good general-purpose text model, but what you really want is a legal model, an AI lawyer, basically. You could go train from scratch something that would try to do everything a first-year legal associate could do, but that would be hard for a bunch of reasons: one is that the model wouldn't have learned basic reasoning, and another is that it wouldn't have all the world's knowledge of everything else. But if you start with this model that knows everything a little bit, and then give it just a little bit of data to push it in the direction of being a really good lawyer, I think that's a much easier path, and it's sort of what I would expect more of.

Speaker 1 21:07
Got it. And as you think about the base model and what it needs to learn, relative to the more specific model, what's the difference?
What's an example of something the base model just needs to learn, because it's the base for everything? And then, the first-year legal associate was a good framing, what's an example of something that model needs to learn specifically?

Sam Altman 21:29
Well, it needs to be familiar with, I don't know, all case law, for example. And it needs to know the standard kinds of things that first-year associates are expected to do, and it needs to have practiced drafting documents, getting comments back from a partner, incorporating those, and everything like that.

Speaker 1 21:48
Got it. So the base model is basically trained on everything: "I understand the whole world." And then the specific model is "I understand the whole world, and now I need to understand the legal world." That's how you think about these more specific models and how they get trained. Got it. Fascinating. So, going back now to one of the first questions I asked: if you're a student or a founder today, and you could point them in a single direction for how to prepare for this world, would you say go work for a company like OpenAI? Would it be to just start doing some research, or just start doing anything in the field?

Sam Altman 22:24
Start building, come to OpenAI, whatever. Just don't miss out on this one. Speed now will be mine.
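As a loose illustration of the fine-tuning approach described above (start from a general base model that "knows everything a little bit" and push it toward a specialty with a small amount of domain data), here is a minimal sketch using the Hugging Face transformers and datasets libraries. The base model (gpt2), the legal_examples.jsonl file, and the hyperparameters are placeholder assumptions for illustration only; this is not OpenAI's actual training setup or API.

```python
# Illustrative sketch only: adapt a small pretrained language model to a
# domain-specific corpus (e.g. legal drafting examples). Model name, data
# file, and hyperparameters are placeholders, not from the interview.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # stand-in for a much larger general-purpose base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# A small JSONL file of domain text, e.g. {"text": "Draft a clause that ..."}.
dataset = load_dataset("json", data_files="legal_examples.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Causal language-modeling objective: the collator builds labels from input ids.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="legal-assistant",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)

trainer.train()
trainer.save_model("legal-assistant")  # base model nudged in the legal direction
```

The point of the sketch is the division of labor described in the interview: general language ability and world knowledge come from pretraining the large base model, and the fine-tune only has to supply the comparatively small amount of domain-specific behavior, such as case-law familiarity and document drafting.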