Sam Altman's Latest Interview: Zuckerberg Failed to Lure OpenAI Researchers with $100 Million (Full 10,000-Word Transcript)

June 18, 2025

Key Points:

1. Sam Altman believes that the core competitiveness in AI is not burning money but a culture of "repeatable innovation." Zuckerberg offered $100 million signing bonuses to poach OpenAI researchers, but none of them accepted the offers and switched jobs.

2. OpenAI's development path is the opposite of most tech companies': it was first an excellent research company and only later added other businesses. Most tech companies first become well-managed tech and product companies, and only later bolt on a poorly managed research department.

3. Sam Altman argued that ChatGPT makes users like themselves more, while some social media platforms make people feel worse and become worse versions of themselves. This is not just a difference in product experience, but a reflection on technology's impact on human nature and society.

Editor's Highlights

On June 17, OpenAI CEO Sam Altman made a rare appearance on his brother Jack Altman's podcast. During the conversation, Sam Altman revealed the fierce talent competition among Silicon Valley tech giants and introduced OpenAI's unique innovation culture and future ambitions.

The core competitiveness of AI is not burning money, but a culture of "repeatable innovation"

When talking about competition with Meta, Sam Altman's words exuded confidence and even a hint of disdain. He acknowledged that it was rational for Meta to consider OpenAI as its biggest competitor, but he poured cold water on Meta's innovation capabilities and poaching strategies.

"Meta's current AI work is not achieving the expected results." Sam Altman bluntly pointed out that Meta's investment in AI is not proportional to its wealth and hinted that Meta has not yet found a truly effective path in the AI field.

Sam Altman publicly revealed for the first time the "astronomical" offers Meta made to poach top OpenAI researchers: "They started making these huge offers, like a $100 million signing bonus, with even more in compensation annually." But Sam immediately added that none of OpenAI's best people accepted the offers and switched jobs.

"The incentive mechanism prioritizes mission, followed by other financial rewards." This is the key to retaining talent at OpenAI, as explained by Sam Altman. He believes that Meta's strategy of large-scale upfront investment to guarantee compensation will make talents focus on compensation rather than the mission itself, which will not create a good cultural atmosphere. OpenAI has successfully built a "mission-first" culture, making researchers believe that OpenAI will be the first company to achieve AGI and ultimately be more valuable.

OpenAI's road to success lies in transitioning from a research lab to a great product company

Sam Altman admitted that OpenAI's development path is the opposite of most tech companies. "We were an excellent research company and later added other businesses. Most tech companies first become well-managed tech companies and product companies, and later add a poorly managed research department." OpenAI started as a pure research lab and gradually shifted towards productization and commercialization.

The path to productization is full of challenges. Sam Altman acknowledged that OpenAI is still relatively new on the product side; although it is getting better, building such a large-scale product company in just two and a half years remains difficult. Compared with cultivating top AI researchers from scratch, finding people with commercialization experience to help develop products and businesses is relatively easier.

Transitioning from research to products requires a huge organizational transformation and cultural integration. Sam Altman hinted that OpenAI is successfully completing this transformation and moving towards "a great product company." This transformation requires not only technical strength but also a deep understanding of products, markets, and user needs by leaders.

The debate on AI values: Making you "like yourself more" or "feel worse"?

In the interview, Sam Altman mentioned a very interesting and profound viewpoint: ChatGPT can make users like themselves more, while some social media platforms make people feel worse and become worse versions of themselves. This is not just a difference in product experience but also a reflection on the impact on human nature and society.

Sam Altman quipped that Google always tries to serve worse search results while pushing more ads; Meta tries to hack the human brain to keep people scrolling; and Apple's phones, though great, constantly send notifications that distract people. He criticized the traditional social media model directly: it feels like endlessly scrolling through negative news online, making you feel worse and become a worse version of yourself. ChatGPT, by contrast, makes users like themselves more.

Sam Altman envisioned a "really cool, aligned version" of social interaction, where users prompt the AI to help them achieve long-term goals (e.g., "I want to get healthier" or "I want to learn more about current events"), and the AI provides content from a neutral perspective rather than through rage-baiting algorithmic recommendations.

This is also why OpenAI acquired the company of former Apple Chief Design Officer Jony Ive to build intelligent hardware. In an era of information explosion and emotional overload, a tool that genuinely helps users achieve their goals and improve themselves will be far more valuable than ones that merely provide entertainment or information. This is not only an innovation in business models but also a raising of the ethical bar for future digital life.

The future of AI: AI will truly discover new science, and everyone will have an AI companion

Beyond competition, Sam Altman also laid out OpenAI's grand vision for the future of AI, one spanning tremendous changes from basic science to daily life.

AI will truly discover new scientific knowledge. Sam Altman believes models have already cracked reasoning and can now reason like an excellent PhD student. He predicts that within the next decade, AI will conduct scientific research autonomously and discover new physical phenomena, with breakthroughs perhaps coming first in astrophysics, where there is a huge amount of data.

AI companions are the ultimate consumer need. Sam Altman envisions that consumers will eventually have an AI companion that lives ambiently in their lives, helping them through various interfaces (possibly including new devices), understanding their goals and information, and even proactively pushing content or quietly observing and learning.

Building the entire industrial chain of AI factories. He believes the world needs to build out the whole AI supply chain, what he calls the "AI factory", or even a "meta-factory", because it can create copies of itself. This includes energy, hardware, and more.

Below is the full transcript of the interview:

I. AI Discovering New Science

Jack Altman: Today, I'm with Sam. Sam, do you have anything to say before we start?

Sam Altman: You're really my podcast brother now.

Jack Altman: I want to start by talking about the future of AI, the medium-term outlook, because the short term isn't that interesting to me, and in the long run, who knows? What I'm most interested in is the next five to ten years, and I want to try to dig your best guesses on a bunch of specific things out of you. One place I want to start is software. It seems like the most effective use cases so far, and I'm curious if you agree, have been programming, then chat, then programming again. I'm curious what will come next, what kind of new things will emerge, and then what will happen right after that?

Sam Altman: I think there will be incredible things. Just as with other products, there will be crazy new social experiences. There will be AI workflows, similar to Google Docs, that will be much more efficient. You'll start to see, for example, virtual employees. But I think the most impactful thing in the next five to ten years is that AI will truly discover new scientific knowledge. It's a crazy statement, but I think it's true. And if it's correct, then over time I think it will far surpass everything else.

Jack Altman: Why do you think it will discover new science?

Sam Altman: I think we've cracked the reasoning ability in models. We still have a long way to go, but I think we know how to do it. You know, o3 is already pretty smart. You'll hear people say, "Wow, it's like an excellent PhD student."

Jack Altman: What does cracking reasoning ability mean?

Sam Altman: These models are now capable of the kind of reasoning you would expect of a PhD student in a specific field. In a sense, we're like, "Oh, okay." These AIs are now among the world's top competitive programmers, or can score highly in the world's hardest math competitions, or can solve the kinds of problems I would expect a PhD student in my field to solve. And we're not particularly surprised, although it's crazy, it's really a crazy thing, how far the reasoning ability of models has come in the past year.

Jack Altman: Were you surprised?

Sam Altman: Yes, definitely surprised.

Jack Altman: You thought it would just be, like, the next-token kind of thing.

Sam Altman: I thought it would take longer to get to where we are now. The progress on reasoning in the last year has been faster than I expected.

Jack Altman: Did things develop as you expected?

Sam Altman: As often happens in OpenAI's history, sometimes the dumbest initial approach works. So I guess I shouldn't be surprised anymore, but I'm still a little surprised every time.

Jack Altman: So will reasoning ability make scientific progress faster, bring new things, or both?

Sam Altman: Both. I mean, you've already heard scientists say they work faster with AI. We don't yet have AI conducting scientific research autonomously, but if a human scientist uses o3 and becomes three times more efficient, that's still a pretty significant improvement. And as this process continues, AI will be able to conduct scientific research autonomously and discover new physical phenomena.

Jack Altman: Is all of this happening now as a co-pilot?

Sam Altman: Yes, it's definitely not "Hey, ChatGPT, help me figure out new physics" and expecting it to work. So I think it's currently a co-pilot. But I've heard anecdotal reports from some biologists along the lines of, "Wow, it really came up with an idea that I still need to develop further, but it laid the groundwork for a leap." Yes.

Jack Altman: Which would be easier: having AI help you build a complete business, like a full e-commerce business, or having it do a harder scientific project? Or are they about the same difficulty?

Sam Altman: I'm thinking about what would happen if you could pair AI with a particle accelerator worth hundreds of billions of dollars, where the AI makes the decisions: it looks at the data, tells us what experiments we should do, and we go find the materials and execute. So do you spend a hundred billion dollars doing that, or do you spend a hundred billion dollars building IT infrastructure that connects the economy? Which is more likely to produce extraordinary achievements? I think physics is a cleaner problem. If you can get new high-energy physics data and then have AI run the experiments, that's a purer problem. I've heard people say, and I don't know if this is accurate, that they expect the first area of science where AI enables autonomous new discoveries will be astrophysics, because there's a huge amount of data and we don't have enough PhDs to analyze it all. Maybe discovering new things there isn't that hard, but I'm really not sure.

Jack Altman: Okay, so science will get better, and coding and chat will also continue to progress. Will this lead to progress in business? Could you complete an entire business just with prompts? Like, can you just say, "Help me build this type of business, and this is what it will look like," and it happens?

Sam Altman: People are already doing small versions of this. You'll hear stories of people using AI to do market research, then develop a new product, email manufacturers, make something silly, then sell it on Amazon and run ads for it. Some people have actually figured out, on a small scale and in the most boring way, how to put a dollar into AI and have it run a toy business, and it actually works. So that will scale up along the gradient. Yes.

II. Humanoid Robots Are the Future

Jack Altman: What about the physical world? Because, I mean, it seems obvious to me that software is headed in this direction. Science I understand less, but I'll take your word for it. What about moving physical objects, for example?

Sam Altman: It's a bit behind, but I think we'll get there. For example, I think we have some new techniques that might do better than any existing methods just for autonomous driving in standard cars. That's not the humanoid-robot sense you're talking about, but if our AI technology can really drive cars, that's still very cool. Humanoid robots are obviously the dream. I really care about that, and I think we'll eventually get there. It's always been a difficult mechanical engineering problem; that's the bigger issue. Both parts are hard, but even if we had a perfect brain right now, I don't think we have the body yet. Actually, we worked on a robot hand very early at OpenAI, and the difficulties we ran into were all for the wrong reasons: the thing would always break, the simulator was a bit off. But you know, we'll get there. I think we'll have great humanoid robots in five to ten years. It's amazing. They'll be walking around the streets like humans, doing all kinds of things.

Jack Altman: I mean, you would think there'd be a ton of breakthroughs, right?

Sam Altman: I think it will be a moment that doesn't just unlock a bunch of stuff in the world; I think it will feel the weirdest. We get used to a lot of things. We got used to ChatGPT, and doing these things five years ago would have sounded like magic. But if you walk down the street and half of what you see is robots, do you get used to that immediately? I don't know. You probably do. But it feels very different.

Jack Altman: It'll feel like a new species is taking over.

Sam Altman: Yeah. I don't think it'll feel like a new species or like being taken over, but I think it'll feel like the future, in a way that ChatGPT still doesn't. I also think that if we can find amazing new ways to compute, and new devices get built, those could feel like the future. Even though ChatGPT is incredibly impressive, and these new coding agents are incredibly impressive, they still feel like the past in their form factor.

Jack Altman: Yeah. It's also trapped in a computer.

Sam Altman: Yeah. There's definitely something to that. It can only do things on a computer, but I don't know what it is.

Jack Altman: Like, how much of the economy, of all the value in the world, do you think is cognitive labor that can be done behind a computer?

Sam Altman: About half. I was going to say a quarter, but maybe half. I don't know, but it's definitely a big number. Yeah.

Jack Altman: Does it get more dangerous once we have super embodied intelligence, because those things will also be way more powerful than us?

Sam Altman: I'm not sure it gets way more dangerous. I think, like, with the ability to create a bioweapon or destroy an entire country's power grid, you can already do pretty destructive things without physical tools. It gets more dangerous in absurd ways. Like, I would be afraid of having a humanoid robot walking around my house that might fall on my baby, unless I really, really trusted it.

III. Superintelligent World

Jack Altman: Let's say ten years from now we come back here and have this conversation again. What would we ask? Did AI live up to our expectations? What metrics would you look at? Was there an inflection point in the GDP growth curve? Did life expectancy increase? Was poverty reduced? Or something completely different?

Sam Altman: So every year up until maybe last year, I would say, hey, I think this is going to go really far, but it still feels like we have a lot to figure out. At this moment I feel very confident, the most confident I've ever felt, that we roughly know how to get to incredibly powerful AI systems. If something goes wrong, I'd say it's in this sense: we build true superintelligence, but it doesn't make the world much better; things don't change as much as that sounds like they should. It sounds crazy. Yeah. I don't know if I told you in 2020, maybe I did, that we would have something like ChatGPT that would reach the intelligence level of a PhD student in most domains, that we would deploy it, and that a significant fraction of the world would use it, and use it a lot. Maybe you would have believed that, maybe not, but conditional on it, I bet you would have said, okay, if that happens, the world will look way more different than it does now. Yeah. So it's like we have this crazy thing. Yeah.

Jack Altman: I mean, there's something like the Turing test, you know: everyone now treats it as a thing of the past, and no one really cares. I don't know how to explain this.

Sam Altman: The fact that you can have this thing do these amazing things for you, and yet you live a life roughly similar to two years ago. Your work is roughly similar to two years ago.

Jack Altman: Do you think it's possible to have that crazy superintelligence, like an IQ of 400, and we're still in that state?

Sam Altman: I totally think that's possible. If it starts discovering new science for us, eventually society will find ways to cope, but it could be very slow.

Jack Altman: It's interesting: if it looks like a co-pilot, you might still give the credit to the pilot, you know, the human in the lab using the 400-IQ agent.

Sam Altman: I think you would anyway. Humans are inherently concerned with other humans. We need characters in the story, you know?

Jack Altman: We need to talk about that guy who did that thing, or made this decision, or made this mistake, or whatever. That's why I'm surprised you don't think that, once we have a super lifelike embodied robot, we'll start projecting some of these things onto the robot.

Sam Altman: I think we will. We'll see. I could be wrong. I think we'll have more of those relationships than we do now; it's more concrete. But I think we're deeply, inherently concerned with other humans, and that will ultimately prove to be a beautiful, deep piece of biology: if you know it's a robot, no matter how human-like it seems in other ways, you might not care as much. That's just a guess.

IV. Medium-Term Predictions

Jack Altman: So reasoning is one component of intelligence that has kind of been figured out. Is there another thread running through this, like agency, or the concept of self-direction? Is that really a distinct thing?

Sam Altman: The ability to work towards goals over a long period of time with a lot of complex factors, I think that's probably the step in the process that you're gesturing at. Yeah, exactly. That's definitely what we're working towards.

Jack Altman: What does the technology roadmap look like going forward? Would you say it's inevitable at this point? What parts do you still feel uncertain about how it will unfold?

Sam Altman: I think we'll get extremely intelligent and capable models, able to discover important new ideas and automate a lot of work. But then I'm completely mystified about what society will look like if that happens. So I'm most interested in the capability question, but I think maybe by this point there should be more discussion about how we make sure society captures the value from it. Those questions have become more complex and less clear in some ways. I mean, it's a crazy statement to make, that we'll solve the superintelligence problem but society could still suck. I'm almost speechless saying it, but it feels right to me.

Jack Altman: I can't always tell whether people don't react to certain statements because they just kind of believe them; maybe that's part of it, but I agree. I mean, that's just how a lot of things in history have gone. Like what Gates said: people didn't quite believe it at first, and then it happened, and people adapted. So I don't know how to interpret those either.

Sam Altman: I think our technology predictions are pretty accurate. And then I somehow thought society would feel more different than it does if we really delivered on these promises. But I don't even think that's necessarily a bad thing.

Jack Altman: One of the more obvious impacts in the short term might be, would you say, the impact on employment? It seems like the kind of thing where we don't need to believe in crazy leaps to see that there will be some impact; it's very obviously going to happen in customer support now, for example.

Sam Altman: My view is that a lot of jobs will go away, and many jobs will change dramatically. But we've been really good at finding new things to do, new ways to occupy ourselves, new status games, new ways to be useful to each other. I don't believe that stuff runs out. I do think that, from our current perspective, the new things may look sillier and sillier. Like, podcast brothers, that's not really a job, is it? Sure, you figured out how to make money, you did great, we're all happy for you. We're very happy. But would a farmer look at this and say, this is a job, or is this you playing a game to entertain yourself? I think they'll subscribe to the podcast. I bet they will. They'll like it. But I do think there's a big problem here in the short term. In the long term, who knows?

Jack Altman: I mean, one of the things I'm curious about: from the time when everyone was a farmer, nothing we do now would have seemed to make any sense, and now you have all these things. Is this time different, if there are enough resources to go around? Like, at some point, are resources abundant and distributable enough that people just stop creating new jobs? Vinod was on the podcast last week. We're going to release this one first, so his won't be out by then, but his view is that people will just consume more leisure time. He feels that this time around, resources will be very abundant. Everyone will have what they need. We'll be able to build, you know, buildings, and people can just enjoy their lives.

Sam Altman: I think the relativity framework is important for us. I bet someday, when I'm retired, I'll miss this and feel like, oh, this is kind of boring now; that was really cool. It's really important. It feels very fulfilling. I feel incredibly grateful to be doing this, and I enjoy it almost all the time, but... God, it's all-consuming, overwhelming, very intense. Really, I think I've been more in the trenches than I ever imagined I would be. It's not really what you set out to do in the first place. I mean, most of the time when someone starts a software company, they expect it to be a software company. I don't think you expected there to be this much stuff. This was supposed to be my retirement job. I was supposed to go run a small research lab. Yeah.

Jack Altman: I mean, there are obviously many worlds where this doesn't happen. Besides liking it, besides the time you've spent, do you feel like it's heavy and important, or is it more of an interesting, playful puzzle?

Sam Altman: Both at the same time; it's very important. I feel like that's true. Clearly, from a societal-impact perspective, or at least potential impact, this is the most important and impactful work I've ever been involved in. I don't want to be too self-congratulatory, but maybe this will be historically significant work. That's how it feels when I have time to step back and think about it. In the day-to-day, it feels like dealing with a bunch of smaller things, and I find a lot of joy in the small things. I really like the people I work with, and being involved in so many things is really fun. Some parts are very intense and painful, but day to day it feels more like an interesting puzzle than important work.

Jack Altman: You also get to talk to anyone you want. That's cool. You can think about anything you want.

Sam Altman: I can't think about anything I want. I have some constraints. I feel like a pre-trained model that just woke up in the morning: I have an hour to myself, and then I'm live, and stuff comes at me that I have to react to.

Jack Altman: Okay, fair enough. I guess you could at least input into the model that everything's on the table, even if you can't control how much time you get to put into it.

Sam Altman: I don't think it feels that way. I mean, it feels like there's some short period before the day really starts where I have some autonomy to make decisions about things, and then it's pretty passive. Interesting. Yeah, and given the speed at which things are developing, I'm afraid there's not much way around it. I'm trying to find one. It's hard.

V. OpenAI Hardware

Jack Altman: Yeah, I guess you know what you're getting into when you keep hiring experts and hoping to reach a certain stage. What about your own experience? Because what's interesting to me is, you know, obviously I don't see you any differently, and in a way I think it's really cool, but you've gone from that kind of tech-echo-chamber fame to regular New York Times fame. Is there any upside to that, or is it all downside?

Sam Altman: I think there are a lot of upsides to being a tech celebrity. I think it's the perfect degree of it. Like, if you're a tech celebrity, you can kind of…do a lot of interesting things. You can meet anyone you want. It's like that feeling. You can kind of encourage people to do things. There are a lot of great opportunities. It feels good. I mean, I don't feel like I'm famous like a celebrity. That would really suck, but I already feel like I can't live a completely normal life. It has a different texture.

Jack Altman: Like you'll never be as cool as Tom Cruise.

Sam Altman: But I like, you know, I can still walk down the street.

Jack Altman: I didn't mention this to you, but we were at the Exploratorium last weekend and in the gift shop, someone said, "The CEO of ChatGPT is here." That was nice. My son still calls ChatGPT Siri, and I find that really interesting. I think we'll still be talking about that.

Sam Altman: Yeah, I talked to Siri this morning. It was really funny when you called this weekend. He was like, "Are you really, really? Are you really, really?" Yeah. The inventor of Siri, right here.

Jack Altman: Speaking of kids, given the trajectory that things are on, do you think about how that changes your view of what kids should be learning, or changes what you intend to teach your own kids? My kid rolled over yesterday. That was pretty cool. Yeah, I was impressed.

Sam Altman: Brain power. Incredible. I don't think so. I mean, these things will never feel weird to him, right? He's going to grow up in a world where computers are simply smarter than him. He'll figure out how to use them very fluently to do amazing things, and it'll just seem normal. That's good. Hopefully, that's how it goes.

Jack Altman: When you go back to YC after all this, how does it feel? Because obviously you were at YC, so that's a whole chapter of your life. Looking back now, how do you feel about it? Does it feel quaint?

Sam Altman: You know, none of that. I love YC.

Jack Altman: I mean, I didn't mean it negatively. I meant, like, a simpler-time kind of feeling. A little bit nostalgic for that feeling.

Sam Altman: It's really fun to always go back and talk to that group of people. But, yeah, it's super nostalgic.

Jack Altman: I just feel like that was the purest version of Silicon Valley that I've seen.

Sam Altman: I completely agree. You know what I mean? A hundred percent. It was just so joyful. It was sincere and positive and vibrant and happy, and it clearly worked really well.

Jack Altman: Let's talk about OpenAI, okay? Okay. Thanks. So with OpenAI, we have a consumer-facing business so far. Obviously, there's a B2B business. Jony Ive is involved in hardware. And a bunch of other potential things that seem up in the air. Um, can you talk about what the potential is? What the full setup is, or at least the setup for a certain period of time?

Sam Altman: Yeah, I think what consumers ultimately want is an AI companion, for lack of a better word, that lives in the ether that's helping them in all these ways through all these interfaces and all these products, and it understands you and your goals and what you want to accomplish and your information. Sometimes you're typing into ChatGPT. Sometimes you're using a more entertainment-oriented version. Sometimes you're using other services that are integrated with our platform. Sometimes you're using our new devices, but what you'll have is this thing that will help you do anything you want to do. Sometimes it's pushing content to you, sometimes you're asking it questions, sometimes it's just there, observing and getting better for the future. But that's ultimately, I think, how I feel about it. We don't quite have the right word for it yet. My AI companion is the best I can do right now.

Jack Altman: Do you think the form factor we have now, like computers, is the wrong one, something we're all just stuck with right now?

Sam Altman: Yeah, I'd go so far as to say it's wrong; I think we're not at the optimal state yet. We've basically had two revolutions in the form factor of computers, the interface, or whatever you want to call it, and I think both were really important. I mean, there were earlier ones, a long time ago, but I wasn't paying attention at the time. In our lifetimes, there was the computer with a keyboard, mouse, and monitor, which was really great and versatile. And then there were the touch devices you carry around with you. Those are, honestly, the primary ones, and both are limited by not having AI. So there are things you could build, or introduce, or leave out, given this incredible new technology, that might get you closer to the kind of computer that exists in science fiction.

Jack Altman: It would be the same intelligence just in a new form factor that lets you use it in different ways.

Sam Altman: Yeah, but the form factor really matters, because it's with you all the time; that's probably one reason it matters so much. If it's with you all the time, full of sensors, and it can understand what you mean and what's going on, tracking a lot of things; if you believe that with a very simple instruction you can make complex things happen, and happen correctly, then you can imagine all sorts of very different devices.

Jack Altman: What are some of the other components you're thinking about right now? Obviously, there's the consumer chat product. There's the API that startups everywhere are using. And then there's this device. What are the other big pillars?

Sam Altman: I think the most important one is something the world hasn't really thought about, which is that this becomes a platform: everything integrates into it, and it's seamlessly integrated everywhere, so when you're using other things, when you're in your car, or on other websites or anything, it's just perfectly coherent. I think that's going to be really important. There are whole new types of things that can be built: whole new takes on productivity, whole new takes on social entertainment. But I think that universality is going to be one of the deciding factors.

Jack Altman: Intelligence has such a powerful influence over all of this, and it sits on top of all these layers of the stack. You've even talked about energy, and you're obviously very involved in the energy space; there's a lot in between, plus hardware and all these other things. How important is it, whether for OpenAI or for countries, to have this whole suite of technologies that OpenAI is working on?

Sam Altman: Super important. I mean, I think countries, or the world, whether it's countries or whatever, need to think differently about everything from electrons to ChatGPT queries; there's a lot in between. I started calling it the AI factory, because it's a factory that can make more copies of itself. But regardless, we have to complete the whole supply chain, around the world.

Jack Altman: Is it important for OpenAI to do a lot of that work?

Sam Altman: I mean, I think vertical integration can be good in some ways, but it's not essential for us to do all of it ourselves... if we can just make sure the whole process happens at sufficient scale, then in many areas we can drive tremendous progress through partnerships.

Jack Altman: And there's little risk of losing part of it.

Sam Altman: Yeah.

Jack Altman: In terms of energy, are we going to consume a ton of energy? Is that basically the only end state?

Sam Altman: I mean, I certainly hope so. I think the most relevant pattern is that, throughout history, quality of life has improved as energy became more abundant. I have no reason to believe that's going to stop.

Jack Altman: Is there any sort of climate thing that you're worried about, or do you think all of this will just be solved and not worth worrying about at all?

Sam Altman: I think nuclear fusion will happen, and new ways will emerge.

Jack Altman: How confident are you in nuclear fusion? Are you fully confident?

Sam Altman: I never say absolutely, but pretty confident, pretty confident; I'd put it at a large percentage. And next-generation fission is also great. I know the most about one company called Oklo, but I think there are other companies doing great work as well, and that's a huge win. Solar and storage too. But I hope that eventually humans consume vastly more energy than we can produce on Earth. Even if we go fully to nuclear fusion, if you scale up the amount of energy used on Earth by 10 times or 100 times or whatever, you're overheating the Earth from waste heat alone. Yeah. But we have a huge solar system out there.

Jack Altman: Doesn't all the stuff we're talking about imply that space is both very important and more possible? In general?

Sam Altman: Yeah. Like, we're going to go to space. I hope so. It would be a shame if we didn't.

Jack Altman: This seems like a fun one. Shouldn't you start a rocket company? I told you, I think you should start a rocket company. There are so many things I think you should do. I don't know why not.

Sam Altman: What else is on the list?

Jack Altman: I have so many things for you. I would do rockets. I would do social. I would basically do as many as possible.

Sam Altman: Why not? I kind of like doing one thing. I like my family. It's pretty busy already. Yeah.

VI. Competition Between Meta and OpenAI

Jack Altman: Speaking of social, actually, can I ask you about Meta? What's going on over there?

Sam Altman: Listen, I've heard that Meta thinks of us as their biggest competitor, and, you know, I think it's rational for them to keep trying. Their current AI work is not working as well as they hoped. I respect the aggressive, try-new-things attitude, and, again, I think it's rational; I expect they'll keep trying new things if this doesn't work. I remember hearing Zuck talk about, you know, Google in the early days of Facebook: it was rational for Google to try social even though it was obvious to people that Facebook was the way to go. I have a similar feeling here. But they started making these huge offers to, you know, a lot of people on our team, like a $100 million signing bonus and more than that in compensation every year. It's actually crazy. And I'm really glad that, so far, our best talent hasn't decided to take them up on it.

I think people look at the two paths and say, okay, OpenAI is actually more likely to achieve the goal of superintelligence and will probably ultimately be the more valuable company. And I think a strategy of a lot of upfront guaranteed compensation, making that the thing you tell people to join for, rather than the work itself or the mission, doesn't create a good culture. You know, I want us to be the best place in the world to do this kind of research. I think we've created something really special culturally for that, and our setup is such that if we succeed, and many of the people on our research team believe we will, or that we have a good chance, then everyone will do very well financially. That incentivizes mission first, with the economic rewards and everything else flowing from that. So I think that's good.

I have a lot of respect for Meta as a company in many ways, but I don't think they're a company that's good at innovation. I think what's special about OpenAI is that we've succeeded in creating a culture that's good at repeatable innovation. We have a deep understanding of many things they don't understand about what it takes to succeed. But I don't know; in a sense, I think this has been clarifying for our team. Good luck to them.

Jack Altman: I guess this might come down, to some extent, to how much you think copying the AI work done to date is sufficient, versus how much innovation is still ahead.

Sam Altman: I don't think it's sufficient. A lot of people at Meta seem to just want to copy OpenAI. I mean, if you look at what a lot of other companies are doing, chat apps look like ChatGPT, even down to the bugs in the interface, and research work is just trying to get to where we are now, which is crazy. That's a lesson I learned at YC: basically, that approach never works. You're always chasing where your competitor already is, and you never build a culture of knowing what innovation looks like. I think that's a lot harder than people realize, and once you're in that state, the challenges are much deeper.

Jack Altman: How do you balance both? Because it's not common to have a company that's both very commercial and very research-oriented at the same time; there aren't too many examples of that. I understood how you did it before, but now you're really commercial too, you have both, and it still works.

Sam Altman: We're still pretty new on the product side. We're still pretty new. It wouldn't have worked if we hadn't fought for it. We're doing okay; we're getting better and better. The history of many tech companies is that you start a well-run tech and product company, and then later bolt on a poorly-run research department. We're the exact opposite, the only example I know of: we started as a great research company and then bolted on the rest. It was poorly run at first, but the product company is getting better and better. I think ultimately we'll be a great product company, and I'm really proud of the team's efforts there. But it was only two and a half years ago that we were just a research lab. We had to build this whole huge company, and what the people there are doing is amazing. ChatGPT launched on November 30, 2022.

Jack Altman: I mean, it's easier to get people together who know how to build a company than those who know how to do that kind of research.

Sam Altman: It's still hard. Most companies that build products at that scale have had way more than two and a half years to do it. Yeah.

Jack Altman: Why does Meta view you so competitively? I obviously understand that they might just see AI as the whole game, and maybe that's enough of an explanation.

Sam Altman: Just because people who used to work at Meta have told me that, in the rest of the world, people think of ChatGPT as a replacement for Google, but inside Meta, people think of ChatGPT as a replacement for Facebook, because people are spending so much of their time talking to it, having conversations with it in a way they can't elsewhere, and preferring it as a place for their attention and focus. So of course there's competition for time, but it's more than that. It's the feeling of endlessly scrolling through negative news online: it might feel good in the moment, but it makes you feel worse, it makes you a worse version of yourself. And one of the things we're really proud of is that when people talk about ChatGPT, they say, actually, I like myself better when I'm talking to it; it's helping me, it's helping me achieve my goals. One of the best compliments I've ever heard about OpenAI is someone saying it's the only company whose products don't feel somewhat adversarial. You know, Google's always trying to show me worse and worse search results and push more ads. I love Google, I love all these companies, and I don't think this is totally fair. Meta is trying to hack my brain to keep me scrolling. Apple makes this phone that I love, but it's constantly sending me notifications that distract me from other things, and I can't quit it. And then you have something like ChatGPT... I feel like it's just kind of trying to help me with anything I ask, and that's a nice thing.

Jack Altman: Is there a way of doing social that keeps the interactive, human component but is also actually good for you?

Sam Altman: One version I'm curious about, though I don't know exactly what it would mean, is: could you have a feed with no default settings, where you prompt it and say, hey, I want to get healthier, can you show me some things that would help with that? Or, I want to learn more about current events; can you show me content from a neutral perspective that won't make me angry? But do I actually want to do this? It would obviously get less watch time than algorithms that incite anger. Still, I think that would be a really cool, aligned version: AI helping you get the long-term social experience you actually want. I don't know. I feel like every morning when I wake up, I'm this recharged person who knows what I really want out of life, and I have great intentions and can commit to doing things that day, and then by like 10 PM I'm like, "ugh," I wasn't going to drink tonight, but I'll have a whiskey; I wasn't going to look at TikTok, but I'll scroll for two minutes. You shouldn't work this hard. I agree. But if I could always be the version of myself that wakes up in the morning, if I could have technology that helps me do what I actually want to do, I think I'd be great.

Jack Altman: I lived with you, like, a decade ago, when you were running YC. And even then, I would say you were very dominant in the sense that you just did what you wanted to do, and there were no rules. But since then, especially recently, it feels like there really aren't any rules: the Stargate thing, bringing Fiji into the company, Jony. Honestly, there's a lot. I'm curious if there are any psychological updates or anything you can point to or share that lets you operate that way.

Sam Altman: Our grandma used to say, oh my god, one of the great things about getting old is that you stop caring what other people think. I've had that feeling too, you know? And I've just been in the trenches long enough. But I do think there's something liberating about getting older and caring less about what other people think.

Jack Altman: Are there things that you hesitate on? Like, could you have acted in a more autonomous way?

Sam Altman: Have I ever had the thought, I wanted to do that, but something stopped me? It's usually a practical constraint. With more resources and more potential, we could do so much more. There are still a lot of things I really want to do, like build a Dyson sphere around the solar system and, you know, build the world's largest data center harnessing the full power output of the sun, but obviously we can't do that now; that's decades away. But I think there's a lot more we can do now; we're really capable of doing a lot more now.

Jack Altman: How do you choose when to do things? Because the other problem is the curse of choice. You could launch rockets, you could do a social network, you could go all in on whatever you want, you could go all in on robots. How do you make decisions when you have so many choices?

Sam Altman: I have almost no bandwidth for anything else right now. And I never wanted to run even one business, let alone many. I mean, I thought I'd just be an investor. I thought you'd be an investor too. Life was good. Life was great. I wanted to do this well, and I kind of trusted that other people would start great rocket companies.

Jack Altman: Would you say, on balance, that you generally really enjoy it? Has it exceeded your expectations?

Sam Altman: I feel incredibly grateful and lucky. I have no doubt that someday in the future, when I'm retired, I'll miss it and be like, oh, this is kind of boring now; that was really cool. It's been really important. It's been really fulfilling. I feel incredibly grateful to be able to do it, and I've enjoyed it almost all the time, but... God, it's just all-consuming, overwhelming, intense. Really, I think I've been more in the trenches than I ever imagined I would be. It's not actually what you set out to do. I mean, most of the time when people start a company like this, they expect it to be a software company. I don't think you expected there to be so much of this. This was supposed to be my retirement job. I was supposed to go run a small research lab. Yeah.

Jack Altman: Alright, let's change the subject. I don't like where this is going. Let's talk about the intro.

Sam Altman: One of the benefits of brothers is they can really call you out on your shit, which is really helpful.

Jack Altman: I appreciate it, it's awesome. Mom, I hope you're watching this. This is so funny. Thanks Sam.

Sam Altman: Thank you, Jack.
