OpenAI CEO Sam Altman has openly welcomed the idea of Artificial General Intelligence (AGI), often discussed in the same breath as superintelligence. As the head of the company behind ChatGPT, he has shown a strong focus on creating this kind of advanced technology. Even though experts like Microsoft’s AI chief, Mustafa Suleyman, have issued multiple warnings about the serious risks and the need for caution around AGI, Altman does not seem worried. During a recent Y Combinator podcast, the startup accelerator’s president and CEO, Garry Tan, asked Altman what he is most excited about for 2025. He said that working toward AGI is “probably the thing I am most excited for ever in life.”

“AGI. Excited for that. All I am excited for. More than my kid, more excited for that. Probably that’s the thing I am most excited for ever in life,” Altman noted.

This comes after Suleyman has repeatedly warned about the risks of building superintelligent AI without clear limits. Microsoft’s AI chief has been firm that raw power should not be the primary focus in the race to create advanced systems. “We can’t build superintelligence just for superintelligence’s sake,” Suleyman warned, stressing that it must be developed in a way that benefits people. He said: “It’s got to be for humanity’s sake, for a future we actually want to live in. It’s not going to be a better world if we lose control of it.”
What Sam Altman said about OpenAI’s AGI bet
Commenting on OpenAI’s AGI bet, Altman said: “We said from the very beginning we were going to go after AGI, at a time when in the field you weren’t allowed to say that because it just seemed impossibly crazy, borderline irresponsible to talk about. We really wanted to push on that, and we were far less resourced than DeepMind and others, so we said, okay, they’re going to try a lot of things and we’ve just got to pick one and really concentrate, and that’s how we can win here. Most of the world still does not understand the value of a fairly extreme level of conviction on one bet; that’s why I’m so excited for startups right now, because the world is still sleeping on all this to such an astonishing degree.

“We realised that AGI had become this badly overloaded word and people meant all kinds of different things. We tried to just say, okay, here’s our best guess, roughly, of the order of things.”

Explaining OpenAI’s levels of AGI, Altman said: “You have these level one systems, which are these chatbots. There’d be level two that would come, which would be these reasoners; we think we got there earlier this year with the o1 release. Three is agents, with the ability to go off and do these longer-term tasks: multiple interactions with an environment, asking people for help when they need it, working together, all of that. I think we’re going to get there faster than people expect. Level four is innovators, like a scientist; that’s the ability to go explore a not-well-understood phenomenon over a long period of time and just kind of go and figure it out. And then level five is the sort of slightly amorphous one: do that, but at the scale of a whole company, or a whole organisation, or whatever. That’s going to be a pretty powerful thing.”
How far is OpenAI from building AGI?
Talking about how far OpenAI is from developing AGI, Altman noted: “This is the first time ever where I felt we actually know what to do. I think from here to building an AGI will still take a huge amount of work. There are some known unknowns, but I think we basically know what to do. It’ll take a while, it’ll be hard, but that’s tremendously exciting. I also think on the product side there’s more to figure out, but roughly we know what to shoot at and what we want to optimise for. That’s a really exciting time, and when you have that clarity, I think you can go pretty fast. If you’re willing to say we’re going to do these few things and we’re going to try to do them very well, and our research path is fairly clear, our infrastructure path is fairly clear, and our product path is getting clearer, you can orient around that super well. We did not have that for a long time. We were a true research lab, and even when you know that, it’s hard to act with conviction on it because there are so many other good things you’d like to do. But the degree to which you can get everybody aligned and pointed at the same thing [AGI] is a significant determinant in how fast you can move.”
