AGI and Paperclips in 2025
With significant developments in artificial intelligence over the last couple of years, most notably in large language models from companies such as OpenAI, Anthropic, and Google, we’ve had more discussions, and more worries, about artificial general intelligence (AGI).
Exactly how soon we should expect AGI to be achieved has been a major topic of heated discussion in rooms filled to the brim with natural intelligence, all over the world. The fact of the matter is that we don’t really know. There are many estimates, of course, and many people have their own, often very strong, opinions and timeframes in mind, ready to defend them in the comments section under every AI article, not unlike this one.
I have no strong opinions on the matter, at least not in the sense of being convinced that AGI will arrive in the next decade, or never. Or that AGI will bring the end of humanity as we know it, or even lead to our extinction. It might, but that is true of many things, and the future of AGI is so unclear at the moment that it doesn’t make much sense to me to worry about its potential catastrophic effects on the entire human population of this poor planet.
Is It Hot, or Not?
On the other hand, it is undeniable that artificial intelligence, and consequently artificial general intelligence, is a hot topic. Companies are tripping over themselves to bring new, better AI-enabled products to market, and the behemoths of AI are laser-focused on developing new capabilities for their always-evolving, always-growing, and ever more energy-hungry large language models (LLMs). They are so laser-focused, in fact, that when recent reports brought to light some concerning findings about LLM performance, including degrading performance in long contexts, model drift over time, and diminishing returns from pure scaling, you could almost see the smoke coming off their headquarters.
And very recently, something new has come out of this smoke. Sam Altman, the OpenAI CEO, claimed in an interview with Y Combinator in November 2024 that the age of AGI is near. And when Sam says “near,” he actually means next year: 2025. This is huge. Upon hearing this, those rooms filled with natural intelligence surely tore themselves apart, as people could not agree on just how visionary, or delusional, Sam Altman really is.
But that’s not all Sam has said. He has also made it clear that the new AGI, coming to your life in 2025, is not going to be the transformative singularity many expected. Instead, he expects artificial general intelligence to arrive in phases, getting better with every iteration, gaining features and ironing out problems. For those of us who have been steeped in AI safety discussions for years now and have also read Superintelligence by Nick Bostrom (spoiler: there are a lot of assumptions in that book), this is rather anticlimactic.
Broken Promise of AI Domination
AGI was supposed to be huge, loud, and impactful, turning us all into paperclips. This is not what we dreamed it up to be. Iterative? Improving over time? It was supposed to be a singularity, with the moment of turning on the AGI for the first time being a point of no return. It was supposed to be the point where AI would take over, replicating and improving itself in a matter of days or weeks, reaching superhuman intelligence sooner than you can forget about the gym membership from that New Year’s resolution you were so excited to start. Not another corporate product with a roadmap and a subscription attached to it.
The new narrative is that AGI is just a stepping stone. The real prize is superintelligence, which is apparently different enough from AGI that we can move the finish line without anyone noticing. It’s brilliant, really. By the time we get to superintelligence, they’ll probably tell us that the actual goal was always super-duper-intelligence, or mega-ultra-intelligence, or whatever comes after that. It’s turtles all the way down, except the turtles are increasingly ambitious definitions of artificial intelligence, each one conveniently just out of reach.
Meanwhile, back in the actual world of AI development, things are getting… complicated. Those energy-hungry LLMs we mentioned earlier? They’re running into some issues. Performance degradation when contexts get too long, models that drift over time, diminishing returns from just making everything bigger. It’s almost like throwing more computing power at the problem doesn’t magically solve intelligence.
So What Are We Actually Getting?
If we take Altman at his word—all his words, the exciting ones and the mundane ones—what are we actually looking at?
We’re getting better AI tools. Much better, probably. We’re getting AI that can help you code, write, analyze data, generate images, and do all sorts of useful things. We’ll get AI agents that can handle increasingly complex tasks. Your future AI assistant might actually be able to book that dentist appointment and reschedule when you inevitably want to cancel it at the last minute.
This is genuinely useful stuff! It’s just not… you know… the singularity.
It’s not going to turn you into paperclips. It’s not going to solve all of humanity’s problems overnight. It’s not going to create a post-scarcity utopia by Tuesday. It’s going to be a really powerful tool that gradually changes how we work and live, with all the messy, complicated, unpredictable consequences that come with any powerful technology.
Kind of like how the internet changed everything, but also somehow we still have to go to work and pay taxes and deal with all the normal human stuff. Except now we can do it while arguing with strangers on social media and doomscrolling through an endless feed of content that makes us vaguely anxious. Progress!
The Paperclips Are Safe (Probably)
Remember the paperclip maximizer? That delightful thought experiment where an AI optimizes paperclip production so efficiently that it converts the entire universe, including you, into paperclips? Yeah, that’s not happening in 2025.
The AGI we’re getting—if we’re getting it—doesn’t have the agency, the goal-directedness, or frankly the interest in turning everything into paperclips. Or into anything else, for that matter. Current AI systems are basically very sophisticated “what would the internet say next” machines. They’re not plotting. They’re not planning. They’re not nursing a secret obsession with office supplies.
They’re just… predicting tokens. Really, really well.
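To make “predicting tokens” concrete, here is a minimal sketch of that single step, assuming the Hugging Face transformers library, PyTorch, and the publicly available gpt2 checkpoint (illustrative choices on my part, not a description of any particular product). It simply asks a small language model which tokens it considers most likely to come next after a prompt:

```python
# Minimal next-token prediction sketch.
# Assumes: transformers, torch, and the public "gpt2" checkpoint
# (illustrative choices, not any specific commercial system).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The paperclips are"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]        # scores for the next token only
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)             # five most likely continuations

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

That is the whole trick: score every token in the vocabulary, pick one, append it to the text, and repeat. Chain enough of these steps together with a big enough model and you get something that looks remarkably like conversation. No plotting required.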
This doesn’t mean we’re out of the woods on AI risks. There are plenty of real problems to worry about: misuse, bias, misinformation, economic disruption, concentration of power in the hands of a few tech companies. You know, the boring, complicated, systemic problems that don’t make for good sci-fi movies but actually matter.
But the robot uprising? The sudden emergence of superintelligence that views humans the way we view ants? That particular nightmare scenario seems to be receding into the “maybe someday but not next Tuesday” category. Which is good news for those of us who were planning to still be around next Wednesday.
Wrapping This Up Before It Gets Too Long
So here we are. Sam Altman says AGI is coming in 2025. Sam Altman also says it won’t matter that much. The internet is confused. The AI safety people are confused. The investors are presumably also confused but are too busy counting money to care.
What’s the truth? Probably something like this: we’re going to get more impressive AI systems that can do more things, but the revolutionary transformation of human civilization is going to continue to be “coming soon” for the foreseeable future. The apocalypse is postponed. The utopia is delayed. We’ll all just keep muddling through with slightly better tools and slightly more anxiety about whether our jobs will exist in five years.
And you know what? That’s probably fine. Maybe even good. Because the alternative—a sudden, discontinuous leap to something we can’t predict or control—was always terrifying regardless of whether it ended in paperclips or paradise.
The AGI of 2025, assuming it arrives at all, will probably look a lot like GPT-5 with better marketing and a shinier interface. It’ll be impressive. It’ll be useful. It’ll make some people very rich and other people very nervous. But it won’t be the singularity.
The paperclips are safe. The world will keep turning. And somewhere, Sam Altman will be preparing his next announcement about how we’re definitely, totally, for real this time, almost at the next big breakthrough.
Until then, we’ll all be here, refreshing our feeds, waiting for the future to arrive, and occasionally wondering if maybe we should just get back to work instead of worrying about whether the robots are coming for our jobs.
They probably are, eventually. But not today. And when they do, they’ll probably need a subscription and won’t work properly on weekends.