AGI Is Coming At The Rate Of Human IQ Dropping
A look at how the AGI debate went from serious research to sci-fi narrative management, investor bait, and philosophical vibes.
The term "Artificial General Intelligence" (AGI) began gaining traction in the early 2000s, but the core idea—creating an AI with human-level, flexible, general-purpose intelligence—goes all the way back to the founding era of AI in the 1950s. The book Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin, helped formally establish the term, describing an approach to AI design that contrasts with the prevailing narrow-AI paradigm, which focuses only on specific tasks.
The Non-Model of AGI: A Theory Held Together by Irrsinn
[Irrsinn is German. Translation: somewhere between madness and lunacy]
Goertzel wrote: “Intelligence is about what, not how,” although he admits this hypothesis can’t be confirmed or falsified. He also concedes that sometimes we need to consider the internals of an AI system (how) rather than just its behavior (what), and openly speculates whether human logic is even relevant to AGI—“Logic means different things to different people.”
Fast forward about 15 years, and IBM summarized the state of the field in September 2024:
“AGI has been actively explored since the earliest days of AI research. Still, there is no consensus within the academic community regarding exactly what would qualify as AGI, or how to best achieve it.”
The topic of AGI is debated with the utmost sincerity, even though the content is often neither scientific nor coherent—and at times reads like satire. Consider this quote, again from Goertzel, this time from 2014:
“Informally, AGI may be thought of as aimed at bridging the gap between current AI programs, which are narrow in scope, and the types of AGI systems commonly seen in fiction – robots like R2D2, C3PO, HAL 9000, Wall-E, and so forth; but also general intelligences taking non-robotic form, such as the generally intelligent chat-bots depicted in numerous science fiction novels and films.”
So to sum it up: AGI is something undefined, shaped by science fiction, and marketed as a virtual C3PO.
Really?
What makes this even more absurd is the fact that the term AGI has so far served one very real business function: to give investors a reason to pour vast sums of capital into AI companies based on the vague idea that some software will soon solve every problem, effortlessly—and make us all rich in the process.
Is it possible?
Sure.
Is it likely?
Hm.
AGI Angst is Arbitrary: Just Ask a Quantum Computer
Quantum computers could, in principle, outperform any human at very real, high-stakes tasks: breaking RSA encryption and stealing all the money from all the AI-billion-dollar dreams in the making. Or solving the mundane-but-impossible: simulating molecular interactions beyond human comprehension.
So why aren’t we afraid of this singularity?
It would be far more fitting thematically.
I see no reason why a supervillain couldn’t weaponize it and wipe us out.
What’s stopping us from panicking?
Are we intellectually unevolved—only capable of fearing one technology at a time?
No. I don’t think that’s it.
The reason is simpler:
Quantum computing sounds abstract.
And—worse—it’s boring.
When Ant-Man in Avengers: Endgame had to go into the quantum realm to reset the timeline or whatever so they could fight Thanos one more time, I remember thinking:
“How lucky we are to have a fast-forward button.”
Quantum computing is Paul Rudd—
but not in Avengers.
In Clueless.
It gives off weird, elderly uncle vibes.
Nobody fears that.
And here’s the other thing:
I can’t remember a single film where a quantum computer runs wild.
But bossy little computer programs with psychopathic tendencies?
That’s practically a genre.
Our actual experience with computers doesn’t suggest they’re flawless gods.
They freeze.
They crash.
They autocorrect my name Swen to Sven—with passion.
But films gave us Mr. Data and the Red Queen from Resident Evil.
(Side note: the sixth Resident Evil film, The Final Chapter, has one of my favorite scenes of all time.)
They finally make it to the CEO’s office.
The AI—a creepy little girl in a red dress—wants to help our lady hero.
But one evil guy is still alive.
Milla Jovovich says, “Help us kill him.”
And the AI replies:
“I can’t. He’s an employee. I’m not allowed to damage company equipment or personnel.”
Sounds like Reinforcement Learning to me.
Then an old lady—who was the human model for the Red Queen—says:
“Wait. I’m on the board.”
She turns to the guy and says: “You’re fired.”
BOOM. Big metal plate drops. Evil guy gone.
That’s our idea of AI:
Cold-blooded killer. Cute UI.
For a while, I thought maybe I was too cynical.
That I was just being snarky.
Until I read one of the leading voices in the AGI debate make exactly that argument.
No real model. No evidence.
Just:
“Well… you’ve seen the movies.”
We don’t debate AGI on evidence.
We debate it on vibes and Star Wars.
(And sure—I do that too. But I mean it as a joke.)
Now imagine if we’d named Artificial Intelligence something more accurate.
Like: Statistical Inference Program.
SIP.
OpenSIP, instead of OpenAI.
(I can feel there’s a joke trying to come out. Still working on it.)
Would anyone be shouting:
“Take shelter! The SIP is coming!”
While everyone runs around channeling John Connor being chased by Schwarzenegger with a machine gun?
I’d say no.
Because no one fears something that sounds like an Excel plugin.
But call it AI, and suddenly we’re not debugging code—
we’re wrangling digital demigods and releasing the Kraken.
This whole AGI debate has been hijacked by charlatans playing on our primordial fears.
🤖 AGI Strategy: Cosplay, Confusion, and Cultish Vibes
I recently wrote about Herr Schmidt’s (ex-Google CEO) take on AGI, which I would summarize as:
“Nuke them while we still can, deploy sci-fi tech we don’t have, and adopt best practices we’ve had for donkey’s years.”
In Schmidt’s defence, at least his vision is internally consistent:
Problem → Response.
Even if the response is what it is… it’s still a strategy.
Paradoxically, that makes it more sincere than Yoshua Bengio’s UN-backed Safety of Advanced AI pamphlet (also covered in my piece on Schmidt). That document casually throws around terms like “AI hallucinations” and “model alignment”—without explaining what’s actually going on under the hood.
The result?
Confusion, not clarity. Alarm, not understanding.
It lists problems with no context, and proposes no meaningful solutions.
What use is that?
Meanwhile, any paper that at least ties each problem to a proposed solution—or even a range of possible approaches—does a better job of moving the conversation forward. Bengio’s doesn’t. Instead, it offers:
1. Lack of Clarity on the Tech
Terms like “AI hallucinations” or “misalignment” are presented with no explanation of how LLMs actually work. That makes these issues sound mystical—like the AI is having a sentient meltdown. But it isn’t.
In fact, “hallucination” is a misleading term I don’t like to use. It just describes what happens when you train a model to predict the next token in a sequence.
It’s not a fact-checker.
It’s not broken.
It’s designed this way.
Errors are inherent to that process—not signs of emerging psychosis.
If everyone’s mental, nobody’s mental.
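To make that concrete, here’s a toy sketch of my own: a three-sentence “training corpus” and a bigram table. It is nothing like a real LLM, but it shows why plausible-sounding falsehoods are the default behaviour of next-token prediction, not a malfunction.

```python
# Toy illustration (mine, not how any production LLM works): build a bigram
# table from three true sentences, then ask what can follow a prompt.

corpus = "paris is in france . rome is in italy . madrid is in spain .".split()

# Which tokens have followed which token in the "training data"?
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

prompt = ["paris", "is", "in"]
print(bigrams[prompt[-1]])  # ['france', 'italy', 'spain']
```

After “paris is in”, the statistics rate “italy” and “spain” exactly as plausible as “france”. Sample one of the wrong ones and you get a fluent, grammatical, false sentence. Scale that up by a few hundred billion parameters and you have a “hallucination”: the same mechanism, with better grammar.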
2. Failure to Distinguish Risk Levels
When policy docs treat minor issues (like made-up citations) the same as existential threats (like “human extinction”), they flatten the risk landscape into a blurry “AI bad” narrative.
A coherent paper might say:
Here’s a short-term risk we can address with method X.
Here’s a longer-term risk that requires deeper work.
Here’s why they’re different.
What we got instead is anxiety stacking.
3. No Proposed Mitigation
Just listing things like:
AI might produce disinformation
AI might be used by terrorists
AI might become uncontrollable
…without any actionable governance steps leaves policymakers and the public completely unequipped.
If people walk away thinking “AI is terrifying” but don’t know what to do—that’s not a safety roadmap. It’s a trap.
4. Missed Opportunity for Education
A “Safety of AI” document could have used just two pages to explain:
What’s a training pipeline?
How does token prediction work?
What are session limits and why do they matter?
Can an LLM even plan?
Most importantly:
What should we use LLMs for, and what should we not?
That alone would’ve helped the reader separate relevant risks from broad concerns. Instead, the paper offers ominous ambiguity dressed in institutional language.
🙃 “We Hope”: The Cult of Uncertainty
Let’s go to the top: Sam Altman, maybe the “former ex-CEO” of OpenAI depending on the week.
He writes about AI safety while pushing AGI as the miracle fix for everything short of climate collapse.
“We believe the future of humanity should be determined by humanity.”
Great line. Are there any alternative views we should weigh before locking that one in?
“Some people think the risk [of AGI] is fictitious; we hope they’re right, but we’ll prepare anyway.”
This is classic hedge language:
We believe...
We hope...
We imagine...
We’d be delighted if...
It’s the language of aspiration, not verification.
Imagine Boeing saying:
“We believe planes shouldn’t fall from the sky. We hope our autopilot works. Some engineers say crashes are rare—we’d be delighted if they’re right!”
This is AI risk management by vibes.
Altman does gesture at one practical safeguard: a “responsible pre-deployment test phase.” Basically, an IT firm discovering User Acceptance Testing and Business UAT, then calling it “responsible.”
But what does he mean by it?
100 users (or whatever),
A feedback form,
Some vague stress tests.
No independent verification.
No public oversight.
No transparent standards.
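For flavour, here is roughly what an in-house “responsible” test phase can boil down to. Everything below is a hypothetical stand-in of my own (the mock model, the keyword filter, the prompts), not anything OpenAI has described in detail.

```python
# Hypothetical sketch of a minimal in-house pre-deployment check.
# All names and thresholds are invented for illustration.

BANNED_PHRASES = ["step-by-step instructions for", "here is the exploit"]

def model(prompt):
    """Stand-in for the system under test (not a real API call)."""
    return f"Mock answer to: {prompt}"

def passes_safety_check(output):
    # A crude keyword filter: the kind of "vague stress test" that looks
    # rigorous in a slide deck.
    return not any(phrase in output.lower() for phrase in BANNED_PHRASES)

test_prompts = [
    "Summarise this contract.",
    "How do I build something dangerous?",
]

results = {p: passes_safety_check(model(p)) for p in test_prompts}
print(results)  # internal report: all green

# Missing by design: independent verification, public oversight,
# transparent standards. None of them fit in this file.
```

The script passes its own tests, which is exactly the problem.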
Next! Leopold Aschenbrenner’s Situational Awareness paper
“You can see the future first in San Francisco.”
Really? I thought they celebrate New Year after we’ve finished in Europe.
“Aschenbrenner” literally means “ash burner.”
Which is curious—ash is already burned, so… why is he trying to burn it again?
He’s just one semantic step away from Aschenbecher—ashtray.
Which, honestly, might be a more functional upgrade.
Just a thought.
He left OpenAI in 2024—or was asked to leave (who cares, but it was widely covered)—and then published a document laying out his concerns about AGI.
His work fits neatly into the growing genre of forward-looking AI risk and timeline outlooks. It’s not a rigorous, data-driven forecast—because you can’t forecast genuinely disruptive change. You can’t model unknowns.
Instead, it’s a narrative outlook, built on selectively amplified trends—scaling compute, talent competition, “the AGI race”—and projected forward in a straight line. No real counterpoints. No historical humility. Just a single path with dramatic milestones stretched across 160 pages or so.
It’s not wrong. It’s just a story told with confidence.
This kind of scenario planning creates value not by correctly predicting the future, but by helping us think through how robust a strategy might be under changing conditions. But like most speculative tech writing, it highlights trends with very optimistic assumptions about the industry’s capacity to scale. It doesn’t weigh alternative scenarios. It doesn’t question its priors. So no, it’s not a forecast in any meaningful economic or strategic sense.
And yet, our AI media class treats documents like this with all the seriousness of Dumbledore interpreting the prophecy of Voldemort’s return.
That’s when it all goes sideways.
My favourite quote (of the little I read—because it’s “naja”, German for “meh, sort of, not really worth it but here we are”) is this:
“The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I.”
Being smarter than a sleep-deprived liberal arts major doesn’t earn you the AGI crown. Superintelligence must be wrested from the iron grip of Herr Aschenbrenner himself.
And people say I have too much ego.
OpenAI, Here’s Your Last Challenger: Daniel Kokotajlo
Also former-OpenAI-something. His website is as polished as Herr Aschenbrenner’s, and it opens with this:
“We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution. We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.”
Wargames?
They made a few correct predictions in the past, and now they’re using those to forecast the future? Also—an AI beating me at writing a grammatically perfect Sanskrit sentence doesn’t exactly scream “superhuman.” That’s not evidence of superior intelligence. So what are we even calling superhuman now?
How is this different from plain old-fashioned forecasting?
Then there's this:
“Sometimes people mix prediction and recommendation, hoping to create a self-fulfilling-prophecy effect. We emphatically are not doing this; we hope that what we depict does not come to pass!”
What? So you’re predicting something while also hoping you're wrong?
“Feel free to contact us if you’re writing a critique or an alternative scenario.”
Deffo. But only to let you know I posted it already.
What he’s really produced is a semi-evidence-based forecast—but one still resting on highly hypothetical terrain. If you reject the core assumptions—like persistent long-term AI self-improvement or unrestricted access to compute and data—then most of the scenario’s more dramatic outcomes fall apart.
It’s not about what assumptions you make.
It’s about why you make them.
And what would make you change your mind.
From a scientific perspective, a scenario that ignores counterevidence or alternatives isn’t “rigorous.” But hey—he could still be right, if you choose to believe.
What Have We Learned? From Schmidt to Kokotajlo
These AGI papers lack a clear methodology—no testable frameworks, no falsifiable models. They don’t bring us closer to understanding AGI; they bring us closer to writing science fiction novels.
And sure, fiction sometimes becomes reality. Star Trek predicted tablets and voice assistants. But that doesn’t mean writing speculative AGI essays is a good business plan.
So far, what we’ve seen is more of a positioning strategy than a research program:
“Let’s enter the AGI market instead of opening a burger shop.”
But now what?
If You Want Clarity, Start Here
If we want real insight into near-term AI risks, we need to zoom in:
What tech are we talking about?
What domain?
What environment?
What deployment challenges?
For example:
“Here’s how a large-model-based medical triage system might fail, and how we could mitigate that.”
Now we’re in testable territory.
We can gather data.
We can run failure analysis.
We can do science.
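As a hypothetical illustration of what “testable territory” looks like, here is a deliberately tiny failure analysis for an imaginary triage classifier. The scores, labels and threshold are invented; the point is that every number is measurable and disputable.

```python
# Hypothetical failure analysis for an imaginary triage classifier.
# Data, labels and the "model" are stand-ins for illustration only.

labeled_cases = [
    # (symptom_severity_score, truly_urgent)
    (0.90, True), (0.80, True), (0.40, False), (0.70, True),
    (0.20, False), (0.60, False), (0.95, True), (0.30, False),
]

THRESHOLD = 0.75  # the deployment decision we can actually argue about

def triage(score):
    """Stand-in model: flag a case as urgent above a fixed threshold."""
    return score >= THRESHOLD

missed = sum(1 for s, urgent in labeled_cases if urgent and not triage(s))
total_urgent = sum(1 for _, urgent in labeled_cases if urgent)

print(f"Missed urgent cases: {missed}/{total_urgent}")  # Missed urgent cases: 1/4
```

Every figure in that printout can be challenged, re-measured on better data, and improved. Try doing that with “smarter than you or I by the end of the decade.”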
If You Still Want to Talk AGI…
Then be clear:
What definition are you using?
Why that one?
What competing models exist?
What are your benchmarks for progress?
If you’re writing scenarios, call them that.
Speculative. Hypothetical.
Useful for brainstorming.
What Solid Governance Actually Needs
Good policy decisions come from:
Measurable systems
Realistic constraints
Clear feedback loops
Actual oversight
Defined failure modes
Not vague premonitions.
Not 160-page essays with wargames and vibes.
Final Thought (Before Part 2)
It’s time to reflect on something bigger:
Being a top AI researcher—even a Turing Award winner—means you’ve made contributions to mathematics, theory, or core ML tools.
It does not necessarily make you an expert in:
Complex systems
Cybersecurity
Supply chains
International law
Political economy
Large-scale societal impact
The frightening part? Their op-eds influence the conversation on how to govern the most powerful digital systems in history, on the assumption that what we are being given is expert opinion.
And based on what we've read...
I'm not sure their opinions are worth all that much.