It Is Not Normal to Say AI Is a Normal Technology
A review of Columbia's Knight Institute AI pamphlet, where pontification replaces thinking and vibes stand in for argument.
"AI as Normal Technology" is a recent article written by
and , published under the bold motto “debunking AI hype.” Grand.“When everything is normal, nothing is worth thinking about.”
The same article also appears under the Knight First Amendment Institute at Columbia University—a body dedicated to defending free speech and public education.
What is it?
A bigheaded TED Talk-style op-ed in PDF form—published under the noble banner of free speech and public education.
In summary, I can’t decide whether to feel offended by how completely it betrays the ideals it claims to pursue—or just sad, for the exact same reason.
They call it a vision. It’s a pretty dumb one:
“We articulate a vision of artificial intelligence (AI) as normal technology. To view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are ‘normal’ in our conception.”
Let’s pause. Electricity is not a technology. It’s energy. We harness it through technology.
And since their fine theory is that we should label all technology "normal technology" for no reason other than that it sounds less confusing, we're in branding territory, not theory.
Let me help:
A camera and a digital camera are both “normal.” Yet the distinction is useful—say, when you’re trying to order one on Amazon.
Apparently, they’d also lump energy infrastructure under this term. So maybe it’s time for a full rebrand:
United Nations Atomic Energy Commission (UNAEC) →
United Nations Normal Technology Commission (UNNTC).
Because if everything’s normal, nothing is.
When your theory begins and ends with “let’s just call it normal”, you don’t have a theory.
You have something very unique and very unusual.
It needs its own category to do it justice.
“The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.”
If “normal technology” includes everything from the internet to nuclear power to AI, then what’s left?
Telepathy?
Sorcery?
A toaster that writes poetry? And why wouldn't that be normal?
Such statements are absurd. They prevent falsification, because anything that exists or will exist can be retroactively declared "normal." The framing offers no tools to distinguish between technologies we can ignore and those we can't.
This is not a theory. It's just a way of saying that things are the way they are and stuff happens.
It’s what you say when you’ve run out of ideas and still want the mic.
The scandal is that the paper lists half a page of names—dozens of people, some from top institutions—who were supposedly consulted or gave feedback on the draft.
At that scale, it’s not an oversight. It’s institutional dysfunction.
If nobody among them could see how conceptually hollow this is—how it presents a "theory" that says nothing, predicts nothing, explains nothing—then we have to ask what kind of intellectual standard is actually being applied.
This would make it a world-beating new low in public discourse.
Their not-so-normal idea also says:
“It rejects technological determinism, especially the notion of AI itself as an agent in determining its future. It is guided by lessons from past technological revolutions.”
This is secular dogma dressed in cardigan-speak.
They reject novel AI risk not by disproving it, but by declaring it inconsistent with past experience, as if I could predict the destructive power of nukes from my experience firing cannons from my medieval castle.
What they’re really doing is appointing themselves as the AI papacy, issuing comfortingly vague encyclicals to reassure the masses:
“Thou shalt not worry. AI is normal. It didn't kill us in the past, when it wasn't yet invented, and that means it has no impact now that it is here.”
When something defies rational thinking, we call it mysticism: it becomes a question of faith and cannot be judged by rational argument.
“The downsides of which will be likely to mirror those of previous technologies that are deployed in capitalistic societies, such as inequality.”
In proper economic or financial analysis, you'd never say "we live in capitalism" as if that were a meaningful unit of measure. The term only carries meaning in a specific context:
Free-market capitalism
Shareholder capitalism
State capitalism
But it's not a form of political organization. Karl Marx popularized "capitalism" as a critical framework.
He didn't use it to describe a type of economy; it named a historical stage in a dialectical process, defined by exploitative conditions.
Yet even then, countries did not set out with the objective of exploiting their people; exploitation was the outcome of how they organized themselves. The United States of Exploitation?
Second, inequality is not unique to Western democracies. It exists in every form of society—feudal, communist, tribal, imperial—you name it. And why a debate on AI technology requires moral posturing is unclear.
Third, why would absolute equality be desirable? For example, if I worked 10 hours and you worked 5, and I earned twice as much, we have created inequality. One might say this is fair—but that’s not a logical conclusion, it’s a moral one. The idea that rewards should be proportional to effort is not derived from logic, but from a value judgment about what feels fair or just. It may be widely accepted, but it remains a philosophical stance, not a formal necessity.
You must agree with me, because we also look favorably on concepts like sick pay—where someone does no work but still receives money. This breaks our simple rules engine. Humans are exceptionally good at conceptualizing conditions and circumstances, so that “time worked” becomes not a raw metric, but something judged in context—relative to the potential a person has to give.
A sick person who cannot work at all is understood to be fulfilling their part by working zero hours. The next person might be unwell but still contribute a little. These coordination problems are not the exception—they are the rule. In fact, staying functional when everyone is an edge case is one of the central challenges of governance.
They write: "Our societies only seem to know one way to deploy technology: to produce more inequality." But that is not the issue. The real issue is that we have not found a way to allocate resources among people equally and fairly. If society has 1,000 members and we have 1,000 units of a resource, then solving for equality is trivial. A computer could allocate them perfectly, as long as we accept a few conditions:
Time doesn't matter to us.
If I receive one car in January and you get one in December, I have access to a car all year while you don't. If we consider this unequal, then we would need to rewrite the laws of the universe to remove constraints like the time required to build a car. Physics would have to go.
Having nothing is better than having something but less than someone else.
Take this example: Christmas is coming, and all 1,000 citizens want to visit family. But we only have one aircraft that can carry 250 passengers. If everyone has equal money, we still cannot solve the problem. We use the price mechanism to signal demand. If we optimize purely for equality, the solution becomes: cancel the flight.
In such cases, we may distribute fractional resources and let people negotiate—a system that inevitably reintroduces inequality. Everyone gets 0.25 tickets, and now you must convince three others to give you their shares. Some will—perhaps a mother who doesn’t want to fly alone gives her share to her child, or someone trades services for extra tickets. We preserve the principle of equal allocation, but people differ in how much they value a resource (e.g. being with family). That’s why you can’t escape inequality.
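A toy simulation makes this concrete. The sketch below is entirely my own illustration, not anything from the paper: the population size, the seat count, and the lognormal valuations are assumptions. Everyone starts with an identical 0.25-ticket endowment, people differ only in how much they value a seat, and a simple clearing step hands full tickets to the 250 people who value them most. Holdings start perfectly equal and end unequal.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PEOPLE = 1_000                  # citizens who want to fly home
N_SEATS = 250                     # aircraft capacity
ENDOWMENT = N_SEATS / N_PEOPLE    # 0.25 tickets each: a perfectly equal start

# Hypothetical valuations of a seat (how much being with family is worth to
# each person), drawn from a skewed distribution purely for illustration.
valuations = rng.lognormal(mean=3.0, sigma=1.0, size=N_PEOPLE)

# Simple clearing: the 250 people who value a seat most buy the missing 0.75
# from the others; everyone else sells their 0.25 share instead of flying.
order = np.argsort(valuations)[::-1]
tickets = np.zeros(N_PEOPLE)
tickets[order[:N_SEATS]] = 1.0

def gini(x):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(f"Gini of ticket holdings before trade: {gini(np.full(N_PEOPLE, ENDOWMENT)):.2f}")  # 0.00
print(f"Gini of ticket holdings after trade:  {gini(tickets):.2f}")                       # 0.75
```

The specific numbers are irrelevant. The structural point is that with fixed capacity and heterogeneous preferences, any mechanism that lets people act on how much they value a resource moves the outcome away from equal holdings.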
This is why the quip in their paper is misguided. It’s an example of bias masquerading as virtue—what they signal as ethics looks more like posturing and a lack of understanding.
Then comes the killer phrase:
“This essay has the unusual goal of stating a worldview rather than defending a proposition. The literature on AI superintelligence is copious. We have not tried to give a point-by-point response to potential counterarguments, as that would make the paper several times longer. This paper is merely the initial articulation of our views; we plan to elaborate on them in various follow-ups.”
Translation: This is not an argument. It’s a vibe. And if you find holes in it, don’t worry—we’ll backfill them later in a series of equally unsubstantiated blog posts.
But it’s even worse—and you find it a lot on social media. I would call it performative openness:
A person posts speculative or vague ideas. When critique arrives, they respond with: “Well, I never said this was definitive.”
This creates a no-win scenario:
If you engage critically, you’re “overreacting.”
If you stay silent, the idea floats unchallenged.
It disables progress and silences dissent. This is a toxic form of manipulation to control the narrative. Openness is signaled, not practiced. It has no place in scientific writing, where the assumption that no knowledge is final truth always holds anyway; spelling it out only fabricates a rhetorical weapon to use against dissenting voices.
I have written about this phenomenon here.
This covered the first 2 pages of a 40–50-page document, and it doesn’t get better—but why bother.
I do agree that most of the writing about AGI is unscientific and badly argued. That doesn’t mean AGI won’t happen. It only means we are not approaching the question scientifically: it’s a lot of opinions and speculation that could be right or wrong.
Their instinct to critique that writing is sound, and the critique could be made scientifically. But not this way. This is nothing anyone should read.
They may say some correct things, but science requires method and clarity—and there is none here.
They present "AI as Normal Technology" under a bold "RESEARCH" banner (on their website), which implies scholarly rigor, method, and evidence-based argument. But the paper itself clearly states:
“This essay has the unusual goal of stating a worldview rather than defending a proposition.”
You can’t call it research if it openly avoids defending claims or engaging with counterarguments. You can’t have it both ways.
"AI diffusion in safety-critical areas is slow because more complex models like transformers are rarely used, due to their lack of interpretability and the risk of unpredictable deployment behavior."
This framing assumes that modern models inherently fail in real-world scenarios, but the cited failure (e.g., Epic’s sepsis tool) reflects a design flaw — not model complexity. The model did what it was trained to do, but the oversight process failed to identify that it was using a future-dependent variable. This isn’t an argument against diffusion — it’s an argument for better system-level auditing, causal reasoning, and accountability. Complexity isn’t the problem; unaccountable deployment is.
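The underlying failure mode is easy to reproduce on synthetic data. The sketch below is not Epic's model or data; it is a toy of my own, with a hypothetical "treatment ordered" feature standing in for any variable that is only recorded after the outcome it is supposed to predict. With the leaked feature the offline metric looks spectacular; at prediction time the feature does not exist yet, so the number says nothing about real-world performance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Legitimate but weakly predictive vitals-style features (purely synthetic).
X_vitals = rng.normal(size=(n, 5))
risk = X_vitals @ np.array([0.4, 0.3, -0.2, 0.1, 0.0])
y = (risk + rng.normal(scale=2.0, size=n) > 0).astype(int)   # noisy outcome

# Future-dependent feature: "treatment ordered", recorded only AFTER clinicians
# already suspect the outcome. It leaks the label into the inputs.
leak = y * 0.9 + rng.normal(scale=0.1, size=n)
X_leaky = np.column_stack([X_vitals, leak])

for name, X in [("vitals only", X_vitals), ("vitals + leaked feature", X_leaky)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name:25s} test AUC = {auc:.2f}")
# The leaked feature inflates offline AUC, yet it is unavailable at the moment
# a prediction is actually needed, so the impressive number is meaningless.
```

The same audit applies whether the model is a logistic regression or a transformer, which is why this is a deployment-governance problem, not a complexity problem.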
"Decades-old statistical techniques like regression are still used in safety-critical domains because they are simple and interpretable, unlike modern ML methods such as transformers."
Why still? Why would this be bad? Older models often persist not because they’re better, but because adding complexity without interpretability can increase risk. In domains like life expectancy or insurance risk, the problem isn’t a lack of predictive power — it’s that the underlying signal is often noisy or irreducibly uncertain. A transformer won’t magically resolve this. Unless the model captures causal structure, its increased flexibility may just learn spurious correlations — making it harder to trust or audit. The issue isn’t computation, it’s epistemology: more math doesn't always mean more meaning.
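To make that epistemological point concrete, here is a minimal synthetic sketch (my own toy setup, not the paper's): an outcome with a small linear signal buried in irreducible noise, modeled once with plain linear regression and once with a much more flexible random forest. The linear model exposes coefficients you can audit; the flexible model does not beat it out of sample, because the missing accuracy is noise rather than a pattern a bigger model could recover.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 4_000

# Synthetic actuarial-style features; the outcome is mostly irreducible noise.
X = rng.normal(size=(n, 4))
true_coefs = np.array([2.0, -1.0, 0.5, 0.0])
y = X @ true_coefs + rng.normal(scale=5.0, size=n)   # noise dwarfs the signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

linear = LinearRegression().fit(X_tr, y_tr)
forest = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)

print("linear coefficients (auditable):", np.round(linear.coef_, 2))
print(f"linear test R^2: {r2_score(y_te, linear.predict(X_te)):.3f}")
print(f"forest test R^2: {r2_score(y_te, forest.predict(X_te)):.3f}")
# The flexible model cannot beat the linear one here: the residual variance is
# irreducible noise, and extra flexibility only buys new ways to fit that noise.
```

Swap in a transformer and the ceiling stays where it is; the only thing that changes is how hard the model becomes to inspect.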
#AIDebate #CriticalThinking #AIHype #KnightInstitute #TechCritique #ArtificialIntelligence #ScienceNotVibes #AIPolice