How MIT Faked Integrity: The "Nobel Prize" for Hypocrisy
How to Win a "Nobel Prize" for Ignoring Your Own Theory: Kind of Nobel laureate Daron Acemoglu warned us about extractive institutions. Then he helped MIT become one.
“It begins with a preprint, as usual...”
The line sounds snarky and dismissive, but it rests on a false assumption:
that peer review = quality, and preprint = suspicious.
In reality, the opposite is now true, especially in fields like physics, computer science, AI, and increasingly economics and biology, where preprints are not fringe; they are the norm. Peer review, by contrast, lags behind, often functioning less as a quality check and more as an ideological filter or prestige gate.
And we’re not talking about a few bad apples. There is empirical evidence that peer-reviewed publishing has become a race to the bottom:
Retraction rates have increased dramatically in top journals.
Entire fields have faced replication crises (psychology, economics, medicine).
Reviewers frequently miss critical errors in method, code, or data—and are rarely accountable.
Peer review now often rewards familiar conclusions, conformity with fashionable methods, and alignment with senior academics’ expectations. Risk-taking research and interdisciplinary approaches are more likely to be buried than evaluated fairly.
So when someone says “it began with a preprint”—as if that discredits it—they’re clinging to a mythology that no longer holds.
Preprints don’t bypass scrutiny.
They invite open, real-time scrutiny, which is exactly how this fake paper got caught.
The problem isn’t that this started with a preprint.
The problem is that everyone else still puts faith in a peer review system that consistently fails to do what preprints already did: let smart people read, critique, and expose.
MIT writes:
“[W]e are concerned that, even in its non-published form, the paper is having an impact on discussions and projections about the effects of AI on science.”
They’re not worried that it’s wrong.
They’re worried that it’s believable.
That it’s shaping thought.
That it’s already functioning as a model—precisely because it’s plausible, beautiful, and elegant.
This is the nightmare scenario: A fake paper shapes real-world belief, because our systems don’t distinguish performance from truth anymore.
This wasn’t misconduct. It was epistemic theater.
It doesn’t fit fraud. It doesn’t fit art. It’s post-research—a performance that uses the structure of science as its medium.
But it is superficially impressive: it simulates the form of structural econometrics without satisfying any of its conditions. It gives the illusion of inference by fabricating variables, bootstrapping their validity from LLM classification outputs, then performing linear algebra on those artifacts. The AI-related details are impossible.
Chemical Implausibility:
"Graph Diffusion Architecture... Forward (Add Noise)... Reverse (Denoise)... Random Compound"
This is nonsense on both chemical and epistemic grounds:
In chemistry, you cannot "add noise" to a molecule and then "denoise" to discover a valid compound. That’s not transformation — that’s fantasy.
Molecular generation isn’t like image synthesis. You can’t tweak structures arbitrarily — stability, valency, and reaction kinetics impose hard constraints.
A molecule isn’t a latent vector in a generative model. It’s a discrete structure with physical laws baked in. You don’t sample from Gaussian noise and get C₆H₁₂O₆.
If this is metaphor, it’s misleading. If it’s literal, it’s wrong.
Synthetic but Believable Narratives
The paper is full of:
Real citations and canonical names (Acemoglu, Autor, Mullainathan).
Grounded survey response rates (44%, benchmarked to other econ papers).
Modestly imperfect data (e.g., results are strong but not absurd; ~17% increase in product prototypes, not 1000%).
Conclusion: The fraudster mimicked academic humility and probabilistic messiness—making the data feel "real." That’s psychologically savvy.
Internal Consistency
From task logs to AI training stages, the story is:
Logically interlinked across levels (macro innovation → micro task shifts).
Quantitatively consistent (quality, novelty, productivity all rise with plausible lags).
Balanced across dimensions (positive effects but also reduced job satisfaction).
They anticipated peer review objections—and preemptively answered them. It reads like a conversation with a hostile referee.
Built on Absurdities
“I employ a large language model — Anthropic’s Claude 3.5 — to classify scientists’ activities…”
Then link this to atomic-level material discovery?
Claude doesn’t model atoms.
Claude is a language model. It maps token probabilities — not quantum states, not bonding configurations, not electron densities.
Using it to theorize material behavior is like using a thesaurus to simulate gravity.
No scientific inference occurs.
The paper never shows Claude engaging with physical theory, simulation tools, or even structural databases like Materials Project. It classifies textual activity logs, then pretends that’s insight into matter itself.
How can this be? It requires significant effort and expertise to pull this off, and we are supposed to believe a 26-year-old would invest so much effort in producing fake output. This doesn't add up. Whatever happened here, MIT is not forthcoming with the real story.
This hybrid of high gloss and shallow concept strongly suggests AI-assisted production plus human supervision. Not a random student alone, but possibly a demo gone rogue, or a speculative exercise that was never meant to pass as final research. Call it kiddie professionalism:
Earnest formalism: as if following econometric ritual guarantees validity.
Childish extrapolation: if a model assigns labels to text, then the labels are real, then they’re skills, then skills are causes.
Facade of inference: regression is used like a magician’s wand, transforming invented constructs into policy implications (see the sketch below).
And MIT doesn’t know how to talk about that.
"Following Acemoglu and Autor (2011)"
This phrase in the paper is crucial. It indicates that the author:
Reused a well-known task-based production framework from Acemoglu and Autor's labor economics work.
Transplanted it into an AI–R&D context, likely using a large language model (LLM) to scaffold the argument.
This isn’t just imitation; it’s thematic mimicry. The model, citations, phrasing, and theoretical skeleton are all derivations of a recognizable academic signature.
This would suggest Acemoglu or his circle guided, advised, or even indirectly supported the student. So why doesn’t he explain his involvement?
It may have begun as:
A demo of AI authorship.
A social probe into the credibility of synthetic research.
Or an experimental hybrid of real data, AI-generated scaffolding, and academic method.
But then it worked too well:
The model was coherent enough to pass as real.
It fit elite narratives (AI productivity, economic modeling, innovation).
It was publicized — NYT interviews, press coverage, Substack fame.
Now it posed a problem of visibility:
Either Acemoglu was complicit in a synthetic narrative.
Or he failed to recognize synthetic scaffolding that mimicked his own style.
Or the student used his framework in a way that unwittingly exposed the vacuity of the method.
Any of these would be deeply embarrassing.
So instead, they file it under “misconduct” and issue a carefully worded bureaucratic redaction—without telling us how it happened.
The Withdrawal Language
“Withdrawn by arXiv administrators due to concerns about the validity of the data and incomplete Institutional Review Board requirements.”
This is classic procedural deflection.
But the paper itself exposes that it is not valid. It describes an AI pipeline (Claude 3.5) rendering judgments about atomic-level R&D, trained on manually classified corpora and used to infer productivity and innovation. This is methodologically incoherent. No human reviewer with even minimal AI literacy could believe it was empirically grounded.
So why the cautious language? This makes no sense. The paper contains enough on its face to be declared a fake. Why don’t they admit that?
And that makes this fake report all the more important! MIT says:
“While student privacy laws and MIT policy prohibit the disclosure of the outcome of this review, we want to be clear that we have no confidence in the provenance, reliability or validity of the data and in the veracity of the research.”
What are they saying?
We have internal rules (privacy, due process, FERPA) that prevent us from disclosing what happened.
But we’re going to ignore those rules just enough to publicly destroy the credibility of the research anyway.
And you’ll have to trust us, because we won’t show you how we know. Our rules are meaningless; we bend them as much as we like whenever it suits us, and we assume the reader is too stupid to notice, because we are MIT.
That’s what it says.
No Accountability Is Possible
You can’t question the process because it’s secret.
You can’t audit the judgment because no evidence is shared.
You can’t defend the author, because the charges are all implied, never stated.
This is trial by implication under a banner of “procedure.” It is a violation of the very principles a scientific organization is founded on. And Professors Daron Acemoglu and David Autor are blind to it.
The “Nobel Prize” given to Acemoglu was a huge mistake.
This Is Not Just Bureaucratic Incoherence. It’s Institutional Abuse by MIT Faculty
MIT’s statement hides behind confidentiality laws (FERPA, privacy policies), then uses the authority those laws are meant to protect to destroy a person’s credibility without due process.
That’s not just cowardly.
That’s structural misconduct.
You cannot invoke rules of fairness and then break them to silence dissent.
You cannot claim you respect privacy while publicly implying guilt.
You cannot claim to uphold science while refusing to explain the logic of your own judgment.
This is not “research integrity.”
It is an authoritarian gesture dressed up in institutional language.
If a student—or anyone else—had done this in reverse (e.g. publicly accused MIT of fraud with no evidence, no process, and no access to records), they would be expelled, sued, or sanctioned.
But when MIT does it, they call it “clarifying the record.”
No.
It’s defamation under institutional cover.
That is not a gray area. It’s a crime in any ethical or legal system that respects evidence, rights, and due process.
When MIT faced a crisis:
It withheld process.
It invoked rules selectively.
It broadcast guilt while hiding evidence.
It protected power and reputation, not transparency.
Acemoglu didn't challenge that. He joined it.
He didn’t stand up and say:
“Wait—we of all people cannot do this. We literally won a Nobel for studying this failure mode.”
He said:
“We have no confidence in the provenance, reliability or validity of the data.”
That’s not a statement of inquiry. That’s a closing of the conversation.
The Real Tragedy
The fake paper was a kind of intellectual deepfake.
But the response from Acemoglu & MIT is something worse:
A live demonstration of how extractive behavior plays out—even in elite academic institutions that preach inclusion, reform, and rule of law.
They didn’t just fail their own standards.
They performed the very system they claimed to diagnose.
And no one called it out.
“Nobel Prize” Logic Imploded
You win a “Nobel” for proving extractive institutions suppress reform.
Then, when facing an institutional challenge, you help suppress the reform.
That’s not irony.
That’s disqualification.
Are we giving “Nobel Prizes” for this kind of idiocy?
Not just for ignoring fraud—but for enacting the very institutional rot your work claims to warn against?
If that’s where we are, then the “Nobel” is no longer a reward for insight.
It’s a badge of performative blindness.
The Myth of Institutions: Acemoglu’s Story Isn’t Science—It’s Narrative
The Nobel Prize committee praises Acemoglu for showing that:
“One explanation for differences in countries’ prosperity is the societal institutions that were introduced during colonisation... Inclusive institutions led to prosperity; extractive institutions led to poverty.”
But this is not causal reasoning. It’s retrospective storytelling, painted with moral language and historical hindsight. It is not science.
If a colonizing power can extract gold, rubber, sugar, or oil without resistance—why would they ever build inclusive institutions?
What are you gonna extract from Massachusetts—beaver fur?
When there was nothing to take, they sent settlers.
When there was wealth to plunder, they sent boats.
Where is my Nobel Prize for this insight?
They don’t need democracy or property rights.
They need control. And they build just enough infrastructure to extract.
The presence or absence of “inclusive institutions” is not the cause of prosperity.
It’s a symptom of whether the colonizer needed to build a society—or just a pipeline.
And since when, exactly, did we start distinguishing between nice and not-so-nice conquerors?
As if the central question of global inequality is whether the colonizer bothered to set up a town council before extracting everything of value.
This story—that good institutions explain success while bad ones explain failure—isn’t science. It’s comfort fiction for the powerful, retrofitting morality onto material conquest.
And the fact that it was rewarded with a “Nobel Prize” tells you everything that’s wrong with the world.
Not Even a Real Nobel
Let’s clear up one last myth while we’re at it:
The so-called Nobel Prize in Economics isn’t a Nobel Prize.
It wasn’t part of Alfred Nobel’s will.
It wasn’t meant to honor peace, physics, chemistry, medicine, or literature.
It was created in 1968—by Sweden’s central bank—to honor economic theory that supports systems of power.
It’s officially titled: “The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel.”
Which is just a bureaucratic way of saying:
This is a branding exercise. Not a moral authority.
So when Acemoglu gets praised for diagnosing extractive institutions—then helps enact one—he’s not betraying the spirit of the Nobel.
He’s fulfilling the logic of the prize exactly: giving credibility to systems that maintain control while calling it insight.
Thanks for reading my piece. I have a few responses.
1) I think you've misunderstood and mischaracterized my views on peer review and preprints. The title "it begins with a preprint, as usual" was not meant to be snarky or dismissive. In fact, later on in the piece (in section 4) I explicitly defend the preprint system. I haven't written a post for my current blog on this, but in other forums I've written at length on why I think the modern peer review and academic publishing systems are incredibly flawed.
2) You write that the author exhibits deep domain knowledge in materials science and AI architectures. I think that the opposite is true; they have very superficial domain knowledge in those areas, viewing AI use and materials science innovation through the lens of an economics PhD student. This is the main reason why I think people were able to catch the fraud so early: a computational materials scientist wrote to the MIT department with their concerns, triggering the investigation.
3) You write:
"How can this be? It requires significant effort and expertise to pull this off and we are supposed to believe a 26 year old would invest so much effort in producing fake output. This doesn't add up."
What doesn't add up here? It adds up perfectly well to me. He got caught up in a chain of lies, wrote a paper with fake data (likely much of it AI-generated itself), and then got caught. I'm not sure what more you think MIT could have done, other than not getting fooled in the first place (which, again, hindsight is 20/20). You demand accountability from MIT, but they (MIT, Acemoglu, and Autor) did exactly what they should have done. Upon receiving a credible complaint that this student's work was fraudulent, they conducted a fairly prompt internal review, then announced publicly that they could not stand behind the work and asked the journal and the arXiv server to take it down. If you want a multi-page accounting of how they could have been fooled by the fraudulent work of a graduate student, well... for that there are Substack writers.