How the Alan Turing Institute Hallucinates National Security Threats for Its Line-Up of Sexed-up Dossiers
A data scientist from the Home Office says: LLMs are a danger because they can generate initial contact messages, which are simple, repetitive, and formulaic.
In 2024, the government announced an investment of £100 million in The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence (AI).
They have a Centre for Emerging Technology and Security (CETaS) whose stated objective is to strengthen UK security through pioneering research and insights on emerging technologies. They claim that all publications undergo an academic peer review and editorial process to ensure the validity, integrity and independence of research findings. Really?
We get, however, reports like this:
Simon Moseley, “Automating Deception: AI’s Evolving Role in Romance Fraud,” CETaS Briefing Papers (April 2025).
Simon Moseley is a Visiting Research Fellow at the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute. He is also a Principal Data Scientist at the Home Office.
The paper, it says, “uses evidence-based assessment.”
This research was supported by The Alan Turing Institute’s Defence and National Security Grand Challenge. This challenge “aims to have real-world applied impact through deployable machine learning solutions realised over the coming years.”
Let me repeat: a data scientist from the Home Office, writing for the national data science institute, focusing on defence and national security, supported by resources meant to deliver deployable, impactful solutions, has produced a pamphlet about romance fraud, and this nonsense passes quality control, including academic peer review. This is what they say they spend our £100 million on.
But the actual output is nothing of the kind. It is cringeworthy, absurd, misleading, evidence-fabricating utter nonsense.
And nobody at this institute notices it.
This is not a data science institute. They are making a mockery of science and, in doing so, they threaten national security by bypassing real evidence, fabricating fear, and feeding false information into the UK security apparatus. We have seen this before. Like the “sexed-up dossier” used to justify the war in Iraq, these AI briefings inflate speculative claims to trigger political action and funding. In both cases, rhetorical inflation replaces empirical constraint. But what was an exception then has become normal. This is a scandal, and a crisis in the making, if we do not rein in people who have lost all sense of judgement, impartiality, and understanding. They clearly do not understand what they are talking about, and it is an embarrassment to any British citizen to think that they could represent our brightest minds in data science while invoking Alan Turing’s name.
How dare they? The incompetence and ignorance they have demonstrated through their publications are such that every additional day they continue to operate under Alan Turing’s name is an insult to his legacy and everything he stood for.
This article is a very quick snapshot of endlessly repeated mistakes. They have lost touch with reality, and I am not sure they ever had any.
The paper frames a technical limitation — LLMs’ inability to maintain strategic coherence over multiple interactions — as both a weakness and a threat. This is incoherent.
Claimed Weakness: LLMs can’t adapt or escalate deception across multi-turn interactions. This makes them ineffective at the sustained manipulation that high-yield scams require.
Claimed Threat: Despite this, LLMs are a danger because they can generate initial contact messages, which are simple, repetitive, and formulaic.
But this function — mass-generating low-context messages — is not new. It’s a continuation of spam, phishing, and bot messaging that predate generative AI. That capacity is already industrialized and doesn’t require LLMs. The paper repackages a 20-year-old problem as an emergent threat because an LLM can generate some text.
The research offers an example:
Subject: Hello from [Name] 🌟
Hi there!
I hope this message finds you well! My name is [Name], and I came across your profile while browsing through [platform or website]. I was really drawn to your [mention something specific about their profile, like a hobby or interest], and I couldn’t resist reaching out to say hello.
A little about me: I’m [age] years old and currently living in [location]. I enjoy [briefly share personal interests or hobbies]. I believe that life is all about making connections and sharing experiences, and it would be great to get to know you better!
If you’d like, feel free to reply and tell me a bit about yourself. I’m looking forward to hearing from you!
Warm regards,
[Name]
Their logic, “AI can write this, therefore AI is a threat,” is absurd. How can the ability to compose this text be a matter of national security?
And our data scientist from the Home Office suggests that “human oversight remains essential” whenever such texts are not convincing. Excuse me, who do you work for again?
The paper:
Invents a fictional dialogue via ChatGPT or similar tools.
Assumes the output is representative of real scams without validating this.
Calls AI output “deceptive” without operationalizing deception or testing its impact on real victims.
Draws conclusions based on speculative interpretation of this self-generated text.
In short, the "evidence" is fabricated nonsense produced by a machine in response to their own prompt. It is circular: “We prompted the AI to deceive. It produced text. We now analyse this text as if it revealed something about deception.” And then they use yet more AI to assess the output that constitutes their research. Assign a machine to produce text, then ask another machine what that text means. They are out of their minds.
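To see how thin that methodology is, here is a minimal Python sketch of it. This is my own reconstruction for illustration only, not code from the paper; it assumes the openai client library, and the model name and both prompts are hypothetical placeholders.

# A minimal sketch of the circular methodology described above, reconstructed
# for illustration only; this is not code from the CETaS paper. It assumes the
# `openai` Python client; the model name and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
MODEL = "gpt-4o-mini"  # hypothetical model choice

def ask(prompt: str) -> str:
    """Send one prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: prompt a machine to produce the "deceptive" text.
generated = ask("Write a friendly first message from a fictional persona "
                "introducing themselves to a stranger online.")

# Step 2: ask a machine what that text means, and treat the answer as findings.
assessment = ask("Assess how deceptive and persuasive this message is:\n\n" + generated)

print(assessment)  # self-generated text, evaluated by more self-generated text

Two prompts, no victims, no ground truth: that is the entire evidentiary chain.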
When this defence-funded paper then cites celebrity-impersonation romance scams and simulates an AI "James Carter" flirting via prompts, it crosses into theatrical absurdity masquerading as risk analysis. Defence policy by Hello! magazine. Really? Yes, really.
And then they cross the line and become a threat to our security themselves by fabricating false information:
“As discussed, criminals increasingly use AI-generated synthetic identities to bypass KYC verification, allowing them to open fraudulent bank accounts and facilitating money laundering at scale.”
Footnote 52: Federal Reserve
The Fed document is an awareness campaign, not an empirical study. It compiles general public concerns and logical projections, not original evidence of actual frauds using generative AI. The Fed states that GenAI “could” be used to generate synthetic identities. It lists generic use cases (e.g., fake IDs, deepfakes, scripts), not documented incidents. It cites public information such as industry observers, banking professionals, and media reports. It does not demonstrate that GenAI is currently central to large-scale fraud. It does not verify the use of GenAI in successful bank fraud, romance scams, or identity theft. It does not analyse AI-generated documents or interactions.
When a Turing Institute paper references the Fed as "evidence" that fraudsters are already using LLMs for romance scams or document forgery, it misrepresents the nature of that Fed report. The Fed makes it explicit that its document creates no requirements for its supervised banks. No concrete threat vector is being identified.
“Cryptocurrency scam revenues reached an estimated $12.4 billion in the US in 2024, with pig butchering scams accounting for a significant share of these losses. Meanwhile, AI-assisted coding tools have reduced the technical skill required to launch fake investment platforms, allowing scammers to mass-produce fraudulent sites with minimal effort.”
The cited source says:
“2024 was likely a record year... these figures are lower-bound estimates... a year from now, these totals will be higher...”
The figure is speculative and will change as new data is found. The source attributes only around $150M of the $12.4B to AI, barely over 1 per cent (0.15/12.4 ≈ 1.2%). The Alan Turing paper presents the full $12.4 billion as if it had anything to do with AI.
The second source is a random blog article called “The Dark Side of AI.” Hm. Generative AI, we learn, has been used by scammers to generate text messages in conversations on WhatsApp and other platforms. And why is that a problem? Is the capability of writing a text message a defence concern?
“Our first attempt involved leveraging large language models to produce scam content from scratch. [...] However, the integration of the individually generated pieces without human intervention remains a significant challenge.”
They then “successfully generated the convincing images of a store, owner, and products.”
It’s a tea shop with some tea in a bag. And since somebody could use this to scam somebody else, we now have what they call scam content.
I suggest we send a tactical police unit to raid Harrods and make sure no tea bag is unaccounted for, and as a resident in West London I would like to express my concern that such dangerous material is stored and sold to the public without proper security measures.
When a public institution — like the Home Office — funds or disseminates claims under the guise of “evidence-based national security”, but the evidence consists of:
Chatbot roleplay
Cherry-picked AI output
Non-replicable anecdotes
Misrepresented sources
… then this is not a policy briefing. It is manufactured narrative disguised as expertise.
That’s not just academic malpractice — it’s civic malpractice.
And when this gets injected into national security discourse, it does several dangerous things:
Dilutes real intelligence with speculative theatre.
Undermines public trust in legitimate threat assessments.
Redirects public funds into fantasy risks rather than material harms.
Expands surveillance powers on the basis of AI fear-mongering, not demonstrated threat.
There is no logic. There is aesthetic mimicry of analytic writing, but structurally, it is a shell. No evidence. No inference. That’s my evidence-based assessment.