Forked Law in a Wormhole: Can Captain Sisko Retake EU Reason? Or Making Law Like Crypto
A legal paradox in the heart of the EU AI Act reveals more than regulatory confusion — it threatens the very fabric of law, logic, and trust.
And now, the conclusion.
“Sisko to all ships, cruiser and galaxy wings, drop to half impulse.”
The Klingon reinforcements are late, as somebody points out to Captain Sisko. But he says:
“Forget the Klingons. Our job is to get to Deep Space Nine and prevent the Dominion reinforcements from coming through the wormhole. And that’s what we are going to do.
Prepare to engage, on my command.
Fire at will!”
He wins the whole thing, of course, with the help of some angels, why not? But I’m getting ahead of myself. This is the opening of an episode of Star Trek: DS9 about a historic battle to retake a space station and thereby prevent an evil organization from entering this sector of space. Retaking it was vital, since the Federation of Lawful Space Exploration would have collapsed had it been forced to defend against additional troops.
It seems I find myself in a position similar to Captain Sisko’s, whose ship is fittingly named Defiant. The problem suits our part of the galaxy just as well: it is time to secure a decisive victory for reason and the rule of law in all matters of AI Acts in the EU quadrant of the galaxy. I also count on some divine help, and that means victory is guaranteed. Did you hear that, EU Commission?
“You are not a captain and you have no ships.”
I wonder if that could be a hindrance. How could it be? So, no problem.
“Forget the ships. Our job is to get Brussels to understand their mistake and prevent the EU AI Act from coming through the wormhole. And that’s what we are going to do. Prepare to engage, on my command.”
No worries. Even if I had command over a space armada, I would not dare to force my will on anyone. I am simply doing what is not only my right but, under these circumstances, my duty as a citizen: to engage in public discourse and call for a course correction. Whether or not the Parliament voted makes no difference; procedure alone is insufficient to establish law lawfully.
And this EU AI Act is of such quality that establishing it lawfully becomes structurally impossible.
The AI Act prohibits certain practices—such as subliminal manipulation, emotion inference, or profiling for criminal risk. No, that would be wrong. It is difficult to describe what the law actually does. The Act uses phrases such as “the following AI practices shall be prohibited” and then, for example,
“(h) the use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.”
This is what they call “unacceptable risk,” the category of AI applications banned in the EU. So when the text says real-time biometric identification is forbidden “unless it is strictly necessary for…,” and then lists three broad exceptions, it effectively allows member states to make their own rules and deviate further.
It explicitly states that such screening may be used to identify somebody, which means no suspect is needed. And since when do member states require the EU Commission’s approval for domestic police work? Why would member states be subject to internal market rules for their domestic police forces? There is no free movement of police goods and services. Police organizations are not running a side hustle that bypasses important controls, or are they? Who gave the Commission that authority, and what objective could such an intervention possibly have?
“Nothing” is perhaps too optimistic here.
In practice, nothing is banned if it is “necessary,” and necessity is defined by the Commission, the police, or state authorities. Ordinary people have no such “necessity” to invoke.
That means this law imposes obligations but offers no protection.
So the Act says it prohibits something, but it allows everything. Sometimes the import is not allowed, yet such systems can still be operating and in use at the same time, and that is not illegal. The Act forbids marketing for a stated purpose, regardless of actual purpose or alternative sales models. If you acquire a system by means other than “placing on the market,” then “unacceptable risk” becomes acceptable, especially if police forces were to run a system otherwise considered too risky.
“Captain, forward torpedoes ready and phaser banks on full charge. Ready on your command.”
“Mr. Worf, wait for my command. First, we must allow evil to reveal itself.”
General-purpose models like ChatGPT are classified, for the moment, as low-risk, provided they are not marketed for high-risk uses (e.g. biometric evaluation, law enforcement). Obviously, this offers no protection. If you sell a gun as if it were ice cream, and someone then uses that gun to shoot somebody, the consequence is merely that you are blocked from selling your ice cream/gun into the internal market.
Large Language Models (LLMs) and similar general-purpose AI systems do not possess discrete modules for such capabilities. Instead, they operate by predicting tokens within context, based on probabilistic inference from a vast latent space of training data. These capacities are emergent, not modular—they arise from the same unified function that supports language understanding.
The law implicitly assumes that prohibited capabilities are modular, like a feature that can be switched off. But general-purpose AI is not a kitchen appliance. It is latent capacity plus prompt engineering. You cannot buy an LLM “without X”: if X has anything to do with pattern recognition or analysis, the model can do it. And with all due respect: how can a machine understand human intent when it has no intent?
GPT-4 can also be configured as a custom version of ChatGPT. That takes a few mouse clicks and does not change the model at all; it simply gives it a defined standing instruction (“simulate that you know something about X so you can handle it more effectively”), optionally supplied with data that combines instructions, extra knowledge, and any combination of skills.
The model remains unchanged, unaltered.
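To make that concrete, here is a minimal sketch of what such a “custom version” amounts to technically. The names below are my own illustration, not OpenAI’s actual API: the custom GPT is nothing more than a standing instruction wrapped around a reference to the very same base model.

```python
def base_model(prompt: str) -> str:
    """Stand-in for the unchanged general-purpose model (e.g. GPT-4)."""
    return f"<model output for {prompt!r}>"


class CustomGPT:
    """A 'custom GPT': the same base model plus a standing instruction."""

    def __init__(self, name: str, standing_instruction: str, model=base_model):
        self.name = name
        self.standing_instruction = standing_instruction
        self.model = model  # a reference to the model, not a copy or a retrained version

    def respond(self, user_prompt: str) -> str:
        # The only technical difference: text prepended to the user's prompt.
        return self.model(f"{self.standing_instruction}\n\n{user_prompt}")


vanilla = CustomGPT("ChatGPT", "You are a helpful assistant.")
custom = CustomGPT("My Custom GPT", "Simulate that you know something about X.")

# Both wrappers invoke the identical, unaltered function.
assert vanilla.model is custom.model
```

Extra knowledge and skills would live in the same thin wrapper; the weights underneath never change.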
Suppose I create a custom interface and call it:
“Biased Remote Diagnosis for Autism via Webcam.”
This would trigger the Act’s prohibition clauses, and my custom version could not be placed on the EU market.
Question: What happens to GPT-4 vs. my GPT-4 Bias Deluxe?
They are not separate applications in any meaningful sense. Functionally, there is no difference.
Option 1:
We treat both the same, so my custom interface elevates GPT-4 to high-risk,
making the risk classification entirely dependent on framing and intent: not OpenAI’s framing, but mine.
And it takes basically nothing more than a mouse click for me to launch an AI business.
These custom models can be commercialized—and if I cause a bit of extra work for other people, why should I care?
I shouldn’t. But neither can OpenAI be exposed like that. “With a single mouse click, I have jumped two full risk classes.”
Option 2:
I am high-risk; GPT-4 stays low.
But since there are not two systems, only one, how can I run a high-risk application on a low-risk system?
Option 3:
I also get low-risk status, despite my stated purpose, because I rely on a platform that doesn’t make that statement;
but that doesn’t mean the platform can’t be used for that purpose.
What this means is that the EU AI Act suffers from regulatory blindness to the distinction between use and structure.
The Act distinguishes between:
Providers (those who place the system on the market)
Deployers (those who use it in real-world applications)
And the use cases that trigger prohibitions or restrictions
But the model is the same in all cases.
Thus, the legal question becomes one of branding and interface, not capability or safety.
Risk is not tied to architecture, but to the declarative metadata wrapped around it:
A model is low-risk if called “ChatGPT.”
The same model becomes high-risk if called “Remote Diagnosis for Autism via Webcam.”
This renders the risk classification self-referential and semantically unstable.
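A toy classifier makes this instability visible. The risk labels and keyword list below are my own illustrative assumptions, not the Act’s wording, but the logic mirrors the point: the verdict depends only on the strings wrapped around the system, never on the system itself.

```python
# Illustrative keywords standing in for the Act's high-risk and prohibited use cases.
HIGH_RISK_MARKERS = ("diagnosis", "biometric", "law enforcement")


def classify(declared_name: str, declared_purpose: str, model=None) -> str:
    """Toy risk classification in the spirit of the Act: the model itself is never inspected."""
    label = f"{declared_name} {declared_purpose}".lower()
    if any(marker in label for marker in HIGH_RISK_MARKERS):
        return "high-risk / prohibited"
    return "low-risk"


same_model = object()  # stands for the one and only GPT-4

print(classify("ChatGPT", "general-purpose assistant", same_model))
# -> low-risk
print(classify("Remote Diagnosis for Autism via Webcam", "assistant", same_model))
# -> high-risk / prohibited
```

The same model object appears in both calls; only the label changed, and with it the risk class.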
In all three cases, the Act fails its stated purpose:
It cannot enforce true risk mitigation.
It places legal burden on aesthetics, not architecture.
It allows marketing fiction to dictate legal fact.
If we require models to be classified by their riskiest possible use, then no model survives commercially. But if we allow classification by declared intent, then nothing prevents tainting a low-risk system with a high-risk use case post-market.
This turns the risk-based framework into a semantic game, not a safety protocol.
The AI Act becomes a taxonomy of imagined intentions, not a control mechanism for actual technological function.
And no technical standard will fix this, because an LLM is not programmed or testable the way traditional software is. But that doesn’t make it riskier. That would be the wrong conclusion.
"Sisko to all attack fighters, engage the enemy. Worf, Attack Pattern Omega. Fire at will."
1. Deterministic Systems Cannot Support Mutually Exclusive Legal Truths
Law is a deterministic system: each output (e.g., classification, liability) must follow from input (facts + definitions).
The AI Act introduces a structure where identical systems (e.g., GPT-4) can be simultaneously low-risk and prohibited, depending only on context and label.
This results in non-deterministic legal output from a deterministic system — an impossibility, unless the system is self-contradictory.
2. Branding as Risk-Creation Violates Legal Causality
If risk is produced not by technical behavior but by framing or marketing, then causality is reversed.
The intention to use becomes more legally potent than the function of the tool.
This violates both objectivity and technological neutrality — principles foundational to European legal standards.
3. The Collapse Point: Interstitial Contradiction
One GPT-based system can simultaneously fall into low-risk and prohibited categories without any change in inputs.
This breaks the AI Act’s internal logic and, with it, its enforceability under the principle of legal certainty.
“The law collapses into either selective non-enforcement (i.e., arbitrary power) or rigid enforcement of injustice. Both are incompatible with the rule of law.”
4. This Cannot Be Fixed with Local Patches
Any regulatory “patch” (e.g., clearer guidance, exemptions, sandboxing) merely hides the error locally.
The root cause is logical: the Act embeds a category error by trying to define risk both structurally and contextually, in ways that produce mutually exclusive truth conditions.
This cannot be refuted except by presenting an entirely new legal universe — one where:
Systems are not bound by causality,
Truth is not coherent, and
Enforcement is not based on facts but on narrative.
The EU AI Act is a Forked Blockchain with Failed Validation
Law as a Deterministic Ledger
In a constitutional democracy, law functions like a blockchain:
Every legal decision is a transaction.
The principles of justice, legality, and causality are the consensus mechanism.
Each new law or regulation must validate against past blocks (precedents, constitutional values, rule of law).
The AI Act as a Failing Transaction
The AI Act introduces transactions (e.g., risk classifications, prohibitions) that:
Contradict validation rules — and my node says: “non non non. Nein. Never.”
Cannot validate under existing rules (e.g., the same model being both prohibited and permitted based only on use-frame).
Introduce undefined or mutually exclusive states — like “prohibited use” of a tool that remains legally available and unchanged.
From a legal-procedural standpoint: this is a failed validation.
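Translated into the same toy terms, the failed validation looks like this. The consensus rule below is my own simplification (one system, one consistent classification), not any real chain’s protocol, but it is enough to show why the transaction cannot be appended.

```python
class ValidationError(Exception):
    pass


ledger: dict[str, str] = {}  # prior "blocks": system id -> risk class already recorded


def validate_and_append(system_id: str, risk_class: str) -> None:
    """Simplified consensus rule: one system may not carry two contradictory classifications."""
    previous = ledger.get(system_id)
    if previous is not None and previous != risk_class:
        raise ValidationError(
            f"{system_id}: '{risk_class}' contradicts the recorded '{previous}'"
        )
    ledger[system_id] = risk_class  # the transaction validates and is appended


validate_and_append("GPT-4", "low-risk")  # validates against the existing chain

try:
    validate_and_append("GPT-4", "prohibited")  # the Act's second, contradictory entry
except ValidationError as err:
    print("failed validation:", err)
```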
The Commission as a 51% Attacker on the Legal Chain
The Commission acts as a centralized actor asserting interpretive control:
Bypassing validation by asserting regulatory authority.
Attempting to push the invalid transaction onto the legal ledger through power, not consensus.
That is the legal equivalent of a 51% attack.
Unlike blockchain consensus, this will not be sufficient to establish control if the rule of law still holds.
We will find out soon enough.
Is the EU governed by crypto consensus models — or the rule of law?
That is the question here.
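Here is where the 51% analogy both holds and breaks, sketched under the same simplified assumptions (my own toy rule, not a real consensus protocol): an honest node checks validity itself, so a majority pushing a contradictory entry proves nothing.

```python
def node_accepts(prior_state: dict, block: tuple, signatures: int, total: int) -> bool:
    """An honest node applies the rule itself; vote counts are no substitute for validity."""
    system_id, risk_class = block
    has_majority = signatures > total / 2
    is_consistent = prior_state.get(system_id) in (None, risk_class)
    return has_majority and is_consistent  # power without validity still fails on this node


state = {"GPT-4": "low-risk"}

# The "attacker" has the votes, but the entry contradicts the ledger:
print(node_accepts(state, ("GPT-4", "prohibited"), signatures=51, total=100))  # False
```

Even on a real chain, a 51% majority can reorder history, but it cannot make an invalid transaction validate; the law should hold itself to at least that standard.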
Just like Bitcoin Cash emerged from irreconcilable rule disagreements...
The AI Act may force a functional fork in law:
One side: formal legality (the Act).
Other side: material justice + technical reason (the position argued here).
This destroys the rule of law.
No one can trust the next “block.”
“You can try to establish a new rule of law — but given the experience, why would anyone waste their time?”
Gödelian limits, blockchain consensus theory, and constitutional jurisprudence can be wrapped into one unifying argument:
The AI Act violates the trust fabric of law. It is unconstitutional.
And the risk is:
A consensus collapse of this significance could be irreversible — and that would be dramatic.
I think it is time to roll the end credits.
The battle was won. Qapla’ (Klingon for success).
Maintaining the rule of law is the success I am talking about.
There are no losers — if that is so.