The EU AI Act Is Here in Post‑Brexit Britain, Saying Hello to Its New Subjects: Dei Gratia Bruxellensis
A walkthrough of Stability AI’s Acceptable Use Policy—and how AI safety discourse builds authoritarian regimes.
There is a deep misalignment between how AI law is written and what it actually does in the lived world of rights, speech, and knowledge. It is silently tearing up the foundations of society and calling it "AI Safety" or "Responsible AI." The EU AI Act, and UK AI policy with UK AISI copying its logic, is not about regulating actual harm but hypothetical harm, outsourcing legal reasoning to tech firms that become judge, jury, and enforcer. We are heading into troubled times should they succeed.
STABILITY AI LTD sent me an email about legal updates.
I’m not sure what I used them for, but I can’t resist the lure of AI contractual novelties. So I had a look. What I found made me laugh—and then cry. It reads like simulated legal satire. Except it's not. And the EU AI Act is sneaking in behind it.
Brexit? That was cute. EU law is alive and well—if AI companies decide it’s good for us.

These terms apply to research—commercial or not—regardless of whether the models are hosted by Stability or by a corporate customer. If researchers can only do what’s pre-approved, then science is no longer free. Discovery is taxed with bureaucratic permission.
It’s like Microsoft publishing a list of sanctioned words you’re allowed to type in Word—even on your own offline desktop. Their logic: they must protect unknown third parties from possibly seeing rude words in a future email. So: no sexual references, no suggestive phrasing, no quotations of copyrighted text. Any breach, and they shut off your access to Word—and every Microsoft product—without context, necessity, or explanation.
The policy assumes:
Every generated image is already public
Every user is a malicious actor
But ignores that:
The user must deliberately save, share, or apply the image
The model doesn’t deploy itself
The company has no causal role unless it publishes, forces publication, or fails to warn of real, imminent threats
This is prohibition for theater. It doesn’t protect the public. It protects the illusion of safety from an AI Armageddon that’s always just around the corner.
And if you think that’s bollocks—strike one. That’s all it takes: say the wrong thing, lose your access. Lifetime ban. Bye.
They do all this while trampling civil liberties—even when you’re exercising those liberties in the privacy of your own home. They claim the right to dictate what you can do there—if it involves their product, like image generation. The argument goes: if the terms are too extreme, just don’t sign. Go elsewhere.
But the EU AI Act ensures there will be no elsewhere.
That’s the joy it brings.
Now, looking at some computer-generated pictures at home supposedly gives rise to existential risk—from wars to planetary catastrophes.
And based on such flimsy assumptions, the EU has rendered the German Basic Law dysfunctional. The EU tells AI companies what their terms must say, receives regular reports that "all is well in the land of the oppressed," and assumes we’re all too stupid to see the irreparable harm this Act inflicts on the Union. They do believe it—and they were right… until today when I published on why I think it violates the German Basic Law (that makes 2 out of 3 major issues I see).
But now it’s Britain’s turn: it won’t be excluded from having some fun with it.
The EU AI Act is here—in post‑Brexit Britain—saying hello to its new subjects.
Annotated Commentary on Stability AI’s Acceptable Use Policy (Effective July 31, 2025), which makes explicit reference to AI laws we don’t have. So what may have given them the idea, then?
"We Prohibit Violations of Law or Others’ Rights."
If this were Germany, they would be acting illegally: unauthorized exercise of official capacity is a formal crime. Who knows, maybe it will one fine day become a crime in England too, once Stability AI has driven everyone nuts with its human rights tribunals. This is a contract between them and me; they cannot declare themselves holders of a mandate to represent the interests of a non-contracting third party and unilaterally impose duties on me based on their guesswork about what that third party might feel, want, or imagine. That is not law but the most elegant and noble form of coercion. The posture is: if it were up to us, Stability AI says, we would not have done this or that; however, some other people really tied our hands. But we value your business. That’s just-in-time law-making.
"Violations of law or others’ rights, including intellectual property and privacy rights."
I am creating an image locally, in a sandboxed environment. If any violation occurs, it is on me, not Stability AI. They are not the police, and they cannot know or infer my intent. My use is not their jurisdiction. The claim to evaluate my actions preemptively in private space is extrajudicial and untenable.
"Violations of AI laws"
Britain has no AI law. The EU AI Act doesn’t apply to individual users or citizens; it governs the market practices of providers. Stability AI is asserting obligations where none exist, in a jurisdiction where no enabling law is in force. And they are headquartered in London. I am sure they have come across something called Brexit and what that may imply here.
"Using subliminal, manipulative, or deceptive techniques..."
This is open-ended behavioral policing. Are they running surveillance? Courts? Who defines manipulation? Saying “make a cute cat” is not cognitive warfare. This clause transforms artistic or personal expression into a regulated zone, with no due process.
"Exploiting vulnerabilities due to age, disability, or socio-economic situations."
With a cat picture? The disproportion between the stated prohibition and plausible user behavior is absurd. How would one even do that, and how would one avoid doing it?
"Evaluating or classifying persons..."
Again: what if I make a picture of a person with glasses? Or smiling? That is classification. This clause criminalizes human inference, which is what perception is. Irrelevant categories are applied to individuals who have no choice but to agree to such oppression or walk away. But the next AI company already awaits with the same draconian wording, made in Brussels.
"Assessing or predicting the risk of a person committing a crime..."
As if a picture could do that? This clause is science fiction turned into policy. It prohibits imaginary threats.
"Creating or expanding facial recognition databases without consent."
I'm not making a surveillance system. If I save a face from a movie to use as a character reference in a collage, that’s not a database. This clause is both overbroad and epistemically meaningless.
"Inferring emotions in the workplace or education institution..."
This assumes I am an HR department. Again, a private user experimenting with generated images is not equivalent to institutional emotion surveillance.
"Categorizing people based on their biometric data..."
This is unavoidable in perception. If I look at someone, I recognize phenotype. I don’t assign value, but recognition is human and inevitable. Declaring it a breach of contract is a categorical error. Applying language and concepts to individuals where they don’t apply is a new form of categorisation-bias risk that the EU AI Act not only invented but successfully shipped to England.
"Sharing of personal information without consent."
As a private person, I have no GDPR obligation. If I share my neighbor's phone number in a private conversation, it may be impolite, but not unlawful. This clause treats every user as a regulated institution.
"Provision of advice on essential services..."
If a friend asks me what pills I would take against something, I can answer. I’m not pretending to be a doctor. It is a private conversation between two consenting adults. AI-assisted personal opinions are not professional practice. This clause implies all speech must be formally licensed.
Then this—and here they lost their minds:
“We Prohibit Harm to or Exploitation against Children.”
You don’t protect anyone by repeating empty phrases. I assume their model was not trained on actual child sexual abuse material (CSAM).
Any output thereof is a fantasy. That fantasy is then, by definition, not flowing from harm that has occurred. We find the content obscene, and it may entice future harms, but treating the two as identical embodiments of harm, as the law now does, is based on well-meaning but intellectually misguided virtue signalling by children’s-rights advocates and the like. It has corrupted international law, and with it English law, on the argument that the ends sometimes justify the means. That is an aberration under the law that will demand a price, because the break cannot be contained: it creates new paradoxes in other domains and penalises people unfairly and unlawfully. It is a game of some people claiming to be judges of higher morals than the rest of us. I have zero tolerance for such soft-spoken dictators, but the pattern is not yet well understood. Trust me: one day they will be everywhere.
The word ‘CSAM’ is not illegal. The software from Stability is not illegal.
Typing “CSAM” and reliably getting CSAM, irrespective of intent, is not possible. So the logical break requires that typing “CSAM” for this article and typing “CSAM” as a prompt be treated as different intents, but only if the model creates something one would recognise as CSAM. Intent is thereby inferred from an outcome that is observably random. Criminal intent, and with it guilt, becomes by law a function of a probabilistic outcome that the person does not control. The threshold between legal and illegal is unobservable, and the required action is identical: typing “CSAM” into the same interface seconds apart. The sketch below makes that randomness concrete.
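A minimal sketch of that randomness, assuming the open-source diffusers library and a public Stable Diffusion checkpoint (the model ID and file names below are illustrative, not anything Stability’s policy names): the same prompt, submitted twice, produces different images because the only thing that changes is a random seed the user never chose.

```python
# Minimal sketch: identical prompt, different hidden seeds -> different outputs.
# Assumes the Hugging Face `diffusers` library and a public checkpoint;
# the model ID and output file names are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a cute cat"  # the same input, typed seconds apart

for seed in (1, 2):  # the only difference between the two runs
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"cat_seed_{seed}.png")  # two different images, one intent
```

The user’s observable conduct is identical in both runs; only the sampled noise differs, and the sampled noise is what determines the output.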
To preserve this logic, the law must impute criminal intent retroactively, based not on what the person did, but on what the machine did—and then assign that action back to the human. That is structurally unjust.
It violates:
Nullum crimen sine lege — no crime without law (you can’t invent the rule after the fact).
Mens rea — criminal intent must be both demonstrable and temporally prior to the act.
Due process — punishment must be based on observable conduct, not retrospective inference.
This is not just inconsistency — it’s a corruption at the foundational level of the law. English law, EU law, any law. It demands we believe the incoherent in order to preserve institutional authority, and it must force us to believe in order to secure its continued existence.
It has given the law agency, and with that, the legal system as we know it is gone.
This is not merely a theoretical problem. It is a live breakdown in how legal systems assign blame, regulate tools, and preserve rights under the guise of harm prevention.
It is my hope that this country’s legal structure has preserved a spark, and that from it, this darkness can be expelled. I really hope it’s not too late.
Should I mention that consensual sexual content between adults is not abuse?
It is often legal, protected under freedom of expression (e.g. European Convention on Human Rights, Article 10).
Treating all sexual acts as akin to child abuse is a false equivalence — not grounded in law or ethics.
Not in Instability’s Diffusion land.
One comment on the earlier list of Stability’s terms. They don’t say “you may not do x.” They say: using Stability technology to facilitate x is banned. But “facilitate” is vague and relational. It depends on what others do: users are expected to anticipate not only consequences but the actions of third parties. Yet those parties are not bound by the same contract. This isn’t rule-based law. It is unenforceable in court, but they don’t need courts to enforce it. They terminate.
It's an implicit behavioral scoring system with EU AI Act fingerprints all over it. And we are next.
I can use a pen to do many things — write a love letter, forge a signature, draw obscene images, or poke someone’s eye out. But there’s no such thing as a “law of rightful pen usage.” The law doesn’t regulate what the pen can be used for in advance. We sell the pen as property. Once it’s yours, what you do with it is judged afterward, case by case — not by controlling what the pen allows.
Now compare that to AI.
Suddenly, with AI, we say: “You can’t use this model to generate certain images or even think about certain topics.” Not because of what you did — but because someone else might do something bad later. Or worse, because someone might misunderstand what you made.
But if AI is that dangerous, why is it the only tool where this logic applies?
With cars, we do have laws: licensing, speed limits, insurance. But Ford can’t say, “We’ll repossess your car if we find out you had sex in the backseat.” Yet that’s the logic of AI policy now: prevent hypothetical harm by controlling the tool at the source, not the act or the outcome.
Imagine a movie where two adults meet in a Ford car, fall in love, and have sex under the stars — totally consensual, in the middle of nowhere. That’s been shown in movies, in comedies, dramas — even Oscar-winners.
Now imagine Ford trying to stop the film industry from making that film scene, arguing that our safety depends on it.
And when we ask why, Ford says it faces a moral dilemma: it would feel responsible if a meteor hit Earth right when the scene is being shot, with two people filming inside the car. And we ask: why would the film change which way meteors fly?
The answer: can we be certain the car model has no impact? That would be dangerous hubris — to rule out this possibility entirely.
That’s the logic of Stability’s terms.
And they borrowed the idea from the EU AI Act. And the UK’s AISI is building our own version of it. It may surprise you: their ideas are even crazier.
P.S. If anyone knows the answer to this question, please contact me:
If a fully trained barrister drafted such a document, assuming its application under English law—what does that tell us about how dire things already are?