The Misuse of Institutional Power and Public Trust in the EU's Approach to AI (PART 2)
From the Joint Research Centre and the AI Office to the AI Act itself, spanning analysis, procurement, and policy-making, the EU's conduct in the field of AI would be considered fraudulent in any other area
Regulating a machine as though it were a person is unlawful under any serious legal system based on the rule of law. It’s unfortunate that we live in times when that is no longer understood by lawyers. That’s no excuse for anyone. This must be obvious to any citizen.
Die Würde des Menschen ist unantastbar. Sie zu achten und zu schützen ist Verpflichtung aller staatlichen Gewalt.
Human dignity shall be inviolable. To respect and protect it shall be the duty of all state authority.
The constitutional order in Germany is derived from this principle. It is the foundation, the beginning, and the end of the German state. All the state's resources and capabilities stand at the disposal of this principle, whenever needed, to make it effective, always.
The validity of this principle will be tested. If the EU AI Act enters into application, Germany will have failed the test. That will not be without consequences. It is the event horizon of the post-war constitutional order as we know it. There is no coming back if the Federal Republic of Germany and all its institutions fail—even for a moment—to do what is reasonably required so that human dignity is guaranteed forever and at all times. A moment of structural failure is a fatal error within these arrangements.
This principle, enshrined in the German Basic Law, depends on one thing: that human beings—citizens—live up to their human potential, that we reason, and that we can tell the difference between what is and what is not, and act accordingly. The Basic Law offers protection for fundamental rights, but it cannot protect us from ourselves.
The people are the last line of defense.
If we arrive at this moment, then:
Our government will have failed.
Our legislative bodies will have failed—at both federal and state level.
Our judicial system will have failed us.
All professional bodies, the press, science—everyone and everything in our civil society will have failed to live up to their responsibility if the EU AI Act becomes applicable.
If the citizens of Germany can no longer identify how the EU AI Act violates the Basic Law, or if we have lost the ability to act as citizens, then the Basic Law is ineffective and the promise is broken.
The eternity clause (Art. 79(3)) of the German Basic Law is both its strength and a structural vulnerability.
It protects but only under one condition:
That those interpreting it are still guided by the spirit of its inception.
But if that interpretation is handed over to committees, or diluted through delegated EU regulation, then the very thing meant to be eternal can be nullified without ever being touched.
The danger in front of us is the bureaucratic erosion of meaning:
dignity becomes “risk”;
freedom becomes “output moderation”;
law becomes “market harmonization.”
If we fail the test of recognising this, the Basic Law will no longer be practiced.
I hope every fellow German citizen will realize in time why what I say is so important—and act. If nobody steps forward and lends their voice in defense of the German Basic Law, and thereby of human dignity, which is at stake here, then no words in the constitution can compensate for an absent citizenship.
This was of course known and raised when the constitution was written. And it must be accepted as such, because it follows from everything I have written so far.
In that case, these blogs serve one last purpose: nobody will have to believe that any of the following was true.
“Ich konnte es doch nicht ahnen. Wir haben es nicht gewusst.” (“I could not have foreseen it. We did not know.”)
“I didn’t know what it meant.”
“I thought it was just about technology.”
Nothing anyone can do now is too small, as long as you do what you reasonably can. But whatever you do only matters if you do it now. Tomorrow it might be too late.
Fundamental Rights are Toys
This is from the EU Parliament’s AI landing page.
“AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys […]”
Does this look right to you? A violated human right or a broken toy: it is all the same. And how will they address a violation of fundamental rights as opposed to a faulty toy? In the same way. This in itself should make it obvious to anyone involved at the EU Parliament how structurally undemocratic this EU AI Act is.
To place human dignity, expression, autonomy, equality — the core of democratic rights — in the same risk basket as malfunctioning toys or appliances is:
A conceptual collapse — confusing rights with risks.
A legal demotion — reducing inalienable rights to safety parameters.
A governance failure — treating violations of fundamental rights (Grundrechte) as if they were correctable with CE markings or recalls.
This violates the structure of constitutional democracies
In Germany, for example:
Human dignity (Art. 1 GG) is above all public authority.
Product safety is a regulatory matter — completely subordinate.
Placing them side by side, under the same risk framework, is a structural error that:
Obscures legal hierarchies
Undermines the special status of rights
Prevents meaningful enforcement, because it folds fundamental rights into a technocratic matrix
And most dangerous of all:
It delegates the interpretation of dignity to regulatory categories — without oversight, recourse, or public deliberation.
That is not democracy. It is a perversion of the law and offensive to all reason. Yet, the European Parliament uses it as promotional content without realising this. We are staring into the abyss.
The Parliament has set up a working group to oversee the implementation and enforcement of the AI Act. That’s what we are being told. What does that look like? Empty seats and some guy talking to himself and parroting what some bureaucrat from the Commission told him. And this concludes the process. The “working group” is functionally decorative, not protective.
It is a perversion of the law and offensive to all reason.
The EU AI office organisational set-up is as follows:
The “Excellence in AI and Robotics” unit
The “Regulation and Compliance” unit
The “AI Safety” unit
The “AI Innovation and Policy Coordination” unit
The “AI for Societal Good” unit
The Lead Scientific Advisor
Are these people spiritual beings of a higher sphere of existence and comprehension? The structure borrows language from ethics, safety, excellence, and societal good — but without method, transparency, or any structure of appeal.
What scientific advice can reasonably be expected on unscientific categories such as societal good or excellence in robotics?
The Commission describes the European Approach to AI:
“The European AI Strategy aims at making the EU a world-class hub for AI and ensuring that AI is human-centric and trustworthy. Such an objective translates into the European approach to excellence and trust through concrete rules and actions.”
This statement is offensive to reason and, by extension, offensive to anyone claiming to respect human dignity. You cannot make technology “trustworthy.”
You can make it reliable, auditable, predictable, or safe within defined limits. But “trust” is a human judgment, not a property of a machine.
To declare that a technology is trustworthy in itself is offensive to reason and conceptually flawed.
Because this is so obvious and yet not understood by those in charge, it has already proven its destructive power: it disables serious thinking at the threshold by replacing method with illusion, lawmaking with illusion, and fundamental rights with toys.
The Commission is looking to appoint a panel of scientific advisors. It is doing so in clear violation of the spirit and the letter of the law. I have written about this before.
But let’s follow the law and see what it says. And then ask yourself: is what you have just read possible in actual law or only to be expected in satire or comedy?
Alerts of systemic risks by the scientific panel
“Art 90: The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that […]” (let us say, serious concerns meeting some absurd definition; fine, let us accept that). “Furthermore, a Board composed of representatives of the Member States, a scientific panel to integrate the scientific community and an advisory forum to contribute stakeholder input to the implementation of this Regulation, at Union and national level, should be established.”
Which Board?
“Recital 20: The European Artificial Intelligence Board (the ‘Board’) should support the Commission, to promote AI literacy tools, public awareness and understanding.”
They’re doing ‘Sesame Street for AI.’ Moreover, this is not a Board that was established by the EU AI Act. It is not a formal authority of any kind that would allow us to consider this a governance process with actual control or oversight of AI-related activities.
“Recital 140: Furthermore, a Board composed of representatives of the Member States, a scientific panel to integrate the scientific community, and an advisory forum to contribute stakeholder input to the implementation of this Regulation, at Union and national level, should be established.”
So now there's another Board — this time including people from the community. But you can’t even be certain to whom a task or competence is directed when this abomination of a law refers to “the Board.” This unspecified structure alone makes it unlawful.
Worse, how could external scientific advisers be involved in decisions the EU makes under the law? It is not for the EU to outsource its powers to unaccountable conference votes.
Many believe that as long as the process was democratic, the outcome must be valid. But if the process itself is corrupted—by vague language, undefined terms, or unaccountable structures—then the result is not only legally fragile, but potentially unconstitutional.
This is especially true in the context of Germany’s Grundgesetz. The moment we are no longer certain whether a regulation aligns with the Basic Law, that uncertainty itself constitutes a breach. The Grundgesetz demands clarity in the protection of dignity and rights. Doubt is not neutral; it is a violation.
This must never be forgotten. It is a direct consequence of the Ewigkeitsklausel (eternity clause): what is unalterable must also be unambiguous. Once we permit uncertainty to enter into the foundations of human dignity and the rule of law, we have already crossed a constitutional line.
“Article 92 Power to conduct evaluations. The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the scientific panel established pursuant to Article 68. [..]”
We are thus in a situation where a systemic risk has been identified and verified — a risk that, by definition, is serious and impactful.
And yet, the EU’s institutional response is: a professor of some kind is asked to run an Excel sheet or something.
Law enforcement, courtesy of the EU AI Act, has become MacGyver à la EU with a PhD. The Act goes on:
“The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned.”
How can this be? This is a legal contradiction: how can an entity that has no legal personality still be subject to legal obligations and represented “by law or by their statutes”?
If there is no legal personality, then by definition there is no entity that can hold rights or obligations in the eyes of the law.
Statutes and representatives only apply to entities with recognized legal standing.
Therefore, assigning responsibility in this way undermines basic legal logic and opens the door to arbitrary enforcement.
This clause effectively creates obligations without a clearly defined legal subject—a structural flaw that violates fundamental principles of legal certainty, accountability, and proportionality.
Either there is a legally recognized entity, with natural persons acting on its behalf, or there are individuals who personally bear obligations. But it is legally incoherent to assert that natural persons owe obligations to—or act on behalf of—a non-existent legal entity. That cannot be. It is a contradiction in terms and renders the provision unenforceable by design.
“Article 93 Power to request measures [..] (c) restrict the making available on the market, withdraw or recall the model.”
and
“Article 94 Procedural rights of economic operators of the general-purpose AI model Article 18 of Regulation (EU) 2019/1020 shall apply mutatis mutandis to the providers of the general-purpose AI model”
And now we have a complete absurdity: a contradiction built on a contradiction, with irrational illusions used to circumvent actual fundamental rights protection.
Legally, AI is treated as a product—a "thing" governed under product safety regimes (like CE marking, ISO 51, etc.). The safety standard the Commission has chosen—ISO 51—cannot be used to test software safety because this standard concerns only one question: should we label a safety helmet a "safety helmet" or a "protective helmet," considering that some jurisdictions may create product liability if a consumer buys a helmet thinking they purchased "safety" instead of merely "protection"?
As a risk standard, it only addresses this semantic issue. It is not about risk mitigation as such; it is about preventing risk from misperception in the labeling of products specifically marketed for safety purposes.
This means that even if the entire EU AI Act were implemented, the Commission would either be enforcing requirements that are largely irrelevant under this standard (unless someone markets a “Safe AI”, i.e. an AI that claims to provide safety), or the regulation becomes arbitrary and coercive, because the Commission demands what the standard does not require.
Narratively and politically, AI is portrayed as semi-person or quasi-agentic—capable of autonomous decision-making and thus posing moral, societal, or existential risks.
The regulatory tool treats it as a thing.
Products don't have intent. Persons have rights. "Semi-persons"—a category invented for convenience—have neither clear liability nor clear protection.
The EU regulates AI the same way it regulates the import of lightbulbs. That is why you cannot read the EU AI Act and understand what it says simply by knowing what the words mean. The EU has fictionalised itself into power by using dramatic language that recolours competencies and procedures which, in substance, describe whether to give a Chinese manufacturer a CE certificate for its product or to recall that product.
This approach would therefore be inadequate if AI really posed the systemic risks the EU alleges. But even if it did, the Commission is not competent under the Treaty to do anything other than what it does for lightbulbs.
But it disables the rule of law by restating the law in words that suggest the existence of laws and legal powers that do not exist.
And the judicial system has a structural fragility: its tools cannot detect this problem. In fact, the judicial system suppresses the ability to detect it. Legal exegesis, which aims to clarify the meaning of legal texts, requires that we interpret law by what the words mean, and the law assumes that it is written in line with this principle. But the EU AI Act is not. It has done something that legal science has not anticipated. In other words: there are structural preconditions that make legal interpretation possible, and these preconditions are often not codified, which is an error of judgment in itself.
A product cannot simultaneously be treated as a legal "thing" and held responsible as if it were a legal "subject."
But this fact is not written in the law as such. It is a consequence of logic that if the law says we have things and we have people, that means that things and people are not the same. The EU AI Act has exploited this obvious but uncodified condition: a lightbulb and human rights are not the same.
The regulation therefore operates on a semantic fiction: it uses the legal apparatus for "things" to control "agents" while using the public narrative of "agents" to justify the overreach into "things."
This mismatch—between the legal category and the described behavior—is not a minor error. It dissolves the boundary between object and actor, enabling both control without accountability and risk claims without definitional discipline.
It is the starkest violation of the German Basic Law I could ever have imagined. And it flies under the radar, bypassing all controls.
We are in crisis mode!