The Constitutional Legal Defect in the EU AI Act: Violation of Institutional Limits under Article 51(1) CFR, Article 5 TEU, and Article 52(1) CFR
EU Institutional Overreach and Rights Without Remedy
This is one of the most soul-destroying topics I have ever written about: how could the EU AI Act have been adopted when it should never have happened?
They made up something to justify taking a real fundamental right away.
The defect in the AI Act is constitutional.
It relies on institutional self-certification of rights compliance.
It bypasses independent judicial review, and in doing so, undermines the Charter and the rule of law.
It is contradictory and irrational in parts.
The AI Act’s reliance on risk classification contradicts its own premise of emergent harms. If risk can arise dynamically, then classification by sector or intent is invalid. Conversely, if classification is decisive, then emergent harms cannot justify extended authority. This produces moral asymmetry: the same violation of rights is judged differently depending on preassigned category, rendering the entire enforcement regime structurally arbitrary.
The inclusion of Bayesian estimation as an AI technique renders the entire historical domain of inference subject to regulation. This leads to the implication that artificial intelligence predates the computer — with 18th-century mathematical reasoning falling within the scope of modern compliance law.
By defining AI systems according to the presence of mathematical methods such as Bayesian estimation, the AI Act erases the distinction between executable systems and abstract knowledge. This leads to the absurdity that a book, lecture, or human reasoning process employing these methods could fall within the scope of AI regulation.
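To make the point concrete: Bayesian estimation, as named in Annex I, is arithmetic that Laplace could, and did, perform by hand. A minimal sketch of one posterior update (the probabilities are illustrative, not taken from any real dataset):

```python
# Bayes' theorem (published 1763): P(H | E) = P(E | H) * P(H) / P(E)
# Illustrative numbers: a hypothesis with a 1% prior, observed evidence
# with 99% likelihood under the hypothesis and a 5% false-positive rate.
def posterior(prior, likelihood, false_positive_rate):
    """Probability of the hypothesis after one positive observation."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

p = posterior(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(p, 3))  # 0.167, a few lines of pencil-and-paper arithmetic
```

If this qualifies as an "AI technique", then so does every statistics textbook that teaches it.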
By defining AI systems to include ordinary statistical tools such as Microsoft Word’s predictive features, the AI Act inadvertently subjects systems used by minors to obligations intended for high-risk platforms. This creates a structural contradiction: either the Act ignores vulnerable groups, or it makes mass-market software non-compliant by default. Both outcomes render the regulation unjust and unworkable.
Microsoft Word uses:
Statistical prediction,
Suggestion of sentence structure,
Grammar correction — all AI under Annex I.
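None of these features require anything beyond counting word frequencies. A hypothetical sketch of this kind of statistical next-word prediction, a plain bigram frequency table, and emphatically not Microsoft's actual implementation:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word most often follows each word in the text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Suggest the most frequent successor, or None if the word is unseen."""
    candidates = model.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("the act regulates the market and the act restricts rights")
print(predict_next(model, "the"))  # "act" follows "the" twice, "market" once
```

Under a definition that sweeps in "statistical prediction", a word-count table like this one falls within scope.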
By defining systems as high-risk if they influence or are accessible to children, the AI Act effectively elevates Microsoft Word, and every other public-facing system, to that status. Since children interact with most digital interfaces, and models may be shaped by their data, no system can reliably claim low-risk status. This renders the classification framework structurally void and the compliance landscape universal, recursive, and unbounded.
It is arbitrary in its implementation.
It serves no actual purpose other than as a mechanism to give the EU Commission powers it is not allowed to have.
And how can we make this whole again? Can this be fixed? Is there anything that could be done to allow us to trust the EU again? I don't know, and it terrifies me to think about the consequences if there is not.
The Act violates the EU Treaties and the German Basic Law in key aspects. Whether that renders only parts of the Act unlawful or all of it, I do not know. To me it invalidates the whole, simply because Parliament adopted the Act as a whole, and out of respect one cannot assume MEPs would be happy to live with only some, but not all, of what they voted on.
Regardless, it is politically a scandal. A law that claims to protect rights, and repeats that claim to the point it becomes almost vulgar, while in fact disabling rights that are not the EU's to give, and hence not the EU's to take away, is an unforgivable betrayal.
The European Union has long employed internal market regulations as instruments of harmonization, from the famous phase-out of incandescent light bulbs to CE conformity requirements. These actions fall within its competence under Article 114 TFEU: to ensure the functioning of the internal market by removing regulatory fragmentation.
However, the AI Act marks a categorical departure. It shifts from regulating technical interoperability to imposing ideological compliance under the banner of "fundamental rights" protection—a domain explicitly excluded from autonomous Union competence under Article 6(1) TEU and Article 51(2) of the Charter of Fundamental Rights of the EU.
II. The Pretense of Protection
The AI Act repeatedly invokes the phrase "in order to ensure fundamental rights" — not once or twice, but dozens of times. Yet under EU law, the Charter is binding only as a limit, not as a legislative mandate. It does not authorize the Union to legislate for rights protection independently. The Commission's claim that it needs this Act to secure democracy or prevent discrimination thus exceeds its Treaty-based powers. This rhetorical overreach constitutes a breach of the principle of conferral (Art. 5(2) TEU).
Sectors Already Regulated: No Protection Added
In all truly high-risk sectors—banking, aviation, energy, chemicals, pharmaceuticals—AI deployment is already subject to stringent sector-specific regulations. For example:
Banking: MaRisk, BAIT, EBA Guidelines, Basel III (Pillar 2: risk governance)
Aviation: EASA, ICAO compliance
Pharmaceuticals: GxP, EMA standards
The Commission agrees with that:
“[..] the Union’s financial services legislation where AI systems are to some extent implicitly regulated in relation to the internal governance system of credit institutions.”
or this:
“As regards high-risk AI systems related to products covered by relevant Old Approach legislation (e.g. aviation, cars), this proposal would not directly apply.
However, the ex-ante essential requirements for high-risk AI systems set out in this proposal will have to be taken into account when adopting relevant implementing or delegated legislation under those acts.”
So what kind of protection are we getting? An AI calculating biased credit scores? Already covered. AI replacing pilots? Already covered. What is not covered is the private individual creating some art or trying to learn something, and that is treated as a systemic risk justifying a surveillance and coercion regime called the EU AI Act.
These frameworks provide precise, enforceable oversight of algorithmic risk. The AI Act adds nothing to them. It does not enhance protection but attempts to politically reinscribe it under general-purpose language. This duplicative reach is not benign: it represents a claim to supervisory authority over systems already accounted for by more competent regulators.
The Commission does not enforce the AI Act through criminal prosecution or judicial process. It uses market exclusion. If a provider refuses to comply—even with undefined or unfulfillable duties (e.g. "demonstrate that your model protects fundamental rights")—it may be banned from offering services in the EU (Art. 73, 76).
This converts economic regulation into a coercive instrument: market access becomes contingent not on legality or safety, but on rhetorical compliance. The enforcement mechanism is not rights-based but revenue-based. If an AI provider lacks substantial market share, enforcement pressure is absent. If it holds economic relevance, oversight is triggered.
Thus, protection is not a function of need. It is a function of who is a valuable customer. That is not justice. It is structural discrimination embedded in regulatory design.
Were such conduct undertaken by a German federal authority, I would think, as a layman without any formal legal qualifications, that it could meet multiple thresholds of criminal concern:
§132 StGB (Amtsanmaßung): exercising powers not conferred
§240 StGB (Nötigung): using market threat to coerce compliance with ambiguous duties
§129 StGB (Kriminelle Vereinigung): structured intent to bypass constitutional restraints
But that is not for me to decide, and neither for the Commission; it is for our judicial system to determine. I am, however, allowed to state an opinion. That is what freedom of expression protects.
The Commission says the legal basis for the Act is this:
“The legal basis for the proposal is in the first place Article 114 of the Treaty on the Functioning of the European Union (TFEU), which provides for the adoption of measures to ensure the establishment and functioning of the internal market.”
Agreed.
But then it says something unexpected:
“This proposal imposes some restrictions on the freedom to conduct business (Article 16) and the freedom of art and science (Article 13) to ensure compliance with overriding reasons of public interest such as health, safety, consumer protection and the protection of other fundamental rights (‘responsible innovation’) when high-risk AI technology is developed and used. Those restrictions are proportionate and limited to the minimum necessary to prevent and mitigate serious safety risks and likely infringements of fundamental rights.”
Both statements are made in the same document by the EU Commission, and both cannot be true, because:
Article 52(1) Charter: Restrictions on Rights Must Be Justified Externally
"Any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law, respect the essence of those rights and freedoms and, subject to the principle of proportionality, may be made only if they are necessary and genuinely meet objectives of general interest..."
The EU cannot lawfully restrict my freedom of expression if the only reason is that "this EU AI law says so."
A legal basis alone is not sufficient under Article 52(1).
Any such interference must be objectively justified, independently assessed, and proportionate — and that judgment belongs to courts, not to the EU Commission.
Any EU law that restricts expression without an external, proportionate, court-testable justification is invalid.
And this is exactly the situation we are in.
CJEU Case Law Confirms This Rule
In Schrems I & II, Digital Rights Ireland, and Tele2 Sverige, the CJEU ruled:
A law cannot justify rights violations simply because it is law.
Rights under the Charter have “primary status” and can invalidate entire EU regulations if they breach them without proper justification.
Even serious public goals (like fighting terrorism or crime) cannot override fundamental rights without strict tests.
In all of these cases, the Court struck down EU law because the justification was insufficient, and the interference too broad or disproportionate.
Under Article 6 TEU and Article 51(2) of the Charter:
The Charter does not extend the scope of EU law and does not give any new power to EU institutions.
That means:
The Commission has no independent authority to determine when my expression can be restricted.
Any attempt to say, "this law limits expression because we believe the trade-off is fair" is ultra vires — outside its legal powers.
Only courts — especially national constitutional courts or the CJEU — can assess whether a limitation meets legal standards.
These are undeniable facts.
The European Commission cannot expand its competences or discretion through rights-balancing. Its power is limited to what is conferred by the Treaties. The AI Act’s restrictions on freedom of expression (Art. 11 CFR) and other rights were decided through a legislative process without an independent rights adjudication mechanism built-in, which exceeds institutional authority.
Article 52(1) allows restrictions on fundamental rights only if:
They are clearly provided for by law.
They respect the essence of the right.
They are strictly necessary and genuinely proportionate to a legitimate aim.
This must be subject to effective judicial control.
The AI Act attempts to pre-emptively justify interference (e.g., with freedom of expression in risk classifications) through political reasoning by the Commission. But this is not equivalent to judicial review. It leaves affected individuals without an automatic path for rights enforcement.
While the AI Act includes Fundamental Rights Impact Assessments (FRIA) for high-risk systems, this:
Is left to deployers, not independent regulators.
Provides no ex ante judicial oversight.
Cannot substitute the role of courts in weighing rights and proportionality.
This is especially problematic in domains like automated content moderation, where freedom of expression is at stake and where errors or overreach directly harm individuals’ Charter rights.
Evidence: EU laws must be "fundamental-rights proof"
Since the Charter of Fundamental Rights became legally binding after the Lisbon Treaty, all EU legislative proposals must undergo a fundamental rights impact assessment. The Commission introduced a checklist to ensure that no law it proposes violates the Charter (Butler, 2012).
Commission powers are limited by the Treaties and Charter
Article 51(1) of the Charter makes it clear that EU institutions, including the Commission, must respect fundamental rights within their areas of competence. They cannot go beyond the powers conferred on them by the Treaties (Stasi, 2013).
Fundamental rights cannot extend EU powers
Article 51(2) of the Charter states that it cannot be used to expand the EU's scope or powers. Courts have upheld this principle, especially in the Dereci case, where the EU Court of Justice refused to let fundamental rights override limits of EU powers (Snell, 2018).
The original Treaty of Rome lacked strong rights protections
The Rome Treaty focused on economic integration and did not explicitly protect fundamental rights. However, the growing role of the EU in citizens' lives made this omission "inacceptable", prompting the development of rights protections through case law and later treaties (Hrestic, 2014).
Checks exist on balancing conflicting rights
When trade-offs are needed between fundamental rights (e.g., economic freedom vs. privacy), courts—not the Commission alone—ultimately decide on the proportionality and legality of such trade-offs (Butler, 2012; Snell, 2018).
The Bundesrepublik Deutschland was founded not just as a new state, but as a constitutional antidote to tyranny. Every core element of the German Basic Law (Grundgesetz) — dignity, democracy, rule of law, rights with teeth — was designed to prevent precisely this kind of institutional abuse.
And I cannot understand how the EU could have missed this.
How can this be? Who wants to argue that what we have here is not a violation of the constitutional order?
The German Federal Constitutional Court (Bundesverfassungsgericht) has never accepted that EU law is absolutely supreme over the German Constitution.
I feel deeply grateful for these decisions.
It has held, in landmark decisions (like Solange I & II, Maastricht, Lisbon, PSPP), that EU law is only supreme if it respects the core constitutional identity of Germany — especially:
Human dignity (Art. 1 GG)
Democracy (Art. 20 GG)
Rule of law (Rechtsstaat)
Right to resistance (Art. 20(4) GG)
If EU law fundamentally violates the Basic Law, Germany is not obligated to obey it. That is a settled principle of German constitutional jurisprudence.
If the EU institutions now act as a coalition to disable constitutional rights protected by the German Basic Law — rights protected, not granted — then resistance becomes not only lawful, but necessary.
It is a responsibility rooted in my home country’s history to take this very seriously. There is no room for compromise.
This is not a threat. It is a plea for understanding: this so-called EU AI Act constitutes an act of aggression, as it destroys the very reason the German State exists. In doing so, it also destroys the idea of Europe — because it destroys us.
Germany cannot continue to exist if it has not learned from what it once did.
Fundamental rights are not fulfilled when they are protected only “on paper” — they must be lived, enforced, and given real effect (zur Geltung verschaffen). This is the Verfassungswirklichkeit — the lived reality of our Constitution. And when law becomes a hollow shell used to justify coercion, conceal abuse, or disable remedy, the legitimacy of the entire system collapses.
Many people will have to answer for this:
The legal service of the EU did not flag any concerns, I presume. They considered only what the law says, not what it does. How can this be?
We need to know what happened. We need to hear from people to explain themselves.
We have no EU criminal law — so when EU institutions engage in coercive regimes that would be criminal if done nationally, they place themselves above accountability.
This is an example of such a structurally dangerous situation.
German Grundgesetz (Basic Law) Jurisprudence
Under German constitutional law, the state cannot:
Violate fundamental rights indirectly via third parties.
Create structural coercion (strukturelle Nötigung) through legal rules that deprive individuals of the ability to refuse compliance with unconstitutional laws.
If the state forces companies to suppress expression (or process illegal surveillance), it is not lawful delegation, but state-imposed coercion in violation of the rule of law (Rechtsstaat principle).
That is what the act provides:
EU Commission enacts AI Act, forcing providers (e.g., platforms, tech companies) to use or avoid certain types of AI.
Company implements the law, restricting freedom of expression, as confirmed by the Commission.
I suffer harm, but I can't sue:
Not the company (they’re "just following the law").
Not the Commission (I can't meet the standing threshold, because indirect impact gives me no standing at the European Court of Justice).
This is what legal scholars call a "structural derouting of responsibility."
EU Law and Article 47 CFR – Right to Remedy
If a regulatory regime prevents people from:
Understanding what rights are infringed,
Knowing who is responsible,
Or accessing a court,
then it violates Article 47 of the Charter (effective remedy and fair trial). That makes the EU AI Act regime constitutionally defective.
Criminal Analogy: Command Responsibility & Complicity
In international law and human rights law, state actors can be responsible for rights violations committed by private actors, when:
Those actors were compelled or incentivized to act,
There was no effective way to refuse,
And the state designed the structure knowingly.
By that logic, if the Commission knowingly created a system where companies are forced to violate freedom of expression under penalty of non-compliance, that could meet the threshold of institutional coercion or state abuse.
And the Commission admits to these facts in writing on their website.
The AI Act does not regulate AI.
It regulates access, speech, and design authority under the guise of risk. It invokes rights it has no mandate to enforce, imposes duties that cannot be fulfilled, and punishes noncompliance through economic exclusion. It is not protective. It is coercive. And it is structurally illegitimate under both EU and constitutional principles.
The Commission cites the need for
protection of other fundamental rights (‘responsible innovation’)
to justify restricting, among others, the freedom of art and science (Article 13).
They invented a vague concept (“responsible innovation”) to justify restricting concrete, guaranteed rights like freedom of expression, science, and business — rights that the EU institutions are not even empowered to redefine.
This is legally indefensible and a political scandal. “Responsible innovation” is not a legal term in the Treaties, the Charter, or CJEU jurisprudence. The rights they restrict are binding and enumerated in the Charter — they are not optional or negotiable. By appealing to this fabricated concept, the Commission masks coercive power as ethical governance — but in doing so, it nullifies constitutional safeguards.
This cannot stand.