The Misuse of Institutional Power and Public Trust in the EU's Approach to AI (PART 1)
From the Joint Research Center and the AI Office to the AI Act, spanning analysis, procurement and policy making, the EU's conduct in the field of AI would be seen as fraudulent in any other area
This article is the first in a series providing factual evidence about the actual implementation of EU policy concerning Artificial Intelligence.
I invite the reader to think about the implications, which reach far beyond AI.
The EU is a construct that arose out of the experience that unbounded relativism about what is true and what is false turns into indifference about what is right and what is wrong. That dangerous and false equivalence led to the rise of certain ideologies and to the collapse of human reasoning and of any sense of ethics.
The EU institutional framework itself appears to illustrate why a constant reminder is needed, so that we do not forget one of the key lessons from the atrocities of Nazi Germany and the Holocaust.
What is unfolding in the domain of AI regulation is a collapse in reasoning, and in the boundaries between fact and assumption, true and false, lawful and unlawful. This makes it impossible to ascertain whether the rules of society respect human dignity.
There is no single example that, on its own, would concern me so much. But the same indifference to whether something is true or false, evidenced or assumed, can be observed across the board, not only in AI, although it features prominently there. It has infected the decision-making of this institution in all its functions and all its procedures. And the institution is unable to self-diagnose and self-correct, because it shows no sign of recognising the problem.
And it is for this reason that everyone, subject to their own conscience, must determine what is required in such a situation.
I share, to the best of my abilities, the evidence and my reasoning, not to convince you that what I say is true, but to appeal to your conscience, so that we do not lose our ability to decide what is right and what is wrong.
And so we can and must act accordingly. Always.
Background to EU AI policy
The EU Commission set up something called the European AI Office:
“as the centre of AI expertise and [it] forms the foundation for a single European AI governance system. [..] It enforces the rules for general-purpose AI models. [..] For well-informed decision-making, the AI Office [gathers] knowledge from the scientific community, industry, think tanks, civil society, and the open-source ecosystem [and] fosters a thorough understanding of potential benefits and risks.”
That is how it portrays itself and how it justifies the resource commitment:
“The AI Office will employ over 140 staff, including Technology Specialists, Administrative Assistants, Lawyers, Policy Specialists, and Economists [..]”
The reality of what can be observed makes these claims doubtful.
The EU Commission has launched a call for interest to establish a Scientific Panel of independent experts in the field of artificial intelligence.
It is doubtful that the final panel’s composition and work will meet the standards typically expected of a scientific body.
The eligibility criteria require candidates to hold:
“A PhD in a relevant area OR equivalent experience.”
This formulation is structurally flawed. A PhD is not an experience. It is a formalized certification of methodological competence within a symbolic and institutional structure. It confers legitimacy not through subjective knowledge, but through adherence to a codified research protocol, validated by peer review and institutional oversight.
Importantly, we must speak of sciences—in the plural. The authority of a truth claim in one domain does not automatically transfer to another. Scientific disciplines differ not only in method but in the kind of knowledge they produce and the standards by which it is validated.
Someone with a PhD does not necessarily possess tacit knowledge of a specific domain. Rather, they are presumed to have demonstrated domain-specific methodological competence, on the assumption that their field belongs to a recognized family within the sciences—that is, the ability to formulate and pursue research questions using accepted scientific methods within a disciplinary framework (e.g., legal theory, computational linguistics). This does not, however, qualify them to act professionally within applied domains such as legal practice or judicial reasoning, which require additional, staged acquisition of tacit knowledge and responsibility.
This distinction is critical. Epistemic protocols do not automatically transfer across domains. A person cannot invoke the Bernoulli theorem to avoid a prison sentence. Doing so would not constitute the cross-application of scientific method—it would be a category error, misapplying physical law in a domain governed by normative reasoning.
Yet the Commission provides no definition of what constitutes “equivalent experience.” There is no proposed standard for validating whether a given experiential background fulfills the epistemic function that a PhD is designed to certify.
The result is a non-falsifiable, structurally ambiguous criterion that invites discretion without accountability. While the possession of a PhD is a binary and externally verifiable claim, “equivalent experience” remains undefined. In a field as emergent as artificial intelligence, a PhD may not even indicate relevant methodological competence. But even if it does, two individuals—one with a PhD in legal studies and AI experience, the other with a PhD in history and AI experience—do not possess equivalent standing. Their experiences may relate to the same object (AI), but the methods and epistemic frames through which they approach it are categorically distinct.
This conflation undermines the scientific integrity of the panel’s composition. It opens the door for symbolic inclusion without epistemic foundation. This is a structural error—not merely a linguistic one. But it is more than that:
Article 68(2)(a) requires:
“The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific or technical expertise in the field of AI necessary for the tasks [..] particular expertise and competence and scientific or technical expertise in the field of AI”
This phrasing recognizes two distinct paths to panel inclusion:
Scientific expertise: typically evidenced by formal academic work, such as a PhD in a relevant field, publications, peer-reviewed research.
Technical expertise: typically evidenced by applied contributions—e.g., systems architecture, large-scale deployment, engineering benchmarks, security implementation, etc.
This requires each to meet its own epistemic standards:
A PhD is valid only within the symbolic logic of a certain field in science—it demonstrates methodological competence under peer-audited epistemic norms.
Technical expertise is validated by functionality, reproducibility, and domain impact, not symbolic credential.
What the Commission did instead:
It allows "equivalent experience" to be judged by an undefined, hybrid standard.
It treats a non-PhD with ambiguous experience as if it could substitute for either scientific or technical grounding—without evidentiary parity.
It does not distinguish between experiential relevance and methodological validation.
This blurs the distinction the law explicitly preserves.
The call for expertise introduces new selection criteria that defy the explicit intent and the structural requirements defined under the law.
The Commission did not inform applicants of the actual legal standard. By communicating the wrong eligibility criteria, the Commission has undermined the legality of the selection process—even if some candidates would have qualified under the correct standard.
Procedural inequality: Not all candidates were evaluated against the correct, legally binding criteria. That makes the entire process invalid under principles of administrative law and fair competition.
Under the applicable law, applicants are entitled to rely on the official terms of a public call. If those terms deviate from the governing law, the process is legally challengeable. Even if the Commission later attempts to “interpret” the law correctly, the damage is done: Applicants were misadvised at the point of decision.
This is not a minor clerical error. It is a procedural defect that invalidates the process of selection.
But there is more.
The regulation defines independence in relation to providers of AI systems or general-purpose AI models (GPAIs)—which implies entities with ownership, control, or commercial interest in their development or deployment.
Legal Definition (AI Act, Article 68(2)(c)):
“independence from any provider of AI systems or general-purpose AI models”
This refers to entities (companies, consortia, public-private partnerships),
Not individuals with knowledge, nor researchers engaged in the domain without financial or institutional alignment.
What the Commission has done is collapse two distinct concepts:
Conflict of interest: a structural relationship that could bias judgment.
Proximity to the domain: e.g., having a PhD, working in AI research, or contributing to open-source tooling.
Under the Commission's implied interpretation:
If you are involved in the field of AI, you are no longer “independent.”
But that is structurally incoherent. It implies:
Every domain expert is compromised by definition.
Only non-experts can be considered neutral.
This would lead to the absurd result that:
A lawyer specializing in AI law is disqualified from advising on AI law.
A physicist working on open research in quantum computing is “not independent” of computing.
The Commission is conflating independence from corporate providers with intellectual engagement in the field itself. It has defined selection criteria that are structurally incompatible—demanding both domain-specific expertise and independence from that very domain. Yet it insists that candidates demonstrate compliance with both conditions simultaneously.
Gender Options
Female
Male
Other
Prefer not to disclose
It struck me as odd to see the options in this order. A template listing male, female, other and so on would say nothing about gender issues. Listing them this way around signals that gender issues are being over-emphasised, especially since this order happens to be alphabetical while the multiple-choice answers to other questions are not.
The Law (AI Act, Article 68(2)):
Experts must be selected on the basis of scientific or technical expertise, and must meet all of the following:
(a) Particular competence in AI;
(b) Independence from providers;
(c) Capacity for diligent, objective work.
The Commission shall ensure fair gender and geographical representation — after selecting on the basis of expertise and independence.
The Implementation Document (Group Specification):
“The Commission shall aim to ensure to the extent possible gender balance in the selection of experts.”
This flips the order:
Selection is framed through gender balancing, not scientific merit.
The language "shall aim to ensure" now appears in the context of selection itself—not as a constraint after eligibility is determined.
Since the form allows designation as “Female,” “Male,” “Other,” or “Prefer not to disclose,” the system enforces a metric that cannot be verified. “Other” would count as a gender, yet the declaration cannot be falsified. I could apply declaring my gender as “Other,” thereby claiming, on the basis of form input alone, to belong to a presumably underrepresented category. If that declaration then results in favorable bias under “gender balance” logic, the process becomes gameable by design (see the sketch after the list below).
A metric meant to ensure equity is rendered manipulable precisely because the Commission conflates “balance” (which requires stable categories) with “representation” (which accommodates fluidity).
That means:
The policy is self-contradictory: it imposes “balance” without definable inputs.
It is internally unverifiable: decisions based on “gender balance” are not falsifiable.
It is vulnerable to strategic manipulation, and therefore cannot serve its stated purpose.
And it contradicts the intention and the letter of the law.
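To make the gameability argument concrete, here is a minimal sketch, assuming a purely hypothetical scoring rule in which a self-declared category feeds a “balance” bonus. The field names, weights, and the bonus rule are my own illustrative assumptions and do not describe the Commission's actual evaluation procedure; the sketch only demonstrates the structural point that wherever an unverifiable self-declaration influences ranking, changing the declaration alone changes the outcome.

```python
# Illustrative sketch only: a hypothetical selection score in which a
# self-declared, unverifiable category earns a "balance" bonus. The rule,
# the weights, and the field names are assumptions for demonstration; they
# do not describe the Commission's actual evaluation procedure.

from dataclasses import dataclass


@dataclass
class Application:
    name: str
    expertise_score: float   # merit component, externally assessable
    declared_gender: str     # self-declared, cannot be verified or falsified


def balance_bonus(declared_gender: str, panel_counts: dict) -> float:
    """Hypothetical 'balance' rule: favour the least represented declared category."""
    count = panel_counts.get(declared_gender, 0)
    least_represented = min(panel_counts.values(), default=0)
    return 0.5 if count <= least_represented else 0.0


def total_score(app: Application, panel_counts: dict) -> float:
    return app.expertise_score + balance_bonus(app.declared_gender, panel_counts)


# Two applications identical on merit; only the self-declaration differs.
panel_so_far = {"Female": 3, "Male": 3, "Other": 0}
a = Application("Candidate A", expertise_score=7.0, declared_gender="Male")
b = Application("Candidate B", expertise_score=7.0, declared_gender="Other")

print(total_score(a, panel_so_far))  # 7.0
print(total_score(b, panel_so_far))  # 7.5 -- the declaration alone decides the ranking
```

Because the declaration cannot be checked, nothing in such a scheme distinguishes a sincere “Other” from a strategic one, which is exactly the sense in which the metric is gameable by design.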
The listed “areas of expertise” are not fields in any scientific sense. They are policy metaphors dressed as disciplines. Let’s break this down clearly:
Misuse of Scientific Language
"Cyber Offence Systemic Risk"
Not a recognized scientific field.
A compound of threat discourse and systems language.
Does not refer to any methodological canon, formal framework, or reproducible protocol.
"Misuse and deployment systemic risks of GPAIs"
Self-contradictory.
“Systemic risk” traditionally refers to a system-wide vulnerability, often in economics or finance (e.g., Lehman collapse, cascading failures).
Here it is used in reverse—referring not to how GPAIs affect a system, but how deploying them might create risks in society.
This is not AI systemic risk; it is AI as an external actor, which is something completely different.
Fabricated Fields
These phrases create the illusion of disciplinary legitimacy, but none are anchored in established scientific or engineering domains:
“Risk assessment methodologies for AI” — vague unless specified (quantitative? operational? ethical? probabilistic?).
“GPAI technical risk mitigations and best practices” — no recognized taxonomy, no standards body, no scientific canon defines this as a field.
“Emergent systemic risks of GPAIs” — tautology; “emergent” + “systemic” adds drama, not clarity.
“Evaluation GPAI model capabilities, propensities, and impacts” — reads like output from a regulatory workshop, not a research program.
Scientific selection is being simulated through word clusters.
These are not disciplines, not falsifiable, and not bound to any methodology.
This turns the selection process into a rhetorical filtering device, not an epistemic one.
The listed “fields” are not scientific domains but ungrounded abstractions—constructed from policy language, not method. They have no canon, no reproducibility, and no defined scope of falsifiability. The form simulates disciplinary expertise through thematic labels, ensuring that selection is driven by narrative compliance rather than scientific rigor.
Many of these unscientific assertions mirror the irrational phrasing embedded in the AI Act itself, which creates an absurd loophole: it allows the selection of individuals for scientific advisory roles based on speculative experience in domains that, by definition, lie outside the scope of what can be meaningfully assessed under the scientific method.
(End of Part 1)