Under the Pretext of Child Protection, The IWF Hijacked Online Shopping Rules to Build a Global Censorship Regime
Digital technology escapes traditional legal oversight and creates structures that are no longer regulated but simply accepted.
I am not opposing policing, child protection, or the prosecution of real abuse. Quite the contrary. This article is not about that. It is about the principles of the rule of law — and the dangers that arise when those principles are corrupted for any reason.
This is an argument about how human rights protection depends on the rule of law. In the digital age, our traditional ways of thinking often mislead us and leave us vulnerable to exploitation and tyranny.
What makes this especially abhorrent is that the protection of children is being misused to erect a private censorship regime — one that is monetized by the very companies involved.
If we truly want to protect children, this is not the way. What we are seeing is the creation of a private abuse monopoly operated by the Internet Watch Foundation (IWF), a charity in the UK. Unbelievable.
Its members include Vodafone, Google, Meta, and others. We now allow social media companies — the same ones that evaded media regulation by claiming they don’t control the content on their platforms — to act as private enforcers, playing cop with no public accountability.
How can this possibly be a good idea?
Human rights commitment from Microsoft:
“Respecting human rights is a core value of Microsoft. It is inseparable from our mission. Microsoft is committed [..] To defend and promote democracy, good governance, and the rule of law.”
Brilliant. Why are you then supporting an organisation that corrupts the rule of law?
Apple’s human rights commitment. They have some funny ideas about the law. “we seek to conduct business in compliance with applicable laws.” I thought so, but good to know. They say, however:
“We’re deeply committed to respecting internationally recognized human rights in our business operations, as set out in the Universal Declaration of Human Rights.”
Bravo. What does the UN say?
“it is essential [..] that human rights should be protected by the rule of law.”
Apple, I got you down for ‘yes to the rule of law.’ And again, why fund IWF?
Amazon doesn’t say anything about the rule of law, but they have something better:
“If you have a concern about a human rights [..] issue related to Amazon's business [..], we strongly encourage you to contact us. Please use this webform https://compliance-central.amazon.com/hrecomplaints to report your concern directly.”
That takes care of that, of course. Imagine being robbed, and the thief leaves a note: ‘Call me if you are dissatisfied with my service. Please use this webform.’
BT understands neither the law nor human rights. They say:
“We respect and champion everyone’s rights to privacy and free expression. We accept that sometimes the law allows limits on those rights, such as to make sure society stays safe.”
No BT, that would be the opposite of human rights protection.
From Scott J., Attorney-General v. Guardian Newspapers [1990]:
“Society must pay a price both for freedom of the press and for national security. The price to be paid for an efficient and secure Security Service will be some loss in the freedom of the press to publish what it chooses. The price to be paid for free speech and a free press in a democratic society will be the loss of some degree of secrecy about the affairs of government, including the Security Service. A balance must be struck between the two competing public interests.”
BT, this is what that means, so listen carefully. The rights to freedom and to security require that we have both. The rights are not limited, they are balanced — and that small difference makes all the difference when we talk about human rights. That balancing is context-specific — if terrorists are killing people all day long, the balance must be struck differently than when that problem does not exist. But nobody is limiting freedom; freedom pays a price for security so that we can go on enjoying freedom. Human rights are not reducible quantities. They are not scored, subtracted, or traded off like a budget line. They are normative absolutes, balanced only in the context of competing rights — never by diminishing their inherent value.
They say something else, though:
“We believe everyone should have access to content online, as long as it’s legal. We don’t block any content unless we’re told to by a court order, to meet local law requirements, or if we’re notified of child sexual abuse material by the Internet Watch Foundation or equivalent body.”
IWF — who are they, how do they know where to find CSAM, and what gives them the right to censor internet traffic? I don’t know about you, but I thought we had law enforcement agencies — and none of them goes by the name IWF. And last time I checked, we need a judge to adjudicate what is legal and what is illegal. It sounds like the IWF is both.
BT again:
“In some countries where we operate, national law or the operating context may make it difficult to meet our responsibility to respect human rights in full. If national law differs from our human rights policy commitment and sets a lower standard, we always strive to meet the higher standard. Where there is a conflict, we will apply national law, while seeking to fulfil our commitment to respect human rights to the fullest extent possible.”
Excuse me? Do they even know what human rights mean? No country has a law that says “human rights are illegal.” If human rights and statutory law don’t come into conflict once in a while, you have no human rights protection. Human rights constrain governance — but only an independent judiciary can adjudicate what is legal and what is not. That means any state will find itself in a situation where a legitimately passed law is later found to violate those rights. That is, on its own, neither a negative sign nor a lack of human rights protection. The clear tell-tale sign of a lack of human rights protection is a legal system with no cases in which people claim their rights have been violated. A company claiming to care about rights while deferring to authority at the very moment of conflict is like a pacifist who supports war as long as it’s declared legally.
Cisco also proclaims “commitment to respect all internationally recognized human rights articulated in the Universal Declaration of Human Rights (UDHR).” That would imply a commitment to the rule of law. They have a “dedicated Business and Human Rights team reporting to the Chief Legal Officer.”
They have a human rights position paper that exemplifies the problem we have:
On the one hand, Cisco claims to oppose things like backdoors and mass surveillance in principle, aligning themselves with the Universal Declaration of Human Rights (UDHR) and ICCPR, which include the right to privacy, freedom of expression, and protection against arbitrary interference.
On the other hand, they accept lawful interception and government demands, even when the law in some countries explicitly violates human rights norms. They contradict themselves from one paragraph to the next but are oblivious to this fact.
The fundamental fallacy here is equating legality with legitimacy. Only an independent judiciary can determine when the law violates higher-order norms like human rights. But Cisco uses "lawful" as a shield to comply with censorship or surveillance demands, even when the rule of law or judicial independence is absent. In other words, the essence of what human rights are about is negated by their own human rights commitment.
Cisco’s AI policy has the same problem as BT’s:
“Require assessments for relevant AI use cases, models, and functions to identify, prevent, and mitigate potential risks, [to] human rights that meet or exceed assessment standards in the markets in which we operate.”
They comply with the local laws whether or not those laws can be considered in line with human rights.
They talk about the EU AI Act:
“We believe the Commission’s proposal is a good step towards shaping a trusted use of AI and support the cautious approach to facial recognition and biometric identification.”
Cisco argues the following:
“We would only make use of facial recognition in applications where we systematically ensure the user gives prior explicit consent…”
But they cannot control that. Cisco doesn’t operate in a vacuum. Their chips, code, and systems are embedded in third-party infrastructures, governments, and security regimes across the globe. Once they ship the tech, they lose all downstream control. They can’t "ensure" anything except in their own corporate demos or offices.
The EU AI Act doesn’t prohibit biometric identification as such; it prohibits it “for law enforcement purposes in public spaces” unless there is a good reason for doing it, a reason governments decide for themselves. The EU AI Act, as a Regulation based on TFEU Article 114, cannot regulate national police powers or criminal justice, because those fall under the exclusive competence of Member States. Therefore, what the EU Commission thinks it has achieved doesn’t matter much. Member States just have to say it’s needed, and the ban evaporates. And so does Cisco’s human rights commitment.
They say “Across a range of AI applications, the efficacy of AI solutions is measured by how reliably that solution produces a desired output based on the data set on which it has been trained, and the data from which it continuously learns.”
A “reliable” LLM might fail when a prompt is ambiguous or too long, or when a user uses metaphor, irony, or an unfamiliar context. So “reliability” isn’t a model property—it’s relational, depending on user input, context, and expectations. LLMs are not like traditional software: they don’t operate on symbolic logic or external truth checks. They are language pattern simulators, not reasoners. What Cisco thus advocates is controlling truth, because they claim they can predefine correct information without knowledge of intent or context. And that is impossible.
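A toy illustration of that point (this is not a real LLM, just a hand-written conditional distribution; the prompts and probabilities are invented):

```python
import random
from collections import Counter

# Toy stand-in for an LLM: a conditional distribution over continuations.
# For an ambiguous prompt, probability mass is split across incompatible
# readings, so repeated runs diverge; added context concentrates it.
MODEL = {
    "the bank":          {"approved the loan": 0.5,  "of the river flooded": 0.5},
    "the bank downtown": {"approved the loan": 0.95, "of the river flooded": 0.05},
}

def complete(prompt: str) -> str:
    dist = MODEL[prompt]
    return random.choices(list(dist), weights=list(dist.values()))[0]

for prompt in MODEL:
    counts = Counter(complete(prompt) for _ in range(1000))
    print(prompt, dict(counts))
# "Reliability" here is a property of the prompt and context as much as of
# the "model": the same sampler is stable for one input and unstable for another.
```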
Cisco says that its Responsible AI Framework “aligns with the National Institute of Standards and Technology AI Risk Management Framework and sets the foundation for our AI impact assessment process.”
NIST’s AI RMF doesn’t define risk as deviation from expected outcomes under uncertainty, which is the bedrock of all usable risk systems (ISO 31000, banking risk models, enterprise risk functions, etc.).
Instead, it’s a ritual document full of virtue signaling, semantic padding, and moral fog. It is not a functional risk framework. If Cisco’s internal rules align with it, then their framework is unfit for purpose, and whatever they use it for, it cannot govern any real-world business activity.
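For contrast, here is roughly what a usable risk definition looks like when reduced to a few lines; the scenario figures are invented and the sketch is mine, not taken from ISO 31000 or NIST:

```python
# Risk as deviation from an expected outcome under uncertainty, the notion
# that ISO 31000-style and financial frameworks build on. Figures are made up.
scenarios = [      # (probability, loss in EUR)
    (0.90,       0),
    (0.08,  50_000),
    (0.02, 500_000),
]

expected_loss = sum(p * loss for p, loss in scenarios)
variance = sum(p * (loss - expected_loss) ** 2 for p, loss in scenarios)

print(f"expected loss: {expected_loss:,.0f} EUR")
print(f"dispersion around that expectation: {variance ** 0.5:,.0f} EUR")
# A framework that never quantifies anything like this cannot rank, price,
# or bound risk, which is the complaint about the NIST AI RMF above.
```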
TikTok is a member [of the Internet Watch Foundation], and its parent company ByteDance is based in — and, more importantly, controlled from — China. The Internet Watch Foundation (IWF) claims to defend global internet safety and freedom, yet it allows membership from companies tied to the world’s most advanced censorship regime — one that doesn’t even participate in the open internet in any meaningful sense.
MTN is also a member. It is one of the largest mobile network operators in Africa and the Middle East, headquartered in South Africa and operating in over 20 countries, including Nigeria, Ghana, Uganda, Rwanda, Syria, and Iran. Many national laws in MTN’s operating regions — such as Uganda, Sudan, or Iran — criminalize peaceful dissent, ban LGBTQ+ expression, or mandate surveillance.
The IWF website lists MTN with this vague description and nothing else:
“MTN believes that everyone deserves the benefits of modern connected life and with it the internet opens up a wonderful new world for children. Unfortunately, the internet can be misused. Driven by the need to make the internet a safer place for children, MTN partners with organisations such as IWF. Our partnership is critical to our ability to prevent people from accessing and sharing pictures, videos and other child sexual abuse content.”
That’s very strange. It says nothing about who MTN is, what their legal obligations are, or their human rights track record. MTN’s own human rights commitment states:
“MTN is guided by the following globally defined standards: The United Nations Universal Declaration on Human Rights.”
But they also say:
“Restrictions would be applied after assessing if the content is illegal or harmful as defined in terms of prevailing national laws or the UN Universal Declaration on Human Rights.”
This is contradictory. MTN is not a court of law. It has no legitimate authority to declare what is ‘illegal’ or ‘harmful’ under international human rights law or any national legal system. Claiming that they hold this power is fundamentally incompatible with the UN Universal Declaration on Human Rights itself.
They also say:
“We respect and endeavour to comply with the laws of the countries in which we operate.”
That explains it. They are not following international law — they are following the laws of whichever state they happen to be in, even if that includes authoritarian regimes.
So why is a telecom company like MTN, which operates in jurisdictions like Iran or Sudan, playing any role in moderating or ‘enforcing’ standards on UK or Western internet traffic?
This is a legal, ethical, and geopolitical absurdity.
Now look at what the IWF says:
“Once informed, the host or internet service provider (ISP) is duty-bound under the E-Commerce Regulations (Liability of intermediary service providers) to quickly remove or disable access to the criminal content.”
This refers to the UK’s E-Commerce Regulations, which implemented the EU E-Commerce Directive. These regulations apply to intermediary service providers (like ISPs or platforms), and say:
They are not liable for illegal content unless:
They have actual knowledge of it (e.g., a report),
And they fail to act expeditiously to remove or block access.
In other words: They act after notification — not before. They are not supposed to crawl your data, emails, or cloud storage proactively looking for violations.
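Reduced to code, that liability rule looks roughly like this; the field names and the time window are my own illustration, not terms from the Regulations:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HostedItem:
    url: str
    notified_at: Optional[float] = None   # when actual knowledge arose (e.g. a report)
    removed_at: Optional[float] = None    # when access was disabled

def intermediary_liable(item: HostedItem, now: float, expeditious_window: float) -> bool:
    """Notice-and-takedown: no liability without actual knowledge, and none
    if the provider acted expeditiously once notified."""
    if item.notified_at is None:
        return False                       # no notice, no actual knowledge, no liability
    acted_at = item.removed_at if item.removed_at is not None else now
    return (acted_at - item.notified_at) > expeditious_window

# Note what is absent: nothing in this rule requires, or authorises,
# scanning content before any notice exists.
```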
So what’s actually happening? Overreach. Can this be legal?
Organizations like the IWF and private actors:
Maintain blacklists of URLs, often in secret,
Ask ISPs to block content — not by legal order, but via private discretion,
Sometimes proactively search for material (e.g., using hash-matching systems; see the sketch after this list),
Operate in legal grey zones that increasingly resemble mass surveillance,
And offer no transparent legal standard or appeal process for affected parties.
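For readers unfamiliar with hash-matching, here is a minimal sketch of the exact-match variant (industry systems also use perceptual hashes such as PhotoDNA, which this toy example does not model; the single digest listed is just the SHA-256 of an empty file, a placeholder):

```python
import hashlib

# A blocklist of hex SHA-256 digests of known files. The entry below is a
# placeholder (the digest of empty input), not a real list entry.
BLOCKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_blocklisted(file_bytes: bytes) -> bool:
    return hashlib.sha256(file_bytes).hexdigest() in BLOCKLIST

print(is_blocklisted(b""))   # True: matches the placeholder digest
print(is_blocklisted(b"x"))  # False
# The catch: matching only happens on files the operator actually scans,
# which is exactly the general-monitoring problem discussed below.
```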
This violates the “notice and takedown” model outlined in the law. It shifts the role of service providers from neutral intermediaries to active enforcers, which:
Contravenes Article 15 of the E-Commerce Directive, which prohibits general monitoring obligations,
Risks violating Article 8 (privacy) and Article 10 (freedom of expression) of the European Convention on Human Rights.
The EU Cannot Create a Law That “Self-Polices” Based on an “Illegality” Argument Alone
No one except a competent authority — such as a court or designated legal body — has the power to declare something “illegal” in a way that triggers binding enforcement on third parties.
So when platforms or ISPs act on user complaints or NGO reports claiming “this is illegal” — without a court order or equivalent legal finding — that’s not law enforcement. That’s delegated censorship. And that is illegal.
Under the European Convention on Human Rights (binding on all EU member states): only courts can limit fundamental rights (such as expression or privacy), and only through lawful, necessary, and proportionate means.
A law or practice that automatically triggers removal based on unverified claims is incompatible with Articles 6 (fair trial) and 10 (freedom of expression).
What the IWF Forgets: General Monitoring Is Prohibited
Under Article 15 of the E-Commerce Directive (ECD), general monitoring obligations are explicitly banned:
“Member States shall not impose a general obligation on providers (…) to monitor the information which they transmit or store.”
That means ISPs cannot be forced to spy on, pre-screen, or continuously scan all content — whether for copyright, defamation, or abuse.
This clause was specifically designed to protect privacy and the free flow of information.
You Cannot Request a Takedown Unless You Are a Rights Holder or a Victim
Under EU law — particularly Article 14 of the E-Commerce Directive — hosting providers are only obliged to remove content when:
They have actual knowledge of illegal content,
That content violates the rights of a specific person, and
That person (or their legal representative) notifies them — or a competent authority (e.g., police or court) does.
You cannot lawfully demand the removal of a photo or video that does not show you or concern your rights.
Platforms or intermediaries — like the IWF, ISPs, or hosting providers — may choose to notify law enforcement or log a notice. But they are not legally required to act solely on your report unless it concerns you. And they are certainly not allowed to take down content that doesn’t involve or affect you directly.
The IWF Is Not a Court — It’s a Private Actor
The Internet Watch Foundation (IWF) has around 70 employees. It operates:
Outside of public courtrooms,
Using non-public standards,
With no meaningful accountability.
While it presents itself as a body protecting children from abuse imagery, its mission has become a shield for unchecked power.
Its actual operations resemble a business model based on:
Control,
Influence, and
Private enforcement mechanisms.
The emotional weight of “protect the children” is being used to silence legitimate critique and bypass the rule of law.
The E-Commerce Directive (2000/31/EC) — and national laws based on it (like the UK’s Electronic Commerce Regulations 2002) — never appointed private entities like the IWF as guardians of public morality or child protection. Their influence bypasses legal accountability and short-circuits judicial review. They abuse this topic for their own gain and guilt trip Parliament into enabling their authoritarian overreach.
The Mechanism of Overreach
Moral weaponization: The cause — preventing child abuse — is so emotionally charged and unassailable that any critique appears heartless or complicit.
Bypassing normal checks: Instead of going through courts, regulators, or legislatures, entities like the IWF are given soft powers that amount to private censorship — with no real due process.
No external audit or oversight: There's no independent appeal. No full transparency. Just trust — or be accused of being on the wrong side of morality.
Lobbying dressed as righteousness: These entities actively lobby Parliament and industry, invoking "the children" not just for funding, but to expand their mandate beyond what the law or their original charter intended.
Law exists as a framework that links persons, objects, and actions through rights and obligations. This is necessary because humans are presumed free by default; obligations must be justified. We assign legal obligations through actions involving objects (contracts, property, harm, etc.). This way, we preserve both individual liberty and social coherence.
A person acts upon or with respect to an object.
That action affects another person’s rights or obligations.
The legal system intervenes only when this relationship is clear and causally established.
This is the logic of law.
The law cannot declare reality illegal.
This is from the Crown Prosecution Service:
The Protection of Children Act 1978 (the 1978 Act) prohibits at Section 1(1)(a) the “taking or making” of an indecent photograph or pseudo-photograph of a child. Making includes the situation where a person downloads an image from the Internet, or otherwise creates an electronic copy of a file containing such a photograph or pseudo-photograph. To be an offence, such “making” must be a deliberate and intentional act, with knowledge that the image made was, or was likely to be, an indecent photograph or pseudo-photograph of a child (R v Smith and Jayson, 7 March 2002). So a person accidentally finding such an image already has a defence to that act of making.
Intent Must Be Proven — Not Presumed
A web browser always causes a temporary copy to be made (cached or stored in the browser), and it does so without asking the user.
By that logic, Google Chrome would be classified as a CSA image generator — since it can “make” (i.e., store) images. Yet under the UK’s proposed changes, it is an offence to adapt, possess, supply, or offer to supply a CSA image generator, punishable by up to five years in prison.
I make clear: I have no intent of possessing Chrome for any nefarious purpose. However, I would contend that Google must have known what it was enabling — especially since they lobbied for this law change.
How the Law Actually Works
The Protection of Children Act 1978, as amended (notably by the Criminal Justice and Immigration Act 2008), criminalizes:
The “making” of an indecent image (including pseudo-images),
With intent and knowledge of its nature,
And possession or distribution of tools designed for that purpose.
A web browser does technically “make” images (via caching or rendering), which fits the legal definition of “making.” So, using the same logic the government applies to criminalize possession of custom scripts, Chrome, Safari, Firefox, and others would also qualify — absurd, but legally consistent.
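To make the technical point concrete, here is a minimal sketch of the copying a browser performs automatically; it fetches example.com as a harmless stand-in for any page:

```python
import os
import tempfile
import urllib.request

# Merely fetching a resource copies its bytes onto the local machine.
# A browser does this automatically for every image on a page it renders,
# and keeps a disk cache, with no deliberate act by the user.
with urllib.request.urlopen("https://example.com/") as resp:
    data = resp.read()                      # a local copy now exists in memory

cache_path = os.path.join(tempfile.gettempdir(), "cached-page.html")
with open(cache_path, "wb") as f:
    f.write(data)                           # and now on disk, like a cache entry

print(f"A copy was 'made' at {cache_path} with no intent to create anything")
```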
A Dangerous Shift in Enforcement
The UK government, in partnership with private entities like the Internet Watch Foundation (IWF), is creating a legal regime that criminalizes symbolic material — including AI-generated images involving:
No actual children,
No distribution,
And no intent to harm.
The IWF scans dark web forums, labels images, makes moral and legal judgments, and becomes the basis for law enforcement — all without due process or democratic oversight.
They claim to protect children — but their role has morphed into a private censorship authority, defining crime by association, not by intent or action.
Criminalization of AI Images Without Real Victims
No child is involved, but the image appears to show one.
These are called “pseudo-photographs.”
There is no victim, but the law argues such content causes “normalisation.”
That is abstract social harm, not direct personal harm.
No causality is needed — the image itself becomes the crime.
The Logic Can Be Extended to Everything
Using this logic, the following are potentially criminal:
Any AI model with image generation capability (e.g., Stable Diffusion, DALL·E, Midjourney),
Any user who could prompt something illicit (regardless of whether they did),
Even Google Chrome, since it “makes” temporary copies — like any web browser.
And yet, the Home Office says:
“Make it illegal to possess, create, or distribute AI tools designed to generate child sexual abuse material (CSAM), punishable by up to 5 years in prison.”
“Make it illegal for anyone to possess AI ‘paedophile manuals’ which teach people how to use AI to sexually abuse children, punishable by up to 3 years in prison.”
But the Category They Invent Doesn’t Exist
Tools like Stable Diffusion or DALL·E are trained on general-purpose datasets to generate images from text prompts. These models are not designed for any specific content — not dogs, not cats, not CSAM.
The output is determined by the prompt, not the model’s “intent.”
The model has no intent.
No real person appears in the image.
No real person was harmed in training the AI.
So when the Home Office says:
“AI tools designed to generate CSAM”
— they are inventing a category that does not exist in the technical world. And there is no victim.
How Can Anyone “Use” AI to Sexually Abuse Anyone?
By this logic, cornflakes eaten by a pedophile before committing a crime would be a tool of sexual abuse.
What’s illegal and abhorrent is the actual abuse of real children. That’s the line. That’s the law.
But instead, they’ve turned this into a clown show — one that fills their pockets under the banner of morality. And Parliament fell for it.
And IWF has corrupted the law and left us unprotected from the players funding them.
The EU has already turned into a structure that resembles not historical slavery, but an isomorphic, functional parallel to it.
Digital technology escapes traditional legal oversight and creates structures that are no longer regulated but simply accepted.
A recent report for the Legal Affairs Committee of the European Parliament proposes to forcibly include authors in AI training systems and assign them an "indispensable" compensation. This is not compensation — it is the legitimization of prior expropriation. It transforms copyright into a market structure in which the human being no longer appears as a free agent, but as a resource to be regulated.
In light of already adopted regulations — such as MiCA and the AI Act — a fatal situation arises: one that is no longer governed by existing legal norms. A copyrighted song, for example, cannot be commercially distributed without private-sector platforms controlling market access. This control is exerted through digital formats whose validation is either technically self-referential or determined by corporations — both of which effectively escape constitutional oversight. In doing so, private actors override the free exercise of copyright.
The EU refuses to confront this structural human rights violation. A market-based order emerges whose technological structure is identical to slavery: The human has neither rights nor obligations but is bound to digital infrastructures.
The European Parliament’s proposal completes this condition by expropriating commercial compensation from copyright and granting the now legally hollowed-out subject a ration — like a slave receiving bread and water. This structure is already de facto law.
This is not a provocation — it is a precise systemic analysis. And the UK is well on its way.