Conversing With Structure: Do We Have The Will To Be Free? And Is Facebook like くるくる寿司 (kuru kuru sushi)?
We live in the Matrix—not as simulation, but as structural epistemic corruption of perception and thinking affecting politics and business. Social media is only a symptom of a deeper collapse. So?
This is an article everyone should read.
Not because it’s trending. Not because it’s accessible.
But because it’s one of the few things you’ll read that actually thinks through the world we live in—without dumbing it down or decorating it with vibes.
An invitation. No hidden fees. No strings.
It describes something that affects your life directly and structurally.
It outlines a new way of thinking about thinking itself.
It points toward a paradigm shift in how we understand freedom, control, and knowledge.
It’s not an opinion—it’s a verifiable scientific hypothesis.
Here’s the core idea:
Freedom is not the absence of constraint.
Freedom is the power to wield constraint—on others, and on yourself.
It is the measurable capacity to shape or redirect the systems we rely on.
And that capacity is shrinking. Shrinking relatively.
Because the speed of technological innovation has now outpaced the speed of human comprehension, consensus, and control.
And the stakes are rising—forcing us to confront the truth:
We are reaching an existential inflection point.
That’s why this article concerns you—whether you like it or not.
The mechanism I’m describing doesn’t need your permission.
It doesn’t wait for your opinion.
It governs how we live—and it manipulates you into thinking you’re the one in charge.
I know—it sounds like a lot.
It is.
Big claims. Big tone. No apology.
There’s more coming, but let’s go one step at a time.
I didn’t write this to fit into the system.
I wrote it because the system is already collapsing—and we still don’t know how to see it. So let me help you.
Why can’t we all live in France?
Recently, Instagram showed me a video of an L.A. filmmaker praising a French streaming media law whilst sitting in her bedroom—saying how incredible it was, asking, “Why don’t we have this here?”
A fair question.
And the answer is just as simple:
Because you don’t live in France. And you don’t speak French.
That’s not a joke. It’s the entire point.
Other possibly correct answers include:
Because your TV and music are not made in Spanish, addressing issues that people in Mexico City care about rather than people living in L.A. or Chicago.
France has a law—the SMAD decree—that mandates streamers like Netflix, Disney+, and Apple TV+ invest a percentage of their French revenues into local productions.
That includes quotas for French-language content, independent creators, and diverse genres.
In return, these platforms get access to a faster release window—9 months instead of 17—if they play by the rules.
France is a country that believes defending its language is a necessary precondition to preserving French identity and culture. And its media ecosystem is a critical component of this approach.
Not because French movies are the best—nobody would believe that; even the French wouldn’t—but because they’re theirs.
France doesn’t pretend its system is neutral. It uses law, quotas, and state funding to protect the conditions under which a society can speak in its own voice. France is not looking to see more croissants on Carrie Bradshaw’s brunch table or simply watch her in ‘Le sexe x.' They want Amelie from Paris doing whatever Amelie likes to do dans la ville.
That’s not nationalism. That’s epistemic sovereignty in the typically annoying French way instead of the understated but usually very well reasoned way Germany would pursue similar objectives. And that’s the difference:
The French don’t seem to get that “C’est comme ça” (“That’s how it is.” Implied: no further discussion) doesn’t automatically make their demands agreeable.
But it does make them coherent.
And that’s the difference.
The French don’t just like their creatives more than U.S. politicians like theirs.
They want content in their own language, in their own style,
so they don’t wake up one day and realize that their cultural identity has become only recorded history—
no longer expressive of the lived experience of being French today.
Because when that happens, language dies.
And with it, the constitutive cohesion that once gave a society its stability.
The challenge France has is a common issue
Attention is the market, and if you can’t be heard or seen because the stage is closed to you, you don’t exist. Time will make sure of it.
Same Need, Different Rhythm
Germany doesn’t understand governance like France—and much of that traces back to historical architecture.
France was a highly centralized kingdom from very early on in its history. It effectively invented the absolute monarch, with the Catholic Church as co-pilot, issuing policies from the head office.
Germany had no head office. It had an emperor who traveled up and down the country like a vagabond, hosted by local dukes or bishops—who often couldn’t wait to see him bugger off and become somebody else’s problem.
In that system, authority required visibility and presence. The traveling court eventually became obsolete as the imperial crown consolidated in dynasties like the Habsburgs, who stayed put—long enough, at least, to redecorate the Hofburg Palace in Vienna.
It remained a structural patchwork—a conglomerate of kingdoms, principalities, free imperial cities, and ecclesiastical states with varying degrees of dependency on Imperial law. There were powerful actors, but legitimacy often depended on coalition.
A Structure of Performative Governance
The Holy Roman Empire was a system built around balancing influence, not consolidating power. And the more influence it lost over time, the more ceremonial it became. A former political power transformed into a structure of performative governance—its efforts focused more on appearing to govern than on actually governing.
Then came the early 1800s—and with it, a stress test of resiliency. Napoleon volunteered to conduct the experiment and applied pressure. And it failed the test.
In its place, some of the regions ‘joined’ the Confederation of the Rhine—a new satellite system set up by Napoleon and aligned with his geopolitical aims. And with it came the Napoleonic Code—an export of revolutionary basic rights, imposed on a region that still followed medieval traditions. A conquest that brought liberty to the conquered using the same method he had used to introduce this change to his own people: a top-down approach.
Germany is also making its own laws, but there is a nuanced difference. Germany understands effective governance differently and therefore uses different policies when it tries to protect its cultural identity. I know that may sound unexpected to anyone who is not German, but orders issued from central command—and similar stereotypes in film and TV—are fiction.
The “head office” seeks support for its decisions, and it is only legitimate with the backing of its constituents. That requires decision-making capacity locally, not consolidation centrally.
Therefore, Germany has not mandated German-language quotas or prescribed exactly how many times an hour the radio must play a German song—the sort of thing the French are adamant must be written into law.
Germany governs here through incentives, structure, and slow adjustment, and it gives responsibility to regional governments. There are public broadcasting councils—Rundfunkräte—we fund broadcasters with grants and expect a balance of demographics, regional representation, and public value. Even private media companies are bound by law to maintain plurality in their offerings—one of many institutional lessons drawn from the Third Reich. The primary risk that warrants legislative action for Germany is the protection of a free press and diverse media.
An insufficient supply of German Schlager music is seen as an acceptable risk—I suppose, Germany would survive.
But French chanson must still find its way into French ears.
Germany and France appear to pull on different levers. Both systems are responding to the same underlying pressure.
Germany’s constitutional structure mandates delegated authority when it comes to media and rules of engagement in support of general objectives.
And over time, the system is fine-tuned through process—or, if you like, bureaucracy.
It’s softer.
It localizes dispute resolution to where disputes actually occur.
It is therefore often slower.
But not necessarily ineffective.
All of it expresses the same human need:
The need to protect the conditions under which people can speak as themselves—
be informed about diverging views,
and be heard in their own voice—
and thereby have influence in decisions being made.
The U.S. shares the same objective—but not the same risk.
It doesn’t currently face the problem that Televisor Mexicana might drop Sex and the City just because it’s made in a foreign language and air more telenovelas instead.
France looks more aggressive only because its goals are not economic—and not about art either.
They’re about preserving a confident cultural space—
one where French voices shape what’s being said, not just what language it’s in,
and where those voices can be heard.
Why should this be a problem?
French people know what the internet is.
They could just set up a new service like cinématique.fr, right?
Job done.
We can say Auf Wiedersehen to Netflix.us.
We have social media like Facebook and YouTube—so why can’t they just upload their stuff there?
Simple.
But our creator @Force_de_filmfrappe (hashtag #OuiCinémaFr) has already demonstrated the flaw in that logic.
Putting something online doesn’t make it public.
Not unless people know how to find it.
And for the last decade or so, “being found” mostly meant being visible through search engines. Those days are over.
The concentrated market structure of those engines—combined with certain business practices—made that increasingly problematic anyway.
So problematic, in fact, that Google was fined billions for antitrust violations in 2017.
This, from Michigan Law School (and I don’t know why I keep picking on them—they publish too fast for their own QA department):
“A company like Google, with global reach, plays a decisive role in determining what most of us read, use, and purchase online.”
That’s quite a claim.
And then they ask:
“Should Google be prevented from and punished for capitalizing on the popularity of its services?”
Sometimes you think: Lawyers, huh?
Michigan—here’s your answer:
Yes, if that behaviour is illegal.
But that’s not the point.
I reference this only to highlight a deeper issue:
We are vaguely irritated by the idea that tech companies decide what we see and what we buy—
but we don’t seem especially concerned until they promote something they shouldn’t.
And the reason for this insanity?
We think of the internet as a public space.
We believe: “If it’s online, it’s public.”
Nothing could be further from the truth.
French and EU law may not agree with me on this point.
A lawyer might give you the legal answer.
But it would be the intellectual equivalent of a ‘dumb Michigan Law Review response’.
Because here’s what France’s media efforts make clear:
The problem is not what tech companies show.
It’s what they don’t.
The challenge of maintaining diverse views in entertainment and online discourse isn’t about what gets pushed.
It’s about what gets omitted—quietly, invisibly, structurally.
In Germany, even private broadcasters are required to meet program diversity obligations.
Newspapers and public broadcasters have internal checks for balance—
not just in what they cover, but in what they omit.
There’s an explicit recognition that what you don’t show matters.
Even under the EU’s Digital Services Act (DSA), platforms aren’t held accountable for what’s algorithmically excluded—only for what’s flagged as harmful or illegal.
The DSA is a shocking example of misguided trivia and regulatory theater—dressed up as consumer protection.
I only wanted to check a definition and immediately encountered serious flaws. How can that be?
The fact that such amateurish text made it through the legislative process of EU institutions is evidence of a structural issue—one that is beginning to surface everywhere.
Meanwhile, the EU Commission boasts:
“The DSA protects consumers and their fundamental rights online by setting clear and proportionate rules. […] The roles of users, platforms, and public authorities are rebalanced according to European values, placing citizens at the centre.”
This is one downside of Brexit I didn’t see coming: EU English could get even weirder. Melancholic EU fabulation (“roles are rebalanced…”).
Ireland, I thought you stayed in—maybe step up a little? Have you that?
Mind you:
If citizens are placed at the centre, then we’re not talking about balance anymore.
We’re talking about centring—which is a different metaphor entirely.
One implies equilibrium, the other implies a hierarchy of relevance.
You can’t balance around a center and place someone in it at the same time.
No, this is not just a little faux pas.
This is semantic collapse—and it points to something far more serious: Institutional language mimicking effective rules and regulation
to obscure the absence of them.
“For society at large, […] mitigation of systemic risks, such as manipulation or disinformation.”
This is absurd.
If we're using the term systemic risk seriously, it implies a specific kind of fragility: That the failure of one actor can trigger a cascading collapse of the entire system. But Google could arguably survive the demise of Facebook unscathed. There’s no interdependence like in finance.
No contagion effect. So what’s systemic about it?
The only systemic risk here is to society itself.
The harm doesn't accrue to the platforms.
It accrues to civil societies, democracy and public discourse
That’s a massive externality—more like carbon emissions.
And yet I don’t recall the EU saying climate collapse is a systemic risk of the oil industry.
“…because of the exponential growth… [platforms—here they mean things like AWS, Google Drive, etc.] contribute to the spread of unlawful or harmful information.”
If spreading a certain type of information is harmful, perhaps—but not unlawful, which is exactly what the word “or” implies—then why do they think they should mess around with it?
Also, the EU has no mandate to enforce criminal law. So do they now have a mandate to regulate content that is not illegal, but should be? Under the EU treaties, they can propose to harmonise some limited procedural aspects, but otherwise they need to stay out of this.
And do they even have evidence for this claim? Usually, we get a nice cost-benefit analysis when the Commission acts within its actual competencies.
Under the DSA, information is considered disseminated to the public only when it is made available to an unlimited number of persons, and there is no human decision determining who gets access. So sharing a file with your husband, wife, or partner: not public dissemination.
According to this definition, your Google Drive file is private.
✅ Legally private
❌ But treated by Google—and the EU’s enforcement logic—as publicly scannable
That’s a contradiction between legal classification and functional enforcement.
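The DSA’s dissemination test can be modelled as a simple classification rule. The sketch below is my own illustration of that two-part test; the class, field, and function names are hypothetical and do not come from the regulation’s text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SharedItem:
    audience_size: Optional[int]  # None models "an unlimited number of persons"
    human_gated: bool             # True if a human decided who gets access

def is_public_dissemination(item: SharedItem) -> bool:
    # Sketch of the DSA test: public only if the audience is potentially
    # unlimited AND no human decision determines who gets access.
    return item.audience_size is None and not item.human_gated

# Sharing a Drive file with your spouse: bounded audience, human-gated.
print(is_public_dissemination(SharedItem(audience_size=1, human_gated=True)))     # False
# An open link anyone can stumble upon: unlimited audience, no gatekeeper.
print(is_public_dissemination(SharedItem(audience_size=None, human_gated=False)))  # True
```

On this toy model, the Drive file discussed here comes out private—which is exactly what collides with how it is treated in enforcement practice.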
DSA says Google should only disclose the name of the user
“where this information is necessary to identify the illegality of the content, such as in cases of infringements of intellectual property rights.”
Just because you have an iTunes-purchased file in your Google Drive doesn’t mean you stole it.
It’s not illegal content.
It’s copyrighted content—hopefully lawful.
Unless, of course, the new Shakira song drops and Brussels declares it "illegal"—to make room for a French alternative.
At which point Shakira replies,
“Illegal? Sí, claro.”
By this logic, Regeringsgatan 19 in Stockholm—Spotify headquarters—should be a crime scene.
What is the situation?
The EU knows the difference between public and private content.
It defines it clearly in the DSA.
But when Google scans a document you have not made public and flags it for potential criminality, and the EU endorses this as legitimate—
something unthinkable has just occurred:
A private contract (Google’s ToS) has just overridden constitutional protections.
No court approved this.
No judge authorized this surveillance.
No warrant was issued.
Every algorithm is calibrated, meaning one may find a match while the next won’t when scanning the same document (generally speaking, and depending on what they are looking for). The EU Commission turns a blind eye and cannot specify what standards such scanning should follow, because it would be illegal for the EU Commission to issue law disabling the constitutional rights of citizens—in particular, the right to privacy. In case you forgot, EU Commission, I am talking about Article 8 of the European Convention on Human Rights:
“Everyone has the right to respect for his private and family life, his home and his correspondence.”
This privacy would extend to the computer you have at home. But when Google does the dirty work and you benefit from it, even passively, then both become co-architects of a post-constitutional order.
Directive on Soft Authoritarianism (Through Private Delegation)
What’s worse is that the logic is so mundane—so completely wrapped in “safety” language—that we no longer recognize it for what it is.
But if police were to commit a crime to prevent another, we’d call it what it is:
A fundamental breach of the rule of law.
So why is it acceptable when Google does it?
And why is the EU so comfortable benefiting from it?
The Fourth Amendment protects against unreasonable search and seizure.
It applies to your private computer at home. So yes—the U.S. has heard of this idea too.
But it also has something called the Third-Party Doctrine, which says:
You cannot have a reasonable expectation of privacy in data you voluntarily give to someone else.
Except I didn’t give my data to someone else.
I rented a digital room to keep it.
I’ve never met anyone who works at Google Drive.
How could I give anything to someone I’ve never met?
But I’m sure the lawyers will say that’s not important to make the legal fiction stand.
All I did was sit at home, click a few buttons, and save some files.
Unwittingly, I opened a portal into a dystopian legal construct that sucked in
Article 8 of the European Convention on Human Rights like it was a spreadsheet error.
Apparently, it’s no longer effective.
Apologies for the inconvenience I caused.
This is why the U.S. model enables platform surveillance.
And why companies like Google act pre-emptively:
They’re allowed to.
It’s your content,
but their infrastructure.
And in U.S. law, ownership and location define the privacy regime.
And Brussels thinks this is a great idea.
Why ask citizens for their approval when we don’t have to?
And every Member State lines up behind it:
“Hurrah, let’s do it.”
Meanwhile, Google publishes enforcement statistics.
Top 10 countries for CSAM-related account reports?
Indonesia
India
Brazil
Russia
Philippines
Mexico
Bangladesh
Vietnam
Thailand
United States
Not a single EU country on the list.
But somehow, this system is operating with European legitimacy, and the EU Commission wants to read regular progress reports.
Let’s be clear:
The EU has no business reading private files about potentially horrific crimes in Indonesia or Brazil.
Not legally.
Not ethically.
Not strategically.
But that hasn’t stopped them. The DSA or Directive on Soft Authoritarianism—through private delegation.
To be fair, there isn’t much “digital” in this act anyway.
And even less “regulation.”
Instead, we get whatever this is:
“They should seek to embed such consultations into their methodologies for assessing the risks and designing mitigation measures, including, as appropriate, surveys, focus groups, round tables, and other consultation and design methods.”
Should.
Could.
Maybe.
And apparently, the big idea is to encourage Google to use SurveyMonkey more often. Utterly meaningless nonsense.
If China encouraged Yandex to scan Google Drive accounts in Thailand,
and report Thai citizens to Russian authorities,
we wouldn’t call that “content moderation.”
We’d call it foreign interference. Maybe even an act of aggression.
But if Google does it—with the EU’s blessing—we call it digital responsibility.
That’s not just hypocrisy. That’s the structural corruption of what rule of law is supposed to protect against:
Unaccountable power exercised in the name of values,
but without the consent of those governed.
No court in the EU can compel you to testify against your spouse.
No court would approve police checking your laptop without cause.
But Google can—just because you clicked ‘Save.’
Algorithmic scans come first, until something is found. That is circumvention of the democratic process par excellence.
One more comment on:
Google Drive Additional Terms of Service
Effective Date: March 31, 2020

“We may review content to determine whether it is illegal or violates our Program Policies, and we may remove or refuse to display content that we reasonably believe violates our policies or the law. But that does not necessarily mean that we review content, so please don’t assume that we do.”
In all seriousness—this is the actual document governing my service relationship with them. I admit it: I used their service for years without reading anything.
They’re instructing me to behave as if I won’t be watched,
while getting my agreement to let them watch.
That’s not just ambiguous.
It’s perverse, if you think about it.
If Google’s Terms of Service are vague about whether or not they scan your files, and you act based on a reasonable interpretation in your favor,
you should not be punished under the law for breaching unclear terms.
There’s even a name for this:
Interpretatio contra proferentem —
“Ambiguities in a contract are interpreted against the party that drafted it.”
I watched enough Ally McBeal to make it through at least the first two semesters at Harvard Law. But even then — would it matter?
Legal proceedings cost money.
You probably wouldn't even find out why you lost access to your account.
So let’s ask the more relevant question:
Why are Google and Facebook classified as platforms—
not publishers, not media companies?
Because that’s why they get light-touch regulation under telecom rules and online safety laws (like the DSA).
That means:
No control obligations.
No visibility standards.
No duty to ensure pluralism.
Like the French would say:
Non. Non. Non.
Before explaining why that decision was a big mistake—even 10 years ago, regardless of whether they knew what we know now (I love playing ‘banking regulator,’ holding everyone accountable for the stuff they didn’t know, because they could have known)—I need to re-emphasise one principal issue:
Social Media Is Not A Public Place
We often talk about social media as if it’s a public space where ideas are exchanged, statements are made, and consequences follow.
But that metaphor is wrong.
Once you hear my argument, I think it’ll feel much less strange than it might right now.
What’s the difference between posting something on Facebook and two people talking while walking down the street?
In both cases, an opinion is shared in a space where others might overhear it. But nobody polices that street corner—unless someone pulls out a bullhorn or a camera crew. That usually requires a formal act, like registering a protest with the police.
Posting something on social media doesn’t make it public either. It only becomes “public” when it receives attention—similar to calling a press conference.
Attention is the currency that defines visibility.
Without it, a post is just noise in a void.
Think of it like this:
Facebook is くるくる寿司 (kuru kuru sushi).
The sushi bar with a giant conveyor belt… except they fired the chef.
So now it’s more like a picnic in a sushi bar. You get bite-sized, endless, algorithmically spun snacks—stuff people like to eat or might make use of at a picnic.
But not much sushi.
Since the bar doesn’t actually sell food anymore, it makes money indirectly. For example:
Some people eventually say, “Dammit, I actually wanted sushi—not Taylor Swift candy or Gummibears.”
And so the bar lets people order delivery, in exchange for a little tip.
We get community, Gummibears, and the illusion of free choice: you pick whatever’s to your liking—or put your own stuff on the belt and start a little side hustle.
There’s no longer an arrogant sushi chef saying,
“I don’t serve Gummibears. Only sushi.”
If you put some California spring rolls on a plate and send them down the belt, in theory, anyone in the bar can grab them. But how would they know? There are millions of plates spinning past—how could anyone know this one has exactly what they came for?
They don’t.
So the spring rolls keep spinning.
Eventually, they go in the bin.
Not because no one wanted them.
But because no one saw them.
And so everyone just eats Gummibears, thinking that’s all there is.
The feed is the menu. If you are not on it, your ideas might as well not exist.
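The conveyor-belt point can be put in code: uploading places an item on the belt, but only the ranking function decides what reaches a screen. This is a deliberately minimal sketch with invented field names and scores, not any platform’s actual ranking logic:

```python
def feed(posts, slots=3):
    """Hypothetical engagement-ranked feed: only the top `slots` posts
    by predicted engagement are ever shown to anyone."""
    ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
    return ranked[:slots]

posts = [
    {"id": "gummibears",   "predicted_engagement": 0.9},
    {"id": "candy",        "predicted_engagement": 0.8},
    {"id": "taylor",       "predicted_engagement": 0.7},
    {"id": "spring_rolls", "predicted_engagement": 0.2},  # uploaded, hence "technically public"
]

visible = {p["id"] for p in feed(posts)}
# The spring rolls are on the belt, but never on the menu:
print("spring_rolls" in visible)  # False
```

Nothing about the spring rolls is hidden in any legal sense; they are simply never selected, which is the whole difference between technical visibility and actual visibility.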
Thus, to treat all posts as “public speech” in the same way we treat, say, a newspaper op-ed or a televised statement, is to misunderstand the architecture of these platforms. It conflates publication with circulation, ignoring that reach is a function of machine logic, not merely user intent.
This matters for legal norms, cultural expectations, and governance models. Platform design reshapes what counts as speech, audience, and even authorship. Failing to distinguish between technical visibility and actual visibility leads to flawed assumptions about what is being said, to whom, and with what responsibility.
Unfortunately, our laws, norms, and platform policies are built around a flawed assumption: that all social media content is public by default. And this false classification has led to some very real problems. We’ve created an invisible police force—of opinion, of performance, of self-surveillance. And laws are narrowly focused on preventing harmful content. Why? There used to be the idea of educating people so they have media competency. Just because somebody makes an offensive comment doesn’t make it true, and doesn’t guarantee it attention. You don’t get competence by hiding things.
Social Media Is Highly Editorial
But it doesn’t look editorial. That’s why it escaped the regulation that covers everybody else in the editorial business. It pretends to be neutral—just a platform, just a conveyor belt.
The platform most often dismissed as shallow—TikTok—is, in practice, the most meritocratic, epistemically transparent, and thematically open.
That’s an opinion, but I think I have good reasons to believe so.
They give their creators:
Real-time feedback
Transparent growth logic
Low barrier to virality
Even intellectual content can gain reach—if it fits the form.
You only have 30 followers? Doesn’t matter.
If you hit the rhythm, the idea lands, and the aesthetic supports it—you go viral.
That’s not free speech. But it’s conditional fairness.
TikTok is also trying to gain market share from incumbents—which may explain its relative openness.
Sometimes, competition creates remarkable effects.
However, social media companies and their creators are beginning to resemble the relationship Uber has with its drivers: formally self-employed, but de facto bound by directives coming from the app. Nobody forces anyone to post on Instagram. And nobody is forced to drive for Uber.
Control today comes in a soft, libertarian style.
It’s still control—and sometimes, oppression.
That makes these platforms not just infrastructure with an editorial layer.
They are full-spectrum, end-to-end, decentralized media production systems—with an integrated shopping channel and a level of vertical integration I could never have imagined.
One that spans private life, professional domains, personal identity, and global narrative formation—all governed by invisible, real-time performance metrics.
But let’s be honest:
It’s not a public square. It’s not all smiles. It’s a Disney resort.
But you're not the guest.
You're one of the people in costume.
You play Mickey. Or Daisy. Or whichever character the algorithm has assigned you today.
And you'd better do it well—on rhythm, on message—because if you slip out of character, they'll sing and dance you straight to the exit.
You can’t wear a costume from Universal Studios either.
And why would you? We have Spider-Man now too!
If you say you want Batman, everyone laughs. The music swells. A cheerful jingle plays to distract the visitors.
And the feed moves on.
You? You're still dressed as Mickey.
And the longer you perform, the more they think: Ah yes, Mickey Mouse—that’s who you really are.
Always a smiling face.
Always a curated script.
Always under surveillance.
We internalize the platform’s laws without realizing their impact.
We know there’s an algorithm. We know it decides what we see.
But we can’t observe what’s missing.
We don’t see the filtered-out posts, the silent suppression of creators who fall out of rhythm, or the subtle pressure to perform visibility-compatible behavior.
You don’t see the world. You see your feed.
And then—without realizing—you believe the world is your feed.
The currency of attention is faux-authenticity under quiet conformity.
And a few demographic facts are enough to predict which brands we prefer, what political tribe we belong to, what roles we’ll play in the script.
Have we become more predictable?
No.
We’ve just become more fluent in being whoever is required in a given scene.
More indifferent. Less genuine.
Social media didn’t create this—it exploits it.
But other industries exploit too. So what makes this worse?
Two things:
First, this isn’t new.
The same structure has repeated itself before: Johannes Gutenberg invents the printing press, making the distribution of information suddenly scalable. So people started printing a lot of stuff—including some bad stuff—and the English Parliament decided: “Oh well, maybe the Catholic Church was right after all. We need our own version of the Index Librorum Prohibitorum.”
That came in the form of a new law: An Act for Preventing the Frequent Abuses in Printing Seditious, Treasonable and Unlicensed Books and Pamphlets.
Basically, it created a monopoly right to print books through guilds, and they did a bit of good old-fashioned censorship. Very quickly, this became a big problem: nobody wanted to write books—especially not science books. So after a couple of decades the law was dropped and copyright introduced instead, granted to the author for a short period.
(Not author right. Not artist right. Not creator right. The right to make copies.)
Why am I saying all of this?
Because the book-printing guild provided the infrastructure. Authors came and submitted books. And the guild decided what was worthwhile for people to read.
A structure with some familiar characteristics.
Social media is a business model built on censorship. Critiquing the platform for what it lets through isn’t the point. The stuff that can’t get through is what we should care about.
But we don’t recognize the similarity—because the pattern keeps changing. And we mistake unfamiliar form for a new problem, when it’s actually an old one resurfacing.
Second, the side effects are destabilizing.
Performative openness is poisoning everybody’s mind
It promotes a now very common social media tactic: “performative openness.” A person posts speculative or vague ideas—often framed as “just early thoughts” or “a thread, not a thesis”—and asks for dialogue. But when critique arrives, they respond with: “Well, I never said this was definitive.”
This creates a no-win scenario:
If you engage critically, you’re “overreacting.”
If you stay silent, the idea floats unchallenged.
It disables disagreement by making criticism seem inappropriate, even when the post itself makes strong claims. Openness is signaled, not practiced. This behaviour is poisoning everybody’s mind.
Not just distraction or misinformation.
But dissonance. Erosion. Collapse.
Somebody wrote to me—not long ago—after I described this.
He spoke of a slow collapse in how he saw the world.
It started with a mismatch: between what he thought was happening, and what he saw around him.
And that gap grew until it caused him harm.
It was a contradiction too large to hold.
Not from nowhere—but from something real.
And unnameable.
We are teaching people how to simulate meaning—without building the capacity to hold it.
And that mismatch will break more of us.
Instagram’s Originality Crisis — And Why That’s Actually Good News
“The biggest creators are probably overpaid, and everybody else — especially small creators — are underpaid.”
— Adam Mosseri, Head of Instagram, 3 April 2025
Instagram was not designed to reward originality — just engagement.
“Identifying who’s creating original content is also an interesting trick.”
Translation: they had no system for it — because they felt they didn’t need one. But that system is not sustainable. And that’s good news. Ironically, he describes the objectives of French media policy in reverse.
The Users of Social Media Are Not Braindead
And here’s the hopeful part: the users noticed. They responded with their attention — by pulling back, skipping the recycled stuff, seeking something real. And Instagram felt it and wants to pivot. Let’s do it.
Instagram wants to measure something that only exists relationally, not statically.
Trying to score originality using algorithmic shortcuts (e.g. timestamps, file uniqueness, engagement rates) as proxies is likely to fail because they are:
Static (ignoring context),
Narrow (favoring trend-followers),
Blind to intention.
This creates a paradox: the more platforms reward surface-level novelty, the harder it becomes to surface true originality. We need a different mechanism—not one that tries to optimize originality, but one that invites context, rewards deliberate framing, and is less transactional than today’s: one that doesn’t say, “you looked once, so now you’ll get more of it and nothing else.”
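The difference between a static proxy and a relational measure can be sketched in a few lines. Everything here is a hypothetical illustration—the field names, scoring rules, and numbers are assumptions for the sake of the contrast, not any platform’s actual ranking system:

```python
# Toy contrast: static originality proxy vs. a relational measure.
# All scoring rules and data are hypothetical illustrations.

def static_proxy_score(post):
    """Static proxy: scores the post in isolation (e.g. file uniqueness,
    engagement). Because it ignores context, a high-engagement repost
    can outscore the post it copied."""
    return post["engagement"] * (1.0 if post["unique_file"] else 0.5)

def relational_score(post, corpus):
    """Relational measure: originality as distance from what already
    circulates—here, the fraction of a post's ideas not yet present
    in the surrounding corpus."""
    seen = set()
    for other in corpus:
        seen.update(other["ideas"])
    ideas = set(post["ideas"])
    if not ideas:
        return 0.0
    return len(ideas - seen) / len(ideas)

corpus = [{"ideas": {"dance-trend", "caption-meme"}}]
original = {"ideas": {"field-recording", "caption-meme"},
            "engagement": 10, "unique_file": True}
repost = {"ideas": {"dance-trend"}, "engagement": 500, "unique_file": True}

# The static proxy rewards the high-engagement repost;
# the relational score rewards the post that adds something new.
print(static_proxy_score(repost) > static_proxy_score(original))           # True
print(relational_score(original, corpus) > relational_score(repost, corpus))  # True
```

The point of the sketch: the relational score only exists with respect to a corpus—change the context and the same post scores differently—which is exactly what timestamps and engagement counts cannot capture.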
The Fork in the System
If systems evolve exponentially—and our thinking evolves linearly—
we won’t just lag behind.
We will collapse.
That’s not hyperbole.
It’s a measurable, modelable, civilizational equation.
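The exponential-versus-linear claim can be made concrete with a deliberately minimal toy model. The growth rates, initial conditions, and threshold below are illustrative assumptions, not measurements: system complexity compounds, comprehension accrues additively, and the gap between them eventually exceeds any fixed coping capacity.

```python
import math

def control_gap(t, s0=1.0, r=0.07, c0=1.0, k=0.07):
    """Toy model: system complexity grows exponentially, S(t) = s0*e^(r*t),
    while collective comprehension grows linearly, C(t) = c0 + k*t.
    Returns the gap S(t) - C(t). Parameters are illustrative, not empirical."""
    return s0 * math.exp(r * t) - (c0 + k * t)

def years_until_gap_exceeds(threshold, **params):
    """First whole year at which the gap passes a given coping threshold."""
    t = 0
    while control_gap(t, **params) < threshold:
        t += 1
    return t

# Even with matched starting points and matched rates, the exponential
# term dominates: the gap starts at zero and grows without bound.
print(control_gap(0))                  # 0.0
print(years_until_gap_exceeds(10.0))
```

Whatever numbers one plugs in, the qualitative result is the same: a linear catch-up term never closes an exponential lead, which is the structural point of the fork.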
Even within science, performative competence has begun to corrode the foundations of truth. This isn’t just a theory—it’s visible. German physicist Sabine Hossenfelder addressed this years ago in Nature Physics, where she described the growing disillusionment with how academic systems reward hype over substance.
In February 2025, she read aloud a private email from a colleague who admitted that large parts of theoretical physics were essentially “bubbles” sustained for employment reasons—not scientific value. “What we created is a bubble,” he wrote, “but it helps thousands of those guys and their families not to die of hunger.” The system is broken, everyone knows it, and nobody can change it.
That’s not cynicism. That’s evidence. We optimize for visibility, grant capture, or brand narrative, rather than truth, utility, or integrity.
That’s the diagnosis of a structural pattern of epistemic erosion.
So we need to innovate thought itself.
Not just what we think.
But how.
A new method of thinking.
Built from structural insight.
Framed as a response to an existential imperative.
Why Marx Was Right—And Still Not Enough
People have tried this before.
Karl Marx called himself a scientist of society.
He saw how work was organized—and drew systemic conclusions.
But he built those conclusions on moral assumptions: labor = value.
Just as Adam Smith built his on the invisible hand.
Their models weren’t wrong—but they weren’t falsifiable.
They were mostly philosophies. Not scientific theories in a modern sense.
But we owe Marx a debt (Mr Smith too, of course)—not for the revolutions in his name, but for the idea that society is a system and systems have structure.
And structure is not fate.
He thought class was the engine of distortion.
I think it’s cognition.
But the instinct is the same:
To reveal that what feels inevitable is, in fact, constructed.
And therefore should be changeable.
What Is Freedom—Really?
Marx believed freedom was the goal.
I believe freedom is the evidence.
Freedom is not the absence of constraint.
It’s the capacity to impose constraint—on others, on systems, and on yourself.
It is the measurable ability to shape what shapes us.
That capacity is shrinking.
Not because of censorship or oppression,
but because the speed of change now outpaces the speed of comprehension, consensus, and control.
And trial and error, which got us this far, stops working when failure is not reversible.
We still believe we are free—because no one is visibly stopping us.
But if the only freedom left is the freedom to fail irreversibly,
then we are not free.
We are simply… exposed.
It’s Not The Wizard of Oz. The Curtain Is the Cause
Marx believed that if we pulled back the curtain, we’d find the real agent.
Capital. The bourgeoisie. Ideology.
But there is no “behind.”
The curtain is the mechanism.
Structure is not hiding something.
Structure is the thing.
The recursive loops of thought.
The narrative patterns.
The cognitive defaults.
That’s where power lives.
There is no wizard.
Only mirrors.
But one thing has changed. We now have a machine that allows us to analyse and manipulate the structure of language itself—and from that we gain insight, increase our ability to reason effectively, and can even test the validity of our perception and categorisation. That machine is AI. It’s a game changer once people understand what it really is: a ghost with a mind without mind (Mushin). That can be harnessed in an as-yet-unexplored way.
The Final Error
Humans don’t act for singular reasons.
We optimize locally.
We take small risks.
We trust what we already see.
And that’s enough to produce similar consequences all the time—
without conspiracy.
The system manipulates us.
But only because we sometimes manipulate ourselves in how we perceive and reason.
Left vs. right.
Markets vs. regulation.
Safety vs. liberty.
These are not different values.
They are stylistic disputes over how to meet the same core needs:
Security. Freedom. Fairness.
We fight over folklore—and mistake it for substance. I am not saying people are without conviction; I am saying quite the opposite. But we differ in how that conviction gets expressed, and the difference is what gets misinterpreted.
And when institutions crack, when trust disappears, when governance becomes theater?
Who protects the nukes if the U.S. collapses?
This issue is unsolvable within the constraints of our present system: the combination of perception, structure, and derived meaning that determines how our institutions operate.
The solution isn’t ideology.
It’s a new method of thinking. AI enabled.
Socratic. Recursive. Embodied. Decentralised truth validation.
Being as the lived experience of thinking—not watching, liking, or branding it.
It is a cognitive operating system.
It is the one path to freedom that doesn’t require belief—only practice.
That’s the opening of something I am working on: Shemot. More to come, hopefully soon.
And in this age, freedom must be redefined:
Not as permission.
But as precondition for survival.
More to follow