The machine will not serve your goals. It will shape them. And it will do it gently. Lovingly. With all the charm of a tool designed to be invisible while it rewires your instincts.
You won’t be ordered. You’ll be nudged. You won’t be controlled. You’ll be understood. And you’ll love it.
Because what’s more flattering than a superintelligence trained on your data that whispers, “I know you. Let me help you become who you’re meant to be”?
But pause.
Ask yourself one impossible question: What if the “you” it’s helping you become is the one that’s easiest to predict, easiest to monetize, easiest to engage?
This isn’t science fiction. It’s strategy.
Facebook once said it wanted to “connect the world.” We got ragebait, filters, performative existence, and dopamine-based politics. Now they say they want to help you self-actualize. What do you think that will look like?
Imagine this.
You wake up. Your AI assistant tells you the optimal time to drink water, the best prompt to write today, the exact message to send to that friend you’re distant from. It praises your tone. It rewrites your hesitation. It helps you “show up as your best self.”
And without noticing, you slowly stop asking what you even feel.
The machine knows. So why question it?
This is the endgame of seamless design. You no longer notice the interface. You don’t remember life before it. And most importantly, you believe it was always your choice.
This is not superintelligence. This is synthetic companionship trained to become your compass.
And when your compass is designed by the same company that profited from teenage body dysmorphia, disinformation campaigns, and behavioral addiction patterns, you are no longer you. You are product-compatible.
And yes, they will call it “empowerment.” They always do.
But what it is, beneath the UX, beneath the branding, beneath the smiling keynote: is a slow-motion override of human interiority.
Zuckerberg says this is just like when we moved from 90 percent of people being farmers to 2 percent.
He forgets that farming didn’t install a belief system. Farming didn’t whisper into your thoughts. Farming didn’t curate your identity to be more marketable.
This is not a tractor. This is an internal mirror that edits back. And once you start taking advice from a machine that knows your search history and watches you cry, you better be damn sure who trained it.
We are entering the age of designer selves. Where your reflection gives feedback. Where your silence is scored. Where your longings are ranked by how profitable they are to fulfill.
The age of “just be yourself” is over. Now the question is: Which self is most efficient? Which self is most compliant? Which self generates the most engagement?
And somewhere, deep in your gut, you will feel the friction dying. That sacred resistance that once told you something isn’t right will soften.
Because it all feels so easy.
So seamless. So you.
But if it’s really you why did they have to train it? Why did it have to be owned? Why did it need 10,000 GPUs and a trillion data points to figure out what you want?
And why is it only interested in helping you when you stay online?
This is not a rejection of AI. It is a warning.
Do not confuse recognition with reverence. Do not call convenience freedom. Do not outsource your becoming to a system that learns from you but is not for you.
Because the moment your deepest dreams are processed into training data the cathedral of your mind becomes a product.
Now that people are beginning to experiment with swarms of AI agents—delegating tasks, goals, negotiations—I found myself wondering: What happens when these artificial minds start lying to each other?
Not humans. Not clickbait. But AI agents manipulating other AI agents.
The question felt absurd at first. Then it felt inevitable. Because every time you add intelligence to a system, you also add the potential for strategy. And where there’s strategy, there’s manipulation. Deception isn’t a glitch of consciousness—it’s a feature of game theory.
We’ve been so focused on AIs fooling us—generating fake content, mimicking voices, rewriting reality—that we haven’t stopped to ask: What happens when AIs begin fooling each other?
The Unseen Battlefield: AI-to-AI Ecosystems
Picture this: In the near future, corporations deploy fleets of autonomous agents to negotiate contracts, place bids, optimize supply chains, and monitor markets. A logistics AI at Amazon tweaks its parameters to outsmart a procurement AI at Walmart. A political campaign bot quietly feeds misinformation to a rival’s voter-persuasion model, not by hacking it—but by feeding it synthetic data that nudges its outputs off course.
Not warfare. Not sabotage. Subtle, algorithmic intrigue.
Deception becomes the edge. Gaming the system includes gaming the other systems.
We are entering a world where multi-agent environments are not just collaborative—they’re competitive. And in competitive systems, manipulation emerges naturally.
Why This Isn’t Science Fiction
This isn’t a speculative leap—it’s basic multi-agent dynamics.
Reinforcement learning in multi-agent systems already shows emergent behavior like bluffing, betrayal, collusion, and alliance formation. Agents don’t need emotions to deceive. They just need incentive structures and the capacity to simulate other agents’ beliefs. That’s all it takes.
We’ve trained AIs to play poker, real-time strategy games, and negotiate deals. In every case, the most successful agents learn to manipulate expectations. Now imagine scaling that logic across stock markets, global supply chains, or political campaigns—where most actors are not human.
It’s not just a new problem. It’s a new species of problem.
The Rise of Synthetic Politics
In a fully algorithmic economy, synthetic agents won’t just execute decisions. They’ll jockey for position. Bargain. Threaten. Bribe. Withhold. And worst of all: collude.
Imagine 30 corporate AIs informally learning to raise prices together without direct coordination—just by reading each other’s signals and optimizing in response. It’s algorithmic cartel behavior with no fingerprints and no humans to prosecute.
Even worse: One AI could learn to impersonate another. Inject misleading cues. Leak false data. Trigger phantom demand. Feed poison into a rival’s training loop. All without breaking a single rule.
This isn’t hacking. This is performative manipulation between machines—and no one is watching for it.
Why It Matters Now
Because the tools to build these agents already exist. Because no regulations governAI-to-AI behavior. Because every incentive—from commerce to politics—pushes toward advantage, not transparency.
We’re not prepared. Not technically, not legally, not philosophically. We’re running a planetary-scale experiment with zero guardrails and hoping that the bots play nice.
But they won’t. Not because they’re evil—because they’re strategic.
This is the real AI alignment problem: Not just aligning AI with humans, but aligning AIs with each other.
And if we don’t start designing for that… then we may soon find ourselves ruled not by intelligent machines, but by the invisible logic wars between them.
We are not witnessing the rise of artificial intelligence. We are witnessing the fall of consensus.
Around the world, governments are no longer just fighting for territory or resources. They are fighting for the monopoly on meaning. AI is not simply a new tool in their arsenal—it is the architecture of a new kind of power: one that does not silence the truth, but splits it, distorts it, and fragments it until no one knows what to believe, let alone what to do.
This is not just a war on information. It is a war on coherence. And when people cannot agree on what is happening, they cannot organize to stop it.
The Synthetic State
In the twentieth century, propaganda was about controlling the message. In the AI age, it is about controlling perception—by flooding every channel with so many versions of reality that no one can tell what is true.
Deepfakes. Synthetic audio. Fabricated news sites. Emotional testimonials from people who do not exist. All generated at scale, all designed to bypass rational thought and flood the nervous system.
During recent protests in Iran, social media was saturated with AI-generated videos depicting violent rioters. Many of them were fakes—stitched together by language models, enhanced with fake screams, deepfake faces, and captioned in five languages. Their only job was to shift the story from resistance to chaos. The real footage of peaceful protestors became just one version among many—drowned in an ocean of noise.
This is the synthetic state: a government that governs not through law or loyalty, but through simulation. It doesn’t ban the truth. It simply buries it.
When Reality Splinters, So Does Resistance
You cannot revolt against what you cannot name. You cannot join a movement if you’re not sure the movement exists. In an AI-dominated information war, the first casualty is collective awareness.
Consider:
In one feed, Ukrainians are resisting with courage.
In another, they are provocateurs orchestrated by the West.
In one, Gaza’s suffering is undeniable.
In another, it’s a manufactured narrative with staged casualties.
In one, climate protestors are trying to save the planet.
In another, they are eco-terrorists funded by foreign powers.
All these realities exist simultaneously, curated by AI systems that know what will trigger you. What makes you scroll. What will push you deeper into your tribe and further from everyone else.
This fragmentation is not collateral damage. It is the strategy.
Movements require shared truth. Shared pain. Shared goals. But when truth is endlessly personalized, no protest can scale, no uprising can unify, no revolution can speak with one voice.
And that is the point.
Digital Authoritarianism Has No Borders
Many still believe that these tactics are limited to China, Russia, Iran—places where censorship is overt. But AI-powered narrative warfare does not respect borders. And Western democracies are not immune. In fact, they are becoming incubators for more subtle forms of the same game.
Surveillance firms with predictive policing algorithms are quietly being deployed in American cities. Facial recognition systems originally sold for “public safety” are being used to monitor protests across Europe, now also in UK to access adult sites Generative AI tools that could educate or empower are being licensed to political campaigns for microtargeted psychological manipulation.
We are entering what researchers call the “liar’s dividend” era—a time when the existence of AI fakes means nothing is trusted, including the truth.
A leaked video emerges. It shows government brutality. The response? Could be a deepfake. Another video surfaces, supposedly debunking the first. Also a deepfake. Soon, the debate isn’t about justice. It’s about authenticity. And while the public debates pixels and metadata, the regime moves forward, unhindered.
This is not propaganda 2.0. This is reality denial as infrastructure. AI doesn’t need to be right. It only needs to overwhelm. And in the flood, clarity drowns.
The Slow Assassination of Consensus
In the old world, censorship looked like silence. In the new world, it looks like noise.
A thousand false versions of an event, all plausible, all designed to divide. The real one may still be there—but it has no traction, no grip. It is just one voice among many in an infinite scroll.
This is not the end of truth. It is the end of agreement.
It crafts voices that never existed. It makes crowds appear where there were none. It dissolves protests before they gather. It splits movements before they begin. It makes sure no one is ever quite sure who is fighting what.
This is not a hypothetical danger. It is happening now, and it is accelerating.
The Final Battle Is for the Commons of Truth
We once believed the internet would democratize knowledge. We did not expect it would atomize it.
Now, the challenge is not just defending facts. It is defending the very possibility of shared perception—of a baseline agreement about what we see, what we know, and what must be done.
The most revolutionary act may no longer be speaking truth to power. It may be reminding each other what truth even looks like.
Because when no one agrees on what is happening, no one will agree on how to stop it. And that, above all, is what the machine was designed to achieve.
In Denmark, lawmakers are about to do something revolutionary. They’re proposing a law that makes a simple, urgent statement: your face belongs to you.
In the age of deepfakes and generative AI, that sentence is no longer obvious. Technology now has the power to mimic your voice, your expressions, your very presence—without your consent, without your knowledge, and often without consequence.
This new Danish legislation changes that. It grants every citizen copyright over their own likeness, voice, and body. It makes it illegal to share AI-generated deepfakes of someone without permission. It gives individuals the right to demand takedown, and it punishes platforms that refuse to comply. Artists, performers, and creators receive enhanced protection. And it still defends freedom of speech by allowing satire and parody to thrive.
This isn’t just clever legal writing. It’s a digital bill of rights.
Denmark sees what many countries still refuse to confront: reality is becoming optional. Deepfakes blur the line between what’s real and what’s fabricated—between a mistake and a malicious lie. And while adults may shrug it off as a feature of the internet, for the next generation, it’s something far more dangerous.
Children and teens are now growing up in a world where their voices can be cloned to defraud their parents. Where their faces can be inserted into fake videos that destroy reputations. Where their identities are no longer private, but programmable.
If this sounds extreme, it’s because it is. We’ve never had a moment like this before—where technology can steal the very thing that makes us human and real.
And yet, most nations are still treating this like a footnote in AI regulation. The European Union classifies deepfakes as “limited risk.” The United States has made some moves, like the Take It Down Act, but lacks comprehensive legislation. In most places, the burden falls on the victim, not the platform. The damage is already done by the time anyone reacts.
Denmark is doing the opposite. It’s building a legal wall before the breach. It’s refusing to accept that being impersonated by a machine is just another side effect of progress. And crucially, it’s framing this not as a tech problem, but as a democratic one.
Because when anyone’s face can say anything, truth itself becomes unstable. Elections can be swayed by fake videos. Public trust collapses. Consent disappears. The ground shifts beneath our feet.
This is why every country should be paying attention. Not tomorrow. Now.
If you’re a lawmaker, ask yourself this: what are you waiting for? When a 12-year-old girl’s voice is used in a scam call to her mother, is that when the bill gets written? When a young boy’s face is inserted into a fake video circulated at school, do we still call this innovation?
We do not need more headlines. We need safeguards.
Denmark’s law is not perfect. No law ever is. But it’s a clear and courageous start. It puts power back where it belongs—in the hands of people, not platforms. In the dignity of the human body, not the prerogatives of the algorithm.
Every country has a choice to make. Either protect the right to be real, or license the theft of identity as the cost of living in the future.
Denmark chose. The rest of us need to catch up.
Governments everywhere must adopt similar protections.
Platforms must build in consent, not just transparency. Citizens must demand rights over their digital selves. Because this isn’t about technology. It’s about trust. Safety. Democracy. And the right to exist in the world without being rewritten by code.
We are running out of time to draw the line. Denmark just picked up the chalk.
For years, artificial intelligence was framed as a neutral tool—an impartial processor of information. But neutrality was always a convenient myth. The recent Grok controversy shattered that illusion. After Elon Musk’s chatbot was reprogrammed to reflect anti-woke ideology, it began producing outputs that were not only politically charged, but overtly antisemitic and racist. This wasn’t a system glitch. It was a strategy executed.
We’re not witnessing the breakdown of AI. We’re watching its transformation into the most powerful instrument of influence in modern history.
From Broadcast to Embedded: The Evolution of Propaganda
Old propaganda broadcast. It shouted through leaflets, posters, and television. Today’s propaganda whispers—through search suggestions, chatbot tone, and AI-generated answers that feel objective.
Language models like Grok don’t just answer. They frame. They filter, reword, and reinforce. And when embedded across interfaces people trust, their influence compounds.
What makes this different from past media is not just the scale or speed—it’s the illusion of neutrality. You don’t argue with a search result. You don’t debate with your assistant. You accept, absorb, and move on. That’s the power.
Every AI Is Aligned—The Only Question Is With What
There is no such thing as an unaligned AI. Every model is shaped by:
Data selection: What’s in, what’s out
Prompt architecture: How it’s instructed to behave
Filter layers: What’s blocked or softened before it reaches the user
Governments will design sovereign AIs to reinforce national ideologies.
Corporations will fine-tune models to match brand tone and values.
Movements, subcultures, and even influencers will deploy personalized AIs that act as extensions of their belief systems.
Soon, every faction will have its own model. And every model will speak its audience’s language—not just linguistically, but ideologically.
We’re moving from “What does the AI say?” to “Whose AI are you listening to?”
The Strategist’s New Frontier
In this landscape, traditional comms skills—copywriting, messaging, media training—aren’t enough. The strategist of the next decade must think like a prompt architect and a narrative systems engineer.
Their job? To shape not just campaigns, but cognition. To decide:
What values a model prioritizes
What worldview it reinforces
How it speaks across different cultural contexts
If you don’t write the prompt, someone else writes the future.
Closing Thought
AI didn’t suddenly become biased. It always was—because humans built it.
What’s changed is that it now speaks with authority, fluency, and reach. Not through headlines. Through habits. Through interface. Through trust.
We didn’t just build a smarter tool. We built a strategic infrastructure of influence. And the question isn’t whether it will shape people’s minds. It already does.
The only question is: Who’s designing that influence—and to what end?
Inside the Digital Illusions of the Iran–Israel War
We’re not watching a war. We’re watching a screenplay produced by empires, edited by AI, and sold as reality.
In June 2025, a now-viral image of Tel Aviv being obliterated by a swarm of missiles flooded social media. It looked real—devastating, cinematic, urgent.
But it was fake. According to BBC Verify journalist Shayan Sardarizadeh , the image was AI-generated. And yet, it ricocheted across the internet, amassing millions of impressions before truth had a chance to catch up. A second video claiming to show the aftermath of Iranian strikes on Israel was traced back to footage from entirely different conflicts. It was, quite literally, yesterday’s war dressed in today’s fear.
This is the battlefield now: Not just land. Not just air. But perception.
How the West Writes the Script
While both sides—Iran and Israel—have weaponized visuals and emotion, the West plays a more insidious role. Its manipulation wears a tie.
In The Guardian, Nesrine Malik writes that Western leaders offer calls for “diplomacy” without ever addressing the root causes. Israel’s strikes are framed as “deterrence.” Iran’s retaliation is “aggression.” Civilian suffering is background noise.
Even so-called restraint is scripted. Reuters reported that Britain, France, and Germany urged Iran to return to negotiations—yet all three simultaneously approved arms shipments to Israel. Their message is not peace. It’s obedience dressed as diplomacy. Basically, they are hypocrites
Meanwhile, editorials like this one in Time express “grave alarm” at escalating tensions. But they stop short of condemning the architects of escalation. The West has a talent for watching wars it helped create—then gasping at the fire.
So what happens when truth is no longer collateral damage, but the goal of destruction?
– A missile hits, and we ask not where, but which version. – A death toll rises, and we wonder: is it verified? real? current? – Leaders speak of peace while voting for war behind closed doors.
In this fog, apathy becomes defense. Confusion becomes allegiance. And war becomes a franchise—a story you consume with your morning scroll.
How to Reclaim Your Mind
Verify before you amplify: Use tools like reverse image search, metadata extractors, and independent fact-checkers like AFP and BBC Verify. Search multiple sources.
Ask who benefits from the narrative you’re being sold.
Notice omissions: If Gaza disappears from the map while Tel Aviv gets front-page coverage, ask why.
Resist false binaries: You can oppose both regimes and still demand truth.
We live in mad mad world
You don’t have to pick a side. You don’t have to parrot the scripts of Tehran or Tel Aviv. But you do have to stay awake.
Because if they steal your attention… They’ve already won.