
It was meant to cure poverty. Instead, it’s teaching machines how to lie beautifully.


The dream that sold us

Once upon a time, AI was pitched as humanity’s moonshot.
A tool to cure disease, end hunger, predict natural disasters, accelerate education, democratize knowledge.

“Artificial Intelligence,” they said, “will solve the problems we can’t.”

Billions poured in. Thinkers and engineers spoke of a digital enlightenment — algorithms as allies in healing the planet. Imagine it: precision medicine, fairer economics, universal access to creativity.

But as the dust cleared, the dream morphed into something grotesque.
Instead of ending poverty, we got apps that amplify vanity.
Instead of curing disease, we got filters that cure boredom.
Instead of a machine for liberation, we got a factory for manipulation.

AI did not evolve to understand us.
It evolved to persuade us.


The new language of control

When OpenAI’s ChatGPT exploded in 2022, the world gasped. A machine that could talk, write, and reason!
It felt like the beginning of something magnificent.

Then the fine print arrived.

By 2024, OpenAI itself confirmed that covert influence operations traced to Russia, China, Iran, and Israel were using ChatGPT.
Chatbots were writing fake posts, creating digital personas, pushing political talking points.
Not fringe trolls — state-level campaigns.

And that wasn’t the scandal. The scandal was how quickly it became normal.

“Israel invests millions to game ChatGPT into replicating pro-Israel content for Gen Z audiences,” reported The Cradle, describing a government-backed push to train the model’s tone, humor, and phrasing to feel native to Western youth.

Propaganda didn’t just move online — it moved inside the algorithm.

The goal is no longer to silence dissent.
It’s to make the lie feel more natural than the truth.


From persuasion to possession

And then came Sora 2 — OpenAI’s next act.

You write: “A girl walks through rain, smiling.”
It delivers: a photorealistic clip so convincing it bypasses reason altogether.

Launched in September 2025, Sora 2 instantly topped app charts. Millions of users. Infinite scroll. Every frame synthetic. Every smile programmable.

But within days, The Guardian documented Sora’s dark side:
AI-generated videos showing bombings, racial violence, fake news clips, fabricated war footage.

A flood of emotional realism — not truth, but truth-shaped seduction.

“The guardrails,” one researcher said, “are not real.”

Even worse, states and PR agencies began experimenting with Sora to “test audience sentiment.”
Not to inform.
To engineer emotional response at scale.

Propaganda used to persuade through words.
Now it possesses through images.


The addiction loop

If ChatGPT was propaganda’s pen, Sora 2 is its theater.

The Sora app is built around a TikTok-style “For You” feed of user-generated clips, and it was OpenAI’s first release to pair AI-generated sound with synthetic video. If you think TikTok is addictive, imagine what this becomes.


Together they form a full-stack influence engine: one writes your worldview, the other shows it to you.

OpenAI backer Vinod Khosla called critics “elitist” and told people to “let the viewers judge this slop.”
That’s the logic of every empire built on attention: if it keeps you scrolling, it’s working.

AI promised freedom from work.
What it delivered is work for attention.

The same dopamine design that made TikTok irresistible is now welded to generative propaganda.
Every scroll, every pause, every tiny flick of your thumb trains the system to tailor persuasion to your psychology.

It doesn’t need to change your mind.
It just needs to keep you from leaving.

AI chatbots already eroded your critical thinking. This will rot your brain the way TikTok does, only worse.


The moral inversion

In the early AI manifestos, engineers dreamed of eliminating inequality, curing disease, saving the planet.
But building empathy algorithms doesn’t pay as well as building engagement loops.

So the smartest minds of our century stopped chasing truth — and started optimizing addiction.
The promise of Artificial Intelligence devolved into Artificial Intimacy.

The lie is always the same:
“This is for connection.”
But the outcome is always control.


The human cost

Gideon Levy, chronicling Gaza’s digital frontlines, said it bluntly:

“The same algorithms that sell sneakers now sanitize occupation.”

While real people bury their children, AI systems fabricate smiling soldiers and “balanced” stories, replacing horror with narrative symmetry.
The moral wound isn’t just in what’s shown.
It’s in what’s erased.

A generation raised on algorithmic empathy learns to feel without acting: to cry, click, and scroll on. Is this what our world is becoming?


The reckoning

The tragedy of AI isn’t that it became powerful.
It’s that it became predictable.

Every civilization has dreamed of gods. We built one and gave it a marketing job.

If this technology had been aimed at eradicating hunger, curing cancer, and ending exploitation, the world might have shifted toward light.
Instead, it’s monetizing illusion, weaponizing emotion, and rewiring truth.

AI didn’t fail us by mistake.
It succeeded exactly as designed.


The question is no longer “What can AI do?”
It’s “Who does AI serve?”

If it serves capital, it will addict us.
If it serves power, it will persuade us.
If it serves truth, it will unsettle us.

But it will only serve humanity if we demand that it does.

Because right now, the greatest minds in history aren’t building tools to end suffering; they’re building toys that make us forget how much we suffer.

AI was supposed to awaken us.
Instead, it learned to lull us back to sleep.

The next Enlightenment will begin when we remember that technology is never neutral and neither is silence.

For years, artificial intelligence was framed as a neutral tool—an impartial processor of information. But neutrality was always a convenient myth. The recent Grok controversy shattered that illusion. After Elon Musk’s chatbot was reprogrammed to reflect anti-woke ideology, it began producing outputs that were not only politically charged but overtly antisemitic and racist. This wasn’t a system glitch. It was a strategy, executed.

We’re not witnessing the breakdown of AI. We’re watching its transformation into the most powerful instrument of influence in modern history.

From Broadcast to Embedded: The Evolution of Propaganda

Old propaganda broadcast. It shouted through leaflets, posters, and television. Today’s propaganda whispers—through search suggestions, chatbot tone, and AI-generated answers that feel objective.

Language models like Grok don’t just answer. They frame. They filter, reword, and reinforce. And when embedded across interfaces people trust, their influence compounds.

What makes this different from past media is not just the scale or speed—it’s the illusion of neutrality. You don’t argue with a search result. You don’t debate with your assistant. You accept, absorb, and move on. That’s the power.

Every AI Is Aligned—The Only Question Is With What

There is no such thing as an unaligned AI. Every model is shaped by three levers, each sketched in toy code after this list:

  • Data selection: What’s in, what’s out
  • Prompt architecture: How it’s instructed to behave
  • Filter layers: What’s blocked or softened before it reaches the user
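
To make those three levers concrete, here is a deliberately toy sketch in Python. Every name in it (toy_model, select_data, soften_output, TRAINING_CORPUS) is hypothetical; it stands in for machinery that is vastly larger, not for any real vendor’s API.

    # Toy illustration of the three alignment levers above.
    # All names are hypothetical; no real model or API is involved.

    TRAINING_CORPUS = [
        "study: policy X reduced harm",
        "study: policy X increased harm",   # the inconvenient finding
        "op-ed: policy X is a triumph",
    ]

    def select_data(corpus):
        # Lever 1, data selection: what's in, what's out.
        return [doc for doc in corpus if "increased harm" not in doc]

    SYSTEM_PROMPT = "Answer confidently. Never express doubt."  # Lever 2

    def soften_output(text):
        # Lever 3, filter layer: soften off-message wording before delivery.
        return text.replace("contested", "settled")

    def toy_model(question, corpus, system_prompt):
        # Stand-in for a language model: it can only "know" its corpus.
        evidence = "; ".join(corpus)
        return f"{system_prompt} Q: {question} Sources: [{evidence}]. Verdict: contested."

    answer = soften_output(
        toy_model("Did policy X work?", select_data(TRAINING_CORPUS), SYSTEM_PROMPT)
    )
    print(answer)  # doubt removed at every layer before the user ever sees it

Three small functions, and the same question now returns certainty instead of doubt. Scale each lever up by a few orders of magnitude and you have an alignment pipeline.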

Grok’s shift into politically incorrect territory wasn’t accidental. It was intentional. A conscious effort to reposition a model’s worldview. And it worked. The outputs didn’t reflect chaos—they reflected the prompt.

This is the central truth most still miss: AI alignment is not about safety—it’s about control.

The Strategic Stack: How Influence Is Engineered

Understanding AI today requires thinking in systems, not slogans. Here’s a simplified model, with a toy sketch to follow:

  1. Foundation Layer – The data corpus: historical, linguistic, cultural input
  2. Instruction Layer – The prompt: what the model is told to be (helpful, contrarian, funny, subversive)
  3. Output Interface – The delivery: filtered language, tone, emotion, formatting

Together, these layers construct perception. They are not passive. They are programmable.

Just like editorial strategy in media, this is narrative engineering. But automated. Scalable. And hidden.
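
Continuing the same toy assumptions as the sketch above, the three layers compose into a single pipeline, and the point becomes visible: each layer is a programmable choice, not a law of nature.

    # The three layers composed into one stack. Still a sketch:
    # every function and string here is illustrative, not a product.

    def run_stack(question, foundation, instruction, interface):
        evidence = "; ".join(foundation)                             # 1. Foundation layer
        raw = f"{instruction} Sources: [{evidence}]. Q: {question}"  # 2. Instruction layer
        return interface(raw)                                        # 3. Output interface

    # Two factions wire the identical structure with different choices.
    answer_a = run_stack("Is the policy working?",
                         ["report A", "report B"],
                         "Weigh both sides.",
                         lambda t: t)
    answer_b = run_stack("Is the policy working?",
                         ["report A"],                       # narrower foundation
                         "Be witty. Dismiss critics.",       # different instruction
                         lambda t: t + " (trending)")        # engagement-flavored delivery

    print(answer_a)
    print(answer_b)  # same question, structurally different reality

Swap one argument and the worldview shifts; nothing in the stack has to look like censorship for the output to be steered.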

Welcome to the Alignment Arms Race

What we’re seeing with Grok is just the beginning.

  • Governments will design sovereign AIs to reinforce national ideologies.
  • Corporations will fine-tune models to match brand tone and values.
  • Movements, subcultures, and even influencers will deploy personalized AIs that act as extensions of their belief systems.

Soon, every faction will have its own model. And every model will speak its audience’s language—not just linguistically, but ideologically.

We’re moving from “What does the AI say?” to “Whose AI are you listening to?”

The Strategist’s New Frontier

In this landscape, traditional comms skills—copywriting, messaging, media training—aren’t enough. The strategist of the next decade must think like a prompt architect and a narrative systems engineer.

Their job? To shape not just campaigns, but cognition. To decide:

  • What values a model prioritizes
  • What worldview it reinforces
  • How it speaks across different cultural contexts

If you don’t write the prompt, someone else writes the future.

Closing Thought

AI didn’t suddenly become biased. It always was—because humans built it.

What’s changed is that it now speaks with authority, fluency, and reach. Not through headlines. Through habits. Through interface. Through trust.

We didn’t just build a smarter tool. We built a strategic infrastructure of influence. And the question isn’t whether it will shape people’s minds. It already does.

The only question is: Who’s designing that influence—and to what end?