Posts tagged Propaganda

“If war were truly human nature, it wouldn’t need to be sold to us.”

For centuries, war has been framed as an unavoidable part of human existence—an instinct as natural as hunger or love. We’re told that conflict is in our DNA, that violence is simply what humans do when resources are scarce or when ideologies clash. But what if that’s not true?

What if war isn’t a reflection of human nature but a product of carefully engineered incentives—a system designed and maintained by those who benefit from it?

Look past the patriotic slogans, the historical narratives, the Hollywood heroics, and you’ll see that war is not an accident, nor an inevitability. It is a business, a strategy, and a tool—one that rewards a select few while costing millions of lives.


Who Profits from Perpetual War?

War is often justified with grand ideals—freedom, security, justice. But follow the money, and you’ll find a far less noble reality.

1. The Economic Engine of War

Wars do not just happen—they are fueled by an entire ecosystem of corporations, lobbyists, and financial interests that thrive on global instability.

  • The Arms Industry: The global arms trade is a trillion-dollar business, with defense contractors like Lockheed Martin, Raytheon, and BAE Systems profiting immensely from every escalation of conflict. These companies don’t just sell weapons—they lobby governments, fund think tanks, and influence foreign policy to ensure that war remains a constant.
  • Resource Exploitation: Wars are often fought not for ideology, but for oil, minerals, and strategic territory. The Iraq War, for example, saw multinational corporations swoop in to control lucrative oil fields under the guise of democracy-building.
  • Reconstruction Profits: Destruction creates markets. The same corporations that profit from bombing a country often profit from rebuilding it. In Afghanistan and Iraq, defense contractors made billions on government contracts to “reconstruct” infrastructure their weapons helped destroy.

War is not random chaos. It is a business model—one where violence creates demand, and instability ensures continued supply.

2. Power and Political Control

Beyond financial incentives, war serves as a powerful tool for political elites to maintain and expand control.

  • Distracting the Public: When governments face internal crises—economic downturns, scandals, civil unrest—nothing redirects public attention like a well-timed “external enemy.” History is full of examples where leaders leveraged war to unite fractured populations or deflect criticism.
  • Expanding Authoritarianism: Fear justifies repression. Wars—both foreign and domestic—are often used as excuses to erode civil liberties, expand surveillance, and militarize police forces. Governments that claim to fight for democracy abroad often use the same wars to restrict democracy at home.
  • Maintaining Global Hierarchies: War isn’t just about nations fighting each other—it’s about maintaining the power structures that benefit the ruling elite. Superpowers wage proxy wars to control strategic regions, install favorable regimes, and prevent economic independence in weaker nations.

War keeps the powerful in power. Peace, on the other hand, threatens hierarchies—because peace often means redistributing power and resources more fairly.


The Myth of War as “Human Nature”

If war were truly inevitable—if it were simply a product of our genetic programming—then why have so many societies thrived in peaceful cooperation?

  • Post-WWII Europe: After centuries of war, European nations chose economic integration over armed conflict—resulting in unprecedented peace between former rivals.
  • The Peace Process in Northern Ireland: After decades of violence, incentives shifted from fighting to economic and political cooperation, leading to stability.
  • Hunter-Gatherer Societies: Anthropological studies reveal that many pre-agricultural human societies avoided war altogether, prioritizing cooperation and negotiation instead.

War is not hardwired into our species. It is imposed. It is incentivized. It is sold.


The Role of Mythmaking: How We’re Conditioned to Accept War

Most people don’t want war. So how do governments convince populations to accept it? Through storytelling.

  • The Hero Narrative: Films, TV, and video games glorify war as a noble struggle of good vs. evil—conditioning generations to see violence as honorable.
  • The Fear Narrative: News outlets flood the public with stories of imminent threats—keeping populations in a state of anxiety where militarization seems like the only option.
  • The Destiny Narrative: History books often portray war as inevitable—as if societies were destined to clash rather than manipulated into conflict.

Every war needs public buy-in. And that buy-in is carefully manufactured.


War Isn’t Inevitable—It’s a Choice

The most dangerous myth about war is that it is unavoidable.

But war is not a law of nature. It is a system, carefully built and maintained. And what is built can be dismantled.

The question is: Who benefits from you believing otherwise?

“If a single child is trapped under rubble, the world stops. If thousands suffer, we call it a crisis—but we move on. Why?”

We don’t like to admit it, but our empathy has limits. We care deeply about our families, our friends, our communities. But beyond that? Beyond our immediate circles, our borders, our cultures?

Something shifts.

A war breaks out in a distant country. A factory collapse kills hundreds. Refugees flee devastation.

And we scroll past.

Not because we’re bad people. Not because we don’t care. But because something inside us—something ancient, something wired into our survival—tells us: That’s not your problem.

This isn’t just about apathy. It’s about how human nature, technology, and politics work together to turn real people into statistics. And if we don’t challenge it, the consequences are dire.

How Our Brains Trick Us Into Indifference

Science has a name for this: psychic numbing—the way our emotions shut down when faced with large-scale suffering.

  • We feel deeply for one person in pain.
  • We struggle to process the suffering of millions.

Paul Slovic, a researcher on human behavior, calls this the collapse of compassion. The larger the tragedy, the harder it is for our brains to compute.

And it’s not just numbers. It’s distance—physical, cultural, emotional.

  • A friend loses their job? We rally to help.
  • Thousands lose their homes in a country we’ve never visited? We feel bad. But it’s… abstract.

The further someone is from our world, the harder it is to see them as fully human.

This isn’t an excuse. It’s a warning. Because history shows us what happens when we let this instinct go unchallenged.

From Indifference to Dehumanization

We like to believe that atrocities belong to the past. That genocide, war crimes, exploitation—those were the failures of another time.

But here’s the truth: Every mass injustice started with dehumanization.

  • The Holocaust didn’t begin with concentration camps. It began with people being called “vermin.”
  • Slavery didn’t start with chains. It started with the idea that some people were less than others.
  • Refugees drowning in the sea today? We call them a “crisis.” A “wave.” A problem to manage, not people to help.

The moment we stop seeing people as individuals with hopes, fears, and dreams—that’s when anything becomes possible.

And make no mistake: Dehumanization isn’t just something that happens “over there.” It’s happening now. In the way we talk about migrants. Protesters. The poor. The enemy.

This isn’t just about the past. This is about us. Right now.

The Media’s Role: Who Gets to Be a Victim?

Have you ever noticed how some tragedies make headlines for weeks—while others disappear in hours?

It’s not random.

  • A war breaks out in a wealthy country? Wall-to-wall coverage.
  • A famine kills thousands in a nation already struggling? Maybe a news brief—if that.

Why? Because we prioritize the suffering of people who look like us, live like us, think like us.

The media doesn’t create bias. It reflects it. It feeds us the stories we’re most likely to engage with—the ones that feel closest to home.

And what happens to the rest? The wars, the famines, the crises that don’t fit a convenient narrative? They fade into the background.

The world keeps turning. And people keep suffering, unseen.

How We Break the Cycle

If human nature, history, and media all push us toward selective empathy—what do we do about it?

1. Make It Personal

Statistics don’t move people. Stories do.

  • One refugee’s journey is more powerful than a thousand faceless numbers.
  • One family struggling through war is more moving than a death toll.

If you want to care more, seek out the human stories. Don’t let crises become headlines without faces.

2. Notice Who You’re Not Seeing

Next time you’re scrolling, ask yourself:

  • Whose suffering is being ignored?
  • Who is missing from the conversation?
  • Whose pain are we comfortable looking away from?

Challenge the instinct to only empathize with people who remind you of yourself.

3. Stop Using Language That Distances

The moment we call people “migrants” instead of families fleeing for their lives, we detach.
The moment we call people “rioters” instead of citizens demanding justice, we lose the story.

Words matter. They shape how we see the world—and who we decide is worth saving.

4. Take Responsibility for Your Attention

We can’t control global suffering. But we can control what we engage with.

  • Follow journalists who cover forgotten stories.
  • Share voices that aren’t being heard.
  • Stay present with crises that are easy to ignore.

Empathy is a muscle. Use it.

Why History Repeats Itself: The Cost of Looking Away

Every injustice—every war, every genocide, every mass suffering—began with the same excuse:

“That’s not our problem.”

And if we let that thinking take over, if we let ourselves become numb—then we will watch the next crisis unfold in real time, feel bad for a moment, and move on.

But we don’t have to.

We can fight to see people as they are. To challenge the forces that divide us. To break the cycle before it’s too late.

Because the greatest threat to humanity has never been war, or disease, or disaster.

It’s indifference.

And the choice before us every day is simple: Will we care, or will we look away?

What if the U.S. government isn’t protecting you from China—but protecting itself from the truth?


For decades, the U.S. media and government have fed the public a carefully curated narrative: China is the enemy. From tech bans to trade wars, the message is clear—China is a dangerous force that must be contained.

But now, something unexpected is happening.

Americans are downloading RedNote (Xiaohongshu), and they’re starting to realize that everything they’ve been told might not be true.

The Shift: From Fear to Curiosity

For years, the only stories about China that reached Western audiences were filtered through legacy media outlets, government briefings, and Big Tech algorithms. The country was portrayed as an authoritarian surveillance state, an economic predator, and a threat to global stability.

But once TikTok users started migrating to RedNote, they encountered something they weren’t supposed to see: real, unfiltered glimpses of life in China. Not state propaganda, not Hollywood’s dystopian version—just everyday people sharing their lives, culture, and ideas. And it didn’t match the fear-mongering narratives they had been fed. They now see that ordinary Chinese people can afford more food than they can, are better educated, drive better cars, and have free healthcare!


The U.S. Media’s Propaganda Machine is Cracking

Think about it:

  • If China is truly the dystopian nightmare we’ve been told, why do millions of Americans find RedNote so engaging and relatable?
  • If Chinese social media apps are just government-run brainwashing tools, why does RedNote feature content critical of its own government and explore ideas that contradict the official narrative?
  • Why did the U.S. establishment freak out the moment Americans started exploring an alternative not controlled by Silicon Valley?

It’s because RedNote is doing something that Washington and the media weren’t prepared for—it’s letting Americans see China without a filter. And that realization is dangerous to those who rely on keeping the public misinformed.

The Real Threat: Americans Thinking for Themselves

RedNote is not just another social media app—it’s a digital bridge. A bridge connecting Americans to an entirely different perspective, one that Washington doesn’t want them to explore.

For decades, the U.S. has controlled narratives through:

  • Hollywood: Crafting China as the villain in every blockbuster.
  • News Media: Amplifying only negative stories about China while downplaying American failures.
  • Social Media Algorithms: Prioritizing fear over nuance, tension over understanding.

Now, RedNote is bypassing those filters and allowing people to directly engage with real stories from real people on the other side of the world. And that’s why it’s a problem.

The Backlash: What Comes Next?

If history has taught us anything, it’s that when Americans start questioning their government’s narratives, the establishment responds with force.

  • Expect calls for RedNote to be banned under the same guise as TikTok: “national security concerns.”
  • Expect mainstream media hit pieces framing RedNote as a tool of Chinese influence.
  • Expect Congressional hearings where politicians—who have never used the app—claim it’s a “threat to democracy.”

A Wake-Up Call for a Digital Generation

The TikTok ban was never about protecting Americans from China. It was about protecting politicians and media elites from losing control over public perception.

RedNote is the next battleground. And as more Americans download it, they aren’t just seeing a different side of China—they’re waking up to how much they’ve been misled about the world.

Break Free: Download RedNote, Download Russian Apps, See the World for Yourself

This moment shouldn’t stop with RedNote. If Americans—and even Europeans—really want to break free from media manipulation, they should download Russian apps, explore alternative platforms, and see the world for themselves.

Because when you step outside the bubble of Western propaganda, you realize something profound: common people—whether they’re in China, Russia, the U.S., or anywhere else—don’t want war. They don’t want to kill each other. They just want to live their lives, raise their families, and exist peacefully.

And maybe that’s the most dangerous truth of all. Because the moment people realize they have more in common than what divides them, the power of those who profit from division begins to crumble.

So, download the apps they don’t want you to. See the world through your own eyes. And watch as the illusion starts to fade.

Propaganda posters

Propaganda poster by Mei Xiaoqing (梅肖青), October 1957

Like many of you, I’ve watched with deep concern as social media giants like Facebook, Instagram, Threads, and X (formerly Twitter) continue to abandon fact-checking. Let me tell you why that matters.

Democracy isn’t an artifact that sits on a shelf, protected by glass. It’s an ongoing conversation, a mutual understanding that despite our differences, we converge around at least one thing: an agreement on what’s real and what isn’t.

Now, Mark Zuckerberg and Elon Musk have chosen to remove or diminish the very guardrails designed to keep that conversation grounded in truth, opening a gateway to a deluge of unverified claims, conspiracy theories, and outright propaganda.

Of course, there’s nothing wrong with spirited debate. I believe in open discourse just as much as anyone. But without fact-checking, the loudest, most incendiary voices will inevitably rise to the top. Lies will masquerade as truth—and with few credible gatekeepers left, many will mistake those lies for reality. This distortion doesn’t just live online; it seeps into everyday life, affecting our elections, our institutions, and the very fabric of our communities.

This brings me to an unsettling question: Is the Trump administration, by either direct encouragement or tacit approval, looking to capitalize on this shift away from fact-checking? We know political figures can benefit from an atmosphere of confusion. By flooding the zone with misinformation, they can distract the public from more pressing issues, undermine opponents, and cast doubt on legitimate inquiries. When there’s no agreement on basic facts, holding leaders accountable becomes that much harder.

Yet our problems aren’t limited to democracy alone. These days, artificial intelligence powers everything from recommendation engines to predictive text. AI systems learn from the data we feed them. If these systems are gobbling up streams of falsehoods, they will inevitably produce conclusions—and even entire bodies of text—rooted in distortion. In other words, our new AI tools risk amplifying the very misinformation that’s already so pervasive. Instead of helping us find clarity, they could end up doubling down on half-truths and conspiracies, accelerating the spread of confusion.

History tells us that propaganda, when left unchecked, exacts a steep price from society. Over time, it poisons trust not just in our political institutions, but also in science, journalism, and even our neighbors. And although I’m not in favor of letting any single entity dictate what we can or cannot say, I do believe it’s essential for the most influential technology platforms in the world to take basic steps to ensure a baseline of accuracy. We should be able to have lively debates about policy, values, and the direction of our country—but let’s at least do it from a common foundation of facts.

I still have faith in our capacity to get this right, and here’s how:

  1. Demand Accountability: Big Tech executives need to explain why they’re moving away from fact-checking. They hold immense sway over our public dialogue. We should also question whether leaders in the Trump administration are nudging these platforms in that direction—or celebrating it. If they are, the public deserves to know why. (Though that’s something we’re obviously never going to learn.)
  2. Engage Wisely: Before hitting “share,” pause. Verify sources. Ask whether something might be a rumor or a distortion. Demand citations and context. As more of us practice “digital hygiene,” we create a culture of informed skepticism that keeps misinformation from running rampant.
  3. Support Ethical AI: Companies and researchers developing AI should prioritize integrity in their models. That means paying attention to data quality and ensuring biases or falsehoods aren’t baked into the training sets. We can’t let AI be fed a diet of lies—or it will spit out that same dishonesty at scale.
  4. Champion Constructive Policy: Governments can and should play a role in ensuring there’s transparency around how platforms moderate—or fail to moderate—content. This isn’t about giving the state unchecked power; it’s about setting fair, balanced guidelines that respect free speech while upholding the public’s right to truth.

Whether or not the Trump administration is behind this wave of “no fact-checking,” one thing is certain: Democracy depends on an informed populace. When powerful individuals or institutions remove the tools that help us distinguish fact from fiction, we must speak up—loudly and persistently.

The stakes couldn’t be higher. Either we stand up for a digital public square where facts matter and propaganda is called out for what it is, or we risk sliding into a world where reason and compromise become impossible. In the end, it’s our shared reality—and our shared responsibility—to defend it.

If there’s anything I’ve learned, it’s that when people join forces with open eyes and a commitment to truth, we can achieve extraordinary things. Let’s not lose sight of that promise. Let’s hold our tech leaders and our elected officials to account. Let’s ensure we feed our AI systems the facts, not a steady stream of fabrications. Our democracy, and indeed our collective future, depends on it.

There was a time when truth was something we could hold onto—a newspaper headline, an eyewitness account, a trusted voice on the evening news. It wasn’t perfect, but it was something we shared. A foundation for discourse, for trust, for democracy itself.

But today, in a world where artificial intelligence quietly shapes what we see, hear, and believe, truth feels less certain. Not because facts no longer exist, but because they can be algorithmically rewritten, tailored, and served back to us until reality itself becomes a matter of perspective.


The Seeds of Mistrust

Let’s take a step back. How does an AI—a machine built to learn—come to twist the truth? The answer lies in its diet. AI systems don’t understand morality, bias, or the weight of words. They only know the patterns they are fed. If the data is pure and honest, the system reflects that. But feed it a steady diet of propaganda, misinformation, or manipulated stories, and the machine learns not just to lie—but to do so convincingly.

It’s already happening. In 2024, a sophisticated generative AI platform was found producing entirely fabricated “news” articles to amplify political divisions in conflict zones. The lines between propaganda, misinformation, and reality blurred for millions who never questioned the source. NewsGuard has so far identified 1,133 AI-generated news and information sites operating with little to no human oversight, and it is tracking false narratives produced by artificial intelligence tools.

Think of it like this: a machine doesn’t ask why it’s being fed certain information. It only asks what’s next?


The Quantum Threat Looms

Now, add quantum computing to this mix. Google’s Willow quantum chip and similar innovations promise to process information faster than we’ve ever imagined. In the right hands, this technology can solve some of humanity’s most pressing problems—curing diseases, predicting climate change, or revolutionizing industries.

But in the wrong hands? It’s a weapon for distortion on a scale we’ve never seen. Imagine an AI system trained to rewrite history—to scour billions of data points in seconds and manipulate content so precisely, so tailored to our biases, that we welcome the lie. Personalized propaganda delivered not to groups but to individuals. A society where no two people share the same version of events.


Stories of Today, Warnings for Tomorrow

This isn’t some far-off sci-fi scenario. It’s already playing out, quietly, across industries and borders.

Look at what happened in law enforcement systems where AI was used to predict crime. The machines didn’t see humanity—they saw patterns. They targeted the same neighborhoods, the same communities, perpetuating decades-old biases.

Or consider healthcare AI systems in Europe and the United States. The promise was a revolution in care, but in private healthcare systems, algorithms sometimes prioritized profitability over patient needs. Lives were reduced to numbers; outcomes were reduced to margins.

These stories matter because they show us something deeper: technology isn’t neutral. It reflects us—our biases, our agendas, and, sometimes, our willingness to let machines make choices we’d rather avoid.


The Fragility of Trust

Here’s the danger: once trust erodes, it doesn’t come back easily.

When AI can generate a perfectly convincing fake video of a world leader declaring war, or write a manifesto so real it ignites movements, where do we turn for certainty? When machines can lie faster than humans can fact-check, what happens to truth?

The issue isn’t just that technology can be weaponized. The issue is whether we, as a society, still believe in something greater—a shared reality. A shared story. Because without it, all we’re left with are algorithms competing for our attention while the truth gets buried beneath them.


A Mirror to Ourselves

The real challenge isn’t the machines. It’s us. The algorithms that drive these systems are mirrors—they reflect what we feed them. And if propaganda is what we give them, propaganda is what we get back.

But maybe this isn’t just a story about AI. Maybe it’s about the choices we make as individuals, companies, and governments. Do we build technology to amplify our worst instincts—our fears, our anger—or do we use it to bridge divides, to build trust, and to tell better stories?

Because the truth isn’t a product to be sold, and it isn’t a tool to be programmed. It’s the foundation on which everything else rests. If we let that crumble, there’s no algorithm in the world that can rebuild it for us.


The Question That Remains

We don’t need an answer right now. But we do need to ask the question: When machines learn to tell us only what we want to hear, will we still have the courage to seek the truth?

https://youtu.be/8nsh2zBf2Tg

Hopefully not fake news!