I’ve watched with deep concern—as many of you have—while social media giants like Facebook, Instagram, Threads, and X (formerly Twitter) continue to abandon fact-checking. Let me tell you why that matters.
Democracy isn’t an artifact that sits on a shelf, protected by glass. It’s an ongoing conversation, a mutual understanding that despite our differences, we converge around at least one thing: an agreement on what’s real and what isn’t.
Now, Mark Zuckerberg and Elon Musk have chosen to remove or diminish the very guardrails designed to keep that conversation grounded in truth, opening a gateway to a deluge of unverified claims, conspiracy theories, and outright propaganda.
Of course, there’s nothing wrong with spirited debate. I believe in open discourse just as much as anyone. But without fact-checking, the loudest, most incendiary voices will inevitably rise to the top. Lies will masquerade as truth—and with few credible gatekeepers left, many will mistake those lies for reality. This distortion doesn’t just live online; it seeps into everyday life, affecting our elections, our institutions, and the very fabric of our communities.
This brings me to an unsettling question: Is the Trump administration, by either direct encouragement or tacit approval, looking to capitalize on this shift away from fact-checking? We know political figures can benefit from an atmosphere of confusion. By flooding the zone with misinformation, they can distract the public from more pressing issues, undermine opponents, and cast doubt on legitimate inquiries. When there’s no agreement on basic facts, holding leaders accountable becomes that much harder.
Yet our problems aren’t limited to democracy alone. These days, artificial intelligence powers everything from recommendation engines to predictive text. AI systems learn from the data we feed them. If these systems are gobbling up streams of falsehoods, they will inevitably produce conclusions—and even entire bodies of text—rooted in distortion. In other words, our new AI tools risk amplifying the very misinformation that’s already so pervasive. Instead of helping us find clarity, they could end up doubling down on half-truths and conspiracies, accelerating the spread of confusion.
History tells us that propaganda, when left unchecked, exacts a steep price from society. Over time, it poisons trust not just in our political institutions but also in science, journalism, and even our neighbors. And although I’m not in favor of letting any single entity dictate what we can or cannot say, I do believe it’s essential for the most influential technology platforms in the world to take basic steps to ensure a baseline of accuracy. We should be able to have lively debates about policy, values, and the direction of our country, but let’s at least have them from a common foundation of facts.
I still have faith in our capacity to get this right, and here’s how:
Demand Accountability: Big Tech executives need to explain why they’re moving away from fact-checking. They hold immense sway over our public dialogue. We should also question whether leaders in the Trump administration are nudging these platforms in that direction, or celebrating it. If they are, the public deserves to know why (though that, admittedly, is an answer we may never get).
Engage Wisely: Before hitting “share,” pause. Verify sources. Ask whether something might be a rumor or a distortion. Demand citations and context. As more of us practice “digital hygiene,” we create a culture of informed skepticism that keeps misinformation from running rampant.
Support Ethical AI: Companies and researchers developing AI should prioritize integrity in their models. That means paying attention to data quality and ensuring biases or falsehoods aren’t baked into the training sets; a rough sketch of what such a data filter might look like follows this list. We can’t let AI be fed a diet of lies, or it will spit out that same dishonesty at scale.
Champion Constructive Policy: Governments can and should play a role in ensuring there’s transparency around how platforms moderate—or fail to moderate—content. This isn’t about giving the state unchecked power; it’s about setting fair, balanced guidelines that respect free speech while upholding the public’s right to truth.
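To make the “Support Ethical AI” point concrete, here is a minimal, hypothetical sketch of what screening a training corpus for provenance might look like. Everything in it, the credibility scores, the threshold, the data layout, is an illustrative assumption rather than any real pipeline; production systems lean on far richer quality and provenance signals.

```python
# Hypothetical sketch: screening a training corpus by source credibility
# before it reaches a model. The scores, threshold, and data layout are
# illustrative assumptions, not a real pipeline.

from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str  # e.g. the domain the text was scraped from

# Assumed credibility ratings (0.0 = known fabricator, 1.0 = reliable).
# In practice these might come from a rating service or an internal audit.
SOURCE_CREDIBILITY = {
    "example-newswire.com": 0.9,
    "example-blog.net": 0.6,
    "example-content-farm.biz": 0.1,
}

CREDIBILITY_THRESHOLD = 0.5  # illustrative cutoff

def filter_corpus(docs: list[Document]) -> list[Document]:
    """Keep only documents from sources above the credibility cutoff.

    Unknown sources are excluded by default ("deny unless vetted"),
    trading corpus size for provenance.
    """
    return [
        d for d in docs
        if SOURCE_CREDIBILITY.get(d.source, 0.0) >= CREDIBILITY_THRESHOLD
    ]

if __name__ == "__main__":
    corpus = [
        Document("A carefully reported story.", "example-newswire.com"),
        Document("A fabricated outrage piece.", "example-content-farm.biz"),
        Document("An unvetted post.", "unknown-site.org"),
    ]
    kept = filter_corpus(corpus)
    print(f"Kept {len(kept)} of {len(corpus)} documents")  # Kept 1 of 3
```

The design choice worth noticing is the default: unknown sources are excluded rather than included, which is exactly the “baseline of accuracy” posture argued for above.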
Whether or not the Trump administration is behind this wave of “no fact-checking,” one thing is certain: Democracy depends on an informed populace. When powerful individuals or institutions remove the tools that help us distinguish fact from fiction, we must speak up—loudly and persistently.
The stakes couldn’t be higher. Either we stand up for a digital public square where facts matter and propaganda is called out for what it is, or we risk sliding into a world where reason and compromise become impossible. In the end, it’s our shared reality—and our shared responsibility—to defend it.
If there’s anything I’ve learned, it’s that when people join forces with open eyes and a commitment to truth, we can achieve extraordinary things. Let’s not lose sight of that promise. Let’s hold our tech leaders and our elected officials to account. Let’s ensure we feed our AI systems the facts, not a steady stream of fabrications. Our democracy, and indeed our collective future, depends on it.
There was a time when truth was something we could hold onto—a newspaper headline, an eyewitness account, a trusted voice on the evening news. It wasn’t perfect, but it was something we shared. A foundation for discourse, for trust, for democracy itself.
But today, in a world where artificial intelligence quietly shapes what we see, hear, and believe, truth feels less certain. Not because facts no longer exist, but because they can be algorithmically rewritten, tailored, and served back to us until reality itself becomes a matter of perspective.
The Seeds of Mistrust
Let’s take a step back. How does an AI—a machine built to learn—come to twist the truth? The answer lies in its diet. AI systems don’t understand morality, bias, or the weight of words. They only know the patterns they are fed. If the data is pure and honest, the system reflects that. But feed it a steady diet of propaganda, misinformation, or manipulated stories, and the machine learns not just to lie—but to do so convincingly.
It’s already happening. In 2024, a sophisticated generative AI platform was found producing entirely fabricated “news” articles to amplify political divisions in conflict zones. The lines between propaganda, misinformation, and reality blurred for millions who never questioned the source. NewsGuard has so far identified 1,133 AI-generated news and information sites operating with little to no human oversight, and it is tracking false narratives produced by artificial intelligence tools.
Think of it like this: a machine doesn’t ask why it’s being fed certain information. It only asks, “What’s next?”
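That “what’s next?” framing is literal. Most text generators are next-token predictors, and even a toy bigram model makes the point: trained on nothing but a slanted corpus, it can only ever continue in that slant. The corpus and code below are invented purely for illustration.

```python
# Toy illustration: a bigram "what comes next?" text model.
# It has no notion of truth; it only reproduces the statistics
# of whatever corpus it is given.

import random
from collections import defaultdict

def train_bigrams(text: str) -> dict[str, list[str]]:
    """Map each word to the list of words that followed it in training."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows: dict[str, list[str]], start: str, length: int = 8) -> str:
    """Walk the chain, always sampling 'what comes next'."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Feed it a one-note corpus, and a one-note corpus is all it can echo.
slanted_corpus = "the election was stolen the election was rigged the media lies"
model = train_bigrams(slanted_corpus)
print(generate(model, "the"))  # e.g. "the election was rigged the media lies"
```

Scale that same dynamic up to billions of parameters and a web-sized corpus, and the quality of the training diet stops being a technicality and becomes the whole story.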
The Quantum Threat Looms
Now, add quantum computing to this mix. Google’s Willow Quantum Chip and similar innovations promise to process information faster than we’ve ever imagined. In the right hands, this technology could help solve some of humanity’s most pressing problems: curing diseases, predicting climate change, or revolutionizing industries.
But in the wrong hands? It’s a weapon for distortion on a scale we’ve never seen. Imagine an AI system trained to rewrite history, one that can scour billions of data points in seconds and craft content so precise, so tailored to our biases, that we welcome the lie. Personalized propaganda delivered not to groups but to individuals. A society where no two people share the same version of events.
Stories of Today, Warnings for Tomorrow
This isn’t some far-off sci-fi scenario. It’s already playing out, quietly, across industries and borders.
Stories like these matter because they show us something deeper: technology isn’t neutral. It reflects us, our biases, our agendas, and, sometimes, our willingness to let machines make choices we’d rather avoid.
The Fragility of Trust
Here’s the danger: once trust erodes, it doesn’t come back easily.
When AI can generate a perfectly convincing fake video of a world leader declaring war, or write a manifesto so real it ignites movements, where do we turn for certainty? When machines can lie faster than humans can fact-check, what happens to truth?
The issue isn’t just that technology can be weaponized. The issue is whether we, as a society, still believe in something greater—a shared reality. A shared story. Because without it, all we’re left with are algorithms competing for our attention while the truth gets buried beneath them.
A Mirror to Ourselves
The real challenge isn’t the machines. It’s us. The algorithms that drive these systems are mirrors—they reflect what we feed them. And if propaganda is what we give them, propaganda is what we get back.
But maybe this isn’t just a story about AI. Maybe it’s about the choices we make as individuals, companies, and governments. Do we build technology to amplify our worst instincts—our fears, our anger—or do we use it to bridge divides, to build trust, and to tell better stories?
Because the truth isn’t a product to be sold, and it isn’t a tool to be programmed. It’s the foundation on which everything else rests. If we let that crumble, there’s no algorithm in the world that can rebuild it for us.
The Question That Remains
We don’t need an answer right now. But we do need to ask the question: When machines learn to tell us only what we want to hear, will we still have the courage to seek the truth?