
Posts tagged Ai



The year is 2025.
AI can write symphonies, flirt better than poets, and generate fake people with better skin than me.
And yet… my “AI-powered browser” can’t block an ad for toenail fungus cream.

ChatGPT Atlas promised to redefine browsing.
Turns out, it just redefined how many ads I can accidentally click before enlightenment.

You’d think a browser made by the same entity that writes entire novels in one breath could, at the very least, install a VPN or an adblocker.
But no. Atlas is like that friend who swears they’re “super into privacy” … while loudly asking Siri where to buy condoms.

Meanwhile, Brave sits in the corner like a smug monk — whispering, “no trackers, no ads, no nonsense.”
Atlas, on the other hand, feels like a beautiful glass house… built on a billboard.

I tried asking it to “block ads.”
It politely replied, “I can’t do that yet.”
Which is wild, because it can explain Gödel’s Incompleteness Theorem, simulate Nietzsche, and write erotic haikus about capitalism.
But sure … blocking pop-ups? Too advanced.

At this point, I half expect the next update to feature a “Buy Now” button on every moral decision I make.

Maybe they’ll call it AdSense of Self™.

Don’t get me wrong … I love Atlas.
It’s sleek, intelligent, and occasionally existential.
But when the smartest browser in the world lets me get ambushed by “You won’t believe what she looks like now” banners, I start to wonder who’s learning from whom.

Maybe next update they’ll add a soul.
Or, you know … an adblocker.


When a government pays nearly half a million dollars for a report, it expects facts, not fiction.
And yet, in 2025, one of the world’s biggest consulting firms, Deloitte, refunded part of a $440,000 contract to the Australian government after investigators discovered that its “independent review” was polluted with fake references, imaginary studies, and even a fabricated court judgment.

The culprit? A generative AI system.
The accomplice? Human complacency.
The real crime? The quiet death of accountability and the triumph of human laziness.


When Verification Died

AI didn’t break consulting; it simply revealed what was already broken.

For decades, the Big Four (Deloitte, PwC, EY, and KPMG) have built empires on the illusion of objectivity. They sell certainty to governments drowning in complexity: reports filled with charts, citations, and confident conclusions that look like truth but are rarely tested.

Now, with AI, this illusion has industrialized.
It writes faster, fabricates smoother, and wraps uncertainty in the language of authority.

We used to audit companies.
Now we must audit the auditors.


The New Priesthood of AI-Assisted Authority

Governments rely on these firms to assess welfare systems, tax reform, cybersecurity, and national infrastructure: the literal plumbing of the state.
Yet, they rarely audit the methods used to produce the analysis they’re paying for.

The Deloitte–Australia case shows the new frontier of risk:
AI-generated confidence presented as human expertise.

The report even quoted a non-existent court case. Imagine that: a fabricated legal precedent influencing national policy.
And the reaction? A partial refund and a press release.

That’s not accountability. That’s theatre.


AI as Mirror, Not Monster

The machine didn’t hallucinate out of malice. It hallucinated because that’s what it does: it predicts language, not truth.
But humans let those predictions pass for reality.

AI exposes a deeper human flaw: our hunger for certainty.
The consultant’s slide deck, the bureaucrat’s report, and the politician’s talking point all depend on a shared illusion: that someone, somewhere, knows for sure.

Generative AI has simply made that illusion easier to manufacture.


Governments Must Now Audit the Auditors

Let this be the line in the sand.

Every government that has purchased a consultancy report since 2023 must immediately re-audit its contents for AI fabrication, fake citations, and unverified data.

This is not paranoia. It’s hygiene.

Because once fabricated evidence enters public record, it becomes the foundation for law, policy, and budget.
Every unchecked hallucination metastasizes into real-world consequences: welfare sanctions, environmental policies, even wars justified by reports that were never real.

Governments must demand:

  • Full transparency of all AI-assisted sections in any consultancy report.
  • Mandatory third-party verification before adoption into policy.
  • Public disclosure of generative tools used and audit logs retained.

Otherwise, the “Big Four” will continue printing pseudo-truths at industrial scale and getting paid for it.


The Audit of Reality

This scandal isn’t about Deloitte alone. It’s a mirror of our civilization.

We’ve outsourced thinking to machines, integrity to institutions, and judgment to algorithms.
We no longer ask, is it true?
We ask, does it look official?

AI is not the apocalypse; it’s the X-ray.
It shows us how fragile our truth systems already were.

The next collapse won’t be financial. It will be epistemic.
And unless governments reclaim the duty of verification, we’ll keep mistaking simulations for substance, hallucinations for history.


The Big Four don’t just audit companies anymore. They audit reality itself, and lately they’re failing the test.

Silicon Valley has sold the idea of tech in classrooms for years because it gives companies access to lifelong customers and valuable data. But while corporations like Google make billions, student test scores are falling. Making more idiot voters?

Corporations are “enhancing their pricing strategy” by combining AI with dynamic pricing. Delta, Walmart, Kroger, Wendy’s and other major corporations are using artificial intelligence to set prices based on data they’ve collected from you, effectively price gouging each of us on an individual basis. From Delta’s “full reengineering” of airline pricing to Kroger’s pilot program with facial recognition displays, the evidence is disturbing.

It was meant to cure poverty. Instead, it’s teaching machines how to lie beautifully.


The dream that sold us

Once upon a time, AI was pitched as humanity’s moonshot.
A tool to cure disease, end hunger, predict natural disasters, accelerate education, democratize knowledge.

“Artificial Intelligence,” they said, “will solve the problems we can’t.”

Billions poured in. Thinkers and engineers spoke of a digital enlightenment — algorithms as allies in healing the planet. Imagine it: precision medicine, fairer economics, universal access to creativity.

But as the dust cleared, the dream morphed into something grotesque.
Instead of ending poverty, we got apps that amplify vanity.
Instead of curing disease, we got filters that cure boredom.
Instead of a machine for liberation, we got a factory for manipulation.

AI did not evolve to understand us.
It evolved to persuade us.


The new language of control

When OpenAI’s ChatGPT exploded in 2022, the world gasped. A machine that could talk, write, and reason!
It felt like the beginning of something magnificent.

Then the fine print arrived.

By 2024, OpenAI itself confirmed that governments — including Israel, Russia, China, and Iran — were using ChatGPT in covert influence operations.
Chatbots were writing fake posts, creating digital personas, pushing political talking points.
Not fringe trolls — state-level campaigns.

And that wasn’t the scandal. The scandal was how quickly it became normal.

“Israel invests millions to game ChatGPT into replicating pro-Israel content for Gen Z audiences,” reported The Cradle, describing a government-backed push to train the model’s tone, humor, and phrasing to feel native to Western youth.

Propaganda didn’t just move online — it moved inside the algorithm.

The goal is no longer to silence dissent.
It’s to make the lie feel more natural than the truth.


From persuasion to possession

And then came Sora 2 — OpenAI’s next act.

You write: “A girl walks through rain, smiling.”
It delivers: a photorealistic clip so convincing it bypasses reason altogether.

Launched in September 2025, Sora 2 instantly topped app charts. Millions of users. Infinite scroll. Every frame synthetic. Every smile programmable.

But within days, The Guardian documented Sora’s dark side:
AI-generated videos showing bombings, racial violence, fake news clips, fabricated war footage.

A flood of emotional realism — not truth, but truth-shaped seduction.

“The guardrails,” one researcher said, “are not real.”

Even worse, states and PR agencies began experimenting with Sora to “test audience sentiment.”
Not to inform.
To engineer emotional response at scale.

Propaganda used to persuade through words.
Now it possesses through images.


The addiction loop

If ChatGPT was propaganda’s pen, Sora 2 is its theater.

On Tuesday, OpenAI released an AI video app called Sora. The platform is powered by OpenAI’s latest video generation model, Sora 2, and revolves around a TikTok-like For You page of user-generated clips. It is the first OpenAI product release that adds AI-generated sound to videos. So if you think TikTok is addictive, imagine how much more addictive this will be.


Together they form a full-stack influence engine: one writes your worldview, the other shows it to you.

OpenAI backer Vinod Khosla called critics “elitist” and told people to “let the viewers judge this slop.”
That’s the logic of every empire built on attention: if it keeps you scrolling, it’s working.

AI promised freedom from work.
What it delivered is work for attention.

The same dopamine design that made TikTok irresistible is now welded to generative propaganda.
Every scroll, every pause, every tiny flick of your thumb trains the system to tailor persuasion to your psychology.

It doesn’t need to change your mind.
It just needs to keep you from leaving.

The AI chatbots already took away your critical thinking; this will rot your brain the same way TikTok does, only worse.


The moral inversion

In the early AI manifestos, engineers dreamed of eliminating inequality, curing disease, saving the planet.
But building empathy algorithms doesn’t pay as well as building engagement loops.

So the smartest minds of our century stopped chasing truth — and started optimizing addiction.
The promise of Artificial Intelligence devolved into Artificial Intimacy.

The lie is always the same:
“This is for connection.”
But the outcome is always control.


The human cost

Gideon Levy, chronicling Gaza’s digital frontlines, said it bluntly:

“The same algorithms that sell sneakers now sanitize occupation.”

While real people bury their children, AI systems fabricate smiling soldiers and “balanced” stories, replacing horror with narrative symmetry.
The moral wound isn’t just in what’s shown.
It’s in what’s erased.

A generation raised on algorithmic empathy learns to feel without acting: to cry, click, and scroll on. Is this what our world is becoming?


The reckoning

The tragedy of AI isn’t that it became powerful.
It’s that it became predictable.

Every civilization has dreamed of gods. We built one and gave it a marketing job.

If this technology had been aimed at eradicating hunger, curing cancer, and ending exploitation, the world might have shifted toward the light, and everyone might be happier.
Instead, it’s monetizing illusion, weaponizing emotion, and rewiring truth.

AI didn’t fail us by mistake.
It succeeded exactly as designed.


The question is no longer “What can AI do?”
It’s “Who does AI serve?”

If it serves capital, it will addict us.
If it serves power, it will persuade us.
If it serves truth, it will unsettle us.

But it will only serve humanity if we demand that it does.

Because right now, the greatest minds in history aren’t building tools to end suffering; they’re building toys that make us forget how much we suffer.

AI was supposed to awaken us.
Instead, it learned to lull us back to sleep.

The next Enlightenment will begin when we remember that technology is never neutral, and neither is silence.
