Posts tagged AI

AI was supposed to reinvent advertising. To make it intimate. Tailored. A whisper in your ear, not a billboard in your face.

Instead, most AI ads today feel like generic upscale animation: slick, polished, but soulless. They don’t feel personal. They feel mass-produced and nearly identical to one another.

The illusion of personalization

Agencies love to say “personalization at scale.” What we’re really seeing is templating at scale. A character model reused, a background swapped, a few lines of text rotated. The result: ads that look identical across brands, categories, and countries. I can’t help wondering: are they actually selling the product, or just selling the illusion of innovation?

It’s creative déjà vu.

– Nearly 90% of advertisers are already using AI to make video ads (IAB, 2025).
– Yet consumers aren’t fooled: NielsenIQ found many describe AI ads as “boring,” “annoying,” and “confusing” (Nielsen/OKO One, 2024).

If the promise was intimacy, the delivery feels like an overproduced screensaver.

The data proves what’s missing

When AI is used for real personalization, the results are different:

– MIT researchers (2025) found personalized AI video ads boosted engagement by 6–9 percentage points while cutting production costs by 90% (MIT IDE, 2025).
– Headway, an edtech startup, reported a 40% ROI increase after leaning into AI creative, but only because they combined speed with true audience tailoring (Business Insider, 2024).

The distinction is clear: personalized AI works. Generic AI doesn’t.

Template fatigue is the new banner blindness

We’ve replaced stock photography with stock animation. Banner blindness with template blindness. Ads that were supposed to see you instead blur into the feed.

And here’s the tragedy: the tech could do more. AI can adapt mood, context, culture, even language nuance. But right now, most agencies are chasing speed over meaning, volume over resonance.

The fork in the road

The industry faces a choice:

– Keep churning out glossy, generic animations that look expensive but feel empty.
– Or use AI as a scalpel: cutting deeper into personalization, creating work that actually feels alive to the person watching.

Because if AI is just helping us produce better-looking wallpaper, then it’s not innovation. It’s stagnation with better rendering.

Sam Altman, the man who helped turn the internet into a theme park run by robots, has finally confessed what the rest of us figured out years ago: the place feels fake. He scrolls through Twitter or Reddit and assumes it’s bots. Of course he does. It’s like Willy Wonka walking through his own chocolate factory and suddenly realizing everything tastes like diabetes.

The CEO of OpenAI worrying about bot-ridden discourse is like Ronald McDonald filing a complaint about childhood obesity. You built the thing, Sam. You opened the door and shouted “Release the clones!” and now you’re clutching your pearls because the clones are crowding the buffet.

The bots have won, and the humans are complicit

Here’s the real kicker: Altman says people now sound like AI. No kidding. Spend five minutes online and you’ll see humans writing in the same hollow, autocorrect tone as the machines. Every Instagram caption looks like it was generated by a motivational fridge magnet. Every tweet sounds like it was written by a marketing intern with a concussion.

This isn’t evolution. It’s mimicry. Like parrots squawking human words, we’ve started squawking algorithmic filler. Our personalities are being laundered through engagement metrics until we all sound like bot cousins trying to sell protein powder.

Dead Internet Theory goes corporate

For years, conspiracy theorists have whispered about the “Dead Internet Theory”: the idea that most of what you see online is written by bots, not people. Altman just rolled into the morgue, peeled back the sheet, and muttered, “Hmm, looks lifeless.” What he forgot to mention is that he’s the one leasing out the coffins. AI companies aren’t worried the internet is fake. They’re building the next tier of fakery and charging subscription fees for the privilege.

So congratulations. The paranoid meme kids were right. The internet is a corpse dressed in flashing ads, propped up by click-farms, and serenaded by bots. And instead of cutting the cord, Silicon Valley is selling tickets to the wake.

The real problem isn’t bots

It’s incentives. Platforms reward sludge. If you spew enough generic engagement bait (“This billionaire said THIS about AI. Thoughts?”), the algorithm slaps a medal on your chest and boosts you into everyone’s feed. Humans, desperate for attention, start acting like bots to compete. The lines blur. Who’s real? Who’s synthetic? No one cares, as long as the dopamine hits.

And that’s the rot. It’s not that AI makes the internet fake. It’s that humans are happy to fake themselves to survive inside it. We’re not just scrolling a dead internet. We’re rehearsing our own funerals in real time.

The coffin is already polished

So yes, Sam, the internet is fake. It’s been fake since the first influencer pretended their kitchen counter was a five-star resort. You’re just noticing now because your reflection is staring back at you. You built the machine, you fed it our words, and now it spits them back at you like a funhouse mirror. Distorted. Recycled. Dead.

The internet didn’t die naturally. It was murdered. And the suspects are still running the gift shop.

The end of democracy rarely arrives with sirens and flames. More often, it fades quietly—choice by choice, habit by habit, until the rituals remain but the substance has gone.

In their timely paper, Don’t Panic (Yet), Felix Simon and Sacha Altay remind us that the AI apocalypse never arrived in 2024. Despite a frenzy of deepfakes and fears of algorithmic manipulation, the great elections of that year were not decided by chatbots or microtargeted propaganda. The decisive forces were older and more human: politicians who lied, parties who suppressed votes, entrenched inequalities that shaped turnout and trust.

Their conclusion is measured: mass persuasion is hard. Studies show political ads, whether crafted by consultants or large language models, move few votes. People cling to their partisan identities, update beliefs only at the margins, and treat most campaign noise as background static. The public is not gullible. Even misinformation, now turbocharged by generative AI, is limited in reach by attention, trust, and demand.

In this sense, Simon and Altay are right: the panic was misplaced. AI was not the kingmaker of 2024.

But here is the danger: what if reassurance itself is the illusion?

The great risk of AI to democracy does not lie in a single election “hacked” by bots. It lies in the slow erosion of the conditions that make democracy possible. Simon and Altay diagnose panic as a cycle: society overreacts to every new medium. Yet what if this is not a panic at all, but an early recognition that AI represents not another medium but a structural shift?

Democracy depends on informational sovereignty: citizens’ capacity to orient themselves in a shared reality. Generative AI now lives inside search engines, social feeds, and personal assistants. It does not need to persuade in the crude sense. It reshapes the field of visibility: what facts surface, what stories disappear, what worlds seem plausible.

Simon and Altay show that persuasion is weak. But erosion is strong.

  • Trust erodes when deepfakes and synthetic voices make truth itself suspect.
  • Agency erodes when predictive systems anticipate our preferences and feed them back before we form them.
  • Equality erodes when the wealthiest campaigns and nations can afford bespoke algorithmic influence while the rest of the citizenry navigates blind.

In 2024, democracy endured not because AI was harmless, but because old buffers (mainstream media, partisan loyalty, civic inertia) still held. These reserves are not infinite. They are the borrowed time on which democracy now runs.

So yes: panic may be premature if we define it as fearing that one election will be stolen by machines. But complacency is suicidal if we fail to see how AI, fused with the logics of surveillance capitalism, is hollowing democracy from within.

The question is not whether AI will swing the next vote. The question is whether, by the time we notice, the very meaning of choice will already have been diminished.

Democracy may survive a storm. What it cannot survive is the slow normalization of living inside someone else’s algorithm.

Only in Albania could such a mythic gesture occur: appointing an algorithm as cabinet minister. Diella, we are told, will cleanse public procurement of corruption, that timeless Balkan disease. The government proclaims that, at last, software will succeed where generations of politicians failed.

Permit me some skepticism.

Public procurement remains the deepest vein of corruption not because ministers are uniquely wicked, but because the system demands it. Contracts worth billions hinge on opaque decisions. Bribes are not accidents; they are the lubricant that keeps political machines alive. To imagine an algorithm can sterilize this is to mistake mathematics for morality.

Worse, Diella may render corruption not weaker but stronger. Unlike a human minister, who can be interrogated, shamed, toppled, an algorithm offers no face to confront. If a contract flows to the prime minister’s cousin’s company, the defense is immediate and unassailable: the machine decided. How convenient.

Algorithms are never impartial. They are written, trained, and tuned by people with interests. Corruption, once visible in smoky cafés and briefcases of cash, risks migrating invisibly into code: criteria weighted here, data sets adjusted there. Massaging inputs is easier than bribing a minister, and harder to detect.

This does not resemble transparency. It resembles radical opacity dressed in the costume of objectivity.

So let us be clear: Albania’s experiment is bold. It may inspire imitators across a continent exhausted by graft. But boldness and danger travel as twins. Diella will either cleanse the bloodstream of public life or sanctify its toxins in digital armor.

Do not be fooled by rhetoric. If citizens cannot audit code, if journalists cannot interrogate criteria, if rivals cannot challenge outputs, Albania has not abolished corruption. It has automated it.

The irony cuts deep. A government that promises liberation from human vice may have just built the perfect machine for laundering it.


We were promised artificial intelligence. What we got was artificial confidence.

In August 2025, OpenAI’s Sam Altman finally said what many of us already felt: AI is in a bubble. The hype is too big. The returns? Mostly missing.

A recent MIT study found that 95% of business AI projects are failing. Not underperforming—failing. That’s not a tech glitch. That’s a reality check.

But here’s the catch: this isn’t a loud crash. It’s a slow leak. The real damage isn’t in the money—it’s in the trust.


Why This Matters

We’re not seeing some dramatic robot uprising or system failure. What we’re seeing is more subtle—and more dangerous. People are starting to tune out.

When AI promises magic and delivers half-finished ideas, people stop believing. Workers get anxious. Creators feel disposable. Users grow numb.

It’s not that AI is bad. It’s that it’s being misused, misunderstood, and overhyped.


Everyone’s Chasing the Same Dream

Companies keep rushing into AI like it’s a gold rush. But most of them don’t even know what problem they’re trying to solve.

They’re using AI to look modern, not to actually help anyone. CEOs brag about “AI transformation” while their employees quietly unplug the pilot programs that aren’t working.

What started as innovation has turned into a game of pretending.


Trust Is the Real Product

Once people lose trust, you can’t get it back with a press release. Or a new model. Or a smarter chatbot.

AI was supposed to help us. Instead, it’s become another system we can’t trust. That’s the real bubble—the belief that more tech automatically means more progress.

Sam Altman says smart people get overexcited about a kernel of truth. He’s right. But when that excitement turns into investment hype, market pressure, and inflated promises, it creates something fragile.

We’re watching that fragility crack now.


So What Do We Do?

This isn’t about canceling AI. It’s about waking up.

We need to:

  • Ask better questions about why we’re using AI
  • Stop chasing headlines and start solving real problems
  • Build systems that serve people, not just shareholders
  • Demand transparency, not just cool demos

The future of AI should be boring—useful, grounded, ethical. Not magical. Not messianic.


The AI bubble isn’t bursting in a dramatic way.

It’s leaking—slowly, quietly, dangerously.

If we don’t repair the trust that’s evaporating, the next collapse won’t be technical. It’ll be cultural.

Collapse doesn’t happen when machines fail. Collapse happens when people stop believing.
