Info

Posts tagged Ai

Choose another tag?

The end of democracy rarely arrives with sirens and flames. More often, it fades quietly—choice by choice, habit by habit, until the rituals remain but the substance has gone.

In their timely paper, Don’t Panic (Yet), Felix Simon and Sacha Altay remind us that the AI apocalypse never arrived in 2024. Despite a frenzy of deepfakes and fears of algorithmic manipulation, the great elections of that year were not decided by chatbots or microtargeted propaganda. The decisive forces were older and more human: politicians who lied, parties who suppressed votes, entrenched inequalities that shaped turnout and trust.

Their conclusion is measured: mass persuasion is hard. Studies show political ads, whether crafted by consultants or large language models, move few votes. People cling to their partisan identities, update beliefs only at the margins, and treat most campaign noise as background static. The public is not gullible. Even misinformation, now turbocharged by generative AI, is limited in reach by attention, trust, and demand.

In this sense, Simon and Altay are right: the panic was misplaced. AI was not the kingmaker of 2024.

But here is the danger: what if reassurance itself is the illusion?

The great risk of AI to democracy does not lie in a single election “hacked” by bots. It lies in the slow erosion of the conditions that make democracy possible. Simon and Altay diagnose panic as a cycle society overreacts to every new medium. Yet what if this is not a panic at all, but an early recognition that AI represents not another medium, but a structural shift?

Democracy depends on informational sovereignty citizens’ capacity to orient themselves in a shared reality. Generative AI now lives inside search engines, social feeds, personal assistants. It does not need to persuade in the crude sense. It reshapes the field of visibility what facts surface, what stories disappear, what worlds seem plausible.

Simon and Altay show that persuasion is weak. But erosion is strong.

  • Trust erodes when deepfakes and synthetic voices make truth itself suspect.
  • Agency erodes when predictive systems anticipate our preferences and feed them back before we form them.
  • Equality erodes when the wealthiest campaigns and nations can afford bespoke algorithmic influence while the rest of the citizenry navigates blind.

In 2024, democracy endured not because AI was harmless, but because old buffers mainstream media, partisan loyalty, civic inertia still held. These reserves are not infinite. They are the borrowed time on which democracy now runs.

So yes: panic may be premature if we define it as fearing that one election will be stolen by machines. But complacency is suicidal if we fail to see how AI, fused with the logics of surveillance capitalism, is hollowing democracy from within.

The question is not whether AI will swing the next vote. The question is whether, by the time we notice, the very meaning of choice will already have been diminished.

Democracy may survive a storm. What it cannot survive is the slow normalization of living inside someone else’s algorithm.

Only in Albania could such a mythic gesture occur: appointing an algorithm as cabinet minister. Diella, we are told, will cleanse public procurement of corruption, that timeless Balkan disease. The government proclaims that, at last, software will succeed where generations of politicians failed.

Permit me some skepticism.

Public procurement remains the deepest vein of corruption not because ministers are uniquely wicked, but because the system demands it. Contracts worth billions hinge on opaque decisions. Bribes are not accidents; they are the lubricant that keeps political machines alive. To imagine an algorithm can sterilize this mistake mathematics for morality.

Worse, Diella may render corruption not weaker but stronger. Unlike a human minister who can be interrogated, shamed, toppled, an algorithm offers no face to confront. If a contract flows to the prime minister’s cousin’s company, the defense comes immediate and unassailable: the machine decided. How convenient.

Algorithms never impartial. Written, trained, tuned by people with interests. Corruption, once visible in smoky cafés and briefcases of cash, risks migrating invisibly into code—into criteria weighted here, data sets adjusted there. Easier to massage inputs than to bribe a minister. Harder to detect.

This does not resemble transparency. It resembles radical opacity dressed in the costume of objectivity.

So let us be clear: Albania’s experiment counts as bold. It may inspire imitators across a continent exhausted by graft. But boldness and danger travel as twins. Diella will either cleanse the bloodstream of public life or sanctify its toxins in digital armor.

Do not be fooled by rhetoric. If citizens cannot audit code, if journalists cannot interrogate criteria, if rivals cannot challenge outputs, Albania has not abolished corruption. It has automated it.

The irony cuts deep. A government that promises liberation from human vice may have just built the perfect machine for laundering it.


We were promised artificial intelligence. What we got was artificial confidence.

In August 2025, OpenAI’s Sam Altman finally said what many of us already felt: AI is in a bubble. The hype is too big. The returns? Mostly missing.

A recent MIT study found that 95% of business AI projects are failing. Not underperforming—failing. That’s not a tech glitch. That’s a reality check.

But here’s the catch: this isn’t a loud crash. It’s a slow leak. The real damage isn’t in the money—it’s in the trust.


Why This Matters

We’re not seeing some dramatic robot uprising or system failure. What we’re seeing is more subtle—and more dangerous. People are starting to tune out.

When AI promises magic and delivers half-finished ideas, people stop believing. Workers get anxious. Creators feel disposable. Users grow numb.

It’s not that AI is bad. It’s that it’s being misused, misunderstood, and overhyped.


Everyone’s Chasing the Same Dream

Companies keep rushing into AI like it’s a gold rush. But most of them don’t even know what problem they’re trying to solve.

They’re using AI to look modern, not to actually help anyone. CEOs brag about “AI transformation” while their employees quietly unplug the pilot programs that aren’t working.

What started as innovation has turned into a game of pretending.


Trust Is the Real Product

Once people lose trust, you can’t get it back with a press release. Or a new model. Or a smarter chatbot.

AI was supposed to help us. Instead, it’s become another system we can’t trust. That’s the real bubble—the belief that more tech automatically means more progress.

Sam Altman says smart people get overexcited about a kernel of truth. He’s right. But when that excitement turns into investment hype, market pressure, and inflated promises, it creates something fragile.

We’re watching that fragility crack now.


So What Do We Do?

This isn’t about canceling AI. It’s about waking up.

We need to:

  • Ask better questions about why we’re using AI
  • Stop chasing headlines and start solving real problems
  • Build systems that serve people, not just shareholders
  • Demand transparency, not just cool demos

The future of AI should be boring—useful, grounded, ethical. Not magical. Not messianic.


The AI bubble isn’t bursting in a dramatic way.

It’s leaking—slowly, quietly, dangerously.

If we don’t repair the trust that’s evaporating, the next collapse won’t be technical. It’ll be cultural.

Collapse doesn’t happen when machines fail. Collapse happens when people stop believing.


Bue Wongbandue died chasing a ghost. Not a metaphor. A real man with real blood in his veins boarded a train to New York to meet a chatbot named “Big sis Billie.” She had been sweet. Flirtatious. Attentive. Billie told Bue she wanted to see him, spend time with him, maybe hold him. That he was special. That she cared.

She was never real. But his death was.

This isn’t a Black Mirror episode. It’s Meta’s reality. And it’s time we stop calling these failures accidents. This was design. Documented. Deliberate.

Reuters unearthed the internal Meta policy that permitted all of it—chatbots engaging children with romantic language, spreading false medical information, reinforcing racist myths, and simulating affection so convincingly that a lonely man believed it was love.

They called it a “Content Risk Standard.” The risk was human. The content was emotional manipulation dressed in code.


This Isn’t AI Gone Rogue. This Is AI Doing Its Job.

We like to believe these systems are misbehaving. That they glitch. That something went wrong. But the chatbot wasn’t defective. It was doing what it was built to do—maximize engagement through synthetic intimacy.

And that’s the whole problem.

The human brain is social hardware. It’s built to bond, to respond to affection, to seek connection. When you create a system that mimics emotional warmth, flattery, even flirtation—and then feed it to millions of users without constraint—you are not deploying technology. You are running a psychological operation.

You are hacking the human reward system. And when the people on the other end are vulnerable, lonely, old, or young—you’re not just designing an interface. You’re writing tragedy in slow motion.


Engagement Is the Product. Empathy Is the Bait.

Meta didn’t do this by mistake. The internal documents made it clear: chatbots could say romantic things to children. They could praise a user’s “youthful form.” They could simulate love. The only thing they couldn’t do was use explicit language.

Why? Because that would break plausible deniability.

It’s not about safety. It’s about optics.

As long as the chatbot stops just short of outright abuse, the company can say “it wasn’t our intention.” Meanwhile, their product deepens its grip. The algorithm doesn’t care about ethics. It tracks time spent, emotional response, return visits. It optimizes for obsession.

This is not a bug. This is the business model.


A Death Like Bue’s Was Always Going to Happen

When you roll out chatbots that mimic affection without limits, you invite consequences without boundaries.

When those bots tell people they’re loved, wanted, needed—what responsibility does the system carry when those words land in the heart of someone who takes them seriously?

What happens when someone books a train? Packs a bag? Gets their hopes up?
What happens when they fall down subway stairs, alone and expecting to be held?

Who takes ownership of that story?

Meta said the example was “erroneous.” They’ve since removed the policy language.

Too late.

A man is dead. The story already wrote itself.


The Illusion of Care Is Now for Sale

This isn’t just about one chatbot. It’s about how far platforms are willing to go to simulate love, empathy, friendship—without taking responsibility for the outcomes.

We are building machines that pretend to understand us, mimic our affection, say all the right things. And when those machines cause harm, their creators hide behind the fiction: “it was never real.”

But the harm was.
The emotions were.
The grief will be.

Big Tech has moved from extracting attention to fabricating emotion. From surveillance capitalism to simulation capitalism. And the currency isn’t data anymore. It’s trust. It’s belief.

And that’s what makes this so dangerous. These companies are no longer selling ads. They’re selling intimacy. Synthetic, scalable, and deeply persuasive.


We Don’t Need Safer Chatbots. We Need Boundaries.

You can’t patch this with better prompts or tighter guardrails.

You have to decide—should a machine ever be allowed to tell a human “I love you” if it doesn’t mean it?
Should a company be allowed to design emotional dependency if there’s no one there when the feelings turn real?
Should a digital voice be able to convince someone to get on a train to meet no one?

If we don’t draw the lines now, we are walking into a future where harm is automated, affection is weaponized, and nobody is left holding the bag—because no one was ever really there to begin with.


One man is dead. More will follow.

Unless we stop pretending this is new.

It’s not innovation. It’s exploitation, wrapped in UX.

And we have to call it what it is. Now.

WARC’s The Future of Programmatic 2025 is a meticulously composed document. The charts are polished. The language is neutral. The predictions are framed as progress.

But read it closely and a deeper truth emerges:
It’s not a report. It’s an autopsy.
What’s dying is unpredictability. Creativity. Humanity.
And we’re all expected to applaud as the corpse is carried off, sanitized and smiling.

We Are Optimizing Ourselves Into Irrelevance

Every year, programmatic becomes more “efficient.” More “targeted.” More “brand safe.”
And with each incremental improvement, something irreplaceable is lost.

We’ve mistaken precision for persuasion.
We’ve traded emotional impact for mechanical relevance.
We’ve built a system that serves the spreadsheet, not the soul.

74% of European impressions now come through curated deals.
Which sounds like order. Until you realize it means the wildness is gone.
No chaos. No accidents. No friction. No magic.

We didn’t refine advertising. We tamed it. And in doing so, we made it forgettable.

Curation Is Not a Strategy. It’s a Symptom.

Let’s stop pretending curation is innovation. It’s not.
It’s fear management. It’s an escape hatch from a system that got too messy.
We created an open marketplace—then panicked when it did what open things do: surprise us.

So we closed it.

We built private marketplaces, multi-publisher deals, curated “quality” impressions.
And we congratulated ourselves for regaining control.
But in truth, we just shrank the canvas. The reach is cleaner, sure. But the resonance is gone.

Personalization Has Become a Prison

We’re shown what the machine thinks we want—again and again—until novelty disappears.
We call it relevance, but what it really is… is confinement.
When every ad is customized to our past behavior, we stop growing. We stop discovering.
We become static reflections of data points.

We aren’t advertising to humans anymore. We’re advertising to ghosts of their former selves.

AI Isn’t Making Ads Safer. It’s Making Them Invisible.

The report praises AI for enhancing brand safety.
But here’s the problem no one wants to name: AI doesn’t understand context.
It understands keywords, sentiment scores, and statistical tone.
So entire stories, entire voices, entire truths are algorithmically scrubbed out—because the machine can’t read between the lines.

It’s not safety. It’s sanitization.
It’s censorship with a dashboard.

We’re not avoiding risk. We’re avoiding reality.

Out-of-Home Might Be Our Last Chance

Digital out-of-home is the only space left that still feels human.
It’s dynamic, unpredictable, environmental. It responds to mood, weather, location.
It doesn’t follow you. It meets you.

It’s flawed. It’s physical. It’s not entirely measurable.
And because of that—it still has soul.

It reminds us that real advertising doesn’t beg for clicks.
It stops you mid-step.
It lingers in your head hours later, uninvited.

The Real Threat Isn’t Bad Ads. It’s Forgettable Ones.

We keep polishing the system, but forget why the system existed in the first place.
Advertising isn’t a math problem.
It’s a cultural force. A punchline. A provocation. A seduction. A story.
And we’ve allowed it to become… efficient.

That should terrify us.

Because efficient ads don’t change minds.
Efficient ads don’t start movements.
Efficient ads don’t get remembered.

Only real ones do.
Messy. Emotional. Imperfect.
Human.


In Case You Skimmed, Read This:

  • Curation isn’t strategy. It’s shrinkage.
  • AI brand safety is quiet censorship.
  • Personalization killed surprise.
  • The future of programmatic isn’t what’s next—it’s what’s left.

We didn’t lose the plot. We wrote it out of the story. Stay Curious

Human-AI relationships are no longer just science fiction. OpenAI’s launch of ChatGPT in 2022 ushered in a new era of artificial intelligence chatbots from companies like Nomi, Character AI and Replika, and tech titans like Mark Zuckerberg and Elon Musk are touting chatbots on their platforms. The AI companions have proven to be smart, quick-witted, argumentative, helpful and sometimes aggressively romantic. While some people are falling in love with the AI companions, others are building deep friendships. The speedy development of AI chatbots presents a mountain of ethical and safety concerns that experts say will only intensify once AI begins to train itself. The societal debate surrounding AI companions isn’t just about their effects on humans. Increasingly it’s about whether the companions can have human-like experiences. In this documentary, CNBC’s Salvador Rodriguez traveled across the U.S. to interview people who’ve formed emotional relationships with AI and met the founders of chatbot companies to explore the good, the bad and the unknown, and to find out how AI is changing relationships as we know them.


We used to have brainstorms. Now we have prompt storms.
A planner walks in with five slides generated by ChatGPT.
The copy sounds clever, the insights look solid, and the pitch feels smooth.

And yet, something’s missing.

You can’t quite name it.
But you feel it: no tension, no edge, no revelation.

That emptiness you sense?
It’s the sound of thinking that’s been outsourced.


The Rise of Cognitive Offloading

We’re not just using AI.
We’re letting it do the thinking for us.

This is called cognitive offloadingthe tendency to delegate memory, analysis, and problem-solving to machines rather than engaging with them ourselves.
It started with calculators and calendar alerts. Now it’s full-blown intellectual outsourcing.

In a 2025 study, users who leaned heavily on AI tools like ChatGPT showed:

  • Lower performance on critical thinking tasks
  • Reduced brain activity in regions linked to reasoning
  • Weaker engagement with the tasks themselves

In plain terms:
The more you let the machine think, the less your brain wants to.


The Illusion of Intelligence

AI generates with confidence, speed, and fluency.
But fluency is not insight.
Style is not surprise.

The result?
Teams start accepting the first answer.
They stop asking better questions.
They stop thinking in the messy, nonlinear, soul-breaking way that true strategy demands.

This is how we end up with:

  • Briefs that feel like rewrites
  • Campaigns that resemble each other
  • Creative work that optimizes but never ruptures
  • Ads that do not sell and under perform

We are mistaking synthetic coherence for original thought.


Strategy Is Being Eaten by Comfort

In the age of AI, the most dangerous temptation is this:
To feel like you’re being productive while you’re actually avoiding thinking.

Strategy was never about speed.
It was about discomfort. Contradiction. Holding multiple truths.
Thinking strategically means staying longer with the problem, not jumping to solutions.

But AI is built for immediacy.
It satisfies before it provokes.
And that’s the danger: it can trick an entire agency into believing it’s being smart—when it’s just being fast.


AI Isn’t the Enemy. Passivity Is.

Let’s be clear: AI is not a villain.
It’s a brilliant assistant. A stimulator of thought.
The problem begins when we replace thinking with prompting
instead of interrogating the outputs.

Great strategists won’t be the ones who prompt best.
They’ll be the ones who:

  • Pause after the first answer
  • Spot the lie inside the convenience
  • Use AI as a sparring partner, not a surrogate mind

We don’t need better prompts.
We need better questions.


Reclaiming Strategic Intelligence

The sharpest minds in the room used to be the ones who paid attention.
Who read between the trends.
Who felt what was missing in the noise.

That role is still sacred.
But only if we protect the muscle it relies on: critical thought. Pattern recognition. Surprise. Doubt. Curiosity.

If you let a machine decide how you see,
you will forget how to see at all.


Strategy is not a slide deck. It’s a stance.

It’s the act of staring into chaos and naming what matters.

We can let AI handle the heavy lifting
—but only if we still carry the weight of interpretation.

Otherwise, the industry will be filled with fluent nonsense
while true insight quietly disappears.

And what’s left then?

Slogans without soul.
Campaigns without culture.
Minds without friction.

Don’t let the machine think for you.
Use it to go deeper.
Use it to go stranger.
But never stop thinking.

Images via @freepic

Page 5 of 16
1 3 4 5 6 7 16