Why AI-Generated Ads Are Killing the One Thing Money Can’t Buy: Meaning


There is something unsettling about watching a machine try to seduce you.

It can generate images of silk, gold, and bone structure so symmetrical it feels divine. It can mimic opulence with terrifying precision. But you walk away cold. Not because it wasn’t beautiful—but because no one bled for it.

Luxury, at its core, is not a product. It is a performance of care. A theater of intention. A whisper that says: “Someone made this. And they made it for you.”

That whisper dies the moment a brand discloses: This ad was generated by AI.

And consumers—instinctively, almost viscerally—pull back.


This isn’t speculation. In March 2025, researchers at Tarleton State University’s Sam Pack College of Business conducted a series of experiments that lifted the veil on AI in luxury advertising.

They found that when people were told an ad was AI-generated, their perception of the brand soured—even if the ad itself was flawless. It wasn’t the aesthetics that offended. It was the implication that no human effort was involved. No obsession. No sleepless nights. Just pixels, puppeteered by code.

Because in luxury, effort is the aura. You’re not buying the bag, the scent, the silk—you’re buying the story of the hands that made it.

“Luxury without labor is just a JPEG with a price tag.”


AI doesn’t yearn. It doesn’t dream. It doesn’t understand what it means to long for something across a lifetime and finally touch it. And so when it speaks the language of luxury, it sounds like a tourist repeating poetry phonetically. The form is there. But the soul is missing.

In the same study, researchers found something else. When AI-generated visuals were truly original—surreal, impossible, avant-garde—the backlash weakened. Consumers were more forgiving when the machine dared to be weird, not just perfect. Novelty redeemed automation. Why? Because it felt like art, not optimization.

This is the thin line AI must walk: between mimicry and magic. Between replication and revelation.


What brands must now realize is this: you can’t fake the sacred.

You can’t outsource reverence. Not when your entire mythology is built on the illusion of effort, exclusivity, and the impossible-to-scale. When luxury becomes scalable, it becomes ordinary. And nothing kills desire faster than convenience.

The real scandal isn’t that AI is being used. It’s how cheaply it’s being used.
Not as a collaborator in creation—but as a replacement for it.

“We don’t fall in love with perfection—we fall in love with presence.”


So what now? Must we banish AI from the house of beauty?

No. But it must be tamed. Not in the name of nostalgia, but in the name of mystery.

Let it enhance the myth—not expose the machinery. Let it generate visions too strange for human hands—but never let it erase the hands entirely. Let it serve the story—not become the storyteller.

Use it to deepen the dream. Not to save on production costs.

“The new luxury isn’t scarcity. It’s soul.”


AI can make images. But it cannot make meaning.
Because meaning requires longing. It requires imperfection. It requires a face behind the mask.

And so, in an age of perfect replicas, the true luxury will be this:

Proof that someone cared.


Based on the study “The Luxury Dilemma: When AI-Generated Ads Miss the Mark,”
Tarleton State University, Sam Pack College of Business, March 2025.

40% of the global population is overweight or obese. Highly processed industrial foodstuffs are largely to blame. But food companies continue to focus on products that are addictive. Sugar is one of the strongest “drugs” and can get consumers really hooked. Food giants know this only too well. That’s why they use sugar, fats and flavor enhancers to encourage people to buy their products and boost their profits. The result: more and more people around the world are overweight or obese. Illnesses such as diabetes and cardiovascular disease are becoming more prevalent. What can be done to change or even put a stop to the food industry’s strategies?

The Agenda: Their Vision – Your Future (2025)

The Agenda: Their Vision | Your Future is a feature-length independent documentary produced by Mark Sharman, a former UK broadcasting executive at ITV and Sky (formerly BSkyB). In fiction and in fact, there have always been people and organisations with ambitions to control the world. Now the oligarchs who pull the strings of finance and power finally have the tools to achieve their global objectives: omnipresent surveillance, artificial intelligence, digital currency and, ultimately, digital identities. The potential for social control of our lives and minds is alarmingly real. The plan has been decades in the making and has seen the infiltration of governments, local councils, big business, civil society, the media and, crucially, education: a ceaseless push for a new reality, echoing Aldous Huxley’s Brave New World or George Orwell’s 1984. The Agenda: Their Vision, Your Future examines the digital prison that awaits us if we do not push back right now: how your food, energy, money, travel and even your access to the internet could be limited and controlled; how financial power is strangling democracy; and how global institutions like the World Health Organisation are commandeered to champion ideological and fiscal objectives. The centrepiece is man-made climate change and, with it, the race to Net Zero. Both are encapsulated in the United Nations and its Agenda 2030. A force for good? Or “a blank cheque for totalitarian global control”? The Agenda presents expert views from the UK, the USA and Europe.


Imagine giving a supercomputer a brain teaser and watching it stare blankly, then start mumbling nonsense, then suddenly stop talking altogether.

That’s basically what Apple just did.

This week, Apple researchers released a paper called “The Illusion of Thinking” — and it might go down as the moment we all collectively realized: AI can fake intelligence, but it can’t think.

Let’s break this down so your non-tech uncle, your boss, and your teenage cousin can all understand it.


The Puzzles That Broke the Machines

Apple fed today’s smartest AI models logic puzzles. Simple ones at first: move some disks, cross a river without drowning your goat.

The AIs did okay.

Then Apple made the puzzles harder. Not impossible — just more steps, more rules.
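
To make “just more steps” concrete: one of the puzzles in the paper is the classic Tower of Hanoi, and its shortest solution for n disks takes 2^n - 1 moves. Here is a minimal sketch (mine, not Apple’s test code) that generates the optimal move list and counts it:

```python
# Illustrative sketch, not Apple's harness: Tower of Hanoi, one of the puzzles
# the paper uses. The shortest solution for n disks is 2**n - 1 moves, so adding
# "a few more disks" multiplies the number of steps a model has to keep straight.

def hanoi_moves(n, source="A", target="C", spare="B"):
    """Return the optimal sequence of (disk, from_peg, to_peg) moves for n disks."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, source, spare, target)    # park the n-1 smaller disks on the spare peg
        + [(n, source, target)]                      # move the largest disk to the target
        + hanoi_moves(n - 1, spare, target, source)  # stack the smaller disks back on top
    )

for disks in (3, 7, 10, 15):
    print(f"{disks} disks -> {len(hanoi_moves(disks))} moves")
# 3 disks -> 7 moves, 7 -> 127, 10 -> 1023, 15 -> 32767
```

A person can describe the strategy in one sentence. Executing it without losing track across hundreds or thousands of moves is where the trouble starts.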

That’s when the collapse happened.

These large reasoning models (the ones that are supposed to “think” better than chatbots) didn’t just struggle.

They failed. Completely. Like, zero accuracy.

They didn’t even try to finish their reasoning. They just… gave up.

Imagine hiring a math tutor who can add 2+2 but short-circuits when asked 12+34.


What It Means (And Why You Should Care)

This wasn’t some random test. This is Apple — the company that makes your phone and, oh yeah, just rolled out its own AI systems.

So why would they publish this?

Because it reveals something nobody wants to say out loud:

AI right now is a brilliant bullsh*t artist.

It can write essays. It can code. It can mimic thinking. But as soon as you throw a multi-step logic problem at it, it folds faster than a cheap lawn chair.

This matters a lot because we’re putting these systems into:

  • Healthcare
  • Legal advice
  • Autonomous vehicles
  • Education

…and assuming they know what they’re doing.

But Apple just proved: They don’t.


The Illusion of Thinking

Large language models work by predicting the next word in a sequence. It’s fancy autocomplete. Chain-of-thought prompting (asking the model to show its work, like in math class) helps, until it doesn’t.
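
If you have never seen it in practice, here is a tiny sketch of the difference. `ask_model` is a placeholder for whatever LLM client you actually use; it is not a real library call:

```python
# Minimal sketch of plain prompting vs. chain-of-thought prompting.
# `ask_model` is a stand-in for a real LLM call (OpenAI, Anthropic, a local
# model, etc.); swap in your own client.

def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to a model and return its reply."""
    return "<model reply goes here>"

question = (
    "A farmer must get a wolf, a goat, and a cabbage across a river. "
    "The boat holds the farmer plus one item. Left alone, the wolf eats the goat "
    "and the goat eats the cabbage. What is the shortest sequence of crossings?"
)

# Plain prompt: the model jumps straight to an answer (fancy autocomplete).
direct_answer = ask_model(question)

# Chain-of-thought prompt: ask it to show its work. This tends to help on
# small puzzles and, per Apple's results, stops helping once the step count grows.
cot_answer = ask_model(
    question + "\n\nThink step by step, list every crossing, then state the final answer."
)
```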

In fact, Apple found that past a certain difficulty, the models actually spent less effort on reasoning, even with plenty of token budget left. Like a student who panics mid-exam and starts guessing.

This is what Apple called “complete accuracy collapse.”

Translation: the AI doesn’t know when it’s wrong. It just sounds like it knows what it’s doing.

And that’s the danger.
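
For the curious, here is a rough, hypothetical sketch of the kind of evaluation loop behind a finding like “complete accuracy collapse.” None of these function names come from the paper; they are placeholders for a model call and a deterministic puzzle checker:

```python
# Hypothetical evaluation loop (placeholder functions, not the paper's code):
# run puzzles at increasing difficulty, record accuracy and how much visible
# reasoning the model produced at each level.

def solve_with_model(puzzle: str) -> tuple[str, int]:
    """Placeholder: return (model answer, number of reasoning tokens it emitted)."""
    return "<answer>", 0

def is_correct(puzzle: str, answer: str) -> bool:
    """Placeholder: check the answer with a deterministic puzzle simulator."""
    return False

def evaluate(puzzles_by_difficulty: dict[int, list[str]]) -> None:
    for difficulty, puzzles in sorted(puzzles_by_difficulty.items()):
        results = [solve_with_model(p) for p in puzzles]
        accuracy = sum(is_correct(p, answer) for p, (answer, _) in zip(puzzles, results)) / len(puzzles)
        avg_reasoning = sum(tokens for _, tokens in results) / len(results)
        # The pattern the paper reports: past a threshold, accuracy drops to zero
        # while the average amount of visible reasoning shrinks instead of growing.
        print(f"difficulty={difficulty}: accuracy={accuracy:.0%}, reasoning_tokens={avg_reasoning:.0f}")
```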


So What Do We Do?

The takeaway isn’t “AI is useless.”

It’s: Stop worshipping the illusion.

We need:

  • Better benchmarks (that actually test reasoning, not memorization)
  • Systems that know when they don’t know
  • Hybrid models that mix language prediction with real logic engines (see the sketch below)
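
On that last point, “hybrid” can be as simple as never trusting a model’s plan until ordinary code has checked it. A minimal sketch, with `propose_hanoi_plan` standing in for an LLM call:

```python
# Sketch of the hybrid idea: the language model proposes, a deterministic
# logic check disposes. `propose_hanoi_plan` is a placeholder for an LLM call;
# the verifier is plain, boring, reliable code.

def propose_hanoi_plan(n_disks: int) -> list[tuple[str, str]]:
    """Placeholder: ask a model for a move list like [("A", "C"), ("A", "B"), ...]."""
    return []

def verify_hanoi_plan(n_disks: int, moves: list[tuple[str, str]]) -> bool:
    """Check a Tower of Hanoi plan: every move legal, all disks end on peg C."""
    pegs = {"A": list(range(n_disks, 0, -1)), "B": [], "C": []}  # lists are bottom-to-top
    for src, dst in moves:
        if not pegs[src]:
            return False                          # tried to move from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                          # tried to put a larger disk on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs["C"] == list(range(n_disks, 0, -1))

plan = propose_hanoi_plan(3)
print("plan verified" if verify_hanoi_plan(3, plan) else "plan rejected: send it back to the model")
```

The model still does the creative part; the logic engine just refuses to let a confident-sounding wrong answer through.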

And most importantly, we need humility. From engineers. From startups. From governments. From us.

Because right now, we’re mistaking a parrot for a philosopher.