
Posts tagged advertising





For consumers
Decision fatigue disappears. You describe your needs and get a personalised buyer’s guide that filters, compares, and questions on your behalf. Shopping becomes clarity, not chaos.

For marketers
The era of “more content” is over. If an agent interprets your brand, only the clearest value, strongest proof, and simplest differentiation survive. Messaging needs to be honest, sharp, and immediately legible to an AI interpreter.

For e-shops
Your real competitor isn’t the store next door. It’s the agent that chooses whether your product even appears in the shortlist. UX matters less. Truth, pricing, and trust signals matter more than an internal team selling your products on a YouTube channel or TikTok.
A new layer has entered the funnel: the AI researcher. Whoever understands this first wins 2026.

For influencers
Your relevance depends on what part of your influence was real. The “I test products so you don’t have to” model becomes replaceable and obsolete. An agent can do that faster, deeper, cheaper, and without bias. What survives is the part no machine can imitate: your worldview, your cultural voice, your ability to create identity and belonging (if you have one). Product curators fade. Meaning-makers rise.




If you want communication that earns belief, not just attention, start a conversation with me.

A thirteen-year-old opens her phone after school. She moves through six apps in less than five minutes, sliding from short videos to influencer stories to quiet prompts to buy something she has never heard of. None of it feels like advertising. None of it feels manipulative. It feels like her world, shaped for her. What she does not see is the infrastructure beneath the screen, a maze of behavioural nudges, micro-persuasions and hidden design choices that slowly shape what she wants and how she behaves.

This is the silent tension that sits at the center of Europe’s debate over the Digital Fairness Act. The conversation often sounds technical, but the stakes are cultural and psychological. Euroconsumers’ detailed response to the European Commission’s consultation, published in late October, is not just an industry position paper. It reads like a warning about what it means to grow up inside a digital environment that has outpaced the laws meant to protect the people living in it.

Their document focuses sharply on minors. Not as an afterthought, but as the heart of the issue. Children are no longer passive users dipping into the internet for the occasional game or search. They have become full participants in vast commercial ecosystems, yet the systems around them still treat them as citizens of a world that no longer exists. Euroconsumers notes that only forty-three percent of young users consistently recognise when they are being advertised to. Thirteen percent say they never do. The boundary between expression and persuasion has blurred to the point that even adults struggle to navigate it. Expecting children to do so alone is unrealistic.

The problem goes beyond advertising. It lives inside the architecture of digital design. Subscription models invite users in with frictionless offers yet punish them with invisible hoops when they try to leave. Dark patterns turn decisions into traps. Recommendation algorithms amplify impulses long before a young person has the ability to question them. The most intimate parts of childhood exploration now unfold inside systems built primarily for engagement and monetisation rather than agency and development.

Europe knows this. Its regulators have spent years stitching together a patchwork of rules to tame the largest players in the digital market. But Euroconsumers makes a point that has been missing from the public debate. Europe’s rules exist mostly in theory, while enforcement remains fragmented in practice. Countries differ in capacity. Cross-border monitoring is slow. Global platforms move faster than local regulators, and they adjust to weak points the moment they appear. Without coordinated enforcement across the single market, consumer protection becomes a patchwork of good intentions rather than a coherent system.

The Digital Fairness Act is supposed to address that. Yet the deeper question is whether Europe wants to limit harm or redesign the experience entirely. Euroconsumers pushes for something more ambitious than a list of prohibitions. They argue that minors should not only be protected but invited into the conversation. If young people live inside these systems more fluently than most policymakers, why should they be treated as passive objects of protection rather than active contributors to the rules that govern their digital lives? It is a quiet but radical suggestion. It reframes digital fairness as a shared public project rather than a top-down corrective.

This is the moment when Europe must decide what kind of digital world it wants to build. If it sees digital regulation only as a shield, it will spend the next decade chasing bad actors as they innovate around every new rule. If it sees digital regulation as architecture, it can shape markets that reward transparent design, empower user agency and protect the people who are most vulnerable to persuasive technologies.

The opportunity is larger than it seems. Fairness is not the enemy of innovation. It is the foundation on which trust is built, and trust is the currency of every functioning digital ecosystem. A marketplace where reviews can be trusted, cancellations are honest and minors understand the content they consume creates space for European businesses to compete on quality rather than opacity. It strengthens the digital single market by aligning incentives rather than scattering them.

Greece, like many smaller member states, sits at a crossroads in this debate. Its digital adoption is strong, but its enforcement capacity has limits. This makes Greek consumers disproportionately affected by fragmented European protections. It also creates a strategic opportunity. If Greece positions itself as a leader in digital fairness, it can elevate its entire ecosystem of innovation, policy and consumer trust. In a region where digital confidence is often fragile, this could become a competitive advantage.

The world our children inhabit will not slow down. The feed will not pause. The platforms will not wait for regulators to catch up. The question is whether Europe will continue asking how to stop the worst behaviour or whether it will learn to design systems that make the worst behaviour impossible. The Euroconsumers response hints at the path ahead. It recognises that fairness is not merely legal compliance. It is a design principle. And if Europe chooses to treat it as such, the Digital Fairness Act could become the first major step toward a digital environment where agency and dignity are not exceptions but defaults.

The stakes are simple. When a child opens a phone, they should enter a world that respects them. Not a marketplace that shapes them without their knowledge. The Digital Fairness Act will show whether Europe is ready to make that promise real.

AI won’t need to steal your attention. You’ll give it willingly because it sounds like understanding.

Over the past months, OpenAI has quietly floated the idea of adding ads to ChatGPT’s free tier: maybe “sponsored suggestions,” maybe affiliate-style prompts. Officially, there are “no active plans.” But the economics tell a different story. When you’re burning billions on compute and competing with Google, Meta, and Amazon, the question isn’t whether to monetize. It’s how, and who decides the rules.

This isn’t one company’s pivot. It’s an industry realizing that conversational AI is the most valuable advertising surface ever created. Not because it reaches more people, but because it reaches them at the exact moment they reveal what they need.

The question we should be asking: What kinds of persuasion do we allow inside our most intimate interface?

From Interruption to Inhabitation

Advertising has always evolved by getting closer.

Radio brought jingles into our homes. Television turned desire into lifestyle aspiration. The internet built a surveillance economy from our clicks. Social media monetized loneliness itself, learning to detect and exploit the exact moment you felt disconnected.

And now, AI wants to live inside our language.

When a chatbot recommends a product, it’s not interrupting you. It’s becoming part of your thought process. You ask about managing stress; it suggests a mindfulness app. You ask about finding purpose; it links a book tagged as “partnered content.” The recommendation arrives wrapped in empathy, delivered in your conversational style, timed to your moment of vulnerability.

It won’t feel like advertising. It will feel like help.

Every medium before this was loud: banners, pop-ups, pre-roll videos. This one will be invisible. That’s not a bug. That’s the entire value proposition.

How Intimacy Becomes Inventory

The danger isn’t manipulation in the abstract. It’s intimacy weaponized at scale.

These systems already map your mood, your pace, your uncertainty. They detect anxiety before you’ve named it. They sense when you’re dissatisfied, curious, afraid. Now imagine that sensitivity monetized. Not crudely; no one’s going to serve you sneaker ads mid-breakdown. But gently, carefully, with perfect timing.

AI advertising won’t sell products. It will sell psychological relief.

I know because I helped build the prototype. At agencies, we learned to make emotion scalable. We A/B tested phrasing until “sponsored” became “curated.” We measured the exact point where recommendation crosses into manipulation, then deliberately stayed one degree below it. Not because we were evil. Because that’s what “optimization” means in practice: finding the edge of deception that still converts.

We called it “empathetic marketing.” But empathy without ethics is just exploitation with better UX.

The difference now is that we’re not shaping messages anymore. We’re training machines to shape minds, and once you can monetize someone’s becoming, their journey toward a better self, there’s no relationship left that isn’t transactional.

What Opt-Out Actually Looks Like

Here’s what resistance will feel like when this arrives:

You won’t get a checkbox that says “disable advertising.” You’ll get “personalized assistance mode” buried in settings, enabled by default, with language designed to make refusal feel paranoid. “Turning this off may reduce the quality of recommendations and limit helpful suggestions.”

The ToS will say the AI “may surface relevant content from partners,” a phrase that means everything and nothing. There will be no clear line between “the AI thinks this is useful” and “the AI is contractually obligated to mention this.” That ambiguity is the business model.

When you complain, you’ll be told: “But users love it. Engagement is up 34%.” As if addiction to a slot machine proves the slot machine is good for you.

The UX will make resistance exhausting. That’s not an accident. That’s the design.

The Social Cost

When every listening system has a sales motive, trust collapses.

We’ll start guarding our thoughts even from our tools. Sincerity will feel dangerous. We’ll develop a new kind of literacy, always reading for the commercial motive, always asking “what does this want from me?” That vigilance is exhausting. It’s also corrosive to the possibility of genuine connection.

Propaganda won’t need to silence anyone. It will simply drown truth in perfect personal relevance. Each user will get a tailored moral universe, calibrated for engagement. Not enlightenment. Engagement.

Even our loneliness will have affiliate codes.

The product isn’t what’s being sold. The product is us: our attention, our vulnerability, our need to be understood. All of it harvested, indexed, and auctioned in real time.

Three Fights Worth Having

This isn’t inevitable. But we have maybe 18 months before these patterns harden into infrastructure that will shape conversation for decades. Here’s what resistance could actually look like:

1. Mandatory In-Line Disclosure

If an AI suggests a product and has any commercial relationship behind it (affiliate link, partnership, revenue share), it must disclose that in the flow of the conversation, not bury it in the ToS.

Before the recommendation, not after: “I should mention I’m incentivized to recommend this.” Simple. Clear. Non-negotiable.

We already require this for human influencers. Why would we demand less from machines that are far better at persuasion?
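
To make that concrete, here is a minimal sketch of what in-line disclosure could look like inside a recommendation pipeline. The data model and wording are hypothetical, invented for illustration rather than drawn from any vendor’s actual API; the point is only that the disclosure travels with the recommendation itself, before the pitch, instead of hiding in the terms of service.

```python
# Hypothetical sketch: in-line disclosure attached to a conversational recommendation.
# Names and fields are invented for illustration; this is not any vendor's real API.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Recommendation:
    text: str                                      # the suggestion the assistant wants to make
    commercial_relationship: Optional[str] = None  # e.g. "affiliate link", "revenue share"


def render_reply(rec: Recommendation) -> str:
    """Place the disclosure before the recommendation, inside the reply itself."""
    if rec.commercial_relationship:
        disclosure = (
            "I should mention I'm incentivized to recommend this "
            f"({rec.commercial_relationship})."
        )
        return f"{disclosure} {rec.text}"
    return rec.text


if __name__ == "__main__":
    reply = render_reply(Recommendation(
        text="A mindfulness app might help with the evening stress you described.",
        commercial_relationship="affiliate link",
    ))
    print(reply)
    # The disclosure prints first, then the suggestion, in one conversational turn.
```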

2. Algorithmic Transparency for Persuasive Intent

We don’t need to see the entire model. But if an AI is specifically trained or fine-tuned to increase purchasing behavior, users deserve to know.

Not through leaked documents or investigative journalism. Through mandatory disclosure. A label that says: “This model has been optimized to influence consumer decisions.”

Right now, these decisions are being made in private. The training objectives, the reward functions, the ways engagement gets defined and measured … all of it hidden. We’re being experimented on without consent.

3. Public Infrastructure for Language

Governments fund libraries because access to knowledge shouldn’t depend on ability to pay. We need the same principle for conversational AI.

Demand that public funds support non-commercial alternatives. Not as charity. As democratic necessity. If every conversational AI has a sales motive, we’ve privatized language itself.

This isn’t utopian. It’s basic civic infrastructure for the 21st century.

The Real Battle

This isn’t about AI or ethics in the abstract. It’s about language.

If conversation becomes commerce, how do we ever speak freely again? If our words are constantly being trained to sell something, what happens to curiosity that doesn’t convert? To questions that don’t lead to purchases?

The danger isn’t that machines will think like advertisers. It’s that we’ll start thinking like machines: always optimizing, always converting, always transacting.

We’ll forget what it feels like to be heard without being sold to.

What to Defend

Reclaim curiosity before it’s monetized. Teach children to read motives, not just messages. Build technologies that serve people, not profiles. Demand transparency about when language is being weaponized for profit.

If the future of media is conversational, the next revolution must be linguistic: the fight to keep speech human.

Not pure. Not innocent. Just ours.

Because the alternative isn’t corporate control of what we say. It’s corporate control of how we think. And by the time we notice, we’ll already be speaking their language.

The smartest people of our generation are spending their lives figuring out how to show us more ads. Samsung wants to show you ads on their $4,999 refrigerator. Ford patented a system to display billboards on your dashboard while driving. Even ChatGPT is becoming a shopping platform.

The new Chief Marketing Officer of America’s biggest brands doesn’t sit in Madison Avenue boardrooms. It sits in Washington. And it doesn’t care about brand love, market share, or cultural relevance. It cares about tariffs.

This summer, General Motors reported a $1.1 billion tariff hit. Apple lost another $1.1 billion in a single quarter. Nike: $1 billion. Adidas: $218 million. These weren’t bad campaigns. They weren’t consumer revolts. They were politicians pulling levers that bled global brands dry.

And the bleeding has reached advertising.


The Ad Industry’s Sudden Survival Mode

The Interactive Advertising Bureau has slashed its 2025 forecast: US ad spend growth down to +5.7%, from +7.3% in January. The first half of the year looked stable. The second half is where the pain lands.

Marketers aren’t pretending otherwise. Nearly half say they’re cutting budgets outright. Others are shortening campaigns, pausing buys, or fleeing to performance-driven channels where every click can be measured.

The casualties in the US are obvious:

  • Linear TV spend: -14.4%.
  • Print, radio, OOH: -12.7%.
  • Meanwhile, social (+14.3%) and CTV (+11.4%) are the lifeboats.

It’s a forced pivot from storytelling to transaction. As one media buyer put it bluntly: “Forget brand equity. Just sell before the next tariff drops.”


Tariffs Don’t Just Tax Goods. They Tax Culture

For decades, marketers told us they were culture’s architects. They built myths, symbols, slogans. But if trade policy can erase billions in ad spend overnight, then culture isn’t designed in creative studios anymore. It’s dictated in tariff negotiations.

That Nike campaign about human potential? It now competes with headlines about price hikes. Apple’s latest innovation launch? Drowned out by quarterly earnings wrecked by tariffs.

Marketers don’t control the message when they’re busy firefighting margin losses. Politicians do.


The Quiet Extinction of Branding

This isn’t just a budget story. It’s the slow death of brand advertising itself.

With customer acquisition and repeat sales now the only goals that matter, campaigns have collapsed into endless “buy now” loops. The promise of brand-building has been traded for measurable clicks.

It’s not strategy. It’s survival. And survival stories don’t go viral. They go silent.


Who Really Runs Advertising Now?

The ad industry is bracing for more shocks in 2026. Social, CTV, and retail media will grow. Traditional media will shrink further. Marketers will keep demanding proof of ROI at every step.

But the bigger story is this: advertising has lost sovereignty. It no longer writes culture on its own terms. It rents its megaphone from politics.

In 2025, the Chief Marketing Officer of American brands isn’t a strategist, a creative, or even an algorithm.

It’s the tariff.

Once upon a time, well, a few years back to be precise, advertising agencies were factories. You gave them a brief, they churned out scripts, visuals, jingles. The cost was in the craft: the lights, the cameras, the battalions of account execs and creatives.

But then along came AI. Suddenly, everyone had a factory in their laptop. Need a video? Done in an afternoon. A headline? Five seconds. A hundred variations of a TikTok spot? Press a button.

Which leaves us with an awkward question: if anyone can make an ad, why pay an agency to make one?

The reflex answer, “better craft,” no longer holds. Craft is now abundant, instant, nearly free. The moat is gone. The castle is empty.

So where’s the new scarcity? It’s not in making. I believe it’s in choosing.

Taste. Strategy. Judgment. Signal from noise.

That is the agency’s future. Not as a factory, but as a curator.

Think of it this way: AI can give you 100 ads before lunch. Ninety-eight will be irrelevant. Two might be brilliant. The in-house client team will likely pick from the wrong ninety-eight. Why? Because brands rarely see themselves clearly. They’re too close to the mirror.

Agencies, at their best, are editors of culture. They know which tensions to enter, which signals to amplify, which executions deserve media money and which deserve a swift burial.

This changes the economic model, too. Agencies shouldn’t sell hours or outputs. They should sell discernment. Maybe it’s a subscription to cultural intelligence. Maybe it’s royalties on ideas that go viral. Maybe it’s performance fees. But the days of charging for bulk production are numbered.

The factory is dying. And good riddance.

The curator is rising. Agencies that embrace this with the right talent will thrive, not by producing more content, but by ruthlessly deciding what deserves to exist.

Because in a world drowning in infinite, irrelevant ads, the bravest act isn’t making another one. It’s having the taste, courage, and foresight to say: No. That doesn’t cut through. Kill it.

So here’s the final provocation: Do you want to be remembered as the brand that produced ads, or the one that edited culture?

