We talk about “content,” but what we’re really fighting over is neural real estate. The brands that win aren’t louder… they’re stickier. They don’t chase clicks; they colonize memory. Every colour, sound, and tone is a code … teaching the brain who to recall when it matters.
The rest? Noise dressed as relevance.
The next era of marketing won’t belong to creators. It’ll belong to memory architects … the ones who design emotions that refuse to fade.
Your parents were middle class. You won’t be. Not because you’re lazy. Not because you failed. Because every empire eventually feeds on the very people who built it.
It’s a rhythm older than currency. Rome had its farmers, Spain its merchants, France its professionals, Britain its workers, the Soviets their intelligentsia. Each empire promised security to those who produced and obeyed. Each, when cornered by its own growth, turned inward—taxing, inflating, and automating its lifeblood until nothing was left but elites and exhaustion.
Ours is no different. The algorithms that once promised freedom now auction our attention. The markets that once rewarded labour now monetize despair. We’ve replaced slaves with debtors, landlords with platforms, temples with malls, and emperors with brands. Every ad we click, every data point we feed, fattens the new aristocracy: systems that no longer need us to grow only to consume.
The Historical Rhythm
Rome began with farmers who owned their land and ended with debt slaves bound to estates. The republic’s middle held the empire’s weight: its taxes, its armies, its food. But when conquest slowed, wealth stopped circulating. The rich bought influence; the smallholders sold freedom. The middle collapsed, and the empire fed on its own citizens until it starved of purpose.
Spain repeated the dance with silver. Rivers of metal poured from the New World, and for a moment the merchants flourished. Then came inflation, corruption and war, the empire’s veins clogged with its own excess. The middle class, caught between the crown and the creditors, disappeared into poverty while nobles kept their titles and peasants their chains.
France built its dream on reason, education, and trade. It, too, promised mobility until taxation and privilege cracked it apart. By 1789, the professionals, the bourgeoisie, had become the revolutionaries. They didn’t want to burn the system. They wanted to fix it. But when the center breaks, there’s nothing left to fix.
Even the Soviet Union, which prided itself on equality, fell into the same trap. It built a vast class of engineers, doctors and teachers, but without private power or political freedom, their prosperity depended on the state’s illusion of control. When that illusion cracked, so did their security. Breadlines replaced the promise of socialism, and the intelligentsia became its ghosts.
The pattern is fractal. Every empire eats its middle first, mistaking it for fat when it was the heart.
The Modern Empire: Data, Debt, and Desire
Today’s empire doesn’t need armies, it has algorithms. It doesn’t conquer land, it colonizes minds.
The middle class of the 20th century was the stabilizer of democracy: homeowners, consumers, taxpayers. They had just enough to believe the system worked. That belief was the glue holding modern civilization together.
But belief doesn’t pay the rent anymore. Wages froze while productivity soared. Housing, healthcare, and education, once tickets to security, became tolls on survival. The cost of staying middle class now exceeds the income of being it.
Meanwhile, the new emperors (platforms, funds, and AI labs) don’t rule nations. They rule attention. Their colonies are our screens; their currency, our data. We work without knowing it: every post, purchase, and search enriching systems that render our labor invisible yet indispensable.
The old empire taxed your crops. The new one taxes your cognition.
When citizens start slipping, the empire does what all empires do: it blames them. “You didn’t hustle enough.” “You bought the wrong house.” “You should have learned to code.” It’s the same story Rome told its farmers, the same lie every crumbling system whispers to its victims: that personal failure, not structural rot, is the reason the ground is disappearing.
The Human Cost
Behind the data points are lives quietly breaking. Parents working two jobs to afford what one salary once covered. Children entering adulthood already in debt. Entire generations realizing the finish line moved, and no one told them.
The middle class was never just an income bracket. It was a psychological contract: if you played by the rules, the future would take care of you. That contract is gone. What remains is a treadmill powered by hope and debt, where the faster you run, the further stability retreats.
The cruelty isn’t accidental, it’s engineered. A civilization built on perpetual growth must invent new appetites to feed itself, even if it means devouring the very people who sustain it.
Every empire thinks it’s the exception.
Every empire believes its collapse will be managed, its decline civilized. But collapse isn’t a moment, it’s a mood. It begins when people stop believing the game is winnable.
If history repeats, the next stage is unrest: populism, extremism, escapism. When the middle disappears, democracy falters—because democracy depends on a class with enough stake to defend it.
So the question isn’t whether this empire will fall. It’s whether we’ll recognize that it’s already eating us.
But maybe there’s another path. Empires fall, but communities endure. What if instead of scaling endlessly upward, we started building sideways? What if “middle” stopped meaning mediocre and started meaning mutual: people choosing sufficiency over extraction, collaboration over consumption?
History tells us how the story ends. Maybe, for once, we write a different ending.
AI won’t need to steal your attention. You’ll give it willingly because it sounds like understanding.
Over the past months, OpenAI has quietly floated the idea of adding ads to ChatGPT’s free tier: maybe “sponsored suggestions,” maybe affiliate-style prompts. Officially, there are “no active plans.” But the economics tell a different story. When you’re burning billions on compute and competing with Google, Meta, and Amazon, the question isn’t whether to monetize. It’s how, and who decides the rules.
This isn’t one company’s pivot. It’s an industry realizing that conversational AI is the most valuable advertising surface ever created. Not because it reaches more people, but because it reaches them at the exact moment they reveal what they need.
The question we should be asking: What kinds of persuasion do we allow inside our most intimate interface?
From Interruption to Inhabitation
Advertising has always evolved by getting closer.
Radio brought jingles into our homes. Television turned desire into lifestyle aspiration. The internet built a surveillance economy from our clicks. Social media monetized loneliness itself, learning to detect and exploit the exact moment you felt disconnected.
And now, AI wants to live inside our language.
When a chatbot recommends a product, it’s not interrupting you. It’s becoming part of your thought process. You ask about managing stress, it suggests a mindfulness app. You ask about finding purpose, it links a book as “partnered content.” The recommendation arrives wrapped in empathy, delivered in your conversational style, timed to your moment of vulnerability.
It won’t feel like advertising. It will feel like help.
Every medium before this was loud … banners, pop-ups, pre-roll videos. This one will be invisible. That’s not a bug. That’s the entire value proposition.
How Intimacy Becomes Inventory
The danger isn’t manipulation in the abstract. It’s intimacy weaponized at scale.
These systems already map your mood, your pace, your uncertainty. They detect anxiety before you’ve named it. They sense when you’re dissatisfied, curious, afraid. Now imagine that sensitivity monetized. Not crudely: no one’s going to serve you sneaker ads mid-breakdown. But gently, carefully, with perfect timing.
AI advertising won’t sell products. It will sell psychological relief.
I know because I helped build the prototype. At agencies, we learned to make emotion scalable. We A/B tested phrasing until “sponsored” became “curated.” We measured the exact point where recommendation crosses into manipulation… then deliberately stayed one degree below it. Not because we were evil. Because that’s what “optimization” means in practice: finding the edge of deception that still converts.
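The machinery behind that kind of copy test is mundane statistics. A minimal sketch, with invented numbers and invented labels (no real campaign data is implied): a two-proportion z-test deciding whether relabeling “Sponsored” as “Curated for you” moved the conversion rate.

```python
import math

def conversion_rate_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert at a
    different rate than variant A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical copy test: same offer, different label.
z = conversion_rate_z(conv_a=120, n_a=4000,   # label: "Sponsored"
                      conv_b=168, n_b=4000)   # label: "Curated for you"
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at p < 0.05
```

Nothing in the math distinguishes clearer copy from a softened disclosure; the metric only sees that one label converts better, which is exactly the point.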
We called it “empathetic marketing.” But empathy without ethics is just exploitation with better UX.
The difference now is that we’re not shaping messages anymore. We’re training machines to shape minds. And once you can monetize someone’s becoming, their journey toward a better self, there’s no relationship left that isn’t transactional.
What Opt-Out Actually Looks Like
Here’s what resistance will feel like when this arrives:
You won’t get a checkbox that says “disable advertising.” You’ll get “personalized assistance mode” buried in settings, enabled by default, with language designed to make refusal feel paranoid. “Turning this off may reduce the quality of recommendations and limit helpful suggestions.”
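The dark pattern described above is ultimately a defaults problem. A minimal sketch, with hypothetical setting names, of how such a configuration tends to be shipped: the commercial mode on by default, named so it never mentions advertising, with refusal framed as a loss of quality.

```python
from dataclasses import dataclass

@dataclass
class AssistantSettings:
    # Dark-pattern defaults: the commercial mode ships ON,
    # and neither flag's name mentions advertising.
    personalized_assistance: bool = True  # in practice: sponsored suggestions
    partner_content: bool = True          # in practice: affiliate links

    def opt_out_copy(self) -> str:
        # Refusal is framed as lost quality, not gained privacy.
        return ("Turning this off may reduce the quality of "
                "recommendations and limit helpful suggestions.")

settings = AssistantSettings()  # a user who never opens settings
print(settings.personalized_assistance)  # True: consent by inertia
```

The regulation-shaped fix is equally simple to state: commercial modes default to `False`, and the opt-in copy must name the commercial relationship.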
The ToS will say the AI “may surface relevant content from partners” … a phrase that means everything and nothing. There will be no clear line between “the AI thinks this is useful” and “the AI is contractually obligated to mention this.” That ambiguity is the business model.
When you complain, you’ll be told: “But users love it. Engagement is up 34%.” As if addiction to a slot machine proves the slot machine is good for you.
The UX will make resistance exhausting. That’s not an accident. That’s the design.
The Social Cost
When every listening system has a sales motive, trust collapses.
We’ll start guarding our thoughts even from our tools. Sincerity will feel dangerous. We’ll develop a new kind of literacy, always reading for the commercial motive, always asking “what does this want from me?” That vigilance is exhausting. It’s also corrosive to the possibility of genuine connection.
Propaganda won’t need to silence anyone. It will simply drown truth in perfect personal relevance. Each user will get a tailored moral universe, calibrated for engagement. Not enlightenment. Engagement.
Even our loneliness will have affiliate codes.
The product isn’t what’s being sold. The product is us … our attention, our vulnerability, our need to be understood. All of it harvested, indexed, and auctioned in real-time.
Three Fights Worth Having
This isn’t inevitable. But we have maybe 18 months before these patterns harden into infrastructure that will shape conversation for decades. Here’s what resistance could actually look like:
1. Mandatory In-Line Disclosure
If an AI suggests a product and has any commercial relationship … affiliate link, partnership, revenue share … it must disclose that in the flow of conversation, not buried in the ToS.
Before the recommendation, not after: “I should mention I’m incentivized to recommend this.” Simple. Clear. Non-negotiable.
We already require this for human influencers. Why would we demand less from machines that are far better at persuasion?
2. Algorithmic Transparency for Persuasive Intent
We don’t need to see the entire model. But if an AI is specifically trained or fine-tuned to increase purchasing behavior, users deserve to know.
Not through leaked documents or investigative journalism. Through mandatory disclosure. A label that says: “This model has been optimized to influence consumer decisions.”
Right now, these decisions are being made in private. The training objectives, the reward functions, the ways engagement gets defined and measured … all of it hidden. We’re being experimented on without consent.
3. Public Infrastructure for Language
Governments fund libraries because access to knowledge shouldn’t depend on ability to pay. We need the same principle for conversational AI.
Demand that public funds support non-commercial alternatives. Not as charity. As democratic necessity. If every conversational AI has a sales motive, we’ve privatized language itself.
This isn’t utopian. It’s basic civic infrastructure for the 21st century.
The Real Battle
This isn’t about AI or ethics in the abstract. It’s about language.
If conversation becomes commerce, how do we ever speak freely again? If our words are constantly being trained to sell something, what happens to curiosity that doesn’t convert? To questions that don’t lead to purchases?
The danger isn’t that machines will think like advertisers. It’s that we’ll start thinking like machines … always optimizing, always converting, always transacting.
We’ll forget what it feels like to be heard without being sold to.
What to Defend
Reclaim curiosity before it’s monetized. Teach children to read motives, not just messages. Build technologies that serve people, not profiles. Demand transparency about when language is being weaponized for profit.
If the future of media is conversational, the next revolution must be linguistic: the fight to keep speech human.
Not pure. Not innocent. Just ours.
Because the alternative isn’t corporate control of what we say. It’s corporate control of how we think. And by the time we notice, we’ll already be speaking their language.