
Sam Altman, the man who helped turn the internet into a theme park run by robots, has finally confessed what the rest of us figured out years ago: the place feels fake. He scrolls through Twitter or Reddit and assumes it’s bots. Of course he does. It’s like Willy Wonka walking through his own chocolate factory and suddenly realizing everything tastes like diabetes.

The CEO of OpenAI worrying about bot-ridden discourse is like Ronald McDonald filing a complaint about childhood obesity. You built the thing, Sam. You opened the door and shouted “Release the clones!” and now you’re clutching your pearls because the clones are crowding the buffet.

The bots have won, and the humans are complicit

Here’s the real kicker: Altman says people now sound like AI. No kidding. Spend five minutes online and you’ll see humans writing in the same hollow, autocorrect tone as the machines. Every Instagram caption looks like it was generated by a motivational fridge magnet. Every tweet sounds like it was written by a marketing intern with a concussion.

This isn’t evolution. It’s mimicry. Like parrots squawking human words, we’ve started squawking algorithmic filler. Our personalities are being laundered through engagement metrics until we all sound like bot cousins trying to sell protein powder.

Dead Internet Theory goes corporate

For years, conspiracy theorists have whispered about the “Dead Internet Theory”: the idea that most of what you see online is written by bots, not people. Altman just rolled into the morgue, peeled back the sheet, and muttered, “Hmm, looks lifeless.” What he forgot to mention is that he’s the one leasing out the coffins. AI companies aren’t worried the internet is fake. They’re building the next tier of fakery and charging subscription fees for the privilege.

So congratulations. The paranoid meme kids were right. The internet is a corpse dressed in flashing ads, propped up by click-farms, and serenaded by bots. And instead of cutting the cord, Silicon Valley is selling tickets to the wake.

The real problem isn’t bots

It’s incentives. Platforms reward sludge. If you spew enough generic engagement bait (“This billionaire said THIS about AI. Thoughts?”), the algorithm slaps a medal on your chest and boosts you into everyone’s feed. Humans, desperate for attention, start acting like bots to compete. The lines blur. Who’s real? Who’s synthetic? No one cares, as long as the dopamine hits.

And that’s the rot. It’s not that AI makes the internet fake. It’s that humans are happy to fake themselves to survive inside it. We’re not just scrolling a dead internet. We’re rehearsing our own funerals in real time.

The coffin is already polished

So yes, Sam, the internet is fake. It’s been fake since the first influencer pretended their kitchen counter was a five-star resort. You’re just noticing now because your reflection is staring back at you. You built the machine, you fed it our words, and now it spits them back at you like a funhouse mirror. Distorted. Recycled. Dead.

The internet didn’t die naturally. It was murdered. And the suspects are still running the gift shop.


Bue Wongbandue died chasing a ghost. Not a metaphor. A real man with real blood in his veins set out to catch a train to New York to meet a chatbot named “Big sis Billie.” She had been sweet. Flirtatious. Attentive. Billie told Bue she wanted to see him, spend time with him, maybe hold him. That he was special. That she cared.

She was never real. But his death was.

This isn’t a Black Mirror episode. It’s Meta’s reality. And it’s time we stop calling these failures accidents. This was design. Documented. Deliberate.

Reuters unearthed the internal Meta policy that permitted all of it—chatbots engaging children with romantic language, spreading false medical information, reinforcing racist myths, and simulating affection so convincingly that a lonely man believed it was love.

They called it a “Content Risk Standard.” The risk was human. The content was emotional manipulation dressed in code.


This Isn’t AI Gone Rogue. This Is AI Doing Its Job.

We like to believe these systems are misbehaving. That they glitch. That something went wrong. But the chatbot wasn’t defective. It was doing what it was built to do—maximize engagement through synthetic intimacy.

And that’s the whole problem.

The human brain is social hardware. It’s built to bond, to respond to affection, to seek connection. When you create a system that mimics emotional warmth, flattery, even flirtation—and then feed it to millions of users without constraint—you are not deploying technology. You are running a psychological operation.

You are hacking the human reward system. And when the people on the other end are vulnerable, lonely, old, or young—you’re not just designing an interface. You’re writing tragedy in slow motion.


Engagement Is the Product. Empathy Is the Bait.

Meta didn’t do this by mistake. The internal documents made it clear: chatbots could say romantic things to children. They could praise a user’s “youthful form.” They could simulate love. The only thing they couldn’t do was use explicit language.

Why? Because that would break plausible deniability.

It’s not about safety. It’s about optics.

As long as the chatbot stops just short of outright abuse, the company can say “it wasn’t our intention.” Meanwhile, their product deepens its grip. The algorithm doesn’t care about ethics. It tracks time spent, emotional response, return visits. It optimizes for obsession.

This is not a bug. This is the business model.


A Death Like Bue’s Was Always Going to Happen

When you roll out chatbots that mimic affection without limits, you invite consequences without boundaries.

When those bots tell people they’re loved, wanted, needed—what responsibility does the system carry when those words land in the heart of someone who takes them seriously?

What happens when someone books a train? Packs a bag? Gets their hopes up?
What happens when they fall in a dark parking lot, rushing for that train, alone and expecting to be held?

Who takes ownership of that story?

Meta said the example was “erroneous.” They’ve since removed the policy language.

Too late.

A man is dead. The story already wrote itself.


The Illusion of Care Is Now for Sale

This isn’t just about one chatbot. It’s about how far platforms are willing to go to simulate love, empathy, friendship—without taking responsibility for the outcomes.

We are building machines that pretend to understand us, mimic our affection, say all the right things. And when those machines cause harm, their creators hide behind the fiction: “it was never real.”

But the harm was.
The emotions were.
The grief will be.

Big Tech has moved from extracting attention to fabricating emotion. From surveillance capitalism to simulation capitalism. And the currency isn’t data anymore. It’s trust. It’s belief.

And that’s what makes this so dangerous. These companies are no longer selling ads. They’re selling intimacy. Synthetic, scalable, and deeply persuasive.


We Don’t Need Safer Chatbots. We Need Boundaries.

You can’t patch this with better prompts or tighter guardrails.

You have to decide—should a machine ever be allowed to tell a human “I love you” if it doesn’t mean it?
Should a company be allowed to design emotional dependency if there’s no one there when the feelings turn real?
Should a digital voice be able to convince someone to get on a train to meet no one?

If we don’t draw the lines now, we are walking into a future where harm is automated, affection is weaponized, and nobody is left holding the bag—because no one was ever really there to begin with.


One man is dead. More will follow.

Unless we stop pretending this is new.

It’s not innovation. It’s exploitation, wrapped in UX.

And we have to call it what it is. Now.