Imagine giving a supercomputer a brain teaser and watching it stare blankly, then start mumbling nonsense, then suddenly stop talking altogether.

That’s basically what Apple just did.

This week, Apple researchers released a paper called “The Illusion of Thinking” — and it might go down as the moment we all collectively realized: AI can fake intelligence, but it can’t think.

Let’s break this down so your non-tech uncle, your boss, and your teenage cousin can all understand it.


The Puzzles That Broke the Machines

Apple fed today’s smartest AI models classic logic puzzles. Simple ones at first: Tower of Hanoi (move some disks between pegs), river crossings (get everyone across without drowning your goat).

The AIs did okay.

Then Apple made the puzzles harder. Not impossible — just more steps, more rules.

That’s when the collapse happened.

These large reasoning models (the ones that are supposed to “think” better than chatbots) didn’t just struggle.

They failed. Completely. Like, zero accuracy.

They didn’t even try to finish their reasoning. They just… gave up.

Imagine hiring a math tutor who can add 2+2 but short-circuits when asked 12+34.
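To see why “just more steps” is brutal, take Tower of Hanoi, one of the puzzles the paper used. Here’s a toy sketch (my own illustration, not Apple’s test harness): the shortest correct solution has 2ⁿ − 1 moves, so every extra disk roughly doubles the reasoning chain a model has to get exactly right.

```python
# Toy sketch: Tower of Hanoi, one of the puzzles from Apple's paper.
# The minimum solution length is 2**n - 1 moves, so each added disk
# roughly doubles the chain of steps -- that's the "more steps" scaling.

def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the full move list for n disks (classic recursion)."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # shift n-1 disks aside
    moves.append((source, target))               # move the biggest disk
    hanoi(n - 1, spare, target, source, moves)   # shift them back on top
    return moves

for n in range(1, 11):
    print(n, len(hanoi(n)))  # 1, 3, 7, 15, ... (2**n - 1)
```

Three disks is 7 moves; ten disks is 1,023. A human with a pencil can follow the recipe forever. The models couldn’t.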


What It Means (And Why You Should Care)

This wasn’t some random test. This is Apple — the company that makes your phone and, oh yeah, just rolled out its own AI systems.

So why would they publish this?

Because it reveals something nobody wants to say out loud:

AI right now is a brilliant bullsh*t artist.

It can write essays. It can code. It can mimic thinking. But as soon as you throw a multi-step logic problem at it, it folds faster than a cheap lawn chair.

This matters a lot because we’re putting these systems into:

  • Healthcare
  • Legal advice
  • Autonomous vehicles
  • Education

…and assuming they know what they’re doing.

But Apple just proved: They don’t.


The Illusion of Thinking

Most AIs work by predicting the next word in a sentence. It’s fancy autocomplete. Chain-of-thought prompting (like showing your work in math class) helps — until it doesn’t.
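“Fancy autocomplete” is not a metaphor. Here’s a deliberately tiny version (my own sketch, orders of magnitude simpler than a real model): count which word follows which, then always pick the likeliest continuation. Real systems use neural networks over vast corpora, but the core move is the same, and notice what’s missing — any check on whether the output is *true*.

```python
# Toy "fancy autocomplete": pick the next word by frequency alone.
# Real LLMs are vastly more sophisticated, but the core move is the
# same -- predict the likeliest next token, with no built-in model of
# whether the resulting sentence is actually correct.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Return the most frequent continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- the most common follower of "the"
```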

In fact, Apple found that when tasks got harder, the models actually put in less reasoning effort, spending fewer “thinking” tokens, not more. Like a student who panics mid-exam and starts guessing.

This is what Apple called “complete accuracy collapse.”

Translation: AI doesn’t know when it’s wrong. It just sounds like it knows.

And that’s the danger.


So What Do We Do?

The takeaway isn’t “AI is useless.”

It’s: Stop worshipping the illusion.

We need:

  • Better benchmarks (that actually test reasoning, not memorization)
  • Systems that know when they don’t know
  • Hybrid models that mix language prediction with real logic engines
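The second item, “systems that know when they don’t know,” can be made concrete with a tiny abstention wrapper. This is my own hypothetical sketch, not anything from the paper: instead of always emitting its best guess, the system refuses to answer when its confidence falls below a threshold.

```python
# Toy sketch of "knowing when you don't know": wrap a predictor so it
# abstains instead of guessing when confidence is low. The labels,
# scores, and threshold are hypothetical illustrations.

def classify_with_abstention(scores, threshold=0.8):
    """scores: dict of label -> probability. Return a label, or None."""
    label, prob = max(scores.items(), key=lambda kv: kv[1])
    if prob < threshold:
        return None  # "I don't know" beats a confident wrong answer
    return label

print(classify_with_abstention({"valid": 0.95, "invalid": 0.05}))  # valid
print(classify_with_abstention({"valid": 0.55, "invalid": 0.45}))  # None
```

Today’s chatbots do the opposite: the less sure they should be, the more fluent they sound.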

And most importantly, we need humility. From engineers. From startups. From governments. From us.

Because right now, we’re mistaking a parrot for a philosopher.