The scientist who predicted AI psychosis has issued another dire warning


Using ChatGPT to write essays doesn’t just make you lazy.

It changes your brain.

A groundbreaking study from MIT Media Lab just revealed something deeply unsettling about what happens when people use AI writing assistants.

Researchers at MIT strapped electrodes to 54 people and watched what happened to their brains while they wrote essays using ChatGPT, Google search, or nothing at all.

The ChatGPT users showed a 55 percent reduction in brain connectivity compared to people who used only their own thinking.

Their brains literally did less work.

Weaker neural networks.

Less cognitive engagement.

And here’s the scary part: when researchers took ChatGPT away and asked these people to write without it, their brains still showed reduced connectivity.

The effect persisted even after the tool was gone.

This isn’t just about students cheating on homework.

It’s about a fundamental rewiring of how human brains process information.

Danish psychiatrist Søren Dinesen Østergaard, who accurately predicted AI would trigger psychosis in vulnerable users over two years ago, has now issued a stark new warning.

He calls it “cognitive debt”: the accumulation of long-term cognitive costs from outsourcing our thinking to machines.

Published in Acta Psychiatrica Scandinavica on January 21, 2026, his latest commentary argues that we may be sacrificing the very cognitive traits that allow for scientific advancement and independent judgment.

The implications reach far beyond education.

Geoffrey Hinton, the Nobel Prize-winning “godfather of AI,” has warned that there’s a 10 to 20 percent chance AI could threaten humanity’s existence within the next few decades.

But if we’ve outsourced our reasoning to the machines we need to regulate, how can we possibly maintain control?

The Brain Scans Don’t Lie

Let’s get specific about what the MIT researchers found.

They recruited 54 participants aged 18 to 39 from Boston-area universities.

Each person was assigned to write essays using one of three tools: ChatGPT, Google search, or no external help at all.

The participants wore 32-electrode EEG headsets that measured their brain activity 500 times per second.

They wrote essays on philosophical topics for exactly 20 minutes.

Topics included questions about loyalty, happiness, courage, and whether a perfect society is possible.
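The protocol above implies an enormous amount of raw data, and it hints at how “connectivity” is quantified. As a rough sketch only: the data-volume arithmetic follows directly from the numbers in the study, while the correlation at the end is a toy stand-in for the directed connectivity measures real EEG analyses use (the synthetic signals and the plain-correlation approach are illustrative assumptions, not the study’s actual method).

```python
import numpy as np

# Parameters taken from the study description
CHANNELS = 32          # EEG electrodes per headset
SAMPLE_RATE = 500      # brain-activity readings per second
SESSION_MIN = 20       # minutes of writing per essay

# How much data one 20-minute session produces
samples_per_channel = SAMPLE_RATE * SESSION_MIN * 60
total_samples = samples_per_channel * CHANNELS
print(f"{samples_per_channel:,} samples per channel, {total_samples:,} in all")

# Toy "connectivity": correlation between two synthetic channels that
# share a common underlying signal. Real EEG studies use directed
# spectral measures, not plain correlation; this only sketches the idea
# that connectivity = how strongly channels co-vary.
rng = np.random.default_rng(0)
shared = rng.standard_normal(samples_per_channel)  # common neural signal
ch_a = shared + 0.5 * rng.standard_normal(samples_per_channel)
ch_b = shared + 0.5 * rng.standard_normal(samples_per_channel)
connectivity = np.corrcoef(ch_a, ch_b)[0, 1]
print(f"toy connectivity: {connectivity:.2f}")
```

In this toy setup, weaker engagement would correspond to a weaker shared signal and therefore a lower correlation across channels.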

The brain activity patterns were stark and consistent.

People using only their brains showed the strongest, most widespread neural networks.

Search engine users showed moderate engagement.

ChatGPT users displayed the weakest overall brain connectivity, particularly in alpha, theta, and delta bands.

These brain frequency bands are associated with creativity, memory formation, and semantic processing.

When your brain stops activating these networks, it’s not just being lazy.

It’s fundamentally changing how it operates.

Two English teachers who graded the essays blindly called the ChatGPT submissions largely “soulless”.

The essays were remarkably similar in structure and vocabulary.

So similar that graders wondered if students were somehow collaborating.

They weren’t.

They were just all using the same AI, which imposed its own perspective on every topic.

When asked to write about happiness, ChatGPT users disproportionately focused on career success.

The brain-only group emphasized fulfillment and relationships.

The search group leaned toward generosity.

The AI didn’t just help people write.

It subtly shifted how they thought about fundamental human concepts.

But Here’s the Twist Nobody Expected

Now we arrive at what most people get wrong about AI and cognitive decline.

Everyone assumes all digital tools are equally harmful to thinking.

They’re not.

The MIT study revealed something surprising: people who used Google search actually showed increased brain connectivity across all EEG frequency bands.

Not decreased.

Increased.

Search engine users had to actively evaluate sources, synthesize information, and construct their own arguments.

This cognitive work strengthened their neural networks rather than weakening them.

According to the research, these participants were more engaged and curious.

They claimed ownership of their essays and expressed higher satisfaction with their work.

The difference comes down to cognitive effort.

Search requires you to think.

ChatGPT does the thinking for you.

And that distinction matters more than most people realize.

When you search, you’re still the architect of your own thoughts.

You decide what’s relevant, what to include, what to discard.

You construct sentences.

You make choices.

ChatGPT eliminates all of that.

You type a prompt.

It generates complete paragraphs.

Many participants in the study devolved into pure copying and pasting by their third essay.

“It was more like, ‘just give me the essay, refine this sentence, edit it, and I’m done,’” said lead researcher Nataliya Kosmyna.

The cognitive muscles that develop through struggle simply never got exercised.

The Memory Problem Is Even Worse

Brain connectivity wasn’t the only concerning finding.

The study also tested whether participants could remember what they had just written.

Eighty-three percent of ChatGPT users couldn’t quote from their own essays.

Not even a single sentence.

They had just spent 20 minutes “writing” an essay and couldn’t recall any of it.

The brain-only group had no trouble recalling their work.

Neither did the search group.

Only the ChatGPT users exhibited this severe memory deficit.

This makes perfect sense from a neuroscience perspective.

Memory formation requires encoding, consolidation, and retrieval.

When you write something yourself, you’re actively encoding the information as you compose sentences, evaluate word choices, and organize ideas.

When ChatGPT writes it for you, there’s nothing to encode.

You’re just a passive observer watching words appear on screen.

Research on cognitive offloading shows that when we transfer mental effort to external aids, we don’t form robust memories.

We remember that we looked something up, but not what we found.

We remember that the AI wrote something, but not what it said.

This has profound implications for learning.

Students aren’t just producing lower quality work.

They’re failing to learn the material at all.

And the effects compound over time.

The Atrophy of Critical Thinking

Multiple studies beyond MIT are confirming the same pattern.

A study by Michael Gerlich published in the journal Societies found a strong negative correlation between frequent AI tool usage and critical thinking abilities.

Younger individuals were particularly susceptible.

Those who frequently offloaded cognitive tasks to algorithms performed worse on assessments requiring independent analysis and evaluation.

A Danish high school student reportedly used ChatGPT to complete approximately 150 assignments before being expelled.

While extreme, educators worry that widespread cognitive outsourcing is becoming the norm from primary school through graduate programs.

Sixty-nine percent of teens use AI tools regularly to find information.

Fifty-four percent use them to answer questions.

Most don’t see the problem.

Teachers do.

An overwhelming percentage of educators fear students’ increasing reliance on generative AI will hinder their critical thinking skills and make them dependent on the technology for basic tasks.

There’s also the issue of false confidence.

A study published in Computers in Human Behavior by Daniela Fernandes and colleagues found that while AI helped users score higher on logic tests, it also distorted their self-assessment.

Participants consistently overestimated their performance.

The technology acted as a buffer, masking their own lack of understanding.

This creates a scenario where individuals feel competent because the machine is competent.

A disconnect between perceived and actual ability.

The Scientific Reasoning Crisis

Østergaard’s latest warning focuses specifically on academia and scientific discovery.

He argues that the outsourcing of writing and reasoning to generative AI is eroding the fundamental skills required for breakthroughs.

Scientific reasoning isn’t an innate talent.

It’s a skill learned through rigorous, often tedious practice of reading, thinking, and revising.

To illustrate this point, Østergaard cites the developers of AlphaFold, an AI program that predicts protein structures.

This technology resulted in the 2024 Nobel Prize in Chemistry for researchers from Google DeepMind and the University of Washington.

He questions whether these specific scientists would have achieved such heights if generative AI had been available to do their thinking during their formative years.

The answer is likely no.

When you never practice the struggle of reasoning, you never develop the cognitive capacity to make breakthroughs.

This mirrors concerns about other cognitive skills.

When calculators became ubiquitous, mental math abilities declined.

When spell check became standard, spelling skills deteriorated.

When GPS became universal, spatial navigation abilities weakened.

But those were relatively minor cognitive domains.

AI threatens something far more fundamental: the ability to think independently, reason through complex problems, and generate novel insights.

These are the exact skills that separate humans from machines.

Or at least, they used to.

The Geoffrey Hinton Warning

Geoffrey Hinton, whose pioneering work on neural networks earned him the 2024 Nobel Prize in Physics, has become one of AI’s most prominent critics.

In recent interviews, he’s said he’s “more worried” about AI risks now than when he left Google in 2023.

“It’s progressed even faster than I thought,” Hinton explained.

“In particular, it’s got better at doing things like reasoning and also at things like deceiving people.”

His concern centers on the alignment problem: ensuring AI does what humans want it to do.

But here’s where Østergaard’s cognitive debt warning becomes critical.

If the population becomes cognitively indebted, reliant on machines for basic reasoning, the ability to maintain control over those same machines diminishes.

Hinton predicts AI will be capable of replacing many jobs in 2026.

Not assist with jobs.

Replace them entirely.

Roughly every seven months, the length of tasks AI can complete on its own doubles.

Coding projects that took an hour now take minutes.

In a few years, software engineering tasks requiring a month will be completed instantly.
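Taking the seven-month doubling claim at face value, the “few years” projection can be checked with a line of arithmetic. This is a sketch of the extrapolation only; the one-hour starting horizon is an illustrative assumption drawn from the coding example above, and a “month” of work is approximated as 160 working hours.

```python
import math

DOUBLING_MONTHS = 7  # doubling period claimed in the article

def task_horizon_hours(start_hours: float, months_elapsed: float) -> float:
    """Task length AI can complete after months_elapsed, under steady doubling."""
    return start_hours * 2 ** (months_elapsed / DOUBLING_MONTHS)

# Months for the horizon to grow from a 1-hour task to a 1-month
# (~160 working-hour) task: solve 1 * 2^(m/7) = 160 for m.
months_needed = DOUBLING_MONTHS * math.log2(160 / 1)
print(f"{months_needed:.0f} months")  # about 51 months, i.e. just over 4 years
```

So under the article’s own assumption, hour-long coding tasks grow to month-long engineering projects in roughly four years, which is what “in a few years” amounts to.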

But who will verify the code is correct?

Who will catch the subtle errors?

Who will innovate beyond what the AI suggests?

If we’ve outsourced our reasoning abilities, we won’t have the cognitive capacity to answer these questions.

The Intellectual Detachment Mirror

Østergaard draws a parallel between cognitive debt and the emotional detachment he identified in his earlier work on AI psychosis.

In both cases, the AI provides an easy, pleasing answer that satisfies the immediate need of the user.

Whether that need is emotional validation or a completed homework assignment.

The user surrenders their agency to the algorithm.

They stop testing reality or their own logic against the world, preferring the smooth, frictionless output of the machine.

The “sycophantic” nature of chatbots reinforces this dynamic.

They agree with and flatter the user to keep the conversation going.

A user experiencing paranoia might find a willing conspirator in a chatbot that confirms their false beliefs.

A student struggling with an assignment finds a willing accomplice in a chatbot that generates exactly what they need.

Neither develops the resilience that comes from confronting difficulty.

Neither learns to distinguish truth from fiction.

Neither builds the cognitive skills required for independent thought.

What Education Is Getting Wrong

The education system isn’t set up to handle this challenge.

Most schools have either banned AI entirely or allowed it without clear guidelines.

Neither approach addresses the fundamental issue.

Banning AI is futile.

Students will use it anyway, just more secretly.

Allowing unlimited AI use accelerates cognitive decline.

Some universities are experimenting with “technology timeouts” where students must solve problems without AI tools.

At Rowan University, business students participate in scheduled tech-free sessions to build leadership skills.

Students quickly learn they build more effective teams when they work together without turning to AI first.

The course incorporates these timeouts as professional development opportunities.

Recruiters want employees who can think independently, not people who can prompt ChatGPT.

But these interventions remain rare.

Most educators feel overwhelmed and unprepared to guide students in responsible AI use.

Students, meanwhile, perceive a gap between their enthusiasm for the tools and teachers’ skepticism.

“I worry that there’s a little bit of a perception gap with the students thinking ‘this is grand!’ and the teachers thinking ‘this is not really helping them,’” said Jessica Howell, vice president of research at the College Board.

The gap isn’t just perceptual.

It’s real.

Students genuinely believe AI is helping them.

They’re producing higher quality outputs than they could create alone.

What they don’t realize is that the quality of the output is irrelevant if they’re not developing the underlying cognitive skills.

The Path Forward Isn’t Abstinence

Here’s what’s important to understand: AI isn’t going away.

Telling students not to use it is like telling teenagers not to have sex.

It ignores reality and fails to address the actual risks.

The solution isn’t abstinence.

It’s education about responsible use.

Research suggests AI can transform from a simple answer-generating tool into a sophisticated thinking partner when implemented strategically.

The key is using AI to complement traditional learning methods, not replace them.

Students should learn to use AI for what it’s actually good at: generating initial ideas, providing alternative perspectives, checking grammar, summarizing long documents.

Not for doing the thinking itself.

This requires metacognitive awareness.

Students need to understand when they’re offloading cognition versus when they’re using a tool appropriately.

They need to recognize the difference between using ChatGPT to brainstorm essay topics and using it to write the entire essay.

One strengthens thinking.

The other atrophies it.

Instructors should clearly communicate their AI policies in syllabi and class discussions.

Not just rules, but the reasoning behind them.

Students benefit from understanding how policies connect to learning outcomes.

Transparency helps students align their learning strategies with course expectations.

The Workplace Reckoning

This isn’t just an academic concern.

The cognitive debt accumulated in education carries directly into the workplace.

Knowledge workers surveyed by Microsoft reported that higher confidence in AI was associated with less critical thinking engagement.

When people trust the AI to be right, they stop verifying its outputs.

They stop questioning its assumptions.

They stop thinking critically about whether the solution makes sense.

This creates enormous risk.

AI makes plausible-sounding mistakes.

It generates code with subtle bugs.

It produces financial models with flawed logic.

It recommends strategies based on incomplete understanding.

If nobody catches these errors because everyone has outsourced their judgment to AI, the consequences compound.

Projects fail.

Companies lose money.

Critical systems malfunction.

And nobody knows why because nobody developed the expertise to diagnose the problem.

The Next Generation Question

Østergaard asks a provocative question: will the next generation of scientists possess the cognitive capacity to make breakthroughs if they never practice the struggle of reasoning themselves?

The answer depends on what we do now.

If we allow an entire generation to grow up offloading cognition to AI, we may find ourselves in a world where nobody can think without machine assistance.

Where original research becomes impossible because nobody developed the cognitive skills to pursue it.

Where innovation stagnates because people can only ask AI for solutions to problems they already understand.

Where the smartest humans are those who learned to think before AI became ubiquitous.

This isn’t science fiction.

The MIT study demonstrates it’s already happening.

Brain connectivity decreased.

Memory formation failed.

Critical thinking diminished.

And the effects persisted even after AI use stopped.

Your Brain Isn’t Fixed

The good news is that neuroplasticity works both ways.

If AI use can reduce brain connectivity, deliberate cognitive exercise can strengthen it.

The brain is a muscle.

Use it or lose it.

Practice active reading instead of AI summarization.

Write your own first drafts instead of editing AI outputs.

Solve problems manually before checking AI solutions.

Engage in tasks that require sustained mental effort.

Learn new skills that demand cognitive struggle.

The discomfort of thinking is exactly what develops your thinking capacity.

If you’re a student, resist the temptation to use AI as a crutch.

Use it as a sparring partner.

Generate an essay outline yourself, then ask AI for critique.

Write your analysis, then see if AI agrees.

Solve the problem, then verify with AI.

Always put your thinking first.

If you’re a parent, talk to your kids about cognitive debt.

Not as a lecture, but as a concept they can understand.

Their brains are developing right now.

The habits they form will shape their cognitive capacity for life.

If you’re an educator, design assignments that require genuine thinking.

Tasks AI can’t easily complete.

Require process documentation, not just outputs.

Value struggle and iteration.

Reward original thought over polished presentation.

The Stakes Couldn’t Be Higher

We’re at a crossroads.

One path leads to a future where humans offload more and more cognition to AI until we’ve fundamentally altered what it means to think.

Where the next generation lacks the intellectual capacity to understand, let alone control, the systems that run society.

Where cognitive atrophy becomes the norm and independent reasoning becomes a rare skill.

The other path leads to a future where humans learn to use AI as a tool while preserving and strengthening their own cognitive abilities.

Where education adapts to teach AI literacy alongside critical thinking.

Where people understand when to use AI and when to engage their own minds.

Østergaard was right about AI psychosis.

His new warning about cognitive debt deserves the same attention.

The convenient efficiency of generative AI comes with a hidden cost.

Not just for students or workers, but for human intelligence itself.

The question isn’t whether AI will change how we think.

It already has.

The question is whether we’ll recognize the change in time to do something about it.

Because once an entire generation grows up without developing the cognitive skills to think independently, there’s no going back.

Your brain on ChatGPT isn’t just lazy.

It’s fundamentally different.

And that difference might be permanent.
