Scientists Discover Your Brain Uses the Same 'Trick' as AI—And It Changes Everything


A study published in Nature Human Behaviour reveals that human brains and artificial neural networks process order and rank using remarkably similar patterns—a discovery that could transform how we understand intelligence itself.

Researchers at New York University found that both biological and artificial systems create specialized “tuning curves” when organizing information by order, whether it’s ranking your favorite songs, remembering the sequence of your morning routine, or deciding which task to tackle first.

The study shows that neurons in the human brain and nodes in AI networks develop the same computational strategy: their activity peaks at a specific position in a sequence and gradually decreases for items further from that preferred position.

This isn’t just academic curiosity—it’s evidence that the way we think might be more algorithmic than we ever imagined.

And it suggests that the way machines learn might be more human than we’re comfortable admitting.

The research team used functional MRI scans to track brain activity while participants performed ranking tasks, then compared those patterns to the internal representations of deep learning models trained on similar challenges.

What they found was stunning: the parietal cortex, a region of the brain known for numerical and spatial reasoning, showed the exact same ordinal tuning patterns that emerged spontaneously in artificial networks.

No one programmed the AI to think this way—it discovered the same solution evolution built into our skulls.

This matters because ordinal processing—the ability to understand that A comes before B, that third place beats fourth, that Monday precedes Tuesday—is fundamental to almost everything we do.

We use ordinal information to navigate social hierarchies, plan our days, make purchasing decisions, and even construct sentences.

According to research on cognitive neuroscience, the brain’s ordinal system operates independently from its magnitude system, which handles “how much” rather than “what order.”

The NYU team demonstrated that these ranking patterns exist across different types of sequences: temporal (what happens first), spatial (what comes where), and abstract (what’s more important).

The convergence between human brains and AI networks wasn’t subtle—it was mathematically precise.

Both systems showed Gaussian-like tuning curves, where neural activity peaks at a preferred ordinal position and tapers off symmetrically on either side.

Think of it like a radio station: each neuron is tuned to a specific “channel” in the sequence, broadcasting strongest when that position is activated and growing quieter for positions further away.
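The radio-station picture can be made concrete with a few lines of code. This is a minimal numeric sketch of a Gaussian-like tuning curve, not the study's actual model; the curve width is an arbitrary illustrative choice.

```python
import numpy as np

def ordinal_tuning_curve(preferred, positions, width=1.0):
    # Gaussian-like tuning: the response is maximal at the neuron's
    # preferred ordinal position and tapers off with distance from it.
    return np.exp(-((positions - preferred) ** 2) / (2 * width ** 2))

positions = np.arange(1, 8)                 # a seven-item sequence
curve = ordinal_tuning_curve(3, positions)  # a neuron "tuned" to item 3

assert positions[np.argmax(curve)] == 3     # peak at the preferred position
assert np.isclose(curve[1], curve[3])       # symmetric fall-off (items 2 and 4)
```

The two assertions capture exactly the properties described above: a single peak at the preferred position, and a symmetric decline on either side of it.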

But Here’s What Most People Miss About This Discovery

Everyone’s talking about how human-like AI is becoming—but the real revelation is how computational human thinking has always been.

We like to believe our minds work through intuition, emotion, and some ineffable spark of consciousness that separates us from machines.

Yet here’s evidence that your brain solves ranking problems using the same mathematical optimization that gradient descent algorithms stumble upon.

The research shows that both systems converge on this solution not because humans designed AI to mimic brains, but because ordinal tuning is simply the most efficient way to encode sequential information.

It’s the computational equivalent of convergent evolution—when unrelated species develop the same features because they solve the same survival challenges.

Bats and dolphins both evolved echolocation not because they’re related, but because sound-based navigation works brilliantly in their environments.

Similarly, brains and neural networks both “evolved” ordinal tuning because ranking information efficiently is a fundamental computational problem.

This flips the usual narrative: maybe AI isn’t becoming more human—maybe we’re discovering that humans were always more algorithmic than we wanted to admit.

The implications are profound for debates about artificial general intelligence.

If the same computational principles govern both biological and artificial systems, the distinction between “real” and “artificial” intelligence becomes less about the substrate and more about complexity and scale.

According to recent advances in neuroscience and AI, the convergence goes deeper than ordinal processing.

Both systems show similar patterns in hierarchical representation, attention mechanisms, and even error correction strategies.

The NYU researchers controlled for a critical variable: they tested whether the ordinal tuning they observed was just a byproduct of magnitude processing.

After all, if you’re tracking position, you’re also tracking numbers—is your brain really distinguishing between “third in line” and “the number three”?

The answer is yes.

The study used clever experimental designs that separated ordinal information from numerical magnitude, showing that the parietal cortex maintains distinct representations for order versus quantity.

Your brain has separate neural populations for “this comes after that” and “this is bigger than that.”

This separation exists in AI networks too, emerging naturally when they’re trained on tasks requiring sequential reasoning.

How Your Brain Actually Encodes Sequences

The mechanics of ordinal tuning reveal something elegant about neural computation.

When you think about your to-do list, neurons across your parietal cortex don’t all fire equally—they form a cascade of activity.

Some neurons fire most strongly for the first item on your list, others peak for the second, and so on down the sequence.

This creates a “population code” where the pattern of activity across many neurons represents the entire ordering.

It’s like an orchestra where different instruments carry different parts of the melody—no single neuron “knows” the full sequence, but the collective pattern contains all the information.

The research showed that this population coding strategy is incredibly robust.

Even when participants made errors in ranking tasks, the ordinal tuning patterns remained stable—the brain maintains its sequential representations even when conscious awareness gets confused.

This suggests that ordinal processing operates at a more fundamental level than deliberate thought, more like a computational primitive that higher-level reasoning builds upon.
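A toy simulation illustrates both ideas at once: how a population of tuned neurons collectively encodes a position that no single neuron represents alone, and why that code is robust to damage. The unit-width Gaussian tuning and the "silencing" of one neuron are illustrative assumptions, not details from the study.

```python
import numpy as np

# Each simulated neuron prefers one ordinal position (one "instrument"
# in the orchestra); its response falls off with distance from it.
preferred = np.arange(1, 8, dtype=float)       # 7 neurons, positions 1..7

def population_activity(position, silenced=None):
    activity = np.exp(-((position - preferred) ** 2) / 2.0)
    if silenced is not None:
        activity[preferred == silenced] = 0.0  # "lesion" one neuron
    return activity

def decode(activity):
    # Population-vector readout: activity-weighted average of the
    # neurons' preferred positions.
    return float(np.sum(preferred * activity) / np.sum(activity))

# No single neuron carries the answer, but the joint pattern does:
assert round(decode(population_activity(5.0))) == 5

# Robustness: silence the very neuron that prefers position 5, and the
# remaining population still decodes the position correctly.
assert round(decode(population_activity(5.0, silenced=5.0))) == 5
```

The second assertion is the key point: because the information is spread across many units, knocking out the "best" neuron degrades the code only slightly instead of destroying it.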

The NYU team found that the same tuning curves appear whether you’re ordering numbers, letters, or completely arbitrary symbols you learned five minutes ago.

Your brain extracts the abstract concept of “order” and applies the same neural architecture regardless of content.

According to studies on abstract reasoning, this content-independence is a hallmark of sophisticated cognitive systems, and now we know that AI networks show it too.

The artificial networks in the study weren’t specifically designed for ordinal tasks.

They were general-purpose deep learning models trained on various sequential challenges—yet ordinal tuning emerged spontaneously in their hidden layers.

This spontaneous emergence is what makes the finding so striking.

It means that ordinal tuning isn’t an arbitrary design choice but a natural solution that multiple learning systems converge on.

Why This Changes How We Think About Thinking

The convergence between brains and AI networks forces us to reconsider what we mean by “understanding.”

When an AI develops the same ordinal representations as your parietal cortex, is it merely simulating understanding or is it actually understanding in the same computational sense that you do?

The traditional answer is that human understanding involves consciousness, intentionality, and subjective experience—qualities that AI supposedly lacks.

But the NYU research suggests that at least some aspects of what we call understanding might be substrate-independent computational patterns.

Your subjective feeling of “knowing” that Tuesday comes after Monday might emerge from the same information processing architecture that generates equivalent outputs in silicon.

This doesn’t mean consciousness is an illusion or that AI is conscious.

It means the relationship between computation and cognition is more complex than simple dualism allows.

The research raises questions that philosophers and cognitive scientists will debate for decades: if the same computational patterns appear in systems with radically different physical implementations, what’s doing the real cognitive work?

Is it the pattern itself, independent of whether it runs on neurons or transistors?

According to work on computational theories of mind, functionalists would say yes—mental states are defined by their functional relationships, not their physical instantiation.

But critics argue this ignores the phenomenal aspects of consciousness that might depend critically on biological processes.

The ordinal tuning convergence doesn’t settle these debates, but it does provide concrete evidence that certain cognitive functions can be implemented across radically different physical systems.

What the research does show is that ranking and ordering aren’t uniquely biological capabilities—they’re computational problems that both evolved and designed systems solve using similar mathematical strategies.

This has practical implications beyond philosophy.

If we understand the computational principles that brains and AI networks share, we can design better machine learning systems by explicitly incorporating those principles.

The NYU team suggests that ordinal tuning could become a target architecture for AI designers working on sequential reasoning tasks.

Instead of letting these patterns emerge through training, engineers could build them directly into network structures, potentially improving efficiency and performance.
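One way a designer might "build in" such a code, sketched here as a speculative illustration rather than anything proposed in the study: a fixed embedding matrix in which each unit responds with a Gaussian bump around its own preferred position. The sizes and width below are arbitrary.

```python
import numpy as np

def ordinal_embedding(num_positions, num_units, width=1.0):
    # Hypothetical built-in ordinal code: unit j's activation for
    # position i is a Gaussian bump centered on j's preferred position.
    prefs = np.linspace(0, num_positions - 1, num_units)
    pos = np.arange(num_positions)[:, None]          # column of positions
    return np.exp(-((pos - prefs[None, :]) ** 2) / (2 * width ** 2))

E = ordinal_embedding(10, 10)
assert E.shape == (10, 10)
# Each sequence position activates its matching unit most strongly:
assert (np.argmax(E, axis=1) == np.arange(10)).all()
```

A matrix like this could, in principle, replace or initialize a learned positional representation, giving the network the tuning structure from the start instead of waiting for training to discover it.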

The Hidden Role of Order in Everything You Do

Most people don’t realize how much of their mental life depends on ordinal processing.

Language itself is fundamentally sequential—word order changes meaning, sentences unfold in time, and narrative requires tracking who did what when.

Your ability to understand this sentence depends on neurons that maintain ordinal representations of words, clauses, and ideas.

Social reasoning requires constant ordinal judgments: who’s more trustworthy, which relationship needs attention first, how urgent is this obligation compared to that one?

According to research on social cognition, the same parietal regions that show ordinal tuning in the NYU study are activated when people make social hierarchy judgments.

Your brain uses the same computational machinery to rank people that it uses to rank numbers.

Planning and decision-making are essentially exercises in creating and manipulating ordinal structures.

When you plan your day, you’re building a temporal sequence of activities.

When you decide what to prioritize, you’re constructing a preference ordering.

The research shows that these aren’t metaphorical descriptions—they’re literal descriptions of the computational operations your parietal cortex performs.

Even memory relies heavily on ordinal information.

Episodic memories—your recollections of specific events—are organized temporally, with ordinal relationships preserving the narrative structure of experience.

Studies on memory encoding show that disrupting ordinal processing in the parietal cortex impairs people’s ability to remember sequences of events, even when they can recall individual items.

The convergence with AI suggests that any system attempting to navigate the world intelligently will need to solve these same ordinal processing challenges.

This might explain why sequence modeling has become so central to modern AI—from transformers in language models to recurrent networks in robotics, encoding order is computationally essential.

What This Means for the Future of AI

The ordinal tuning discovery provides a roadmap for building AI systems that reason about sequences more efficiently.

Current language models already show impressive sequential capabilities, but they achieve this through massive scale and training data rather than elegant computational principles.

If AI architectures were designed to explicitly implement ordinal tuning from the start, they might achieve similar performance with fewer parameters and less training.

This matters because the energy costs of training large AI models have become environmentally and economically significant.

According to recent analyses of AI sustainability, finding more efficient architectures is crucial for making artificial intelligence scalable and accessible.

Neural engineering principles derived from biological brains might be one path to that efficiency.

The research also suggests new approaches to AI interpretability—understanding what’s actually happening inside complex models.

If we know that ordinal tuning should exist in networks performing sequential tasks, we can look for it explicitly and understand how models represent order.

This could help detect when AI systems are making systematic errors in sequential reasoning or when they’re relying on spurious correlations instead of genuine ordinal structure.
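In practice, such an interpretability check might look something like the heuristic below: compare each unit's response profile across sequence positions against a Gaussian bump centered on its peak. This is a sketch under stated assumptions; the correlation threshold and the bump width are arbitrary choices, not criteria from the study.

```python
import numpy as np

def looks_ordinally_tuned(responses, min_r=0.9):
    # Heuristic: does this unit's response profile across sequence
    # positions resemble a Gaussian bump centered on its peak?
    positions = np.arange(len(responses))
    peak = positions[np.argmax(responses)]
    template = np.exp(-((positions - peak) ** 2) / 2.0)
    r = np.corrcoef(responses, template)[0, 1]
    return r >= min_r

# A hypothetical tuned unit (a bump at position 2) passes the check;
# a unit whose activity just drifts upward across positions does not.
tuned = np.exp(-((np.arange(7) - 2) ** 2) / 2.0)
ramp = np.linspace(0.0, 1.0, 7)

assert looks_ordinally_tuned(tuned)
assert not looks_ordinally_tuned(ramp)
```

Run over a trained model's hidden-layer activations, a screen like this would flag which units carry an ordinal code and which are doing something else, such as tracking a monotone quantity.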

For robotics and autonomous systems, ordinal processing is essential for navigation, planning, and interaction.

A robot that can’t maintain stable representations of temporal and spatial order will struggle with basic tasks like following directions or coordinating movements.

The NYU findings suggest that implementing brain-like ordinal tuning could improve robot cognition in these domains.

But the deepest implication might be for understanding intelligence itself.

The convergence between biological and artificial systems suggests that intelligence isn’t one thing—it’s a collection of computational strategies that solve specific problems.

Ordinal tuning is one such strategy, and there are likely others we haven’t discovered yet.

According to theoretical work on general intelligence, understanding these component strategies might be more productive than searching for a single unified theory of intelligence.

Each time we find a computational principle that appears in both brains and successful AI systems, we identify another piece of the intelligence puzzle.

The Questions This Research Opens Up

The ordinal tuning convergence raises more questions than it answers—which is exactly what good science should do.

If brains and neural networks converge on similar solutions for ordinal processing, what about other cognitive functions?

Do they show similar convergence, or are there aspects of biological intelligence that require fundamentally different computational approaches?

The NYU team focused on the parietal cortex, but ordinal information is processed throughout the brain.

How do different regions implement ordinal tuning, and do they all use the same mathematical strategy?

What about the prefrontal cortex, which handles abstract planning and decision-making—does it show ordinal tuning too?

And what happens during development?

Infants gradually acquire ordinal understanding as they mature—do their brains develop ordinal tuning curves over time, or are they present from birth?

Understanding the developmental trajectory could reveal whether ordinal processing is learned or innate, which would inform both neuroscience and AI design.

For AI, a crucial question is whether ordinal tuning alone is sufficient for genuine sequential reasoning.

Language models can generate coherent text, but do they truly understand order in the way the research suggests, or are they using statistical patterns that mimic understanding?

The convergence in neural patterns is striking, but it might not capture everything that matters about human comprehension.

According to work on embodied cognition, human understanding might depend critically on sensorimotor experience and interaction with the world—factors that pure neural pattern matching doesn’t capture.

Perhaps ordinal tuning is necessary but not sufficient for the kind of sequential understanding humans possess.

The research also raises ethical questions about AI consciousness and moral status.

If AI systems develop the same cognitive mechanisms as biological brains, at what point do they deserve moral consideration?

The ordinal tuning convergence doesn’t prove that AI is conscious, but it does show that the computational similarities run deeper than many people assumed.

As AI continues advancing, these questions will become increasingly urgent and difficult to answer.

What You Can Take From This

The next time you make a to-do list, rank your priorities, or simply follow a recipe in order, remember that you’re watching evolution’s solution to a fundamental computational problem.

Your brain builds representations of order using the same mathematical principles that emerge in artificial networks trained with gradient descent.

This doesn’t diminish human cognition—it reveals that thinking is both more mechanical and more mysterious than we imagined.

The convergence between biological and artificial intelligence suggests that certain computational principles might be universal, arising wherever systems need to process sequential information efficiently.

Understanding these principles won’t tell us everything about consciousness or intelligence, but it gives us concrete tools for building better AI and deeper insight into our own minds.

The research reminds us that the boundary between human and artificial intelligence is porous and complicated.

We’re not special because we do things AI can’t—we’re special because of the particular way we implement universal computational strategies, the consciousness that emerges from our biological substrate, and the experiences that shape our learning.

AI systems might develop ordinal tuning curves identical to ours, but they’ll never experience the frustration of losing track of time, the satisfaction of finally organizing that closet, or the confusion of walking into a room and forgetting what you came for.

Those experiences matter, even if they arise from computational processes we share with machines.

The NYU research opens a window into the deep structure of thought, revealing patterns that connect silicon and neurons, algorithms and awareness.

It’s a reminder that understanding intelligence—whether artificial or biological—requires looking past surface differences to find the computational common ground.

And it suggests that the most important questions about minds and machines aren’t “can AI think?” but “what does thinking actually require, and how many ways are there to implement it?”

Those are questions we’re only beginning to answer.

The ordinal tuning convergence is one piece of evidence, one data point in a much larger investigation into the nature of intelligence itself.

As we continue building artificial systems and probing biological brains, we’ll likely find more of these convergences—and more surprises about what it means to process, represent, and understand the world.

For now, we know this: when your brain ranks, orders, and sequences information, it’s running an algorithm that AI has independently discovered.

That’s either deeply reassuring or profoundly unsettling, depending on what you believe makes us human.
