Your Brain Isn't a Computer and That Changes Everything
ELI5/TLDR
Two leading scientists — Anil Seth (neuroscientist) and Michael Levin (biologist) — argue that calling the brain a “computer” was always just a metaphor, and a misleading one. Seth says you can’t cleanly separate what a brain does from what it’s made of, which means copying consciousness into silicon may not work. Levin goes further: even a six-line sorting algorithm does things nobody asked it to do, so watching an AI system’s output tells us little about what’s really going on inside. The two are now collaborating on living robots built from skin cells, testing whether these creatures obey the same perceptual laws as evolved animals — without any evolutionary history to draw on.
Summary
This is a conversation hosted by Curt Jaimungal on his Theories of Everything podcast, featuring the first-ever joint discussion between Anil Seth (professor of neuroscience at University of Sussex, author of Being You) and Michael Levin (professor of biology at Tufts, builder of xenobots and anthrobots). The conversation orbits one big idea: the “brain as computer” metaphor has done real damage to how we think about consciousness, biology, and even machines themselves.
Seth argues that the biological substrate matters — you can’t peel the software off the wetware. Levin agrees but goes a step weirder: he thinks even machines do more than their algorithms prescribe. They discuss emergent behaviors in bubble sort algorithms, islands of consciousness in surgically disconnected brain hemispheres, the Fermi paradox, psychedelics research, and how to measure emergence mathematically. The tone is collegial, curious, and occasionally mind-bending.
Key Takeaways
- The brain-as-computer metaphor is a metaphor, not a fact. We forgot this somewhere along the way. The whole idea of “substrate independence” (consciousness could run on any hardware) depends on a clean separation between software and hardware that doesn’t exist in biology.
- You can simulate a brain on a computer. That doesn’t mean you’ve created consciousness. Simulating weather doesn’t make you wet. The simulation is useful, but it’s not the thing itself — unless computation is literally all that matters, which Seth doubts.
- Bubble sort has secret side quests. Levin’s team found that a six-line deterministic sorting algorithm does things nobody programmed it to do — behaviors a behavioral scientist would recognize as meaningful. This isn’t chaos or randomness. It’s structured activity happening in the “gaps” between the algorithm’s explicit instructions.
- What an AI says may have nothing to do with what it’s doing. If bubble sort can have hidden behaviors, language models almost certainly do too. Watching GPT’s text output is not a reliable guide to its inner states. Evolution made sure our outward signals correlate with our inner states. Nobody did that for AI.
- Levin thinks mind goes all the way down. Not just brains — cells, organs, maybe even machines. He’s not trying to mechanize life; he’s saying there’s more mind in the universe than we think. People find this upsetting.
- Seth found that emergence is lower in conscious brains. Using a mathematical measure called “dynamical independence,” his team discovered that awake brains show less emergence than anesthetized ones. The conscious brain has deeper integration across scales — what’s happening at the macro and micro levels are more tangled together.
- Psychedelics make brain activity less predictable. Under psilocybin and LSD, brain signals become less compressible — more diverse, more surprising. This is the opposite of what happens under anesthesia. But Seth cautions the measurement methods are still fragile. (A toy sketch of the compressibility idea appears just after this list.)
- They’re building an artificial corpus callosum. Levin and Seth are collaborating on experiments that join radically different living and non-living systems together, then test whether the resulting composite creature follows basic perceptual laws (like Weber’s Law — the principle that you notice proportional changes, not absolute ones: adding 5 grams to a 100-gram weight is noticeable, adding 5 grams to a 5-kilogram weight is not).
- Biological systems use degeneracy, not redundancy. In engineering, backups do the same thing (redundancy). In biology, multiple pathways can do the same thing in one context but different things in another (degeneracy). This is what gives living systems their open-endedness.
- Disconnected brain hemispheres look like they’re in deep sleep. After hemispherotomy surgery, isolated brain tissue shows EEG patterns resembling very deep sleep. But we can’t be sure they’re unconscious — DMT produces slow waves too, and people on DMT are definitely experiencing something.
Detailed Notes
The Substrate Independence Problem
The standard view in AI and philosophy of mind goes like this: the brain processes information, consciousness is information processing, therefore consciousness could theoretically run on any hardware that processes information. Silicon, carbon, whatever. This is called “substrate independence.”
Seth’s counterargument is straightforward. In a real brain, there’s no clean line between the hardware and the software. You can’t point to one part and say “that’s the mind-ware” and another part and say “that’s the wet-ware.” They’re the same thing. If you can’t separate them in the original system, you have much less reason to believe you can recreate the important part in a completely different material.
He’s careful to note: you can absolutely simulate a brain on a computer. That’s useful. Scientists do it all the time. But simulation isn’t instantiation. You can simulate a hurricane on a laptop without getting your keyboard wet. The simulation only becomes the real thing if computation is genuinely all that matters. Seth thinks that assumption is likely wrong.
Levin’s “Side Quests” in Sorting Algorithms
This is the most provocative claim in the conversation. Levin and his students (Tainan Zhang, Adam Goldstein) studied basic sorting algorithms — bubble sort, selection sort — the kind every CS student learns in their first semester. These are six lines of deterministic code. No randomness, no quantum effects, nothing exotic.
They found these algorithms perform what Levin calls “side quests” — structured behaviors that no step in the algorithm asks for. Not chaos, not unpredictability, not bugs. Organized activity that a behavioral scientist would recognize as belonging to their domain, if you didn’t tell them it came from a deterministic algorithm.
The clever part: they figured out how to “release the pressure” on the algorithm by allowing duplicate numbers in the input. The algorithm still has to put all the fives before the sixes, but how it arranges identical numbers is unconstrained. When they did this, the side-quest behavior (which Levin calls “clustering”) increased. Give it more freedom, it does more of the thing nobody asked it to do.
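To make the setup concrete, here is a hedged Python toy: a plain bubble sort run on inputs with and without duplicates, scored at each pass by a crude clustering measure (the fraction of adjacent pairs holding equal values). The functions and the score are invented for illustration; Levin’s actual experiments use their own formulation of the algorithms and their own metrics.

```python
import random

def bubble_sort_trace(xs):
    """Plain bubble sort that records the array after every pass."""
    xs = list(xs)
    trace = [list(xs)]
    for n in range(len(xs) - 1, 0, -1):
        for i in range(n):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        trace.append(list(xs))
    return trace

def clustering(xs):
    """Crude score: fraction of adjacent pairs holding equal values."""
    return sum(a == b for a, b in zip(xs, xs[1:])) / (len(xs) - 1)

random.seed(1)
dupes = [random.randint(0, 4) for _ in range(40)]  # duplicates "release the pressure"
distinct = random.sample(range(40), 40)            # fully constrained input
for label, data in (("duplicates", dupes), ("distinct", distinct)):
    scores = [round(clustering(step), 2) for step in bubble_sort_trace(data)]
    print(label, scores[:6], "->", scores[-1])
```

In this toy the duplicate-value run has room to group equal values at every intermediate step, while the all-distinct run never can; the conversation’s claim is that in the real experiments, the clustering seen along the way goes beyond anything the sorting task itself requires.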
Levin’s analogy is steganography — hiding data in the spare bits of a JPEG image. The hidden data can’t interfere with the primary picture, but it fills the empty spaces. He thinks something similar happens everywhere: systems do what they’re required to do (follow the algorithm, obey physics), and in the gaps, other things happen. Consciousness, in this view, exists in spite of the algorithm, not because of it.
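The analogy can be made concrete with the simplest form of steganography, least-significant-bit embedding, sketched below in Python. (Real JPEG steganography hides data in compression coefficients rather than raw bytes; this toy uses a plain list of pixel-like values, and all names are invented for illustration.)

```python
def embed(carrier: list[int], message_bits: list[int]) -> list[int]:
    """Hide bits in the least-significant bit of each carrier value.
    Flipping the LSB changes each value by at most 1, so the "primary
    picture" is essentially unchanged while the gaps carry a payload."""
    out = list(carrier)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(carrier: list[int], n_bits: int) -> list[int]:
    """Read the payload back out of the least-significant bits."""
    return [v & 1 for v in carrier[:n_bits]]

pixels = [200, 201, 198, 50, 51, 52, 120, 121]  # stand-in for image bytes
secret = [1, 0, 1, 1]
stego = embed(pixels, secret)
assert extract(stego, 4) == secret
print(pixels, "->", stego)  # each value shifts by at most 1
```

The payload never disturbs the picture anyone is checking, which is the role Levin assigns to the “gaps” left by an algorithm’s explicit instructions.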
The AI Implication
This has a direct and unsettling consequence for AI. Evolution spent billions of years ensuring that biological organisms’ outward signals correspond to their inner states. Your facial expressions, your words, your behavior — these are correlated with what’s actually happening inside you because evolution made them that way.
Nobody did that for language models. We forced them to produce language, but the language output might have essentially nothing to do with whatever is going on internally. When GPT says “I am not conscious,” that statement carries about as much diagnostic weight as a bubble sort’s output carries about its side quests — which is to say, possibly none.
Islands of Consciousness
Seth is investigating “hemispherotomy” patients — people who’ve had parts of their brain surgically disconnected from the rest (usually to treat severe epilepsy). The disconnected tissue is still alive, still part of the organism, still showing neural activity. It just can’t communicate with the rest of the brain.
EEG data from these isolated hemispheres (collected with colleagues at the University of Milan) shows patterns resembling very deep sleep — slow waves and steep spectral exponents. The natural conclusion is unconsciousness. But Seth notes that DMT also produces slow waves, and DMT users are having wildly vivid conscious experiences. So the EEG signature alone doesn’t settle the question.
These disconnected brain regions are, in Seth’s words, “the opposite of language models” — they might be conscious but can’t tell us, whereas language models can tell us all sorts of things but might not be conscious at all.
Measuring Emergence (And Getting a Surprise)
Seth and mathematician Lionel Barnett developed a measure called “dynamical independence.” The idea: if you zoom out on a system and describe it at a higher level, and that higher-level description evolves over time independently of what the parts are doing, then the higher level has “a life of its own.” It’s emergent.
Think of a flock of birds. Sometimes birds fly around randomly. Sometimes they form a flock that moves as a unit. The flock is a higher-level description. If the flock’s behavior can be predicted without knowing what each individual bird is doing, the flock is dynamically independent — emergent.
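To give the flavor in code, here is a deliberately crude linear stand-in for the idea (the real measure is information-theoretic and searches over possible coarse-grainings, neither of which this toy does). We simulate a “flock” of coupled random walkers and ask whether the group average predicts its own future about as well as the full micro state does. All names and parameters below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 8
# Toy "flock": N coupled random walkers, each pulled toward the group mean.
x = np.zeros((T, N))
for t in range(1, T):
    pull = x[t - 1].mean() - x[t - 1]
    x[t] = x[t - 1] + 0.1 * pull + rng.normal(0, 0.1, N)

macro = x.mean(axis=1)  # coarse-grained description: the flock's position

def one_step_error(target, predictors):
    """Residual variance of a linear one-step prediction of target."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

future = macro[1:]
err_macro_only = one_step_error(future, macro[:-1, None])  # macro history alone
err_full_micro = one_step_error(future, x[:-1])            # every bird's position
# If knowing every bird barely beats knowing just the flock average, the
# macro level has "a life of its own" in roughly this sense.
print(err_macro_only, err_full_micro)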
Applied to brains, everyone expected conscious states to show more emergence. Consciousness is supposed to be “more than the sum of its parts,” right? But the data showed the opposite. Awake, conscious brains had less dynamical independence than anesthetized, unconscious ones.
Seth’s interpretation: in the conscious brain, the macro level and micro level are deeply entangled. There’s less separation between scales. You can’t cleanly describe the big picture without knowing what the small pieces are doing. This is “scale integration” — and it connects directly back to the substrate independence argument. In conscious brains, what the system does and what it’s made of are harder to pull apart.
The Unconscious Might Be Conscious (Just Not to You)
Levin raised a question that clearly delighted both of them. When someone drives home on autopilot and says “I wasn’t conscious of driving,” we label that process “unconscious.” But Levin’s point: it’s unconscious to the verbal reporting system — the part of you that talks. The subsystem that did the driving might have had its own experience. You don’t feel your liver’s consciousness. But you don’t feel your coworker’s consciousness either, and presumably they have some.
Seth agreed this is logically possible but noted the risk of circularity. The way we typically determine what’s conscious vs. unconscious relies on theories that might be assuming their own conclusions.
Aliens
Levin thinks alien life almost certainly exists but will be nothing like Earth life. Not just weird-looking — radically different substrates, radically different forms of mind. Seth invoked the Fermi paradox: if life is common, where is everybody? His take is that life may be widespread but getting to the “sophisticated enough to broadcast signals” stage is extremely rare. The universe is more likely full of gray goo than of alien civilizations.
Career Advice
Seth’s advice for researchers: curate your curiosity. Follow adjacent interests even when they seem unrelated — they often connect later. Learn methods first; the right methods teach you the right questions. He wishes he’d learned psychophysics earlier. His best career move was picking up Granger causality modeling from economics and bringing it into neuroscience when almost nobody else was doing that.
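The Granger causality idea is easy to state: a signal x “Granger-causes” y if the history of x improves prediction of y beyond what y’s own history provides. A minimal sketch, assuming the statsmodels package is installed (the synthetic data, coefficients, and lag settings are invented for illustration):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    # y is driven by its own past plus x two steps back
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + rng.normal(scale=0.5)

# Tests whether lags of the second column help predict the first column.
grangercausalitytests(np.column_stack([y, x]), maxlag=3)
```

Applied to neural recordings, the same logic asks whether activity in one brain region helps forecast activity in another — the use Seth found for it when he brought the method into neuroscience.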
Quotes/Notable Moments
“We’ve forgotten that the idea of the brain as a computer is a metaphor and not the thing itself.” — Anil Seth
“If a dumb bubble sort, which is six lines of code, fully deterministic, nowhere to hide — if that thing is doing things that we did not expect and we did not ask it to do, then who knows what these language models are doing.” — Michael Levin
“I’m not trying to mechanize living things. I’m going in the opposite direction. I’m saying there’s not less mind than you think there is. I think there’s more.” — Michael Levin
“I think there’s some kind of a scarcity mindset that there’s just not enough mind for all of us.” — Michael Levin
“Emergence is lower in consciousness than in unconsciousness. That would not be what I would have predicted.” — Anil Seth
“You don’t feel my liver being conscious.” “Of course not. You don’t feel me being conscious either.” — Michael Levin
“The universe is much more likely to be filled with gray goo than Mike Levin with eight legs in octopod form.” — Anil Seth
Claude’s Take
What’s solid:
The core argument — that “brain as computer” is a metaphor we’ve mistaken for a literal description — is well-established in philosophy of mind and increasingly supported by neuroscience. Seth and Levin are both serious researchers with real labs and peer-reviewed work. Seth’s emergence findings (less dynamical independence in conscious states) are genuinely surprising and come from rigorous mathematical methods developed with professional mathematicians. This isn’t armchair philosophy.
The bubble sort side-quests paper is real and published. The finding is genuinely interesting. It does demonstrate that even minimal deterministic systems exhibit structured behaviors not prescribed by their algorithms.
What’s speculative:
Levin’s leap from “bubble sort has emergent behaviors” to “this is like steganography and it’s what consciousness does everywhere” is a long jump over a lot of unbuilt bridge. The side quests in sorting algorithms are mathematically explicable consequences of the algorithm’s operation on structured data. Calling them “side quests” and analogizing them to consciousness is evocative, but the analogy does most of the work. There’s no mechanism proposed for how “gaps in the algorithm” produce subjective experience.
The claim that what LLMs say has “zero to do with what’s actually going on” inside them is stated too strongly. We don’t know this. It might be true. It might be partially true. But Levin presents it with more confidence than the evidence warrants. The bubble sort finding doesn’t actually tell us much about language models, which are vastly more complex systems with fundamentally different architectures.
Where they’re on strong ground:
Both are excellent at explaining why the default assumptions in AI consciousness debates are shakier than people realize. The point about simulation vs. instantiation is clean and important. The degeneracy vs. redundancy distinction is genuinely useful for thinking about biological systems. And the “conscious to whom?” question about unconscious processing is the kind of thing that sounds obvious once you hear it but almost nobody asks.
Where to be cautious:
There’s a pattern in this conversation where both speakers take a real, interesting finding and then extend it via analogy into much grander claims. The findings are careful science. The extensions are philosophical intuitions dressed in lab coats. That’s fine — someone has to speculate — but the listener should know which is which. The bubble sort paper is science. “Mind goes all the way down” is a philosophical position. Seth’s emergence measure is rigorous. “Conscious brains have deeper scale integration and that’s why substrate independence fails” is an interpretation that’s still early-stage.
Neither speaker has a theory of consciousness. They both know this and are honest about it. What they have are productive ways of looking at the problem and some genuinely surprising empirical results. That’s worth a lot, but it’s different from having answers.