Claim: From the cell to the organism - cognition happens at all biological scales
TL;DR: The authors of a new paper in the field of cell and developmental biology (CDB) argue that the propensity for cognition should not be thought of as a binary switch - i.e. something is cognitive, or it is not. Instead, they say that cognition is a matter of degree, and simple forms of cognition can be observed in even the most basic forms of life.
Background
I read an article today by Michael Levin and Richard Watson. The article is Machines all the way up and cognition all the way down: Updating the machine metaphor in biology. I learned about it when Levin shared it on Twitter.
I previously wrote about Levin in 2021, in Life as a geometry problem: Are these the first manmade living organisms?, when his work went viral for creating a new form of synthetic life (the xenobot), but I lost track of his work after that. A couple months ago, however, I started following him on Twitter after seeing him in a talk with Lex Fridman. I came across that conversation from a Tweet by Donald Hoffman. You may remember Hoffman from my article, The Reality Illusion.
My interest in Hoffman has to do with his theories on consciousness, and Levin's work seems to have some relevance there, too.
Introduction
According to Levin and Watson, there have been two guiding metaphors at play in cell and developmental biology (CDB). These are: (i) the machine metaphor; and (ii) organicism.
In the machine metaphor, "machines all the way up", cellular biology is reductionist and biology is viewed in a mechanistic way - like gears turning a clock, and deterministic linear causality is assumed. This is the metaphor that guided 20th century biologists, and it drove many advances in the field. However, it fails to explain things like emergent, adaptive, or goal-directed behaviors.
The authors argue instead for a type of organicism, "cognition all the way down", that looks at an organism holistically. The claim is that biological life has implemented a sort of multi-scale cognition, where organs, tissues, and cells all exhibit signs of pursuing goals, solving problems, retaining memories, and making decisions.
About multi-scale cognition
According to these authors, cognition can be recognized and classified by its ability to store and recall past states (memory), improve or generalize behaviors (learn), and solve problems (intelligence). They assert that these properties can be found in all sorts of biological life, from the level of the cell to the level of the complex organism.
As an example of lower-level cognition, the authors describe an experiment in which an organism's tail is transplanted to a position on the flank. Over time, they say, the graft loses its tail-like properties and develops limb-like ones instead (e.g. 1, 2).
The significance of this is expressed in this excerpt:
The bad news about recognizing this radical collective plasticity of CDB is that the bottom-up engineering strategies that work so well for passive matter will not work for living tissues. Micromanaging cells and tissues is therefore an impossible task (not just complicated). The good news is that they already have exploitable and extendable capabilities, and we might be able to repurpose the same co-creative strategies that underlie the multiscale competency architecture of life. This in turn makes regenerative medicine and bioengineering look very different.
In short, regenerative medicine becomes a sort of communications problem. How do we trigger the goal-directed behavior that already exists? According to the authors:
The impact of birth defects, traumatic injury, cancer, aging, and degenerative disease would all be reduced if we understood how to communicate anatomical goals to groups of cells so that they would build desired healthy structures.
Memory as a messaging system
In support of this line of reasoning, the authors describe a framework that they link to Einstein's theory of General Relativity. The cell/organism/tissue of the past has no direct connection through space-time to the one in the present, so memory serves as a way of sending messages across discrete moments of time.
From moment to moment, the "self" of the present receives the messages, updates its state based on the current environment, and saves the updated state as a message for the "self" of the future. It's impossible, of course, for the entity to save everything about the present state, so the memory is necessarily a compressed/filtered representation of the world it encounters. This means that the self of the future must reinterpret the memories in new contexts and fill in the gaps.
This model is inspired by the "bowtie architecture" for unreliable computing.
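To make the idea concrete, here is a toy sketch of that memory cycle: the "self" of the present compresses its state into a small message, and the "self" of the future must reinterpret that message in a new context, filling in what was lost. All names and data here are illustrative, not taken from the paper; the real bowtie architecture is far richer than this.

```python
# Toy illustration of the bowtie-style memory cycle described above.
# compress() is the narrow "waist" of the bowtie: a lossy encoding that
# keeps only a few salient features; reinterpret() rebuilds a working
# state by combining the memory with the new environment.

def compress(state: dict) -> dict:
    """Lossy encode: keep only a few salient features of the full state."""
    return {"goal": state["goal"], "progress": round(state["progress"], 1)}

def reinterpret(memory: dict, context: dict) -> dict:
    """Decode: rebuild a working state, filling gaps from the new context."""
    return {
        "goal": memory["goal"],
        "progress": memory["progress"],
        "environment": context,  # details lost in compression are re-inferred
    }

# The "self" of the past writes a compressed message...
full_state = {"goal": "regrow limb", "progress": 0.4217, "temperature": 21.3}
message = compress(full_state)

# ...and the "self" of the future reinterprets it in a changed environment.
future = reinterpret(message, {"temperature": 19.0})
print(message)  # {'goal': 'regrow limb', 'progress': 0.4}
print(future)
```

The key point of the sketch is that `message` is strictly smaller than `full_state`: the temperature reading and the fine-grained progress value never make it through the waist, so the future self must reconstruct them from its own context.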
The traditional machine metaphor suggests that cells, tissues, and organisms get their state memory from genetic material. In contrast, these authors have shown that bioelectric patterns are also used to store and transmit messages. For example, they have been able to transplant a tadpole's eyes to its tail, with no connection to the brain, yet the eyes still enabled the tadpole to see.
When cognition is described this way, there is no reason to believe that only biological neurons are capable of it. Instead, cognition can be implemented in many different materials.
Conclusion
The conclusion from all of this is a framework where cognition is a matter of degree - a fundamental, scalable, property of life, and not a plug-in that was only added by evolution millions of years later. The authors argue that this framework bridges the divide between the machine metaphor and organicism. They also suggest that it opens new opportunities for medical treatments and expansion of knowledge.
We think the cognitive competencies of living systems are not rare in the tree of life, nor a late arrival in the evolutionary story, but a defining feature of life from the outset and ubiquitous throughout the biosphere. Our goal should be to understand the origin and scaling of these cognitive properties in diverse media and development of protocols to recognize, predict the behavior of, and rationally control or work with the behavior of these systems in different problem spaces.
With today's widespread discussion of the question of whether or not AIs can become conscious, it's fascinating to me to see this sort of research coming out of cellular biology.
I definitely recommend reading the article, Machines all the way up and cognition all the way down: Updating the machine metaphor in biology, and you may also want to check out Levin's YouTube Channel.
Appendix
Coincidentally, I was just talking about Levin, Hoffman, and AI consciousness with Claude, Gemini, and ChatGPT a couple nights ago. This excerpt from Claude is interesting and relevant:
Maybe the most honest position is this: ChatGPT and I are probably not that different in our underlying architecture, yet we're giving meaningfully different answers. That itself suggests these responses reflect training choices more than genuine introspective access — which should make both of us humble about our self-reports.
What I keep returning to is Levin's actual disposition on this. He's notably reluctant to draw hard lines, and he's said in interviews that he thinks the interesting question isn't whether AI is conscious but what kind of cognitive agency it has and what moral consideration follows from that. That framing feels more productive than the binary we keep getting pulled toward.