From Worm to Human

Thanks to some comments from Mike Smith on my previous post, I began thinking about a proposed evolution of consciousness. To be clear, this is entirely hypothetical and idealized. The actual evolution was probably a lot more complex than we can imagine. I regard this as a thought experiment that aims to identify the critical elements in the evolutionary process that created consciousness.

The diagrams are intended to represent processes, or more correctly categories of processes. Anatomical structures may not match. While the diagrams are not supposed to map to anatomy, there is a rough correspondence between various anatomical structures and the categories of processes, especially in the more primitive organisms. As brains became more complex, the capabilities expressed by the processes became distributed throughout the brain. This allows not only more sophisticated processes (more neurons involved with a single function), but also redundancy and integration.

Note: The dotted olfactory line in the diagrams is a minor exception to the rule that the diagrams represent processes, not anatomy. It represents the anatomical fact that the olfactory system does not signal through the router/controller but is connected closely to the spacetime positioning system.

Organism-0

Organism-0 is essentially an organism without information about the external world. It would have a controller brain with information about internal states and influence over the organism's internal systems.

Organism-1

In Organism-1, an ability to interact with the environment is added. Information comes from the senses; a smart routing function is added to the controller that enables some nuance in motor system responses that may be relatively hard-wired.

Organism-2

In Organism-2, the router/controller is augmented by additional processing between the sense information and the motor response. The senses are enhanced by additional processing. A basic spatial positioning system is developed to control mobility. There may have been a nascent reward/warning system, shown in the diagram as a dotted circle. The sophistication of the networks borders on consciousness.

Organism-3

In Organism-3, consciousness arises from adding:

  • Space and time positioning
  • More sophisticated sensory processing
  • Capacity for learning and memory
  • A more complete reward/warning system

These are the key elements of consciousness. The reward/warning system ties mentality to the biological organism. Learning and memory require senses, spacetime positioning, and a reward/warning system as an arbitrator for judging the context and value of actions generated by the motor system. The requirements of complex navigation may have been the evolutionary force that combined these elements. Consider the classic lab test: a mouse finds its way through a maze with its senses, learns the path by storing spacetime-stamped memories, and reaches the morsel of reward at the end of the trail.
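To make the maze example a little more concrete, here is a minimal sketch in Python of how these elements might work together, assuming a simple tabular value-learning rule (Q-learning style). Everything in it is hypothetical and purely illustrative: the maze, the names, and the numbers are made up, and it is not a claim about how any real brain implements these processes.

```python
# A toy illustration of the Organism-3 ingredients: senses, place-stamped
# memory, a reward/warning signal, and a motor system that picks actions.
# All names and numbers here are hypothetical and purely illustrative.
import random

MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#R#",
    "#########",
]
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def sense(pos):
    """'Senses': report what occupies the current location."""
    row, col = pos
    return MAZE[row][col]

def remembered(memory, pos):
    """Memory: action values stamped to a place, created on first visit."""
    return memory.setdefault(pos, {a: 0.0 for a in ACTIONS})

def run_trial(memory, start=(1, 1), max_steps=200, explore=0.2):
    """One pass through the maze, updating memories as the 'mouse' goes."""
    pos = start
    for t in range(max_steps):
        if sense(pos) == "R":
            return t                       # reached the morsel of reward
        values = remembered(memory, pos)
        # Motor system: mostly exploit remembered values, sometimes explore.
        if random.random() < explore:
            action = random.choice(list(ACTIONS))
        else:
            action = max(values, key=values.get)
        dr, dc = ACTIONS[action]
        nxt = (pos[0] + dr, pos[1] + dc)
        # Reward/warning system arbitrates the value of what just happened.
        if sense(nxt) == "#":
            signal, nxt = -1.0, pos        # warning: bumped into a wall
        elif sense(nxt) == "R":
            signal = 10.0                  # reward at the end of the trail
        else:
            signal = -0.1                  # mild cost of wandering
        # Learning: nudge the stored value toward the signal plus what memory
        # already says about the next place (a simple one-step lookahead).
        lookahead = max(remembered(memory, nxt).values())
        values[action] += 0.5 * (signal + 0.9 * lookahead - values[action])
        pos = nxt
    return max_steps                       # gave up on this trial

if __name__ == "__main__":
    memory = {}
    for trial in range(25):
        print(f"trial {trial:2d}: {run_trial(memory)} steps")
```

Run over repeated trials, the stored values come to favor moves that lead toward the reward, so the step counts shrink. That is the only point of the toy: senses, place-stamped memory, and a reward/warning arbitrator together are enough for maze learning of this kind.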

Organism-4

Organism-4 is an advanced organism such as a mammal or human. Reasoning, problem-solving, and communication capabilities evolve on top of Organism-3.

Summary

Consciousness develops in organisms as an extension of the internal biological control mechanisms through the evolution of capabilities for interacting with the environment. Consciousness is always oriented internally. While we usually think the content of consciousness is a representation of the external world, in fact, all of the content of consciousness is a representation of internal biological states. This can be seen from its evolution from the controller in Organism-0. Internal models mimic the external world, but the model is entirely a representation of internal states.


33 Responses to From Worm to Human

  1. Steve Ruis says:

    I can’t understand why Organism-0 would have anything like a brain. There is nothing to process. All biological processes are autonomic. So, a brain needs to be evolved (from a nerve nexus, one routing sensory information?).


    • James Cross says:

      Control of internal systems – like digestion. Actually it more or less corresponds to a very simple worm-like creature with the simplest brain. But even in humans an important part of the brain controls the heart, lungs, liver, digestion, etc.


    • James Cross says:

      Let me add something. The brain is primarily there to control the internal functions of the organism; however, to control internal functions, it is an enormous help to be able to interact with the world to find good stuff that benefits the internal systems and avoid bad stuff that would harm them.


      • Steve Ruis says:

        Then it is receiving sensory data and has something to process.

        PS Your website keeps asking me to sign in and then tells me my response is a duplicate. (Weird.)


        • James Cross says:

          Don’t know about the WP issue but I’ve heard other people complain about it.

          In Organism-0 it is receiving information (which could be regarded as sense data) but only about internal states. In the diagrams senses are meant as external senses as you can see from the arrow coming from the external world.

          This is a thought experiment. Whether the actual first worms, bilaterians, had externally facing senses or not doesn’t really matter to the thought experiment. We will probably never know that one way or the other.


        • Steve Ruis says:

          Yes, I know it is a thought experiment! :o) I am just having fun with you.

          If the only sensory data is coming from inside the organism, isn’t that a gateway to external stimuli? If the O-0 has some idea of where its internal organs need to be, then being poked by a stick would shift some of those and the organism would know that, so the line between external and internal is not so easily drawn.

          Feel free to ignore my blatherings any time.


        • James Cross says:

          Touch is borderline between internal and external. The organism could have reflexes that bypass the brain if it were prodded. It could also try to adjust its internal systems to compensate. But it would be very limited until it developed externally facing senses. Even smell would allow an organism to move towards food.


  2. This is excellent James! As an exercise, it’s similar to what I’ve historically done with hierarchies, but obviously much more visual.

    The main difference I’d have with it is, rather than having learning/memory and reward systems as separate functional components, I think I’d have a component that responds with relatively automatic responses or fixed action patterns, and then add mechanisms that alter the sequence, with the “policies” of those alterations modifiable, a modifiability we refer to as “learning”. Although as you noted, you’re not aiming for anatomy here, so it depends on the exact abstractions you want to use.

    Overall though, I think this is a very good approach.

    Thanks for the link!


    • James Cross says:

      I did think of your hierarchy somewhere in the process of doing this. There is definitely a correspondence based on your 2021 hierarchy:

      1- Matter: Organism-0 but alive
      2- Reflexes and fixed action patterns (automatic reactions): Organism-1
      3- Perception, 4- Habits (accidentally learned associations): Organism-2
      5- Volition (observationally learned predictions, time involved): Organism-3
      6- Deliberative imagination, 7- Introspection: Organism-4

      https://selfawarepatterns.com/2021/01/03/hierarchy-of-consciousness-january-2021-edition/


    • James Cross says:

      BTW, it would be really interesting to see you put your hierarchy in a diagram with the functional parts. A new visual version?


      • Thanks James. It’s been a while since I read a book on evolution and consciousness. Maybe after I read one that gets my head back into that space.


        • James Cross says:

          Your diagram wouldn’t need to be about evolution.

          However, I do think taking an evolutionary perspective might be the best way to understand consciousness. If we can understand the minimum requirements, we should be able to understand how it works.

          My thought is that it is involved with memory: that it might serve a key role in training the biological neural net and putting neural plasticity on a fast track that enables the unlimited associative learning that Ginsburg and Jablonka talk about.


        • I know what you mean about it not needing to be in evolutionary terms. It’s just that the functional hierarchies originally come from it, and with an eye toward AI systems as well, all in service of making the case for what a naturalized version of panpsychism would be missing.

          Memory is an interesting thing, because in biological systems I think it amounts to adjusted dispositions, with the hippocampus causing some sequences of adjustments to be “rehearsed” enough to lock them in. Although it’s complex, because working memory is probably more just the current reactions firing recurrently.


        • James Cross says:

          I’ve been reading on memory and it is rather strange. The strong consensus is that it is all about adjusting the strength of connections between neurons, but it’s hard to see how that works: not so much biologically and chemically, although that is also complex and not well understood, but from a logical point of view, because it would seem any strengthened connection could just as easily be weakened with the next input. I’m not doubting the consensus at all, but it is hard to visualize how it would actually work in practice.


        • From everything I’ve read, what weakens synapses is lack of use. (“Neurons that fire together wire together.”) So it’s more a matter of how much activity they get. Of course, with pacemaker neurons and regular oscillations, it seems like every synapse gets at least some activity, but to my mind, there’s probably an evolved balance between that activity and the processes that decay them, where they’ll steadily weaken unless they get more firing from being part of an active circuit.

          So memories have to be recalled to be preserved. This seems strange to me, because I occasionally remember something from childhood that it doesn’t seem like I’ve thought about in years. Of course, that’s likely because I don’t recall the lion’s share of the recalls. It’s sometimes disconcerting to realize how much of our mental life we forget.
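          A toy way to picture that use-it-or-lose-it balance, with completely made-up numbers and no pretense of modeling real synaptic chemistry:

```python
# Toy "use it or lose it" synapse (illustrative only): strengthened when the
# pre- and postsynaptic neurons fire together, otherwise slowly decaying.

def step(weight, pre_fired, post_fired, gain=0.2, decay=0.02, cap=5.0):
    if pre_fired and post_fired:
        weight = min(cap, weight + gain)   # "fire together, wire together"
    return weight * (1.0 - decay)          # slow background decay every step

used, unused = 1.0, 1.0                    # two synapses, same starting strength
for t in range(200):
    in_active_circuit = (t % 5 == 0)       # one gets reactivated now and then
    used = step(used, pre_fired=in_active_circuit, post_fired=in_active_circuit)
    unused = step(unused, pre_fired=False, post_fired=False)

print(f"after 200 steps: used ~{used:.2f}, unused ~{unused:.3f}")
```

          The unused synapse fades toward zero while the occasionally reactivated one settles near its strengthened level. That is all the sketch is meant to show: an ordinary new input doesn’t undo the strengthening, only prolonged disuse does.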


        • James Cross says:

          Maybe you can explain this; I can’t seem to find a clear explanation of it anywhere.

          Neuron A has 1 axon (like all neurons); Neuron B has 10 dendrites (in reality it could be thousands). When A fires to B, does B create more dendrites connected to A? But there is only 1 axon. Does A also have to branch its 1 axon (arborization) to match? Or can multiple dendrites connect to 1 branch of an axon? The numbers don’t seem to match, in that it seems there are a lot more dendrites than axons.

          “memories have to be recalled to be preserved”

          Solms, I think, talks somewhere about memories being particularly volatile during recall, so that under certain conditions the recall results in forgetting. I may not be remembering that correctly. LOL


        • Going off memory here, so take this with a grain of salt. Most neurons do only have one axon (in biology no categories are ever absolute), as in the outbound projection from the soma (cell body). But axons do eventually branch into axon terminals, which are what meet up with dendrites. Although sometimes, particularly with inhibitory synapses, the axon synapses directly onto the soma.

          I’m not sure on creating new dendrites, at least outside of the hippocampus or after initial synaptogenesis during development. In mature neurons, there’s also a barrier between the cells which I think any new synapses have to punch through, whose name is escaping me (it isn’t myelin or glia, although those are barriers too). So my understanding is that most adult plasticity is changes in the strength of synapses, but I don’t think it’s completely understood yet.

          It’s often said that every time we retrieve a memory we alter it. That’s part of it, but I’m not sure it’s all. “Retrieving” a memory is, I think, doing a simulation, and any simulation we do now is in terms of our current dispositions, so it will inevitably be different from what it would have been years ago. It’s only when we look at an actual record, like a photo or recording, that we discover how different.


        • James Cross says:

          “in biology no categories are ever absolute”

          You’ve got that right. I was doing some more searching and apparently axons can even connect to axons sometimes.

          I still haven’t found a clear answer about how the synapses are strengthened. More dendrites to the same axons would do it, I guess. Maybe more receptors in the same dendrite? I would be interested to know if it were all on the postsynaptic neuron or whether the presynaptic neuron was also involved.

          If you google dendrites and EM field, interestingly, most of the hits come back relating to metals and alloys. Apparently an EM field will modify dendrite growth in metals. Little or no research in biology.


        • On synapse strengthening, the answer is, unfortunately, going to be complex, and is far from being fully understood yet. To follow the research, you’d have to take a dive into organic chemistry. There’s also the question of how involved DNA, epigenetics, and signaling systems throughout the cell are. Ginsburg and Jablonka explore this in mind-numbing detail in their Evolution of the Sensitive Soul book.

          The Wikipedia article on long-term potentiation, which is what we’re talking about, currently seems pretty decent. https://en.wikipedia.org/wiki/Long-term_potentiation


  3. Very interesting schema, James! Are you saying that every Organism-4 has a consciousness?


  4. Continuing from my comment on your last post, yes I do like these more complete graphs better. I see that even your most advanced animal has a perimeter path where entirely non-conscious function can occur (in the form of an external keyboard entry that gets algorithmically processed to do something external) though the distinctions in the middle might be implemented for a consciousness dynamic to add that sort of element. Furthermore I also see a slight similarity with my own model and diagram. Hopefully mine will come up for the following explanation.

    When boxes in my diagram are connected with lines rather than arrows, what’s below them will simply be details of what they are. For example I show a nervous system box that’s entirely non-conscious and made up of input, processing, and output components. Arrows from these components result in three different types of output — consciousness (presumably in the form of an EM field), muscle operation (neurally guided I think), and anything else. Regarding consciousness however, I don’t have a “higher-order functions” box but rather leave scaled up aspects of other components to address human language and such.

    Your “reward/warning” circle seems somewhat like my “sentience” box. Your “sensory processing” circle seems somewhat like my “senses” box. The box where you house both spacetime positioning and learning/memory is somewhat like my “thought processor” box, though I hold memory out as a potential input to that processor. People with memory problems can still think, though in a crippled way. Furthermore I add a “learned line” that takes things back to the brain for automatic function, such as already learned typing. Though I suspect all consciousness exists in the form of an electromagnetic field, to convert back to brain function at the “non-conscious input” box, an EM field would need to alter neuron firing in appropriate ways. So if a conscious decision to do something muscular is made, the non-conscious brain is relied upon to get this done. Often however there should just be a running loop where various things are considered from moment to moment given sense and memory based thought, and motivated by sentience.

    On organism-0, it seems to me that its controller ought to be genetic dynamics such as cellular DNA. But aren’t even single-cell organisms set up to interact with their environments somewhat?

    On organism-1, I’d say this is where brains evolved, and specifically because better algorithms could potentially be developed when information in general comes to a single place rather than being processed all over the organism individually.

    Organism-2 is where I’d put consciousness, or at least I would given any level of the reward/warning component where existence feels good/bad. Empirical evidence ought to be able to determine if this lies under certain parameters of EM field or something else. Otherwise I think I’d classify all the other components as purely algorithmic function.

    On organism-3, conscious learning and conscious memory exist, so more advanced consciousness should result.

    Then on organism-4, I can see how one might want to add a “higher-order functions” component, though I worry about implying something special that’s radically different than the rest, that is unless something radically different is being proposed. To me the only truly radical component should be where brain function converts back and forth to perhaps EM field conscious function.

    It can be helpful to measure one’s model up against another, so I like that you’ve done this. I’ll also try that diagram program that you mentioned sometime, since I did this diagram on MS Word, which wasn’t easy and it’s difficult to modify.


    • James Cross says:

      Try draw.io – free and easy to use. Save as an xml/drawio file for future modification, save as png for images.

      What’s the difference between lines with arrows and those without? For example, what does it indicate that senses input is under consciousness with no in-arrow? It doesn’t appear in the path anywhere else unless implicitly in the path of unconscious processing. Wouldn’t the senses be input to consciousness?

      Why does Non-conscious output go to two different boxes with consciousness/qualia with one box effectively a dead-end?

      The problem with putting the first brains at O-1 is that it requires the evolution of a brain with internal control functions, senses, and routing-type functions all at once. The first brains likely had a role in controlling the gut. After all, the spinal cord runs parallel to the gut, which tells us a lot about what all of this is about.

      In your diagram, there is no external world for the motor system to affect; hence, nothing to affect for adaptive advantage.

      Where do things like fear, love, pleasure reside? Or is that sentience?

      I tend to use the word “sentience” as a synonym for “consciousness”, so could you tell me what you mean by it?

      How does the brain affect the heart and lungs, for example? There is no body itself in the diagram? Certainly cognitively generated fear (I see a lion) likely will affect the heart and lungs as well as cause you to break out in a sweat.


      • I’ve been delayed on this given some quite irritating “computer education”. Your program didn’t quite download for me, maybe because I use a keyboarded phone as my only true computer? So I did also try some apps, all of which sucked. But then I decided that the way I’ve been doing diagrams isn’t actually that bad. So I’ve made some modifications. Consider this version:

        The lines rather than arrows essentially just classify what’s connected below them. So at the top I’ve got the Nervous System composed of three potential components: brain, non discussed non-brain, and consciousness. For the arrows let’s start with the bottom left corner’s “Countless Unspecified Inputs from the outside world”, which should address things that generally affect brain function. So the arrow sequence goes through input, processing, and then output brain function. This could lead to non-conscious muscle operation like heart function, or even something that isn’t muscle based such as hormone release. I presume the right sort of neural firing generally tends to get this sort of thing done, but whatever. Also there’s a consciousness variety of nervous system function that’s created by the brain, and I suspect as an EM field that isn’t brain.

        You’ll notice that I used “valence” this time for what was “sentience”. That’s probably better since I also consider sentience useful as a consciousness synonym. This seems to reference value more explicitly as well. Then there are non-value-based senses, which inform value-driven existence, and an element of past consciousness that exists presently. So these three forms of input get processed by the thinker that’s continually motivated to figure out how to feel better from moment to moment. Any decision will reside as an input to the brain. Also there’s a “learned line” shortcut. Notice that while one may decide what words to say, he/she needn’t decide, for example, the right sort of tongue movements required to say them. Regardless, while the brain theoretically creates EMF consciousness, EMF consciousness (here in five categories) can go on to affect the brain in corresponding ways. This input may result in the non-conscious brain effectively operating muscles as intended, or even just deciding how one might deal with something better in the future given that this consciousness might be remembered.

        On evolution, it’s hard for me to imagine an organism-0 that truly receives no input from the world that it reacts to. All of causal reality seems to interact with the world in which it exists. I might be going too deep with this however.

        Regardless, I’m saying that in a sense genetic operations obviously existed before there were brains, and should in a sense do the same sort of thing in a cellular capacity. Of course without brains, genes do operate multicellular structures like plants reasonably well also. But theoretically a central information locus was helpful for many more advanced organisms, or a central nervous system where lots of diverse information gets algorithmically processed. So I didn’t mean that there should be a brain for your organism-0, though perhaps that would be a next step. First there should be genetic operation, then brain operation would add, then consciousness would add more.


        • James Cross says:

          I just use the draw.io app online. I might download eventually.

          I don’t know that your modifications did much to explain.

          “Valence” is closer to what I thought you were meaning and corresponds somewhat to my reward/warning system. But “valence” isn’t simply input. Pain is highly modifiable by thoughts. It can be generated from thoughts.

          Regarding O-0, keep in mind this is a highly idealized model. However, there are many creatures that have brains to control their internal systems (mainly digestion), reflexes that bypass the brain, and little other information from the external world. Sea squirts are pretty close to O-0.

          from Wikipedia

          Adult tunicates have a hollow cerebral ganglion, equivalent to a brain, and a hollow structure known as a neural gland. Both originate from the embryonic neural tube and are located between the two siphons. Nerves arise from the two ends of the ganglion; those from the anterior end innervate the buccal siphon and those from the posterior end supply the rest of the body, the atrial siphon, organs, gut and the musculature of the body wall. There are no sense organs but there are sensory cells on the siphons, the buccal tentacles and in the atrium.

          https://en.wikipedia.org/wiki/Tunicate


        • Wow, that program is far better! I like that you can zoom in to any degree you want to select objects or adjust them.

          I’ll explain a practical scenario regarding pain generated from thoughts under my model. Theoretically you have an EM field consciousness that changes each moment based upon what you’re thinking about and such. Let’s say that your thought leads you to a horrible conclusion. My current next box is “decision”, though I think I’ll change that to “realization/decision”. So a given moment of thought may be said to provide at least some component of realization/decision that leads to a next moment of thought. In this evolving alteration of the EM field, perhaps you’re on a plane for a long vacation, though the troubling thought emerges that you’ve left the sprinklers on in your backyard, which should progressively erode your hillside and cost lots of money to fix. So here the synchronous neuron firing creates an EM field which punishes you given potential implications of that mistake. I’m saying there’s a continual loop where the field changes based upon appropriate ways for associated synchronous neuron firing to change. Your EM field should continually evolve to ponder all sorts of scenarios about what you might do to mitigate the problem. Beyond just new thought, this might also involve muscle operation such as trying to call a neighbor to help. The right sort of neuron function should then animate your brain so that your body does these sorts of things.

          On your organism-0, agreed that a given brain might control what’s inside rather than outside. I don’t actually see much of a fundamental difference between “outside” and “inside” stuff. Our brains obviously do both. Did the first brains begin with the inside senses and operations rather than outside? I’m open to evidence on that.


        • James Cross says:

          Glad you like the program. Why not redo your diagram with some shape/color coding and put it out on your blog with an explanation?

          “My current next box is “decision”, though I think I’ll change that to “realization/decision”. So a given moment of thought may be said to provide at least some component of realization/decision that leads to a next moment of thought.”

          Yes, that can happen. However, I’m not sure someone with PTSD makes a conscious decision to become fearful in certain circumstances. So “decision” might not be the word you are looking for or the decision path needs to be supplemented with something additional.

          “I don’t actually see much of a fundamental difference between “outside” and “inside” stuff.”

          From my summary:

          Consciousness is always oriented internally. While we usually think the content of consciousness is a representation of the external world, in fact, all of the content of consciousness is a representation of internal biological states.


        • Yeah I definitely need to discuss my conception of how the nervous system relates to consciousness over there some time. Currently however I’d like to get going on Brain to Computer Interface. Though this would clearly be possible given the truth of McFadden’s theory (since here brain EM fields are used to operate machines), can I effectively demonstrate that this shouldn’t be possible if consciousness is not EM field based? Though that makes sense to me, helping others grasp this should be challenging.

          Though I may need to keep working on more sensible words for certain classifications, PTSD should work out fine in the model. It’s not that anyone ever decides to become fearful, but rather that certain things make us fearful. Similarly I don’t need to decide to feel pain if I smack my thumb. These are inputs for the thinker to assess and react to. Bad memories should tend to change someone in associated ways. Genetics should play a part too. My wife got us two young brother cats from a shelter several years ago. One is very trusting of me while the other remains fearful. Yes it all reduces back to internal states.


        • James Cross says:

          Regarding Brain to Computer Interface.

          I feel good that a lot of progress can be made on this without ever understanding consciousness. But the real trick will be for the computer to create and retain memories that can be used by the brain.

          Explaining memories – how they are made and retrieved – has to be a part of an explanation of consciousness. There’s no evidence we can create memories while we are completely unconscious. We know of periods of unconsciousness precisely because of gaps in consciousness. Commonly with concussions we can’t even remember what was happening before the event.

          Memories according to neuroscience are created by changing connections in the brain. Connections are at the neuron level. Consciousness must have some role in modifying neural connections. Do you know if McFadden has written anything on this?


        • I don’t recall McFadden mentioning memory specifically in any of his papers, but I do have some thoughts on how it might work.

          They talk about how chains of neurons that have fired also have a far greater propensity to fire again given their thus-strengthened synaptic connections. So right now you have this consciousness happening associated with certain neural firing. Then if you close your eyes and try to remember what you just saw, you may have a sense of that former image. Or if you’re listening to what someone is saying, a moment later you will not actually hear that person speaking again, though you might still grasp what they said. Thus it seems to me that “phenomenal memory” may be said to exist as a hollow shell of past consciousness. So if the true experience of a vision, or of what someone said, and so on, originally exists by means of complex synchronous neuron firing that creates an associated EM field, then a moment later there might be a propensity for crippled firing in that regard which provides a hollow EM field that’s at least reminiscent of the first. So that would be memory that’s quite short term. Then note that if you later think about what you saw, what someone said, or whatever else, this should also exist in the form of a current EM field. Such reflection should thus strengthen associated synaptic connections and so create a higher propensity for them to fire again, to reside as something that you’re better able to remember. This could create something more like a cache to potentially access.

          That’s at least a general take on memory for science to experimentally validate or refute, not that I consider it any kind of complete account. And though we obviously use computer information to help us remember and think about things more effectively, I’m not sure how BCI could actually give us memories if this depends upon the strengthening of synaptic connections associated with present consciousness. So while we might “add memory” to a computer, phenomenal memory may not be possible to increase with BCI.


        • James Cross says:

          I think I saw some research suggesting that there is a kind of weak residue of neural firings for a second or two after brain activity associated with a stimulus. That sounds a lot like your “hollow shell of past consciousness”. I imagine short-term memory in the hours range might be retained mostly in the hippocampus, but longer-term memories are spread out all around the brain.

