The First Minds: Caterpillars, Karyotes, and Consciousness

The book by Arthur Reber that I mentioned in my last post arrived from Amazon. I’ve read it and would like to offer some additional thoughts. I won’t repeat ground covered in the previous post, so take a look at that first if you haven’t already read it.

The book itself is short but it covers a lot of ground in its discussion of consciousness. At various points Searle’s Chinese room and Chalmers’s hard problem put in an appearance. Its main goal is to introduce the author’s Cellular Basis of Consciousness (CBC) theory, but it places the theory in the context of many contemporary issues and views about consciousness. CBC is quite hypothetical in that it doesn’t really have a clearly defined mechanism for sentience. It does offer the suggestion that excitable membranes that permit the flow of ions in cells might be a place to look, and it offers a possible way those capabilities might have developed in the earliest organisms. Aside from that, CBC is mostly a proposal for how to proceed with future research on the question of consciousness: look at the simplest organisms, with a critical anthropomorphism, for indications of sentience; try to explain how those capabilities work; then examine how they have evolved in increasingly complex organisms until we end up with humans. This can be contrasted with the reverse approach, which Reber faults, of looking at the most complex brain – the human one – trying to understand how its structures produce consciousness, and then looking for analogs in simpler organisms.

As a research proposal, I think the idea is great. I have thought for some time that insects would be a great place to start to understand consciousness. Reber wants to go even simpler, back to protozoa and amoebae. The Catch-22, however, is: how do we know one-celled creatures can be sentient if we don’t actually know what the mechanism is? Reber returns to this issue more than once, responding to comments from private communication with Daniel Dennett. The issue undercuts his argument in chapter one for why robots or machines cannot be conscious. The argument essentially is that consciousness is an attribute of living beings – all living beings, from the simplest and earliest to the most complex – because sentience is required for surviving and thriving in a complex world with ever-changing conditions. Hard-wiring inflexible repertoires of responses in genes wouldn’t be sufficient or optimal. Consciousness is highly conserved as we move up in evolutionary complexity (he does, however, think that plants went down a path that might have dropped it), and it is a property built directly into the wetware of the organic molecules of life. Algorithms running on silicon and copper aren’t sufficient. Metal doesn’t feel. It’s an argument I agree with, but I’m not one hundred percent dogmatic about it. Without a good theory of how living organisms become conscious, I can’t be sure that a robot couldn’t be conscious. I also can’t be persuaded that a robot, even if it reproduced human behavior without flaw, is conscious, without somebody providing a general mechanism for consciousness and showing how the robot implements that mechanism to produce its behaviors.

Reber does present some remarkable behaviors found in single-celled organisms that I found surprising. They can learn. They have memories that persist for long periods relative to their lifespans. They can exhibit remarkably complex behaviors that look like decision-making. They can communicate among themselves, and even across species, to control the growth rates of collective groups. The evidence isn’t just in isolated one-off studies but across multiple studies. Is it conscious, sentient behavior, or a complex fixed repertoire? We could be easily fooled.

I have generally thought to look for the first hints of consciousness in the first nervous systems. Neurons and sensory cells are the specialized cells that evolved in many-celled organisms to exhibit the same reactivity Reber identifies as sentience in single-celled organisms. They do it with the same kind of excitable, ion-flow-based membranes that Reber suggests may be the mechanism behind the remarkable behaviors of single-celled organisms. The difference is that this is the primary role of the neuron, the task it is specialized to do. Their job is to react to external stimuli, in the case of sensory cells, and to other neurons, in the case of neurons in the nervous system and the brain. They do it in groups, communicating among themselves. And that may make a big difference in whether there is sufficient critical mass to achieve sentience.
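The excitable-membrane behavior described above can be caricatured in a few lines of code. What follows is a toy leaky integrate-and-fire model, a textbook simplification that is not from Reber’s book; every parameter value in it is an arbitrary assumption:

```python
# Toy leaky integrate-and-fire neuron. Each spike is all-or-nothing,
# but the firing *rate* carries analog information about input strength.
# Purely illustrative; all parameter values are arbitrary assumptions.

def spike_count(input_current, steps=1000, dt=0.001,
                tau=0.02, threshold=1.0):
    """Integrate a constant input current and count threshold crossings."""
    v = 0.0        # membrane "voltage"
    spikes = 0
    for _ in range(steps):
        # Leaky integration: voltage decays toward zero, driven by input.
        v += dt * (-v / tau + input_current)
        if v >= threshold:   # all-or-nothing spike
            spikes += 1
            v = 0.0          # reset after firing
    return spikes

# A stronger analog input produces a higher spike rate,
# even though each individual spike is binary.
weak = spike_count(60.0)
strong = spike_count(120.0)
print(weak, strong)
```

The point of the sketch is only that a graded, analog input is re-expressed as a firing rate: each spike is all-or-nothing, but the rate varies continuously with input strength.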

Still, I’m intrigued by Reber’s idea, but I also wonder: why not take another step with it? If sentience arose with the first life, then could it be a key to understanding the origin of life? If excitable membranes are the source of sentience, might they be the critical feature that binds together the bag of chemicals that is the cell?

This entry was posted in Consciousness, Human Evolution, Mysteries, Origin of Life.

34 Responses to The First Minds: Caterpillars, Karyotes, and Consciousness

  1. Wyrd Smythe says:

    I’ve never understood why the materials should matter that much so long as the structure and process are isotropic to a brain. Per the discussion about silicon lifeforms, in this view, would evolved silicon beings be sentient, even sapient?


    • James Cross says:

      Almost always when I see the substrate independence argument, there is an implicit assumption of consciousness being information processing in some form.

      If you have a theory of how consciousness arises that does not include organic matter and can create an entity that appears conscious to implement the theory, I might buy the idea that materials don’t matter with consciousness. McFadden and Pockett, for example, do think consciousness is implemented in EM fields so any device of any material would work if it produced the right sort of EM fields.

      On the other hand, materials could matter. If we were creating an artificial arm, we could probably do it with pulleys, metal rods, some small motors, a battery, and some connection to the nervous system. If we were creating reproductive organs, we might be able to create an artificial uterus, but it is hard to see how you could create artificial ova or sperm without the actual material of life. It’s even hard to see how you could produce a fully functioning digestive tract without the material of life. You can’t digest a steak with a silicon chip. So obviously there are aspects of life, and consciousness could be one of them, where materials would matter. In other words, only certain kinds of materials can be isotropic to certain biological functions, and those materials would need to be the actual materials themselves or something similar.

      Per the silicon being discussion, we know silicon can’t do the same chemistry as carbon, so there is no way a silicon being could be isotropic to a carbon one. Materials would matter if the goal is reproduction of a carbon-based biological being on another substrate. Whether a silicon being could be some other living form would, similarly to consciousness, depend upon the definition and theory of what constitutes living. I doubt this would be possible with silicon as the material, but if someone showed me how a silicon entity could metabolize, grow, and reproduce, I would be interested in seeing it.


      • Wyrd Smythe says:

        Information processing is certainly part of the picture, but I agree it’s not sufficient as an abstraction (let alone numerical simulation). The “theory” (more a WAG at this point) is just that consciousness is an ongoing dynamic process, not an outcome.

        But I don’t think the chemistry of the process is as important as the structure. The isotropy has to lie, at least significantly, in structure, because those same compounds in all other structures don’t act like brains. However, it seems possible that something else with that (physical) structure might act like a brain.

        The thing with chemistry is: where is the line? Strictly organic carbon? What about inorganic carbon compounds? You don’t like silicon, but it does have a rich chemistry. What about advanced organic carbon compounds that incorporate things like silicon?

        I don’t think the question is whether a silicon or “Positronic” brain can evolve. I think it’s whether one we would construct would work. I think, for instance, it’s a reasonable assumption, given that information processing is involved, that signal flow, and thus EMF, is involved. That would be part of the necessary structural isometry.

        As an aside, something like digestion can be done artificially. Energy-containing substances can be processed chemically to extract and use the energy. (I speculated the HBO Westworld robots might have eaten the corn they grew for fuel. Corn, after all, is where ethanol comes from.) Artificial limbs are becoming structurally more like muscles, bones, and joints using artificial materials. The key is a material that acts like muscle tissue — contracts and expands due to an electric charge or current.

        We’ve long had artificial hearts and bones. I can see someday having artificial kidneys and livers. Those are obviously just engineering problems. It’s an interesting question whether a brain is.


        • James Cross says:

          “The isotropy has to lie, at least significantly, in structure, because those same compounds in all other structures don’t act like brains. However it seems possible something else with that (physical) structure might act like a brain”.

          I mostly agree, for sure, but which aspects of the structure? The structure of neurons themselves? And which types of neurons? Or can the neurons become like black boxes with some fixed range of behavior for inputs/outputs, with the arrangement of neurons doing the work? Or do you even need neurons at all? Could you substitute something else for the local assemblies as long as they interact with other remote assemblies in a similar fashion?

          It’s easy to make a theoretical argument but it gets tougher when you have to identify which structures or processes are the important ones.

          Limbs are one thing. They are very mechanical. A digestive process is another. I guess if you say the only thing digestion does is extract energy, then that can be done. But the digestive process does a good bit more than that. I don’t know whether you ever read this post.

          Aging and the Gut Brain Axis

          There are more neurons in the gut than anywhere else outside of the brain. It has extensive interaction with the brain and the immune systems.


        • Wyrd Smythe says:

          Assuming the point is an alternate mode of consciousness — something we’d consider on par with our own — the goal isn’t to exactly replicate a human being but to replicate something capable of consciousness. There’s no reason it must exactly match an evolved solution. Our gut bacteria may affect our consciousness, but that doesn’t mean they’re required for conscious thought.

          I think the level of isotropy is somewhere around the synapse level. Neurons and synapses and the connections between them would be components, but themselves composed of processes, not “black boxes.”

          A “Positronic” brain would be a network of neuron nodes connected with synapses, and those things would all act like ours. I very much suspect packing all those neurons in a brain involves synchronous effects, and an artificial brain would do the same. (An interesting question: imagine spreading a brain out in space — would it still work?)


        • James Cross says:

          By “black boxes”, I only meant that the “stuff” of consciousness resides outside of them, in the structure outside of them.

          By spreading out a brain, do you mean biological or electronic/positronic?

          Biological wouldn’t work, I’m pretty sure, without other changes. It seems like the connecting nerves are optimized for distance to destination so that things work in sync; that is, the longer the distance, the faster the connection. Perhaps it would work if everything moved proportionally. All of that assumes that EM fields don’t have a role, because things would need to be close together for that to work.

          With electronic it wouldn’t matter probably at relatively small distances but probably would require adjustments if parts were on Earth, Mars, and the Moon.

          With a synaptic approach, I guess there might be a need to take into consideration the variety of neurons at some point. All are not equal.


        • James Cross says:

          I’m curious about the “synapse level” choice. Essentially it is just the connection between neurons. The decision about when to fire, and whether to fire once or multiple times, is all at the neuron level, to my understanding. Of course, synapses are how things are connected, but what is being connected is important, especially whether it is excitatory or inhibitory.

          And the diversity and lack of interchangeability is also notable.

          https://en.wikipedia.org/wiki/Llin%C3%A1s%27s_law


        • Wyrd Smythe says:

          I do assume EMF (or something similar) plays a role, so wrt spreading out, either biological or positronic. Spreading out the network nodes would require an additional network to re-create those effects.

          Which might not be sufficient. Timing itself may play a role, so absent FTL signaling, distance might make the necessary processes too slow for consciousness to arise. I’m not sure (science fictional) “slow thinkers” (such as Ents or mountains) could exist — I’m not sure consciousness works at a glacial pace.

          Of course neurons and synapses would differ in a positronic brain, just as they do in a human brain.

          I think consciousness has to be at least down to the synapse level. It’s the LTP of synapses that allows us to learn and have memories. The short-term training is important, too. This is one reason I think IIT isn’t a complete theory. The network configuration alone isn’t sufficient; the features and complexity of the individual connections matter a lot.

          Neurons fire or don’t fire, but there is a lot of analog information packed in the duty cycle of the pulses while firing, and I’ve read that even the rise and fall times of the signal contain information. The neural network is decidedly analog.


        • James Cross says:

          My current thinking (it is complete speculation, but that is what this blog is about) is something like this:

          1- Coordinated membrane activity can perform digital computing and can account for behaviors in single cell organisms.
          2- Multi-celled organisms developed specialized cells dedicated to this activity, and this digital type of computing continues to occur in large brains to guide unconscious processes.
          3- As the mass of neurons becomes larger with evolution, it begins to generate a sufficient EM field (or fields) that the field begins to act in feedback with the neurons themselves.
          4- Evolution appropriates this ability to enhance the digital capabilities with analog ones to provide a coordination mechanism between multiple senses and motor activities – to provide a dashboard-like, integrated view.
          5- The information in the integrated view is contained in the EM field but the actual subjective sensation arises from the neurons “feeling” the EM field.


        • Wyrd Smythe says:

          Much of that sounds sensible to me, although I don’t think there’s much “digital” in nature. It’s all analog to me (other than, perhaps, DNA/RNA itself).


        • James Cross says:

          “Digital” may not be the word I meant. Thanks.


  2. Thanks for reviewing the book. I think this has saved me from attempting to read it. It almost sounds like Reber is arguing for a modern version of the Élan vital, but replacing the vitalism with a form of primitive consciousness. Although I’d have to wonder if the difference is more than just terminology.

    I can see now where Victor Lamme’s conception of biopsychism came from, and why he cited this book.

    Reber’s descriptions of the capabilities of unicellular organisms reminds me of what I read in Gerhard Roth’s The Long Evolution of Brains and Minds. Although Roth uses more neutral terminology like sensorium and motorium to describe their capabilities.


    • James Cross says:

      “a modern version of the Élan vital”

      Completely unfair categorization. He is emphatic that his explanation is physicalist and that organisms are machines.


      • James,
        I wasn’t necessarily using the phrase in a non-physical sense, but in Henri Bergson’s original usage to label the specialness of biology. Admittedly, the difference is extremely nuanced, but then ascribing consciousness to a cell in a physicalist manner itself seems like a very nuanced position.


        • James Cross says:

          You have seemed to argue that, based on certain behaviors, we should consider robots to be sentient. It would also seem to follow that you would regard primitive organisms with even limited expression of behaviors like decision-making, memory, and goal-directed behavior as also having some primitive level of sentience.

          Reber talks a good deal about the emergentist’s dilemma. What is the miracle that makes sentience appear with insects when it was absent in protozoa, for example? Or in AI robots when it is absent in my laptop? You can argue its presence, perhaps, based on some measurable degree of capability but then you have to explain the miracle. Reber cuts through this dilemma simply by arguing that sentience is on a continuum and primitive organisms are sentient on the lower end of the continuum. More complex forms build on the building blocks of less complex ones. There isn’t some threshold where the miracle occurs.


          My actual argument was that if we’re going to regard certain behaviors as evidence for consciousness, and robots exhibit those behaviors, we should be consistent. But as I’ve noted before, there’s a pretty vast range of behavioral capabilities, which is why I usually present that hierarchy. Much of what Reber is discussing is in layer 2 (of 7), which does include all cellular life and technological systems.

          On trying to find where the miracle happens, I agree it’s very problematic. There’s no sign of any sharp border between consciousness and non-consciousness. But if we think along those lines, I see no reason to stop at life, which puts us on a slippery slope to panpsychism, or at least panprotopsychism.

          However, I think the miracle is an illusion, born from an unreliable introspection mechanism, a mechanism that provides effective adaptive benefits, but not one that provides accurate insight into how our minds actually work. Once we accept that, the simplest explanation for the miracle is it’s an incorrect impression given to us by that unreliable introspection system.

          Which isn’t to say there isn’t a lot of functionality to explain, including, at least for humans, introspection, but it all seems amenable to scientific exploration.


        • James Cross says:

          I think Reber is, in a sense, looking for where the “miracle” happens. That, in a way, is the main thrust of his proposal. Since he sees the incipient signs of sentience in one-celled organisms, that is the logical place to start to study, because the explanation will be easier to find at a simpler level. He doesn’t see incipient signs in rocks, so he isn’t willing to go panpsychist. When we understand the simple version, then we can look at how it develops and becomes more complex with evolution.


        • I can see Reber’s reasoning, from the perspective of a biologist. But I think a panpsychist would argue that rocks, as collections of quantum systems, still have a lot of dynamism. In the end, it depends on just how far down the ladder we want to go, how simple the system needs to be before we no longer want to apply the “consciousness” label.


        • James Cross says:

          A note of clarification: are you a biologist? Reber is a psychologist who admits there was a lot he didn’t know about one-celled organisms when he started down this path. Although it seems he’s been thinking about it since the ’90s, it was only recently that he started to seriously consider it.

          Reber is aligned with the thinking of Sacks, Damasio, and others in this letter to Koch:

          quote

          Scientific American and Scientific American Mind regularly raise newsworthy topics related to the problems of consciousness. We would like to encourage an approach that, despite deep roots in evolutionary theory, has been largely neglected in the modern era—and has yet to become a theme in the stimulating series of articles in this journal.

          That approach entails the search for the properties underlying consciousness down to the level of the protozoan [a single-celled organism] in order to identify the fundamental cell-level mechanisms that, when scaled up in complex nervous systems, give rise to the properties that are typically referred to as “mind”. The unanswered question is: What characteristics of living cells lead ultimately to the various, higher-level psychological phenomena that are apparently unique to certain animal organisms? This question concerns essentially biological functions—and is distinct from “information processing” approaches that might be implemented in silicon systems.

          From a biological perspective, we suggest that the lowest-level candidate mechanism is membrane “excitability:” the unusual capability of certain types of living cells to sense and respond to stimuli within several milliseconds. In light of the fact that all living cells have enveloping membranes and exchange materials with their external worlds, it is unlikely that metabolic activity, biochemical homeostasis [keeping cellular systems in balance], or the mere presence of a boundary between the cellular self and the external world alone is sufficient to explain the origins of mind. Rather, the dynamics of the exchange of materials across biological membranes differ remarkably among cell types. Understanding these differences may be relevant in explaining consciousness.

          https://blogs.scientificamerican.com/mind-guest-blog/exclusive-oliver-sacks-antonio-damasio-and-others-debate-christof-koch-on-the-nature-of-consciousness/


        • James Cross says:

          That letter is so great that I want to add another quote:

          Importantly, the mechanisms underlying the “irritability” of protozoa are known to be the same as those involved in the hyper-sensitivity of all three main types of excitable cell in metazoan organisms (animals)—that is, sensory receptor cells, neurons, and muscle cells. These mechanisms are essentially the opening and closing of certain pores that allow certain ions to pass freely across the cell membrane. Parsimony suggests that the sudden onslaught of positively-charged ions (cations) into the alkaline cytoplasm—the very definition of membrane excitability—is the key phenomenon involved in a cell’s “awareness” of its environment (“sentience”). In other words, what makes cells with excitable membranes so unusual is their response to electrostatic disturbances of homeostasis (slight acidification of the normally alkaline cellular interior) following external stimulation. In order to produce the higher-level “awareness” of animal organisms, the activity of these numerous excitable cells to achieve a kind of sentience must be synchronized (in ways yet to be determined) for coherent organism-level behavior.


        • James Cross says:

          The notion of “feeling” begins with sensory receptor cells that are triggered by environmental changes. Neurons “feel” other neurons and sensory receptors through synchronized rhythms and maybe EM fields.


        • I’m not a biologist. Just trying to see things from the biological perspective.

          Interesting letter. But I think Koch’s reply (just below it) is worth reading too.


        • James Cross says:

          I think Koch pretty much misses the point.

          It is coming from a panpsychist who thinks integrated information explains it all. Don’t we need hearts and lungs too to have all that integrated information? And what’s with the alternative-universe stuff? Who’s to say life or sentience would exist at all in it?

          “the inflow of positively charged hydrogen (H), sodium (Na) and calcium (Ca) ions in our universe could equivalently be replaced by inflow of negatively charged H, Na, and Ca ions in this alternative universe without any change to the mechanisms of cellular excitability”

          This sounds like the argument that life would work the same if we replaced carbon with silicon. No difference between SiO2 and CO2, right?


  3. Lee Roetcisoender says:

    Right on…… Based upon Mike’s brilliant synthesis there is no such thing as consciousness. And why is this the case? Because as human beings we cannot possibly be conscious, because of the unreliability of our own introspection system. These types of discussions are fundamentally useless and accomplish nothing more than stroking one’s own ego.


    • James Cross says:

      I’m not exactly sure that is what Mike is saying, but I’ll let him defend himself if he wants to.

      What would make our knowledge of our own minds necessarily any more flawed than our knowledge of the external world, especially when we apply scientific techniques to both of them?


      • Lee Roetcisoender says:

        Fundamentally, that is the point of contention isn’t it James. One can play this game the way idealists choose to play by denying the existence of a physical universe other than being “what consciousness looks like through the dissociative boundary”, or one can play it the way materialists choose to play by denying that a physical universe is a substrate of consciousness. Both metaphysical positions demonstrate an underlying psychosis that is commonly shared by all homo sapiens. It’s the nature of the beast as they say……..


  4. Like Reber, I also have a problem with observing that the human can be conscious and then working backwards through animal species to decide how similar various examples might be to us. It seems to me that we’d logically then move towards how human-like various organisms happen to be, though that shouldn’t provide us with a useful definition for the “consciousness” term itself. Better would be to develop an independent term. For example, just as electricity is something that a modern nuclear power plant produces, we shouldn’t define “electricity” on the basis of how similar various things happen to be to nuclear power plants. So let’s permit “consciousness” to be defined on the basis of a generally useful idea, not merely something that we understand as an element of the human. Here I support professor Schwitzgebel’s “innocent” conception of consciousness.

    It seems to me that going back to cells and microorganisms for this would be going too far, however. They should merely function from mechanisms based on genetic material. Even “learning” shouldn’t mandate there to be subjective experience, since some of our computational devices may be said to learn as well. (Of course, if we discover various qualia-producing mechanisms in basic organisms, then fine, though to me it doesn’t seem likely that we will.)

    Once we get to organisms with full central nervous systems however, I’m more confident that subjective experience might be in the mix. It could be that for certain varieties of arthropod, gastropod, and so on, programming instructions alone are not sufficient, and so at least some basic subjective function is required as well.

    Feinberg and Mallatt propose that such creatures require subjective experience in order to effectively operate distance senses. I doubt this, however, given that we build non-conscious robots with distance senses that seem to function reasonably well for various specific applications. So surely evolution is also able to incorporate non-conscious distance senses for various organisms in productive ways.

    My own speculation is that certain modes of function can’t effectively be programmed for, given their “openness”, which is to say an environment that can’t effectively be scripted for. So while microorganisms, plants, and fungi might not need anything more than non-conscious programming to get their jobs done, evolution might not, for example, have been able to sufficiently program spiders. This may be because their ways of life go far beyond the closed parameters of “chess boards”. Thus a sentient form of function might have been required additionally. Rather than take over, however, what this should mainly do is provide a platform for more productive non-conscious function to take hold. I theorize that, whatever conscious processing is done (which is to say thought-based decisions), the non-conscious brain should institute at least 100,000 times as many operations in support.


    • James Cross says:

      There’s not much in what you write that I disagree with. Regarding “going back to cells and microorganisms”, I can see a rationale in trying to understand how primitive cells sense their environment and take actions in it. Ion channels and membranes seem to be critical components in how differences are detected, particularly differences with valence. That makes me wonder if the detection of differences more generally is somehow directly connected to the mechanism of qualia. That would provide a direct linkage between the sensing of even primitive organisms and the range of subjective experience.


      • Lee Roetcisoender says:

        I think you are more than lukewarm on this, James. Lately I’ve been hacking out a similar trail, so to speak. In agreement with your own synthesis, I think the term valence is very useful; furthermore, if differences are detected within an architecture of valences intrinsic to systems, that architecture would directly connect those differences to an even more fundamental substrate, the mechanism of qualia.


  5. Lee Roetcisoender says:

    James,

    Thought I would post this on your site. Feel free to comment or add further insights to my revised definition. Even though I think that the term consciousness is obfuscated beyond any reconcilable usefulness, this new definition may be more useful for anyone who is inclined towards some form of universal consciousness. Again, I don’t think the term panpsychism is useful either, because of its direct correlation and association with “psyche”.

    Consciousness: Noun
    1. A state of being, also referenced by the physical sciences as a system; one that has the capacity to both experience and express the power that is intrinsic to its being as it engages in meaningful relationships with other states of being and/or systems.
    2. Those states of being and/or systems consist of the aggregate that make up a physical universe which over time emerges into a vast diversity of expressions resulting in uniqueness and novelty; the system of mind being the most unique in both form and power of all known systems.

    Footnote: Plato also offers some intuitive insights of his own as to the notion of power and (consciousness).

    “Whatever has a native power, whether of affecting anything else, or of being affected in ever so slight a degree by the most insignificant agents, even on one solitary occasion, is a real being (conscious). In short, I offer it as the definition of be-ings (consciousness) that they are potency — and nothing else.” Plato, Sophist, 247d–e

    (i) Plato explicitly says so, at 247d-e. As one proponent put it: “Being, Plato says, is power or potency”.
    (ii) The definition is convincing, since the power to act and to be acted upon is indeed common to all things.
    (iii) Participation, or the communion of beings, is explained by, and indeed depends upon, the capacity of a thing to act or be acted upon. Indeed, for Charles Bigger, the ‘battle of the giants’ “must surely be the most important passage for an understanding of participation in the dialogues”.


    • James Cross says:

      I like #1. I would emphasize that with consciousness there seems to be a sort of inturning of the world, in the form of knowledge about it and representations of it, at the same time it is connected to feelings and experience.

      #2 for me might be an example of what consciousness is but is not one-to-one with consciousness. It seems to suggest to me some of the things I’ve argued in relation to entropy and intelligence.

      See

      The Intelligent Universe

      and

      Click to access PhysRevLett_110-168702.pdf


      • Lee Roetcisoender says:

        #1… Right on; I think those feelings and experience would be valences, non-conceptual representations of value which ultimately reduce to qualia, and that qualia itself would be embedded in the structure.

        #2…. Right again; number two would be a broad brush example and not necessarily a definition per se. The post you linked is also right on and I like how you summarized it:

        “We may not be the originators of our own intelligence as much as we are agents of an algorithmic principle working at the quantum level. Our intelligence would be a reflection of some deep physical principle that guides the evolution of the universe and life. In the end, our intelligence and that of the slime mold may be more closely linked than we might think.”


  6. Lee Roetcisoender says:

    FYI, for what it’s worth, here is my latest revision of a definition of consciousness. This definition is both scientific and all-inclusive.

    Consciousness: Noun

    1. Any thing that has a native and/or intrinsic power, whether of affecting anything else, or of being affected; anything that acts or is acted upon is a system whose qualitative property is consciousness. (Plato, Sophist, 247d–e)

    2. A state of being, also referenced by the physical sciences as a system; one that has the capacity to both experience and express the power that is intrinsic to its being as it engages in meaningful relationships with other states of being and/or systems.

    3. As a state of being, consciousness is a form, and that form is a physical, material universe.

