Evolution, Learning, and Uncertainty

A question that has puzzled me is that of the evolutionary reason for consciousness. I’m speaking loosely here: evolution doesn’t really have reasons. Changes persist in organisms because, in the long run, they bestow some advantage (or at least no disadvantage) on the organism’s survival. Let’s rephrase the question: what evolutionary advantage does consciousness provide?

This might seem like a question with obvious answers. Obviously, there is a lot I can do if I am not anesthetized and unconscious. I can search for food, hide from predators, seek a mate, or build a fire to keep warm. How many of those things, however, really require consciousness? The first three things in the list can be done by insects. Do we think insects are conscious? Some people do. Do any of the first three really require consciousness? Or can they be done through pure instinct, a set of possibly complex but hard-wired behaviors that execute when the right triggers are present in the environment?

The question lingers in the background of Searle’s Chinese Room and the Turing Test. Can we create a device that is not conscious yet is, judging from its outward actions, indistinguishable in its capabilities from a human? Are we forced to acknowledge that it would have to be conscious if we can’t tell the difference between the device and a human? Personally, it doesn’t feel right to regard what I know to be a device, built say of silicon and copper, as conscious, even though I can’t distinguish how it acts with me from the actions of a human. I might talk with it and even call it by name, but something still doesn’t feel right about declaring it conscious. If we go too far down this path, we will be forced to conclude that consciousness is an epiphenomenon, like waste heat from a computer, and unimportant.

That leads us back to the question: what evolutionary advantage does consciousness provide if we can’t distinguish actions of a conscious organism from those of an unconscious device (or organism)?

Certainly, it must be something significant since consciousness itself comes with a huge, but seldom appreciated, cost. Consciousness not only consumes energy, an enormous part of the human energy budget, but it also generates significant waste and poisonous by-products. Organisms we recognize as the most conscious require regular sleep to repair the damage. While many of us might appreciate the forced downtime of sleep, we not only pay the opportunity cost of productive things we could have been doing while sleeping, we also put ourselves in an unconscious state that leaves us vulnerable to predators and other dangers.

The answer may be somewhat complicated. Evolution doesn’t usually have free rein to tear up one design and start fresh with a new one. Consciousness may not be the best design for whatever evolutionary advantages it provides. A complex biological, but unconscious, computer with massive amounts of memory and processing power might be the superior design choice. The problem is that evolution must start from where it is to get to where it is going, and none of the steps and iterations along the way can be dysfunctional.

Evolution and Truth

As I have pointed out before, the first organisms with a semblance of a nervous system were probably primitive worms, the first bilaterians. While there were other multi-cellular predecessor organisms, like jellyfish, with neural-like reactions, the first worms had the body plan from which more complex organisms have developed. Essentially this plan consists of a head with a mouth, a concentration of neurons near the mouth (the makings of a brain), and a strand of neurons running beside the digestive tract (the start of a spinal cord). I will assume these organisms were not and are not conscious or, at least, certainly not conscious in any way comparable to the more complex organisms that have developed from this body plan. They lack more advanced sensory capabilities as well as much of a backend brain to process and interpret the information from the senses. What evolution had to work with before conscious brains was reflexes and simple stimulus-response neural circuits. Simple circuits are great as far as they go and an upgrade for organisms that couldn’t respond in any way to the environment. Even a simple response of movement toward something good or away from something bad would have been an advantage.

More complex and seemingly more conscious organisms have evolved, and along with this have developed larger and more complex nervous systems. With this complexity has come a larger and more nuanced repertoire of behavior. The next step in complexity of nervous systems would include more sophisticated, but still largely hard-wired, behaviors. Some examples might be spiders building webs or bees forming a hive. Advantages accrue to the spider who can capture more and larger prey with a well-built web. Bees living in hives can divide the work and benefit from the labors of others in a social organization. These behaviors evolved over thousands, possibly millions, of years to reach the form we find them in today. The behaviors are in their own way amazing, but they are still rigidly structured. Every worker goes through the same life cycle and exhibits the same behaviors. When X happens, Y happens. There is little discrimination or nuance in the behavior.

Even if this behavior is instinctual, I can’t personally rule out that there may be some degree of consciousness involved in it. The spider building a web may be performing essentially canned routines, but it would still need some capability of adjusting the routines to local conditions. Spiders construct webs with seven anchor points. The first step involves casting as many as twenty silk strands into the air to be carried by the wind to see which ones attach to other leaves or branches. From there the spider must identify seven lines distributed around a circle in a plane with good attachment points, then cut away the unused lines. It is possible this is entirely unconscious behavior. However, integration of multiple senses – touch and vision in this case – with adaptive action involving choice may be the most primitive level of consciousness. I have often thought the underpinning of consciousness, perhaps its first glimmers, lay in the spatiotemporal mapping of the body to the external world. The extension of this mapping beyond the body to the nearby world of leaves and branches, with some low-level understanding of geometry, might be a small step toward consciousness.

As a model for how perceptions and more complex nervous systems developed, I’d like to look at Donald Hoffman’s Interface Theory of Perception (ITP). Hoffman’s thesis is that “veridical perceptions—strategies tuned to the true structure of the world—are routinely dominated by nonveridical strategies tuned to fitness.” In this view, perceptions are elaborate reality hacks that evolve as part of what he calls a Perception-Decision-Action (PDA) loop.

[Figure: Hoffman’s Perception-Decision-Action (PDA) loop]

Hoffman explains it this way: “The channel P transmits messages from the world W, leading to conscious experiences X. The channel D transmits messages from X, leading to actions G. The channel A transmits messages from G that are received as new states of W.”
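The three channels can be sketched as a toy loop. This is a minimal illustration only; the world states, the "good"/"bad" experiences, and the approach/avoid actions are hypothetical stand-ins, not anything from Hoffman's formal treatment:

```python
# Toy version of Hoffman's Perception-Decision-Action loop.
# W: world states (here just integers), X: conscious experiences,
# G: actions. All mappings are illustrative stand-ins.

def perceive(w):
    """Channel P: world state -> experience (lossy, fitness-tuned)."""
    # The organism experiences only a coarse 'good'/'bad' summary,
    # not the true world state -- a nonveridical perception.
    return "good" if w > 0 else "bad"

def decide(x):
    """Channel D: experience -> action."""
    return "approach" if x == "good" else "avoid"

def act(w, g):
    """Channel A: action -> new state of the world W."""
    return w + 1 if g == "approach" else w - 1

w = 1
for _ in range(3):
    x = perceive(w)   # P: W -> X
    g = decide(x)     # D: X -> G
    w = act(w, g)     # A: G -> W
```

The key point the sketch makes is that `perceive` discards almost everything about the world state: fitness only requires that the loop, taken as a whole, moves the organism toward beneficial states.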

Perceptions and actions evolve because they effectively change something in the world to the benefit of the organism. Perceptions do not need to be truthful representations of the world. As a matter of fact, fidelity to the actual world would be disadvantageous because it could be too slow and too costly to achieve.

One of Hoffman’s favorite examples to illustrate this is the Australian jewel beetle. Over evolutionary time, males that chose to mate with larger brown, shiny females were favored by selection. The perception of large, brown, and shiny became tied to the decision and action of mating. Females were large, brown, and shiny, and the bigger they were the more fit they were. Males with a better perception of big, brown, and shiny would be selected for because they would be making the better choice of a mate. This worked all to the good as long as female jewel beetles were the only brown, shiny things to be found on the ground in the outback. Unfortunately, when humans began throwing their beer bottles into the outback, the male beetles began mating with the bottles. The nonveridical perception of big, brown, and shiny for a female mate evolved and worked as a nice hack for the jewel beetle until the environment changed and the beetle almost went extinct.

The theory is attractive and seemingly explanatory for the mating of the jewel beetle and possibly for the developed perceptions of many other simple organisms. Does it work for more complex organisms? I can certainly think of instances where it would be at least partially correct. Take color perception as an example. In a great many species, the ability to distinguish variations in wavelengths of light is tied to identification of food sources. The development of trichromatic vision in primates was probably selected by the improved ability to distinguish ripe fruit in a green forested environment. In a real sense there is no “red” or “yellow” in the world. We developed an ability to perceive “red” and “yellow” because it helped us find food.

Human beings, however, are not exactly like jewel beetles. Human beings do not simply see red and decide to eat as the male jewel beetle sees big, brown, and shiny and decides to mate. The problem with Hoffman’s PDA loop as a more general evolutionary theory of perception and consciousness is that it leaves out a major aspect of more complex nervous systems: learning and memory. Human beings use red and other perceptions to match to previous experiences to decide if the red is ripe fruit. Between the perception, the decision, and the action sits a consciousness that places the perception into a context based on prior experience. Perceptions are not the entire evolutionary ballgame. Our consciousness consists of matching current sensory data with memories of past sensory data. This is a learning process that guides the decision-action part of Hoffman’s loop. The decisions and actions arise only partially from selection for perception on evolutionary time scales; they arise mostly in real time. Evolution selected for learning and memory, in addition to perceptions, to permit faster-developing and more complex adaptive behavior.

As we move to more complex organisms, the decision part of the loop, and capabilities that enhance the decision part of the loop, become more critical to fitness than the perception itself. Any perception can be overridden through learning and experience. If our predecessor species ever made an automatic decision to run on the perception of a snake, we now as human beings can decide to capture the snake for food based upon our learning and experience. Our natural perceptions (as opposed to our technologically enhanced perceptions) certainly are limited by our perceptual equipment. We can see red because we have the right type of cone cell in our eyes, and we cannot see ultraviolet because we lack the perceptual equipment to see it. However, our capacity for learning, experimentation, and ability to form relationships based on prior experience allow us to better approximate the world than Hoffman’s ITP suggests. We can learn when our perceptions are wrong, and we can even overrule them.

The actual representations of the world in consciousness may originate in, and depend in part upon, the perceptual equipment as much as the brain itself. Visual images look in part the way they do because they come from eyes. Sounds sound like they do because they come from ears. In an experiment, researchers were able to modify some of the green-sensitive cones in the eyes of male squirrel monkeys so they would be sensitive to red. The monkeys, which couldn’t previously distinguish red dots in an image, could distinguish them after the modification to the eyes. The processes that enabled color vision in the monkey brain didn’t need modification to be able to learn a new color. They just needed new inputs. The red must originate at least in part in the input from the eyes. Mriganka Sur rewired newborn ferret brains so that the visual input went to where auditory input normally is processed. The result was that the part of the brain thought to be able to process only auditory input developed fully functional visual processing capability.

Hoffman is right that our perceptions are not veridical in an absolute way. They may even be somewhat arbitrary. They are dependent on the limitations of the sensory equipment. Evolving an eye that can see a third color is costly. Evolution, however, has evolved a backend brain which seems to have a more general-purpose information processing capability. Provided with a new color, it learns the new color. The same structure provided with auditory input hears, and provided with visual input sees. How does this work?

Edelman, in a partial critique of Hoffman, argues that “there are interesting ways in which perception can be truthful, with regard not to ‘objects’ but to relations, and that evolutionary pressure is expected to favor rather than rule out such veridicality.” Edelman cites three examples. Categorical consistency allows determination of the identity of stimuli which can vary, such as identifying the same person with different dress or haircut. Second-order isomorphism involves ranking similarities, such as seeing different shades of green and yellow in ripe and unripe bananas. Causality involves associating events in a time order, such as understanding thunder to be caused by lightning.

The brain, of course, is all about seeing relationships, identifying differences and similarities. It does this with learning and memory. What our senses present to us may not be veridical but the relationships in what is presented must be veridical or we could not interact consistently with the world. When people are fitted with prism glasses that turn everything upside down, they learn in a few weeks how to interact with the world using completely upside-down input. This is possible because the relationships between the objects are relatively the same. Regularity and consistency in the world still exists and the brain can learn about the regularity and how to operate with it. Hoffman almost acknowledges this when he writes: “Whereas in perception the selection pressures are almost uniformly away from veridicality, perhaps in math and logic the pressures are not so univocal, and partial accuracy is allowed.” I would expand Hoffman’s statement to include the ability to distinguish relationships in general.

Perceptions are not exactly veridical, but our understanding of the world still is to a degree veridical because we learn about relationships between the objects of the world. Evolution selected for perceptions, which are not veridical, and a general-purpose brain that can detect relationships in those perceptions. I think perhaps part of the so-called hard problem of consciousness comes exactly from this disparity between nonveridical, somewhat arbitrary, perceptions and the more veridical understanding of relationships. When we look at the world, we do not really see it, but we do see relationships in it. That makes the world seem like a simulation and it makes us wonder where the simulation comes from.

Consciousness and Learning

Consciousness is required for most learning and memory.

Bernard Baars, originator of the global workspace theory of consciousness, notes that “there appears to be no robust evidence so far for long-term learning of unconscious input” and the “evidence for learning of conscious episodes is very strong.” He also writes: “Consciousness is also involved with skill acquisition. As predicted by the hypothesis, novel skills, which are typically more conscious, activate large regions of cortex, but after automaticity due to practice, the identical task tends to activate only restricted regions.”

Here we may have in a nutshell what I believe to be the primary evolutionary advantage of consciousness. For some reason (that I will get to shortly), only when we are conscious can we learn. We shouldn’t think of learning as something that occurs sporadically or only in structured situations. Fundamentally, we and other conscious entities are learning constantly. We never have the same experience twice. We are constantly encountering new situations and getting new information. Consciousness is learning and memory forming. Our conscious lives are an ongoing and continual process of discovery and learning. During sleep (how appropriately!) we apparently consolidate the learning of the day and the previous weeks, saving what is new and useful and discarding the redundant.

Consciousness and the process of learning are either very closely related or are, in fact, the same, as Stephen Grossberg proposes with his Adaptive Resonance Theory. The theory is particularly designed to explain what he called the stability–plasticity dilemma. We need continuity with past learning but also the ability to incorporate new facts with the old. This is the “problem whereby the brain learns quickly and stably without catastrophically forgetting its past knowledge.” He writes:

The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach an attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states and that these resonant states trigger learning of sensory and cognitive representations.

The resonance which Grossberg writes about is a synchronized firing of neurons engaged in a pattern matching process between sensory input and memories. It is what provides the context for understanding, deciding, and taking action. It is not only how the brain becomes conscious of attended events but also enables learning of new events. Every situation is novel. Like the river of Heraclitus, we never experience the same thing twice. We find similarities as well as differences. That is what learning does. That is what the process of consciousness is about.

Even qualia, the most fundamental of representations we have in consciousness, may be learned, according to Grossberg. He writes that his theory “explains how an adaptive resonance, when it occurs in different specialized anatomies that receive different types of environmental and brain inputs, can give rise to different conscious qualia as emergent, or interactive, properties of specialized networks of neurons.” The red seen by the male squirrel monkey in the experiment may originate in the eye and derive part of its quality from that, but it and its relationship to other colors still must be learned upon first seeing it. The brain may not be a complete blank slate at birth, but it may be a good deal more blank than we think until it learns about the world through the senses. Much of this basic learning takes place early in life with humans. Newborns need around two years to learn how to focus their eyes and form the relationships in the world. Development of much of our capabilities seems to have a critical period in the first five years of life. People blind from birth do not form visual imagery in their dreams, and people who go blind in the first years of life also frequently do not form visual imagery in their dreams. It also seems that no matter when a person goes blind, the amount of visual imagery in dreams begins to diminish with time. It is almost as if the ability to form visual imagery, and the forms themselves, are primarily learned and, hence, can be forgotten with time.

Mark Solms writes:

Moreover, much of what we have traditionally thought to be “hard-wired” in cortical processing is actually learnt.

Cortical perception, therefore, no less than cortical cognition, is rooted in memory processes. Indeed, as far as we know, all cortical functional specializations are acquired. The columns of cortex are initially almost identical in neural architecture, and the famous differences in Brodmann’s areas probably arise from use-dependent plasticity (following the innate patterns of subcortical connectivity). Cortical columns resemble the random-access memory (RAM) chips of digital computers.

The answer to our question, “What does cortex contribute to consciousness?”, then, is this: it contributes representational memory space. This enables cortex to stabilize the objects of perception, which in turn creates potential for detailed and synchronized processing of perceptual images. This contribution derives from the unrivalled capacity of cortex for representational forms of memory (in all of its varieties, both short- and long-term).

Based on this capacity, cortex transforms the fleeting, wavelike states of brainstem activation into “mental solids.” It generates objects.

Consciousness involves matching sensory input to memories of past sensory input to form representations of that input. It necessarily involves learning because the input of today will never exactly be like the input of yesterday. There is always uncertainty involved.

Uncertainty

While we often think of the brain as a prediction machine, the evolutionary advantage of consciousness may align more closely with its ability to deal with uncertainty. Things that can be well predicted can frequently be reacted to automatically. It is the unknown and novel that can present the most opportunities and risks to an organism. Consciousness as a learning process is ever poised to evaluate new circumstances and stimuli, bring the experience of the past to bear on understanding the present, and commit to memory what can be learned from the new, thus reducing uncertainty in the future.

The question is why consciousness is required for learning.

Grossberg describes the resonance at the heart of this consciousness/learning process this way: “a resonance is a dynamical state during which neuronal firings across a brain network are amplified and synchronized when they interact via reciprocal excitatory feedback signals during a matching process that occurs between bottom-up and top-down pathways. Often the activities of these synchronized cells oscillate in phase with one another. Resonating cell activities also focus attention upon a subset of cells, thereby clarifying how the brain can become conscious of attended events. It is called an adaptive resonance because the resonant state can trigger learning within the adaptive weights, or long-term memory (LTM) traces, that exist at the synapses of these pathways.”

One way to look at the question would be to envision what happens when known and unknown stimuli are presented to a brain. If sensory input is known, it can be matched with an existing pattern of cells that can then oscillate in phase to signal resonance. The oscillation will reinforce the learning that has already occurred and may already be implemented in a wired neural circuit. On the other hand, if the sensory input is unknown, it won’t match an existing pattern of cells, or may match only partially. The problem becomes generating a pattern or new circuit to represent the new input. The brain could try to generate new patterns, randomly or perhaps by some “smart” process, beginning with the closest match to an old pattern. Consciousness might play a role as arbiter of the closest match. This could be slow if there exist no wired circuits between the cells that need to resonate to make a good match. Consciousness could, however, also play a more robust role by directly controlling the firing of neurons and matching the resulting patterns in near real time. Is there any evidence it can do this?
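This known-versus-unknown matching is roughly what Grossberg's ART networks formalize with a "vigilance" test. The following is a deliberately simplified sketch in the spirit of ART-1 over binary inputs; the vigilance value, the search order, and the fast-learning update rule are all simplifications for illustration, not Grossberg's full model:

```python
def match_score(prototype, inp):
    """Fraction of the input's active bits shared with the stored prototype."""
    overlap = sum(p & i for p, i in zip(prototype, inp))
    active = sum(inp)
    return overlap / active if active else 1.0

def present(categories, inp, vigilance=0.7):
    """Match an input against stored patterns; resonate or learn a new one.

    Returns the index of the category that ends up resonating."""
    # Search existing categories, best match first (simplified search).
    ranked = sorted(range(len(categories)),
                    key=lambda j: match_score(categories[j], inp),
                    reverse=True)
    for j in ranked:
        if match_score(categories[j], inp) >= vigilance:
            # Known input: resonance reinforces and refines the prototype
            # (fast learning: keep only the shared active bits).
            categories[j] = [p & i for p, i in zip(categories[j], inp)]
            return j
    # No acceptable match: the input is novel, so commit a new pattern.
    categories.append(list(inp))
    return len(categories) - 1
```

A short run shows the two branches: a repeated input resonates with its existing category, while a sufficiently different input fails the vigilance test and triggers learning of a new one:

```python
cats = []
present(cats, [1, 1, 0, 0])   # novel -> creates category 0
present(cats, [1, 1, 0, 0])   # known -> resonates with category 0
present(cats, [0, 0, 1, 1])   # novel -> creates category 1
```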

Baars writes:

Conscious feedback training provides spectacular examples of the scope of access to almost any neuronal population and even single neurons. Single spinal motor units can come under voluntary control with auditory feedback. After brief training subjects have learned to play drumrolls on a single motor unit, with simultaneous silencing of surrounding units. There is no evidence that unconscious feedback can do this. Apparently conscious feedback enables control of a very wide range of activities in the nervous system, consistent with the idea that consciousness enables widespread access in the brain.

Consciousness apparently does sit in a loop that can directly control the firing of neurons. I would suggest, therefore, that consciousness itself, whatever it is, must have the ability to affect and engage in feedback with cells and circuits which are themselves unconscious. While this ability could originate as an emergent property of brain circuits, I have elsewhere argued that electromagnetic field theories would provide the best explanation.

A goal of learning, and hence consciousness, is to reduce uncertainty by creating better models of the world. Solms writes: “The more veridical the brain’s predictive model of the world, then the less surprise, the less salience, the less consciousness, the more automaticity, the better.” Since consciousness is a serial process, the more perception, decision, and action can be automated, the more consciousness can be freed to attend to other things and the more the organism can do without consciousness. Since it requires a great deal of energy, the more that can be done by wired and unconscious circuits the better. To quote Baars again: “novel skills, which are typically more conscious, activate large regions of cortex, but after automaticity due to practice, the identical task tends to activate only restricted regions.”

I’ll end with another quote from Solms:

This in turn suggests that the ideal of cognition is to forego representational (and therefore cortical) processing and replace it with associative processing—to shift from episodic to procedural modes of functioning (and therefore, presumably, from cortex to dorsal basal ganglia). It appears that consciousness in cognition is a temporary measure: a compromise. But with reality being what it is—always uncertain and unpredictable, always full of surprises— there is little risk that we shall in our lifetimes actually reach the zombie-like state of Nirvana that we now learn, to our surprise, is what the ego aspires to.



26 Responses to Evolution, Learning, and Uncertainty

  1. An interesting tour of ideas here James. I think you’re right that learning plays an important role in the adaptive value of consciousness. Just because some animals have perceptual imagery doesn’t mean that the imagery isn’t utilized by a reflexive system. That said, some researchers consider any sensory imagery to be consciousness, image based consciousness. I’m not sure there’s a fact of the matter on whether they’re right or wrong.

    I think Baars gets a bit carried away with learning and consciousness. He appears to think that even classical conditioning requires consciousness. But from what I’ve read, there’s plenty of evidence against that. Apparently a beheaded frog or a decerebrated rat can still undergo classical conditioning.

    A more difficult question is whether any form of operant or instrumental learning can take place without affect (emotional) consciousness. It seems like maybe simple local forms of it can, but as it scales in complexity, and requires holding information for longer periods of time in working memory, affect consciousness becomes required.

    But I don’t know that there are any bright lines here, just points where we can plausibly apply labels.


    • James Cross says:

      I certainly would not argue that classical conditioning requires consciousness which is why I said consciousness is required for most learning. I might need to qualify that statement a bit more.


    • James Cross says:

      Looking around a little I ran into this which is interesting.

      “We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience). We end with a discussion of the implications of our proposal for the distribution of consciousness in the animal kingdom, suggesting testable predictions, and revisiting the ongoing debate about the function of minimal consciousness in light of our approach.”

      https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01954/full

      There is quite a bit you can hit just googling learning and consciousness.


      • Sorry, didn’t mean to imply you were saying classical conditioning required consciousness. Your mention of Baars just reminded me of his statements along those lines.

        Interesting on UAL. I’ve been meaning to look it up ever since Feinberg and Mallatt mentioned it in their 2018 book. Although I couldn’t find a succinct description in the paper. Here’s F&M’s summarized version:

        Recently, a more sophisticated form of learning called unlimited associative learning (UAL) was proposed as a marker of whether an animal has affective consciousness (and consciousness in general). Bronfman, Ginsburg, and Jablonka argue that whereas simple operant conditioning can involve learning just a single characteristic of a new object, UAL involves more than this.4 That is, UAL means learning the reward value of the whole object: the appearance, weight, and smell of the food pellet all together, so that the animal will never be fooled by a pebble that looks like the food. UAL also means the ability to learn all the steps in a complex response: a rat finds the food-delivering lever, reaches it, learns how hard to press it and when to stop, and so on. Indeed, this represents sophisticated affective learning of brand-new behaviors.

        Feinberg, Todd E.. Consciousness Demystified (The MIT Press) . The MIT Press. Kindle Edition.

They express concern that this might be too sophisticated for the most minimal levels of consciousness; however, they come to very similar conclusions as Bronfman, Ginsburg, and Jablonka about which animals are conscious: vertebrates, arthropods, and cephalopods.


        • James Cross says:

          No need for apology. I wanted to qualify that statement somewhat but simply settled on “most” learning which isn’t very specific. I can agree on UAL for the moment at least and possibly the list of animals too.


        • I agree with the list of animals too. Although I can understand why someone might say that arthropods and basal vertebrates (fish, amphibians) aren’t conscious. I think they are, but their consciousness is very limited compared to mammals, birds, or cephalopods, with only nuanced contribution to their primarily stimulus bound behavior.


        • James Cross says:

Arthropods certainly seem at the very edge. Is Feinberg including insects and spiders? I would suppose it would logically follow if they do UAL.

          To quote from article I linked to:

          Hence it seems that all vertebrates, many arthropods, and some mollusks have all the neural structures necessary to support UAL. For some animal phyla we lack sufficient information about the relevant structures or have information for only some of these structures (e.g., Wolff and Strausfeld, 2016). In other phyla, such as flat worms and nematodes, there is no evidence for the presence of such structures. This is consistent with the behavioral data showing no evidence for UAL in these groups.

        • “Are they including insects and spiders?”

          F&M are. They explicitly list the fruit fly, bee, and jumping spider as showing signs of operantly learned responses to punishment and reward, and the cricket and jumping spider for value trade-off behavior. That said, I haven’t looked at the cited studies. I did look a while back at some they cited for other species, such as crayfish, and the results often seem open to alternate interpretations to a much larger degree than the ones for vertebrates.

  2. James,
    I think you’re right that there must be a good survival-based reason for consciousness, especially given that it requires sleep and a good deal of energy. If evolution could have gotten by without it in more advanced forms of life, then it surely would have. (Talk of non-conscious machines functioning so well that they seem conscious should be just that, “talk.” I don’t consider it possible. But given that the consciousness topic remains so primitive, such speculation is probably inevitable.)

    Your answer seems to be that consciousness is required for learning, and that learning is needed since a given situation may not match up well enough with fixed parameters. From here you ask why consciousness is required for learning. As somewhat of an explanation you said:

    Since consciousness is a serial process, the more perception, decision, and action can be automated, the more consciousness can be freed to attend to other things and the more the organism can do without consciousness.

    Definitely. Consciousness is a valuable and scarce serial dynamic which needs to make novel behavior become automatic behavior. I think that I can add something to this though.

    I believe that conscious stuff harbors purpose, by which I mean that it can feel good/bad, while non-conscious stuff does not. Without purpose there should only be rules to follow, though with purpose an agent has incentive to figure something out given the way that it feels. Here existence “matters”.

    From my perspective the process here essentially works like this: In certain situations the brain creates an experiencer (by means of EM fields or whatever), and this entity interprets provided inputs and constructs scenarios about how to make itself feel better. The brain then detects associated EM fields (or whatever) and so attempts to comply where applicable. If successful such operations tend to become habitual for the brain, thus freeing the conscious entity for more such novel function, such as responding to this post!
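    That handoff from novel to habitual can be caricatured in code. This is only a toy sketch of the "successful operations become habitual" idea, not a model of any real brain; the stimuli, responses, and counter are all invented for illustration.

```python
# Toy sketch: a slow, serial "deliberation" step whose successful
# outputs get cached as fast habits, freeing the serial resource
# for novel situations. Purely illustrative.

habits = {}          # fast, automatic lookup (the "non-conscious" side)
deliberations = 0    # how often the slow serial process was engaged

def deliberate(stimulus):
    """Stand-in for the scarce, serial, 'conscious' process."""
    global deliberations
    deliberations += 1
    return "response-to-" + stimulus

def respond(stimulus):
    if stimulus in habits:           # automatic: no serial work needed
        return habits[stimulus]
    action = deliberate(stimulus)    # novel: engage the serial process
    habits[stimulus] = action        # on success, hand it off to habit
    return action

for s in ["lever", "lever", "odor", "lever", "odor"]:
    respond(s)

# Only the two *novel* stimuli ever engaged the slow serial process.
assert deliberations == 2
```

The design point is just that the serial bottleneck is paid once per novel situation, after which responses run from the cache, which matches the "consciousness can be freed to attend to other things" framing above.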

    • James Cross says:

      Solms in the linked paper writes:

      Above all, the phenomenal states of the body-as-subject are experienced affectively. Affects do not emanate from the external sense modalities. They are states of the subject. These states are thought to represent the biological value of changing internal conditions (e.g., hunger, sexual arousal). When internal conditions favor survival and reproductive success, they feel “good”; when not, they feel “bad.” This is evidently what conscious states are for. Conscious feelings tell the subject how well it is doing. At this level of the brain, therefore, consciousness is closely tied to homeostasis.

      So I think that is right and you are too. Originally I was thinking of calling this post “Learning, Self, and Uncertainty” because the self (whether illusory or not) is tied into the evolutionary picture. Perhaps in a later post I will try to tie it in more explicitly.

    • James Cross says:

      Another note on the linked UAL paper. They note self and embodiment as properties associated with UAL and minimal consciousness. They write:

      UAL instantiates the philosophical notion of self. “Minimal self,” as most clearly described by Metzinger (2009) and Merker (2005, 2007, 2013), requires a model of the integrated yet rapidly changing world in which a model of the coordinated and flexibly changing and moving body is nested. Such a nested model provides a stable updateable perspective that enables the flexible evaluation of the various changes in the world–body relations. This notion of self can only be achieved via hierarchical, multilayered predictive-coding, allowing the animal to distinguish between the effects of compound self-generated and world-generated stimuli. Furthermore, comparing the actual sensory consequences of the animal’s complex motor commands to those of the sensory feedback that is predicted by its self-model allows the animal to “predict” the effects of its own actions
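      The forward-model comparison in that passage can be sketched very minimally. To be clear, this is a toy illustration of the general predicted-versus-actual idea, not the authors' model: the linear "sensory gain" and the numbers are invented for the sketch.

```python
# Minimal sketch of a forward model: predict the sensory consequence
# of a motor command, compare with the actual sensation, and attribute
# only the residual to the world. The linear gain is invented.

def forward_model(motor_command):
    """The self-model's prediction of what this action will feel like."""
    return 2.0 * motor_command   # assumed sensory gain of self-motion

def attribute(motor_command, actual_sensation):
    predicted = forward_model(motor_command)
    residual = actual_sensation - predicted
    return {"self_generated": predicted, "world_generated": residual}

# The animal moves (command 1.5) while the world also jostles it (+0.7).
result = attribute(1.5, actual_sensation=2.0 * 1.5 + 0.7)

assert result["self_generated"] == 3.0          # explained by own action
assert abs(result["world_generated"] - 0.7) < 1e-9  # credited to the world
```

However crude, this captures the quoted point: the animal can distinguish compound self-generated from world-generated stimuli only because it carries a model of its own actions to subtract out.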

      • Thanks James. It sounds like you’ve found some things from Solms which work pretty well with my own ideas. But then I’m not much for neuroscientific accounts of brain function, as in that 82 page paper. It’s not the sort of thing that I’ve found interesting enough to explore. I’m more into psychology, or the sort of thing which neuroscience will ultimately need to explain. Hopefully people like you and Mike can assess my ideas in terms of modern neuroscience though.

        Anyway, I believe that evolution is able to deal with many things by means of brain-based algorithms, which is to say by means of non-conscious function. (The entire brain, as I see it, is non-conscious.) But in less closed environments (ones less like the game of chess), this sort of function should have had difficulties. Straight algorithms shouldn’t have been effective enough under more open circumstances, regardless of how involved those algorithms happened to be.

        So I propose that brains began producing something else, perhaps somewhat like lightbulbs produce light. In essence they began to produce an entity with phenomenal experience, or essentially affect. Furthermore, evolution would arm this entity with memory and informational senses. It should tend to interpret these inputs and construct scenarios about what to do to make itself feel better, and so harbor teleological existence. The brain should not only produce the experiencer, but also detect how the experiencer would like to move various muscles. With this detection the brain should generally comply, and so gain its own teleological function from which to better deal with more open circumstances.

        Of course your post is about how conscious function instructs non-conscious function. I’ve simply gotten into what I consider to be effective brain architecture regarding this sort of thing, and indeed, why consciousness evolved.

        • James Cross says:

          Just to make sure you understand: the Solms paper and the UAL paper are different papers.

          Solms, by the way, is a psychoanalyst in the Freudian tradition as well as a neuroscientist. He is trying to blend both disciplines. Some of his views are unique and at variance with most others.

          For example, he thinks consciousness comes from the brainstem and the cortex is more like the RAM of a computer. It is the workplace for the representations of consciousness but is actually unconscious itself. So representations form in the cortex, but the “light” that shines on them comes from lower parts of the brain.

          I think the relationship of the thalamus and the RAS with the cortex is somewhat complicated. It is clear that damage to small parts of the RAS leads to permanent coma, so there is something in it critical to the whole process. Aside from that, all of the senses except smell route through the thalamus, so it must be critical too in some way. Going back a few posts, I talked about a paper that suggested conscious content had to pass through the L5 pyramidal neurons, which have connections with both the thalamus and the cortex.

          If I had to take a position at this point, I would say the RAS must play some role in powering up the cortex through neurotransmitter production and in controlling the flow of sensory input through the thalamus to the cortex. The flow of sensory input and neurotransmitter production is reduced to a trickle during sleep but opened up fully during waking consciousness. The “entity with phenomenal experience” you mention I would see as the EM field, which probably operates continuously during wakefulness and sleep but varies in intensity, so that during dreamless sleep it is too weak to create phenomenal experience. In this case, the circuits in the cortex would themselves be unconscious even during wakefulness, but conscious representations might be formed in the L5 pyramidal neurons (or the EM field generated by them) in the cortex, as in Solms’s model.

          But this is all heavy speculation.

        • James Cross says:

          Having just written what I did about the RAS, several additional thoughts occur to me. More speculations to be clear.

          The RAS might maintain a trickle of neurotransmitters and sensory input to the cortex during sleep for threat detection. In the absence of a threat, the gradual accumulation of a small amount of neurotransmitters might trigger the periodic eruption of dreams to “burn off” the accumulation, somewhat like geysers periodically bursting forth, so the brain can continue with its restorative functions. In cases of damage to the RAS, permanent vegetative coma results when essentially no significant amount of transmitters or sensory information passes to the cortex. With partial damage, the trickle may remain, resulting in states of coma where there appears to be some response to the environment beyond reflex.

          I’ll repeat: more speculations, to be clear.

      • James,
        I was trying to be diplomatic regarding Solms given his Freudianism. It could be that he has some reasonable ideas even still, though in today’s world I’ve got to wonder why he hasn’t entirely abandoned such an outdated perspective.

        You may have noticed that I never use the “unconscious” term, which of course Freud popularized. It seems to me that at least 20% of associated scientists have taken this road as well. I’ll briefly mention why I personally go this way. I’m generally fine when others don’t, that is as long as they don’t conflate associated issues.

        It seems to me that the “unconscious” term is used in three distinct ways. The first concerns the obvious “not conscious”. But the unconscious term does still carry the flavor of consciousness. We don’t call rocks “unconscious”, for example. So to truly end this fuzziness I think it can be helpful to instead go with “not conscious” or “non-conscious” exclusively in this regard.

        The second form is used to reference degraded states of consciousness, such as sleep, as well as anything else up to a full loss of consciousness. I think it’s helpful to refer to these as “altered states of consciousness”. Of course this is a standard term in science, and indeed references the effects of mind altering drugs as well.

        The last of them is not one that I have a standard term to fall back on. Sometimes we want to refer to a melding of conscious function with non-conscious proclivities. An example would be when we reference our biases. Blind and double blind tests are needed because people who administer them tend to color the results with their own preconceptions. Instead of calling this sort of thing “unconscious”, I’ve taken to calling it “quasi-conscious”. Food for thought anyway.

        Though I am down on Freud, I’m not down on psychoanalysis in general. The reason that it has failed (as displayed by the field of psychiatry generally abandoning it in the last couple of decades for potential medication-based remedies) is that no experimentally successful broad general theory regarding our nature has yet been developed. Thus I propose my own, though mine is hampered by the social tool of morality (and of course by the fact that I’m a nobody). But once we do have an experimentally successful broad general theory regarding our nature, I believe that specialists will be able to assess people better, and so begin to provide effective psychoanalytic advice and therapy.

        Will neuroscience help? Well, that hasn’t been my approach, though it’s all connected. And this EM field theory has really been growing on me. The vast majority of phenomenal experience theories seem information-based, or depend upon substance dualism, given that effects require physics-based causes in the natural world. Could the brain be producing phenomenal experience by means of chemical interactions? Maybe, but chemical dynamics occur locally. If that’s the case, then how might different parts of the brain have as much real-time influence as they seem to? Phenomenal experience by means of created EM fields would seem to be the exclusive way to address this.

        I had to look up the RAS part of the brainstem. I’m definitely on board with the conscious mode of function possibly being extremely basic. One potential reason for the Cambrian explosion is that non-conscious brains evolved. But it’s also possible that they already existed and were hampered under the more open circumstances which they couldn’t effectively be programmed for. Thus it could be that virtually all modern life with central organism processors either currently harbors a phenomenal component, or at least evolved from organisms with such a component.

        • James Cross says:

          Freud is seldom read but frequently criticized, especially by people who haven’t read him.

          He does need to be put into the context of late nineteenth and early twentieth century science and philosophy.

        • James Cross says:

          “This EM field theory has really been growing on me.”

          Me too. It caught me somewhat by surprise when I first started looking at it. In particular, it seems to resolve the seeming insubstantiality and lack of location of consciousness while at the same time not completely abandoning science.

  3. Lee Roetcisoender says:

    “…what evolutionary advantage does consciousness provide?”

    That is a good question, predicated upon the isolated context of our own phenomenal experience of mind. Breaking out of that cordoned-off context and considering the bigger picture, one might also ask: what evolutionary advantage does organic material provide in contrast to inorganic material? Clearly, organic material is more robust than inorganic material and has the advantage when it comes to self-replication. Nevertheless, inorganic material has the advantage when it comes to survival and longevity. So I’m not sure the ability to learn is an advantage when it comes to the grander schema called survival.

    Peace

    • James Cross says:

      Since I am talking about evolutionary advantage, I’m limiting my discussion here to organic matter. The question of organic vs. inorganic is about trying to find a reason why life came about beyond “because it could.” If you look at the About page, there are some speculations about that question that started this blog. Winding the Universal Clock suggests life and consciousness came about to kick-start the next cycle of the universe. I haven’t done speculations on those sorts of topics for a while, but I may get back to them at some point.

      https://www.rand.org/pubs/papers/P8006.html

      • Lee Roetcisoender says:

        Just finished “Winding the Universal Clock”. Builder and Menke’s take on the next ontological level of evolution is Star Trek material. Actually, the characterization of the Borg works better than straight machinery. It’s good to see creativity at work though.

        Ditto on your remarks being “heavy speculations.” Trying to build a model of consciousness using physics is an impossible task because knowledge gained through experimentation is a posteriori knowledge. A priori knowledge is a better tool for such an endeavor. According to Kant, any proposition is a priori if it is both necessary and universal. The elusiveness of consciousness fits that bill with precision.

        According to this thesis, a proposition is necessary if it could not possibly be false, and consequently cannot be denied without contradiction. That’s just one of the prerequisites. A proposition is universal if it is found to be true in all cases, a condition that does not allow for any exceptions. In contrast, knowledge gained a posteriori through the senses can never impart absolute necessity and universality, because it is always possible that we might encounter an exception. In light of the unabridged potential of a priori knowledge, I am perplexed that scientists are so religiously devoted to a posteriori knowledge, which as a tool is incapable of delivering the “goods.”

        Peace

        • James Cross says:

          You’re right, it is Star Trekesque. I think I remarked on the page that it seemed really weird for Rand to do. It also reminds me of the Asimov story “The Last Question,” where a supercomputer ponders at the end of the universe and answers the question: “Let there be light.”

  4. paultorek says:

    You hit the nail on the head with this: “The problem is that evolution must start from where it is to get to where it is going and none of the steps and iterations along the way can be dysfunctional.” So that means that consciousness only has to be good enough for evolution’s “purpose”, compared to the accessible alternatives.

    This point should be combined with the fact that there’s usually “more than one way to skin a cat” as the saying goes. (Apparently, back in the day, there were some taxidermists making up sayings.) Evolution only needs a way to solve its problems, such as the problem of achieving flexible learning; it doesn’t need that to be the only way. You would think that such an elementary logical distinction would be obvious, but the number of otherwise smart people who blunder there, when the subject is consciousness, is staggering.

    Hoffman’s discussion of the PDA loop is sorely lacking any account of semantics or reference. How can you say whether an organism’s beliefs are true or false if you have no way of telling what those beliefs refer to? You can’t declare mismatch between a belief and the truth unless you have a way of telling what the belief actually says. Epic fail.

    • James Cross says:

      I was somewhat enthralled with Hoffman’s approach for a while but now see it as useful only to a limited degree. I think Hoffman might say that in his evolutionary games simulation it is irrelevant what the truth actually is, but we can be sure that an organism’s perception will not likely match it. That goes back to your own point that perceptions only need be good enough for evolution’s “purpose.” Take vision as an example. It would probably be much more “truthful” if we could see the full range of the electromagnetic spectrum. But evolving a capability like that from some primitive light-detector cells would likely be of no evolutionary value, so there is no reason it would be selected for. Humans apparently get all the value we can use from our visible spectrum.

      • paultorek says:

        Well, you get a greater quantity of truth if you can distinguish 1024 bands in the color spectrum like a spectrometer does. But not a greater percentage of truth, necessarily. “Accuracy” refers to percentages.
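        The quantity/percentage distinction can be made concrete with a little arithmetic (the band counts are arbitrary illustrations):

```python
# More bands means more bits of information per reading, but accuracy
# (the fraction of readings placed in the correct band) can be the
# same for both instruments. Band counts here are arbitrary.
import math

bits_spectrometer = math.log2(1024)  # 10 bits per sample
bits_coarse = math.log2(4)           # 2 bits per sample

assert bits_spectrometer == 10.0
assert bits_coarse == 2.0
# Both could still be, say, 99% accurate at their own resolution:
# the same percentage of truth, but very different quantities of it.
```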

        • James Cross says:

          I think it would depend upon what you needed to make a discrimination about. Certainly the 1024 bands would allow you to see things we can’t currently see, and a number of insects and animals see in the ultraviolet spectrum for navigation and predator/prey detection.

  5. Pingback: The Evolution of the Sensitive Soul | Broad Speculations
