A question that has puzzled me is that of the evolutionary reasons for consciousness. I’m speaking loosely here. Evolution doesn’t really have reasons. Changes happen in organisms because the changes in the long run bestow some advantage (or at least no disadvantage) on the survival of the organism. Let’s rephrase the question: what evolutionary advantage does consciousness provide?
This might seem like a question with obvious answers. Obviously, there is a lot I can do if I am not anesthetized and unconscious. I can search for food, hide from predators, seek a mate, or build a fire to keep warm. How many of those things, however, really require consciousness? The first three things in the list can be done by insects. Do we think insects are conscious? Some people do. Do any of the first three really require consciousness? Or, can they be done through pure instinct, a set of possibly complex but hard-wired behaviors that execute when the right triggers in the environment are present?
The question lingers in the background of Searle’s Chinese Room and the Turing Test. Can we create a device that is not conscious but whose outward actions are indistinguishable in their capabilities from a human’s? Are we forced to acknowledge that it would have to be conscious if we can’t tell the difference between the device and a human? Personally, it doesn’t feel right to regard what I know to be a device, built say of silicon and copper, as conscious, even though I can’t distinguish its behavior toward me from the behavior of a human. I might talk with it and even call it by name, but something still doesn’t feel right about declaring it conscious. If we go too far down this path, we will be forced to conclude that consciousness is an epiphenomenon, like waste heat from a computer, and unimportant.
That leads us back to the question: what evolutionary advantage does consciousness provide if we can’t distinguish actions of a conscious organism from those of an unconscious device (or organism)?
Certainly, it must be something significant since consciousness itself comes with a huge, but seldom appreciated, cost. Consciousness not only consumes an enormous part of the human energy budget, but it also generates significant waste and poisonous by-products. Organisms we recognize as the most conscious require regular sleep to repair the damage. While many of us might appreciate the forced downtime of sleep, we not only pay the opportunity cost of productive things we could have been doing while sleeping, we also put ourselves in an unconscious state which leaves us vulnerable to predators and other dangers.
The answer may be somewhat complicated. Evolution doesn’t usually have free rein to tear up one design and start fresh with a new one. Consciousness may not be the best design for whatever evolutionary advantages it provides. A complex biological, but unconscious, computer with massive amounts of memory and processing power might be the superior design choice. The problem is that evolution must start from where it is to get to where it is going and none of the steps and iterations along the way can be dysfunctional.
Evolution and Truth
As I have pointed out before, the first organisms with a semblance of a nervous system were probably primitive worms, the first bilaterians. While there were other multi-cellular, predecessor organisms, like jellyfish, with neural-like reactions, the first worms had the body plan from which more complex organisms have developed. Essentially this plan consists of a head with a mouth, a concentration of neurons near the mouth (the makings of a brain), and a strand of neurons running beside the digestive tract (the start of a spinal cord). I will assume these organisms were not and are not conscious or, at least, certainly not conscious in any way comparable to the more complex organisms that have developed from this body plan. They lack more advanced sensory capabilities as well as much of a backend brain to process and interpret the information from the senses. What evolution had to work with before conscious brains was reflexes and simple stimulus-response neural circuits. Simple circuits are great as far as they go and an upgrade for organisms that couldn’t respond in any way to the environment. Even a simple response of movement toward something good or away from something bad would confer an advantage.
More complex and seemingly more conscious organisms have evolved and along with this have developed larger and more complex nervous systems. With this complexity has come a larger and more nuanced repertoire of behavior. The next step in complexity of nervous systems would include more sophisticated, but still largely hard-wired, behaviors. Some examples might be spiders building webs or bees forming a hive. Advantages accrue to the spider who can capture more and larger prey with a well-built web. Bees living in hives can divide the work and benefit from the labors of others in a social organization. These behaviors evolved over thousands, possibly millions, of years to reach the form we find them in today. The behaviors are in their own way amazing, but they are still rigidly structured. Every worker goes through the same life cycle and exhibits the same behaviors. When X happens, Y happens. There is little discrimination or nuance in the behavior.
Even if this behavior is instinctual, I can’t personally rule out that there may be some degree of consciousness involved in it. The spider building a web may be performing essentially canned routines, but it would still need some capability of adjusting the routines to local conditions. Spiders construct webs with seven anchor points. The first step involves casting as many as twenty silk strands to the air to be carried by the wind to see which ones attach to other leaves or branches. From there the spider must identify seven lines with good attachment points, distributed around a circle in a plane, and then cut away the unused lines. It is possible this is entirely unconscious behavior. However, integration of multiple senses – touch and vision in this case – with adaptive action involving choice may be the most primitive level of consciousness. I have often thought the underpinning of consciousness, perhaps its first glimmers, lay in the spatiotemporal mapping of the body to the external world. The extension of this mapping beyond the body to the nearby world of leaves and branches with some low-level understanding of geometry might be a small step toward consciousness.
As a model for how perceptions and more complex nervous systems developed, I’d like to look at Donald Hoffman’s Interface Theory of Perception (ITP). Hoffman’s thesis is that “veridical perceptions—strategies tuned to the true structure of the world—are routinely dominated by nonveridical strategies tuned to fitness.” In this view, perceptions are elaborate reality hacks that evolve as part of what he calls a Perception-Decision-Action (PDA) loop.
Hoffman explains it this way: “The channel P transmits messages from the world W, leading to conscious experiences X. The channel D transmits messages from X, leading to actions G. The channel A transmits messages from G that are received as new states of W.”
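Hoffman’s loop can be sketched as ordinary code. The following toy Python is my own illustration, not Hoffman’s: the state names, the particular mappings, and the “attractive/unattractive” coding are all invented. Only the channel structure – P from world to experience, D from experience to action, A from action back to world – comes from his description. The point of the sketch is that the perception channel reports a fitness-relevant category rather than the true state of the world.

```python
import random

# Toy sketch of Hoffman's Perception-Decision-Action (PDA) loop.
# Only the channel structure P: W -> X, D: X -> G, A: G -> W is Hoffman's;
# the states and mappings below are invented for illustration.

WORLD_STATES = ["nutritious", "toxic", "neutral"]

def perceive(w):
    """Channel P: world state -> conscious experience.
    Nonveridical: it reports a fitness-relevant category, not the
    true structure of the world state."""
    return "attractive" if w == "nutritious" else "unattractive"

def decide(x):
    """Channel D: experience -> action."""
    return "approach" if x == "attractive" else "avoid"

def act(g, w):
    """Channel A: the action is received as a new state of the world."""
    if g == "approach" and w == "nutritious":
        return random.choice(WORLD_STATES)  # resource consumed; world moves on
    return w

# Run the loop for a few cycles.
w = "nutritious"
for _ in range(3):
    x = perceive(w)
    g = decide(x)
    w = act(g, w)
```

Nothing in the loop requires the experience X to resemble W; it only has to steer G toward fitness, which is exactly the jewel-beetle failure mode discussed next.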
Perceptions and actions evolve because they effectively change something in the world to the benefit of the organism. Perceptions do not need to be truthful representations of the world. As a matter of fact, fidelity to the actual world would be disadvantageous because it could be too slow and too costly to achieve.
One of Hoffman’s favorite examples to illustrate this is the Australian jewel beetle. During its evolution, males that chose to mate with larger brown, shiny females began to be favored by selection. The perception of large, brown, and shiny became tied to the decision and action of mating. Females were large, brown, and shiny. The bigger they were the more fit they were. Males with a better perception of big, brown, and shiny would be selected for because they would be making the better choice for a mate. This worked all to the good if female jewel beetles were the only brown, shiny things to be found on the ground in the outback. Unfortunately, when humans began throwing their beer bottles into the outback, the male beetles began mating with the bottles. The nonveridical perception of big, brown, shiny for a female mate evolved and worked as a nice hack for the jewel beetle until the environment changed and the beetle almost went extinct.
The theory is attractive and seemingly explanatory for the mating of the jewel beetle and possibly for the evolved perceptions of many other simple organisms. Does it work for more complex organisms? I can certainly think of instances where it would be at least partially correct. Take as an example color perception. In a great many species, the ability to distinguish variations in wavelengths of light is tied to identification of food sources. The development of trichromatic vision in primates was probably selected by the improved ability to distinguish ripe fruit in a green forested environment. In a real sense there is no “red” or “yellow” in the world. We developed an ability to perceive “red” and “yellow” because it helped us find food.
Human beings, however, are not exactly like jewel beetles. Human beings do not simply see red and decide to eat as the male jewel beetle sees big, brown, and shiny and decides to mate. The problem with Hoffman’s PDA loop as a more general evolutionary theory of perception and consciousness is that it leaves out a major aspect of more complex nervous systems: learning and memory. Human beings use red and other perceptions to match to previous experiences to decide if the red is ripe fruit. Between the perception, the decision, and the action sits a consciousness that places the perception into a context based on prior experience. Perceptions are not the entire evolutionary ballgame. Our consciousness consists of matching current sensory data with memories of past sensory data. This is a learning process that guides the decision-action part of Hoffman’s loop. The decisions and actions arise only partially from selection for perception on evolutionary time scales and arise mostly on real time scales. Evolution selected for learning and memory, in addition to perceptions, to permit faster developing and more complex adaptive behavior.
As we move to more complex organisms, the decision part of the loop and capabilities that enhance the decision part of the loop become more critical to fitness than the perception itself. Any perception can be overridden through learning and experience. If our predecessor species ever made an automatic decision to run on the perception of a snake, we now as human beings can decide to capture the snake for food based upon our learning and experience. Our natural perceptions (as opposed to our technologically enhanced perceptions) certainly are limited by our perceptual equipment. We can see red because we have the right type of cone cell in our eyes, and we cannot see ultraviolet because we lack the perceptual equipment to see it. However, our capacity for learning and experimentation, and our ability to form relationships based on prior experience, allow us to better approximate the world than Hoffman’s ITP suggests. We can learn when our perceptions are wrong, and we can even overrule them.
The actual representations of the world in consciousness may originate in, and depend in part upon, the perceptual equipment as much as the brain itself. Visual images look in part the way they do because they come from eyes. Sounds sound like they do because they come from ears. In an experiment, researchers were able to modify some of the green cones in the eyes of male squirrel monkeys so they would be sensitive to red. The monkeys, which couldn’t previously distinguish red dots in an image, could distinguish them after the modification to the eyes. The processes that enabled color vision in the monkey brain didn’t need modification to be able to learn a new color. They just needed new inputs. The red must originate at least in part in the input from the eyes. Mriganka Sur rewired newborn ferret brains so that the visual input went to where auditory input normally is processed. The result was that the part of the brain thought to be only able to process auditory input developed fully functional visual processing capability.
Hoffman is right that our perceptions are not veridical in an absolute way. They may even be somewhat arbitrary. They are dependent on the limitations of the sensory equipment. Evolving an eye that can see a third color is costly. Evolution, however, has evolved a backend brain which seems to have a more general-purpose information processing capability. Provided with a new color it learns the new color. The same structure provided with auditory input hears and provided with visual input sees. How does this work?
Edelman in a partial critique of Hoffman argues that “there are interesting ways in which perception can be truthful, with regard not to ‘objects’ but to relations, and that evolutionary pressure is expected to favor rather than rule out such veridicality.” Edelman cites three examples. Categorical consistency allows determination of the identity of stimuli which can vary, such as identifying the same person with different dress or haircut. Second order isomorphism involves ranking similarities, such as seeing different shades of green and yellow in ripe and unripe bananas. Causality involves associating events in a time order, such as understanding thunder to be caused by lightning.
The brain, of course, is all about seeing relationships, identifying differences and similarities. It does this with learning and memory. What our senses present to us may not be veridical, but the relationships in what is presented must be veridical or we could not interact consistently with the world. When people are fitted with prism glasses that turn everything upside down, they learn in a few weeks how to interact with the world using completely upside-down input. This is possible because the relationships between the objects remain the same. Regularity and consistency in the world still exist, and the brain can learn about the regularity and how to operate with it. Hoffman almost acknowledges this when he writes: “Whereas in perception the selection pressures are almost uniformly away from veridicality, perhaps in math and logic the pressures are not so univocal, and partial accuracy is allowed.” I would expand Hoffman’s statement to include the ability to distinguish relationships in general.
Perceptions are not exactly veridical, but our understanding of the world still is to a degree veridical because we learn about relationships between the objects of the world. Evolution selected for perceptions, which are not veridical, and a general-purpose brain that can detect relationships in those perceptions. I think perhaps part of the so-called hard problem of consciousness comes exactly from this disparity between nonveridical, somewhat arbitrary, perceptions and the more veridical understanding of relationships. When we look at the world, we do not really see it, but we do see relationships in it. That makes the world seem like a simulation and it makes us wonder where the simulation comes from.
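The distinction between nonveridical perceptions and veridical relations can be made concrete with a toy example of my own (it comes from neither Hoffman nor Edelman): if the perceptual mapping is arbitrary but order-preserving, then rankings among stimuli – Edelman’s second-order isomorphism, like the shades of ripening bananas – survive intact even though every absolute value is distorted.

```python
import math

# Toy illustration: a nonveridical perception that is still relationally
# veridical. The "true" ripeness values and the distortion function are
# invented for the example; only the order-preservation matters.

true_ripeness = {"banana_a": 0.2, "banana_b": 0.5, "banana_c": 0.9}

def perceived_hue(r):
    """An arbitrary, compressive distortion standing in for perception.
    The scale and units are meaningless; the mapping is merely monotone."""
    return math.sqrt(r) * 17.0

ranked_truth = sorted(true_ripeness, key=lambda k: true_ripeness[k])
ranked_percept = sorted(true_ripeness, key=lambda k: perceived_hue(true_ripeness[k]))

# The two rankings agree: relations survive even though magnitudes do not.
assert ranked_truth == ranked_percept
```

A brain that learns only such relations (riper-than, closer-than, before-and-after) can interact consistently with the world without its raw percepts resembling the world at all, which is the disparity the paragraph above locates behind the hard problem.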
Consciousness and Learning
Consciousness is required for most learning and memory.
Bernard Baars, originator of the global workspace theory of consciousness, notes that “there appears to be no robust evidence so far for long-term learning of unconscious input” and the “evidence for learning of conscious episodes is very strong.” He also writes: “Consciousness is also involved with skill acquisition. As predicted by the hypothesis, novel skills, which are typically more conscious, activate large regions of cortex, but after automaticity due to practice, the identical task tends to activate only restricted regions.”
Here we may have in a nutshell what I believe to be the primary evolutionary advantage of consciousness. For some reason (that I will get to shortly), only when we are conscious can we learn. We shouldn’t think of learning as something that occurs sporadically or only in structured situations. Fundamentally we and other conscious entities are learning constantly. We never have the same experience twice. We are constantly encountering new situations and getting new information. Consciousness is learning and memory forming. Our conscious lives are an ongoing and continual process of discovery and learning. During sleep (how appropriately!) we apparently consolidate the learning of the day and the previous weeks saving what is new and useful and discarding the redundant.
Consciousness and the process of learning are either very closely related or are, in fact, the same, as Stephen Grossberg proposes with his Adaptive Resonance Theory. The theory is particularly designed to explain what he called the stability–plasticity dilemma. We need continuity with past learning but also the ability to incorporate new facts with the old. This is the “problem whereby the brain learns quickly and stably without catastrophically forgetting its past knowledge.” He writes:
The processes whereby our brains continue to learn about a changing world in a stable fashion throughout life are proposed to lead to conscious experiences. These processes include the learning of top-down expectations, the matching of these expectations against bottom-up data, the focusing of attention upon the expected clusters of information, and the development of resonant states between bottom-up and top-down processes as they reach an attentive consensus between what is expected and what is there in the outside world. It is suggested that all conscious states in the brain are resonant states and that these resonant states trigger learning of sensory and cognitive representations.
The resonance which Grossberg writes about is a synchronized firing of neurons engaged in a pattern matching process between sensory input and memories. It is what provides the context for understanding, deciding, and taking action. It is not only how the brain becomes conscious of attended events but also enables learning of new events. Every situation is novel. Like the river of Heraclitus, we never experience the same thing twice. We find similarities as well as differences. That is what learning does. That is what the process of consciousness is about.
Even qualia, the most fundamental representations we have in consciousness, may be learned, according to Grossberg. He writes that his theory “explains how an adaptive resonance, when it occurs in different specialized anatomies that receive different types of environmental and brain inputs, can give rise to different conscious qualia as emergent, or interactive, properties of specialized networks of neurons.” The red seen by the male squirrel monkeys in the experiment may originate in the eye and derive part of its quality from that, but it and its relationship to other colors still must be learned upon first seeing it. The brain may not be a complete blank slate at birth, but it may be a good deal more blank than we think until it learns about the world through the senses. Much of this basic learning takes place early in life with humans. Newborns need around two years to learn how to focus their eyes and form the relationships in the world. Development of much of our capabilities seems to have a critical period of the first five years of life. People blind from birth do not form visual imagery in their dreams, and people who go blind in the first years of life frequently do not form visual imagery in their dreams either. It also seems that no matter when a person goes blind, the amount of visual imagery in dreams begins to diminish with time. It is almost as if the ability to form visual imagery and the forms themselves are primarily learned and, hence, can be forgotten with time.
Mark Solms writes:
Moreover, much of what we have traditionally thought to be “hard-wired” in cortical processing is actually learnt.
Cortical perception, therefore, no less than cortical cognition, is rooted in memory processes. Indeed, as far as we know, all cortical functional specializations are acquired. The columns of cortex are initially almost identical in neural architecture, and the famous differences in Brodmann’s areas probably arise from use-dependent plasticity (following the innate patterns of subcortical connectivity). Cortical columns resemble the random-access memory (RAM) chips of digital computers.
The answer to our question, “What does cortex contribute to consciousness?”, then, is this: it contributes representational memory space. This enables cortex to stabilize the objects of perception, which in turn creates potential for detailed and synchronized processing of perceptual images. This contribution derives from the unrivalled capacity of cortex for representational forms of memory (in all of its varieties, both short- and long-term).
Based on this capacity, cortex transforms the fleeting, wavelike states of brainstem activation into “mental solids.” It generates objects.
Consciousness involves matching sensory input to memories of past sensory input to form representations of that input. It necessarily involves learning because the input of today will never be exactly like the input of yesterday. There is always uncertainty involved.
While we often think of the brain as a prediction machine, the evolutionary advantage of consciousness may align more closely with its ability to deal with uncertainty. Things that can be well predicted can frequently be reacted to automatically. It is the unknown and novel that can present the most opportunities and risks to an organism. Consciousness as a learning process is ever poised to evaluate new circumstances and stimuli, bring the experience of the past to bear on understanding the present, and commit to memory what can be learned from the new, thus reducing uncertainty in the future.
The question is why consciousness is required for learning.
Grossberg describes the resonance at the heart of this consciousness/learning process this way: “a resonance is a dynamical state during which neuronal firings across a brain network are amplified and synchronized when they interact via reciprocal excitatory feedback signals during a matching process that occurs between bottom-up and top-down pathways. Often the activities of these synchronized cells oscillate in phase with one another. Resonating cell activities also focus attention upon a subset of cells, thereby clarifying how the brain can become conscious of attended events. It is called an adaptive resonance because the resonant state can trigger learning within the adaptive weights, or long-term memory (LTM) traces, that exist at the synapses of these pathways.”
One way to look at the question would be to envision what happens when known and unknown stimuli are presented to a brain. If sensory input is known, it can be matched with an existing pattern of cells that can then oscillate in phase to signal resonance. The oscillation will reinforce the learning that has already occurred and that may already be implemented in a wired neural circuit. On the other hand, if the sensory input is unknown, it won’t match an existing pattern of cells or may match only partially. The problem becomes generating a pattern or new circuit to represent the new input. The brain could try to generate new patterns, randomly or perhaps by some “smart” process, beginning with the closest match to an old pattern. Consciousness might play a role as arbiter of the closest match. This could be slow if there exist no wired circuits between the cells that need to resonate to make a good match. Consciousness could, however, also play a more robust role by directly controlling the firing of neurons and matching the resulting patterns in near real time. Is there any evidence it can do this?
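The match-or-recruit process just described is essentially what Carpenter and Grossberg’s ART-1 algorithm formalizes. The following is a stripped-down sketch, not a model of any real brain circuit: binary inputs, a simple match fraction, and a vigilance threshold of 0.75 are standard textbook simplifications, and the function names are mine. An input that resonates with a stored prototype refines it (learning); an input that fails the vigilance test recruits a new category (novelty).

```python
# Stripped-down sketch in the spirit of ART-1 (Carpenter & Grossberg).
# Binary codes, the match rule, and the vigilance value are textbook
# simplifications chosen for illustration.

def match_score(input_bits, prototype):
    """Fraction of the input's active bits shared by the prototype.
    This stands in for the bottom-up/top-down matching process."""
    overlap = sum(i & p for i, p in zip(input_bits, prototype))
    active = sum(input_bits)
    return overlap / active if active else 1.0

def present(input_bits, categories, vigilance=0.75):
    """Find a resonating category or, on mismatch, recruit a new one."""
    for proto in sorted(categories,
                        key=lambda p: match_score(input_bits, p),
                        reverse=True):
        if match_score(input_bits, proto) >= vigilance:
            # Resonance: refine the prototype toward the intersection.
            # This is the learning step the resonant state triggers.
            for k in range(len(proto)):
                proto[k] &= input_bits[k]
            return categories.index(proto)
    # No resonance: the input is novel, so a new category is created.
    categories.append(list(input_bits))
    return len(categories) - 1

categories = []
a = present([1, 1, 0, 0], categories)   # novel input: new category
b = present([1, 1, 0, 0], categories)   # familiar input: resonates
c = present([0, 0, 1, 1], categories)   # novel again: another category
```

The vigilance parameter is the interesting design knob: set high, the system keeps recruiting fine-grained categories for near-misses; set low, it lumps inputs together and over-generalizes, which is one concrete reading of the stability–plasticity trade-off.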
Conscious feedback training provides spectacular examples of the scope of access to almost any neuronal population and even single neurons. Single spinal motor units can come under voluntary control with auditory feedback. After brief training subjects have learned to play drumrolls on a single motor unit, with simultaneous silencing of surrounding units. There is no evidence that unconscious feedback can do this. Apparently conscious feedback enables control of a very wide range of activities in the nervous system, consistent with the idea that consciousness enables widespread access in the brain.
Consciousness apparently does sit in a loop that can directly control the firing of neurons. I would suggest, therefore, that consciousness itself, whatever it is, must have the ability to affect and engage in feedback with cells and circuits which are themselves unconscious. While this ability could originate as an emergent property of brain circuits, I have elsewhere argued that electromagnetic field theories would provide the best explanation.
A goal of learning, and hence consciousness, is to reduce uncertainty by creating better models of the world. Solms writes: “The more veridical the brain’s predictive model of the world, then the less surprise, the less salience, the less consciousness, the more automaticity, the better.” Since consciousness is a serial process, the more perception, decision, and action can be automated, the more consciousness can be freed to attend to other things and the more the organism can do without consciousness. Since it requires a great deal of energy, the more that can be done by wired and unconscious circuits the better. To quote Baars again: “novel skills, which are typically more conscious, activate large regions of cortex, but after automaticity due to practice, the identical task tends to activate only restricted regions.”
I’ll end with another quote from Solms:
This in turn suggests that the ideal of cognition is to forego representational (and therefore cortical) processing and replace it with associative processing—to shift from episodic to procedural modes of functioning (and therefore, presumably, from cortex to dorsal basal ganglia). It appears that consciousness in cognition is a temporary measure: a compromise. But with reality being what it is—always uncertain and unpredictable, always full of surprises— there is little risk that we shall in our lifetimes actually reach the zombie-like state of Nirvana that we now learn, to our surprise, is what the ego aspires to.