Consciousness is like the weather. Since everybody experiences it, everybody has an idea about it.
Philosophers take it as their unique prerogative, since without it they would have no field. Neuroscientists want to make pictures of it and reduce it to chemistry and charge. Physicists, who mostly think they are the rightful ones to explain almost anything (especially if it seems mysterious), try to explain it with information theory. New Agers talk about expanding it, and political activists about raising it.
What if consciousness isn’t such a big deal after all? What if consciousness is like eyes or ears – just another part of what we are as humans, but otherwise not so special?
My last post discussed Max Tegmark’s attempt to define consciousness as a state of matter. The starting point for Tegmark was the Integrated Information Theory (IIT) of Tononi. Tegmark and Tononi both seem to approach the problem of consciousness in an abstract manner, disconnected from living matter, which is the only material we can be reasonably confident is (or might be) capable of consciousness. I argued that living matter itself possesses integrated information, and that the difference between living matter that possesses little or no consciousness and organisms with greater consciousness is primarily the degree to which the living material can operate on information in near real-time. In this view consciousness is a potential property of living material, not something that can be instantiated in any matter.
Perhaps a more serious objection is whether Tononi’s theory even makes sense at all. The core of the theory is that integrated information is the key to consciousness. Tononi proposes a mathematical quantity (symbolized by Φ) to represent the amount of integrated information in a system. He calculates it for simple systems but admits there is no practical way to calculate it for the human brain. From this we are to believe that integrated information is related to consciousness, perhaps even the definition of it.
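Tononi’s Φ itself is defined over a system’s full cause-effect structure and minimized over all possible partitions, which is exactly why it is impractical to compute for a brain. Purely to give a feel for what “information integrated across a cut” means, here is a toy sketch of my own (not IIT’s actual measure): the mutual information between the two halves of a two-bit system.

```python
from itertools import product
from math import log2

def mutual_information(joint):
    """I(A;B) in bits for a joint distribution given as {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p   # marginal of A
        pb[b] = pb.get(b, 0.0) + p   # marginal of B
    mi = 0.0
    for (a, b), p in joint.items():
        if p > 0:
            mi += p * log2(p / (pa[a] * pb[b]))
    return mi

# Two independent fair bits: nothing is integrated across the cut.
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

# Two perfectly correlated bits: one full bit is shared across the cut.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

print(mutual_information(independent))  # 0.0 bits
print(mutual_information(correlated))   # 1.0 bit
```

The independent system scores zero and the correlated one scores a full bit. Real Φ goes much further – it is causal rather than merely correlational, and it takes the minimum over every way of cutting the system – but even this toy version shows why the quantity explodes combinatorially for anything brain-sized.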
Scott Aaronson, a computer scientist at MIT, recently published a great post on his blog calling attention to some significant problems with the theory. Let me quote:
In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.
Aaronson follows with what appears to be a mathematical demolition job on the theory. The original post has been followed by another post that includes a response from Tononi.
We do not need to engage in advanced math to understand the problems with the theory. We just need to look at the Eurasian Magpie.
The Eurasian Magpie is a highly intelligent bird in the crow family. It is about 17–18 inches long, but half of that is tail. Yet Eurasian Magpies demonstrate intelligent behavior indicative of consciousness. They pass the Mirror Self-Recognition test, which seems to be the gold standard for measuring animal intelligence. They use tools, they store food across seasons, and they have elaborate social rituals, including expressions of grief. Eurasian Magpies, with their small brains, should have significantly less integrated information than chimpanzees and bonobos, yet they seem to have capabilities similar to apes. Emery, in a 2004 article, actually refers to them as “feathered apes”. So we might presume that Eurasian Magpies possess some level of consciousness, perhaps roughly equivalent to that of an ape.
Of course, the more proper measure is not total brain size but brain size relative to body size. Eugene Dubois developed a formula relating body mass and brain size for mammals. Roughly speaking, as body mass increases, brain size increases as the ¾ power of body mass. When organisms are plotted on a graph with this relation, they fall somewhere on or near the line representing it. Humans, apes, dolphins, dogs, cats, and squirrels, for example, fall above the line. Other organisms, such as hippos and horses, fall below it. The Eurasian Magpie is not a mammal, of course, but it is interesting to note that its brain is about as large relative to its body as an ape’s is relative to its body. If we assume that some portion of the brain is unavailable for consciousness, the amount left over for the Eurasian Magpie must still be significantly smaller than the amount available to the ape.
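The “above or below the line” comparison can be made concrete. On log-log axes the ¾-power relation brain = c · body^(3/4) is a straight line, and a species’ vertical distance from that line measures how much “extra” brain it carries for its size. A rough sketch, where the intercept c is a made-up placeholder and the masses are ballpark figures for illustration, not fitted data:

```python
from math import log10

def log_residual(brain_kg, body_kg, c=0.01, k=0.75):
    """Vertical distance (in log10 units) from the allometric line
    brain = c * body**k. Positive means above the line.
    c is an illustrative intercept, not a fitted value."""
    return log10(brain_kg) - (log10(c) + k * log10(body_kg))

# Rough, illustrative masses in kg (brain, body).
species = {
    "human":      (1.35, 65.0),
    "chimpanzee": (0.40, 45.0),
    "magpie":     (0.0055, 0.22),
}
for name, (brain, body) in species.items():
    print(f"{name}: {log_residual(brain, body):+.2f}")
```

With these ballpark numbers all three land above the line, which is the point: a magpie’s residual is ape-like even though its absolute brain mass is two orders of magnitude smaller – exactly the mismatch that causes trouble for a theory keyed to sheer quantity of integrated information.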
It might be possible to argue that the smaller amount of “extra” brain in the Eurasian Magpie is as integrated as the larger amount of “extra” brain in the ape. But this argument would undermine the observation that the proper measure is not total brain size but brain size relative to body size. There would then be no reason that brains small relative to body size could not be highly conscious. Yet when we look at where species fall relative to the Dubois line, species below the line really do seem to be less conscious and species above it more conscious.
What about the information in that portion of the brain unavailable for consciousness? Is it not very well-integrated? Is it integrated but not conscious? Wouldn’t that be enough to question the theory?
We need to understand the origin of consciousness from an evolutionary perspective rather than conjecture about it with mathematics and information theory.
As I have argued elsewhere, consciousness probably had its beginnings in the first bilaterians which were basically worms. The body plan consists of a mouth, a digestive tract, and an anus. A mass of nerves developed near the mouth and a strand of nerves developed along the digestive tract. This is the human body plan and the body plan of every creature we might think to be conscious – a brain near the mouth and a spinal cord. The brain evolved to control the mouth, guide the head to food, find prey and/or avoid predators.
Control of the body, perception of the environment, and ability to predict the outcomes of interaction with the environment were the key evolutionary selection factors that drove the development of the brain and the nervous system. The brain, nervous system, and perceptual apparatus evolved as a product of the feedback required for an organism to interact with its environment – both the non-living and living parts of it. Initially most of this was hard-wired (“learned” through evolution). With time came more advanced organisms with the ability to learn through experience. Eventually came organisms that needed to learn through interaction with other conscious entities, especially members of their own species.
Social organisms tend to rank high on the brain/body mass ratio and are the organisms we mostly seem to believe to be more conscious and more intelligent. The “extra” brain developed to serve the needs of the organism’s society. I could also add that for organisms we presume to be more conscious, a key part of learning is interaction with other conscious beings. In other words, they tend to be social creatures.
Mirror neurons may be a key part of this learning. Mirror neurons are neurons that fire when we take some action and also when we see others take the same action. They have been found in humans and chimpanzees. In other words, if I pick up an apple to eat or I see you pick up an apple to eat, the same set of neurons fire.
Mirror neurons might be involved in triggering some increased capacity to rehearse future actions and events in our minds. They might also be a key element in the sense of the body double that occurs in out-of-body and near-death experiences. As such, these could be directly related to religious beliefs in life after death and the animistic beliefs of spirit.
Several researchers have cast doubt on whether mirror neurons represent a distinct type of neuron. One theory is that mirror neurons are just ordinary neurons trained by associative learning. This hardly invalidates the account sketched here. In fact, if mirror neurons are ordinary neurons trained by associative learning, that would be an argument in favor of the radical plasticity thesis of consciousness. This thesis argues that “conscious experience occurs if and only if an information processing system has learned about its own representations of the world. To put this claim even more provocatively: consciousness is the brain’s theory about itself, gained through experience interacting with the world, and, crucially, with itself.”
If we couple the radical plasticity theory with the observation that more conscious organisms seem to be more social, it may be that our subjective sense of self-awareness is learned through interaction with other conscious entities, especially those of our own species. We recognize consciousness in other organisms because we learned our own consciousness through interaction with them. Consciousness, while it may require some threshold of information storage and processing capacity, may not actually be directly dependent on it.
The radical plasticity theory, when tied to the observation that much of the learning of the more conscious species is social learning, explains a lot:
- Social organisms will have larger brain to body weight ratios.
- Social organisms will be more conscious.
- Our subjective sense of self-awareness is learned through interaction with other conscious entities.
- We recognize consciousness in other organisms because we learned our own consciousness through interaction with them.
- Most of what the brain does is unconscious (Freud was right after all!). This includes work as sophisticated as developing mathematical theorems. It also conforms to observational evidence from brain scans that neurons involved in recognition or decision-making fire before we become consciously aware of the corresponding thought.
- Learning something new uses a great deal of consciousness, but once something is learned it requires much less. I concentrated hard learning to ride a bike. Now I ride and think about consciousness.
In the end, consciousness may be an evolutionary product of the natural selection advantages provided by increased control of the body, better perception of the environment, and increased ability to predict the outcomes of interaction with the environment. The final development of what we might more properly think of as consciousness occurred when we needed to predict the outcomes of interactions with other members of our own species. The development of consciousness may be more or less equivalent, in the grand scheme of things, to the evolution of eyes or ears. Quite remarkable and amazing, but not something we should regard as sui generis.
This is a follow-up to my last post and a response to some of the comments various readers made on it. I would like to thank cogitatingduck for some challenging questions, oiscarey for a link to radical plasticity theory, as well as other comments by Wes Hansen, Jeff, and Clifford.