Consciousness: Much Ado About (Almost) Nothing?

Eurasian Magpie – image from Wikimedia Commons

Consciousness is like the weather. Since everybody experiences it, everybody has an idea about it.

Philosophers take it as their unique prerogative, since without it they would have no field. Neuroscientists want to make pictures of it and reduce it to chemistry and electrical charge. Physicists, who mostly think they are the rightful ones to explain almost anything (especially if it seems mysterious), try to explain it with information theory. New Agers talk about expanding it and political activists about raising it.

What if consciousness isn’t such a big deal after all? What if consciousness is like eyes or ears – just another part of what we are as humans, but otherwise not so special?

My last post discussed Max Tegmark’s attempt to define consciousness as a state of matter. The starting point for Tegmark was the Integrated Information Theory (IIT) of Tononi. Tegmark and Tononi both seem to approach the problem of consciousness in an abstract manner disconnected from living matter, which is the only material we can be reasonably confident is (or might be) capable of consciousness. I argued that living matter itself possesses integrated information and that the difference between living matter with little or no consciousness and organisms with greater consciousness is primarily the degree to which the living material can operate on information in near real-time. In this view consciousness is a potential property of living material, not something that can be instantiated in just any matter.

A more serious objection, perhaps, is whether Tononi’s theory makes sense at all. The core of the theory is that integrated information is the key to consciousness. He proposes a mathematical quantity (symbolized by Φ) to represent the amount of integrated information in a system. He calculates it for simple systems but admits there is no practical way to calculate it for the human brain. From this we are to believe that integrated information is related to consciousness, perhaps even that it is the definition of it.
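
For readers who want the flavor of the math, here is a rough schematic of the idea – my own paraphrase, not Tononi’s actual definition, which has changed across versions of IIT:

    \Phi(S) \;\approx\; \min_{\{A,B\}\ \text{bipartitions of}\ S} \mathrm{EI}(A \leftrightarrow B)

where EI is the “effective information” across the cut: roughly, how much the whole constrains its parts beyond what the parts do on their own. The point that matters here is qualitative. Φ is meant to be large when no part of a system can be split off without losing much information, and the number of bipartitions grows so quickly with system size that computing it exactly for anything brain-sized is hopeless.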

Scott Aaronson, a computer scientist at MIT, recently put up a great post on his blog calling attention to some significant problems with the theory. Let me quote:

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

Aaronson follows with what appears to be a mathematical demolition job of the theory. The original post has been followed by another post that includes a response from Tononi.

We do not need to engage in advanced math to understand the problems with the theory. We just need to look at the Eurasian Magpie.

The Eurasian Magpie is a highly intelligent bird in the crow family. It is about 17-18 inches long, but half of that is tail. Yet Eurasian Magpies demonstrate intelligent behavior indicative of consciousness. They pass the Mirror Self-Recognition test, which seems to be the gold standard for measuring animal intelligence. They use tools, store food across seasons, and have elaborate social rituals, including expressions of grief. Eurasian Magpies, with their small brains, should have significantly less integrated information than chimpanzees and bonobos, yet they seem to have capabilities similar to apes. Emery, in a 2004 article, actually refers to them as “feathered apes”. So we might presume that Eurasian Magpies possess some level of consciousness, perhaps roughly equivalent to that of an ape.

Of course, the more proper measure is not total brain size but brain size relative to body size. Eugene Dubois derived a formula relating body mass and brain size for mammals. Roughly speaking, brain size increases as the ¾ power of body mass. When organisms are plotted on a graph of brain size against body mass, they fall somewhere on or near the line that represents this relation. Humans, apes, dolphins, dogs, cats, and squirrels, for example, fall above the line. Other organisms, such as hippos and horses, fall below it. The Eurasian Magpie is not a mammal, of course, but it is interesting to note that it has about as large a brain relative to its body size as an ape does. Even if we assume that some portion of the brain is unavailable for consciousness, the amount left over for the Eurasian Magpie must still be significantly smaller than the amount available to the ape.
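
To make the “above or below the line” talk concrete, the relation can be written roughly as follows (the constant k and the exact exponent vary by study; I am using the ¾ power mentioned above):

    E_{\text{expected}}(M) = k\,M^{3/4}, \qquad \mathrm{EQ} = \frac{E_{\text{observed}}}{E_{\text{expected}}}

The ratio on the right is usually called the encephalization quotient. A species with EQ greater than 1 sits above the line – more brain than its body mass predicts – and a species with EQ less than 1 sits below it, which is where the hippos and horses end up.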

It might be possible to argue that a smaller amount of “extra” brain in the Eurasian Magpie could be as integrated as a larger amount of “extra” brain in the ape. But this argument would undermine the observation that the proper measure is not total brain size but brain size relative to body size. There would be no reason that brains small relative to body size could not be highly conscious. Yet when we look at where species fall relative to the Dubois line, species below the line really do seem to be less conscious and species above it seem to be more conscious.

What about the information in that portion of the brain unavailable for consciousness? Is it not very well-integrated? Is it integrated but not conscious? Wouldn’t that be enough to question the theory?

We need to understand the origin of consciousness from an evolutionary perspective rather than conjecture about it with mathematics and information theory.

As I have argued elsewhere, consciousness probably had its beginnings in the first bilaterians, which were basically worms. The body plan consists of a mouth, a digestive tract, and an anus. A mass of nerves developed near the mouth and a strand of nerves developed along the digestive tract. This is the human body plan and the body plan of every creature we might think to be conscious – a brain near the mouth and a spinal cord. The brain evolved to control the mouth, guide the head to food, find prey, and avoid predators.

Control of the body, perception of the environment, and the ability to predict the outcomes of interaction with the environment were the key evolutionary selection factors that drove the development of the brain and the nervous system. The brain, nervous system, and perceptual apparatus evolved as a product of the feedback required for an organism to interact with its environment – both the non-living and living parts of it. Initially most of this was hard-wired (“learned” through evolution). With time came more advanced organisms with the ability to learn through experience. Eventually came organisms that needed to learn through interaction with other conscious entities, especially members of their own species.

Social organisms tend to rank high on the brain/body mass ratio and are the organisms we mostly seem to believe to be more conscious and more intelligent. The “extra” brain developed to serve the needs of the organism’s society. I could also add that for organisms we presume to be more conscious, a key part of the learning is interaction with other conscious beings. In other words, they tend to be social creatures.

Mirror neurons may be a key part of this learning. Mirror neurons are neurons that fire when we take some action and also when we see others take the same action. They were first identified in macaque monkeys, and there is evidence for a similar system in humans. In other words, if I pick up an apple to eat or I see you pick up an apple to eat, the same set of neurons fires.

Mirror neurons might be involved in triggering some increased capacity to rehearse future actions and events in our minds. They might also be a key element in the sense of a body double that occurs in out-of-body and near-death experiences. As such, they could be directly related to religious beliefs in life after death and to animistic beliefs in spirits.

Several researchers have cast doubt on whether mirror neurons represent a distinct type of neuron. One theory is that mirror neurons are just ordinary neurons trained by associative learning. That hardly invalidates the argument here. In fact, if mirror neurons were ordinary neurons trained by associative learning, it would be an argument in favor of the radical plasticity thesis of consciousness. This thesis argues that “conscious experience occurs if and only if an information processing system has learned about its own representations of the world. To put this claim even more provocatively: consciousness is the brain’s theory about itself, gained through experience interacting with the world, and, crucially, with itself.”

If we couple the radical plasticity theory with the observation that more conscious organisms seem to be more social, it may be that our subjective sense of self-awareness is learned through interaction with other conscious entities, especially those of our own species. We recognize consciousness in other organisms because we learned our own consciousness through interaction with them. Consciousness, while it may require some threshold of information storage/processing capacity, may not actually be directly dependent on it.

The radical plasticity theory, when tied to the observation that much of the learning of the more conscious species is social learning, explains a lot:

  1. Social organisms will have larger brain to body weight ratios.
  2. Social organisms will be more conscious.
  3. Our subjective sense of self-awareness is learned through interaction with other conscious entities.
  4. We recognize consciousness in other organisms because we learned our own consciousness through interaction with them.
  5. Most of what the brain does is unconscious (Freud was right after all!). This includes developing mathematical theorems. It also conforms to the observational evidence from brain scans that neurons involved in recognition or decision-making fire before we become consciously aware of the corresponding thought.
  6. In learning something new we use a great deal of consciousness, but once it is learned it requires less. I concentrated a lot learning to ride a bike. Now I ride and think about consciousness.

In the end, consciousness may be an evolutionary product of the natural selection advantages provided by increased control of the body, better perception of the environment, and an increased ability to predict the outcomes of interaction with the environment. The final development of what we might more properly think of as consciousness occurred when we needed to predict the outcomes of interactions with other members of our own species. The development of consciousness may be more or less equivalent, in the grand scheme of things, to the evolution of eyes or ears. Quite remarkable and amazing, but not something we should regard as sui generis.

This is a follow-up to my last post and a response to some of the comments various readers made on it. I would like to thank cogitatingduck for some challenging questions, oiscarey for the link to the radical plasticity theory, and Wes Hansen, Jeff, and Clifford for their comments.

 


10 Responses to Consciousness: Much Ado About (Almost) Nothing?

  1. jeff says:

    great read.
    i have written and often lecture that men are but worms with fancy endoskeletons and attending organs, with the fundamental illustrative model being that of an inchworm first feeling out (projecting its forward end) possible situations and then finally reaching out to draw itself into its next situation through a general process I have named ‘self-abduction’ …
    loved the post.

  2. Red-walker says:

    Hello James,
    I’ve been reading your blog for a bit now and found this post interesting in that it seems your views on consciousness have shifted. Am I wrong? Your post also seems not to address the “hard problem” at all and takes an approach similar to Massimo Pigliucci or Daniel Dennett, that it does not really exist. Is this correct? I ask this humbly, trying to understand what seems to be a shift in your view that goes against your previous posts. Thank you for your time.

    • James Cross says:

      When I started this blog, I think I said I could change my mind and I may not be completely consistent. These are speculations after all.

      Regarding the “hard problem”, are you saying that I seem to suggest that it does not exist or that consciousness does not exist?

      Of course, nobody can really address the “hard problem”. It is much like the Chinese finger trap – the harder we pull the tighter it becomes.

      I do think consciousness exists, but it probably can only be instantiated in living material. Machines may become increasingly clever but not conscious. This distinction could get blurred at some point in the future.

      How do you think my views have changed?

      • Red-walker says:

        Thank you for your reply.

        I agree, we all reserve the right to change our minds. I’m just curious.

        Regarding the “hard problem”, I meant to say it seems you suggest the problem does not exist. Even if brains and nervous systems become bigger and more complex, the hard problem seems to be why something like qualia would emerge. A brain similar to a computer processor and a mind similar to its software?: yes, consciousness?: not so much.

        How do I think your views have changed?
        I’m not sure, I can only point to other posts where you seem to take a more grandiose view of consciousness, rather than simply a product of evolution. Mind, Life, and Tensegrity would be one; The Intelligent Universe and Animism, Neuroscience, and Information would be others. Again, I’m not sure if your views have changed. It’s hard to get at a person’s mind through a few (although lengthy) sporadic posts on the internet.

        All in all, I really do enjoy your blog and, because of that, I’m trying to understand where you’re coming from so that I can better understand what you’re getting at.

    • James Cross says:

      Thanks for being a reader.

      I probably have been a little more (and too) grandiose in some of my views. However, I think you can see some hints of what I think you are seeing in this post in some of the other posts.

      For example, in The Intelligent Universe, I end with this:

      “We could describe the universe as intelligent but it might be better to understand that intelligence might not be really what it seems. We may not be the originators of our own intelligence as much as we are agents of an algorithmic principle working at the quantum level. Our intelligence would be a reflection of some deep physical principle that guides the evolution of the universe and life. In the end, our intelligence and that of the slime mold may be more closely linked than we might think.”

      In Mind, Life, and Tensegrity, I wrote this:

      “My position on mind and consciousness is a materialist position. A materialist believes that everything is matter, that there is nothing above or beyond matter, that mind and consciousness are matter or derivable from matter and physical laws. By “matter”, I mean “matter” in the very broad sense that matter and energy, wave and particle, are all material. The strict materialist position must be that there is nothing that is not matter. So even an emergent property such as mind or consciousness would also need to be material.”

      And this:

      “Could we also take the argument forward to understand how mind and consciousness came from matter? Could consciousness be a sort of dynamic balancing and optimal organization of something (matter?) in space and time? Could it in a sense be just another structure such as fullerenes, cytoskeletons, and bones are?”

      Anyway. Keep asking the questions and making the comments.

      Always appreciated.

      • Red-walker says:

        Interesting. Thank you for your reply!

        I’m thinking I’m starting to see what you’re getting at. One more quick question, if I may?

        When reading your comments on Steve Garcia’s post on Reincarnation, you seemed to take a neutral monist view of consciousness and propose an Internet analogy similar to reincarnation. How does that fit into this post? Does it?

        Sorry for the barrage of questions and thank you for all your time.

    • James Cross says:

      Yeah, I am not sure I would write that comment that I made on the Garcia blog today.

      But let me try to give a more consistent approach to this.

      This is the most speculative part (and one I could easily abandon). I think there is probably something inherently intelligent in the universe. I think this is what causes complexity to arise. I don’t mean this in any sort of Creationist sense. I think this is physical law(s) at work. This intelligence seems to be most recognized in life. The catch is that intelligence is not exactly what we think it is. Remember this quote: “We could describe the universe as intelligent but it might be better to understand that intelligence might not be really what it seems. We may not be the originators of our own intelligence as much as we are agents of an algorithmic principle working at the quantum level.”

      Now when we bring up intelligence, maybe people think of consciousness, and for some the two may almost mean the same thing. However, I think the two are distinct but perhaps complementary.

      Look at this example from Nature:

      http://www.nature.com/news/how-brainless-slime-molds-redefine-intelligence-1.11811

      It describes how slime molds “can solve mazes, mimic the layout of man-made transportation networks and choose the healthiest food from a diverse menu—and all this without a brain or nervous system.”

      Are slime molds conscious? I don’t think so. It seems to me that life has a sort of neural network (in the computer sense of the term) foundation that enables memory, decision making, and what appears to be purposive behavior. Much of this developed through evolution and does not involve consciousness. Our own intelligence may be built on this basis, which is why I have argued that even human works of science and art derive initially from unconscious processes being brought into consciousness.

      However, it might be that as intelligence evolved brains and nervous systems, it began to develop consciousness. Now this consciousness is somewhat illusory, but as it looks at itself and at the whole, we realize that it is a part of the greater whole that evolved it.

      • Red-walker says:

        Interesting. Okay, I’m definitely understanding your point. It makes a lot of sense, but it also seems to make consciousness superfluous. Would we need to have consciousness if intelligence alone would suffice in being evolutionarily beneficial? Unless, maybe, those intelligent laws had the creation of consciousness “programmed” into them? Maybe something similar to the teleology discussed in Nagel’s Mind and Cosmos? Not sure. Just my “broad speculations”. :)

    • James Cross says:

      Maybe not superfluous any more than eyes and ears are superfluous. Just not as special as we sometimes think.
