Consciousness: Much Ado About (Almost) Nothing?


Eurasian Magpie – image from Wikimedia Commons

Consciousness is like the weather. Since everybody experiences it, everybody has an idea about it.

Philosophers take it as their unique prerogative, since without it they would have no field. Neuroscientists want to make pictures of it and reduce it to chemistry and electrical charge. Physicists, who mostly think they are the rightful ones to explain almost anything (especially if it seems mysterious), try to explain it with information theory. New Agers talk about expanding it, and political activists about raising it.

What if consciousness isn’t such a big deal after all? What if consciousness is like eyes or ears – just another part of what we are as humans but otherwise not so special?

My last post discussed Max Tegmark’s attempt to define consciousness as a state of matter. Tegmark’s starting point was the Integrated Information Theory (IIT) of Tononi. Tegmark and Tononi both seem to approach the problem of consciousness in an abstract manner disconnected from living matter, which is the only material we can be reasonably confident is (or might be) capable of consciousness. I argued that living matter itself possesses integrated information and that the difference between living matter with little or no consciousness and organisms with greater consciousness is primarily the degree to which the living material can operate on information in near real-time. In this view consciousness is a potential property of living material, not something that can be instantiated in any matter.

Perhaps a more serious objection is whether Tononi’s theory makes any sense at all. The core of the theory is that integrated information is the key to consciousness. He proposes a mathematical quantity (symbolized by Φ) to represent the amount of integrated information in a system. He calculates it for simple systems but admits there is no practical way to calculate it for the human brain. From this we are to believe that integrated information is related to consciousness, perhaps even the definition of it.
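To give a rough feel for what “integrated information” is getting at, here is a minimal toy sketch in Python. This is emphatically not Tononi’s Φ, which involves searching over partitions and cause-effect repertoires; it only computes a crude “integration” value for a made-up two-unit system, namely how far the entropy of the whole falls short of the sum of the parts’ entropies.

```python
# Toy illustration only: a crude "integration" value for a two-unit binary
# system, computed as the sum of the parts' entropies minus the entropy of
# the whole (sometimes called multi-information). This is NOT Tononi's phi,
# which requires searching over partitions of the system.
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution given as counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# Hypothetical observed joint states of two binary units (made-up data);
# the units are correlated, so the whole is more ordered than the parts suggest.
samples = [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1), (0, 1), (1, 0)]

joint = Counter(samples)
unit_a = Counter(s[0] for s in samples)
unit_b = Counter(s[1] for s in samples)

integration = entropy(unit_a.values()) + entropy(unit_b.values()) - entropy(joint.values())
print(f"Toy integration: {integration:.3f} bits")  # > 0 because the units are correlated
```

Independent units would give zero here; correlated ones give a positive value. That is the loose intuition behind saying the whole carries information beyond its parts, and Φ builds a far more elaborate structure on top of it.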

Scott Aaronson, a computer scientist at MIT, recently published a great post on his blog calling attention to some significant problems with the theory. Let me quote:

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

Aaronson follows with what appears to be a mathematical demolition job of the theory. He has since followed up the original post with another that includes a response from Tononi.

We do not need to engage in advanced math to understand the problems with the theory. We just need to look at the Eurasian Magpie.

The Eurasian Magpie is a highly intelligent bird in the crow family. It is about 17-18 inches long, but half of that is tail. Yet Eurasian Magpies demonstrate intelligent behavior indicative of consciousness. They pass the mirror self-recognition test, which seems to be the gold standard for animal self-awareness. They use tools, they store food across seasons, and they have elaborate social rituals including expressions of grief. Eurasian Magpies, with their small brains, should have significantly less integrated information than chimpanzees and bonobos, yet they seem to have capabilities similar to apes. Emery, in a 2004 article, actually refers to corvids as “feathered apes”. So we might presume that Eurasian Magpies possess some level of consciousness, perhaps roughly equivalent to that of an ape.

Of course, the more proper measure is not total brain size but brain size relative to body size. Eugène Dubois developed a formula relating body mass and brain size in mammals. Roughly speaking, brain size increases as the ¾ power of body mass. When organisms are plotted on a graph against this relation, they fall somewhere on or near the line it defines. Humans, apes, dolphins, dogs, cats, and squirrels, for example, fall above the line. Other organisms, such as hippos and horses, fall below it. Of course the Eurasian Magpie is not a mammal, but it is interesting to note that it has about as large a brain relative to its body size as an ape does to its. If we assume that some portion of the brain is unavailable for consciousness, the amount left over for the Eurasian Magpie must still be significantly smaller than the amount available to the ape.
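To make the brain-to-body comparison concrete, here is a small sketch of the ¾-power idea. The anchoring to a reference point and all of the mass figures are illustrative assumptions of mine, not measured data; the only point is that a ratio above 1 means a species sits above the scaling line and a ratio below 1 means it sits below.

```python
# Sketch of the 3/4-power scaling described above. Expected brain mass is taken
# to scale as (body mass)**0.75 through an arbitrary reference point; a species
# whose actual brain mass exceeds that expectation falls "above the line".
# All mass values here are rough illustrative placeholders, not measured data.
EXPONENT = 0.75            # the 3/4-power relation mentioned in the text
REF_BODY_KG = 65.0         # hypothetical reference body mass
REF_BRAIN_KG = 0.35        # hypothetical reference brain mass

def expected_brain_kg(body_kg: float) -> float:
    """Brain mass predicted by the 3/4-power line through the reference point."""
    return REF_BRAIN_KG * (body_kg / REF_BODY_KG) ** EXPONENT

def line_ratio(body_kg: float, brain_kg: float) -> float:
    """Ratio > 1 means above the line (more brain than body size alone predicts)."""
    return brain_kg / expected_brain_kg(body_kg)

# Illustrative placeholder figures only
for name, body, brain in [("magpie", 0.2, 0.006), ("ape", 50.0, 0.4), ("horse", 500.0, 0.6)]:
    print(f"{name}: {line_ratio(body, brain):.2f}")
```

With these placeholder numbers the magpie and the ape land above the line and the horse below it, which is all the argument needs.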

It might be possible to argue that a smaller amount of “extra” brain in the Eurasian Magpie could be as integrated as a larger amount of “extra” brain in the ape. But this argument would undermine the observation that the proper measure is not total brain size but brain size relative to body size. There would be no reason why brains that are small relative to body size could not be highly conscious. Yet when we look at where species fall on the Dubois line, species below the line really do seem to be less conscious and species above it more conscious.

What about the information in that portion of the brain unavailable for consciousness? Is it not very well-integrated? Is it integrated but not conscious? Wouldn’t that be enough to question the theory?

We need to understand the origin of consciousness from an evolutionary perspective rather than conjecture about it with mathematics and information theory.

As I have argued elsewhere, consciousness probably had its beginnings in the first bilaterians, which were basically worms. Their body plan consists of a mouth, a digestive tract, and an anus. A mass of nerves developed near the mouth and a strand of nerves along the digestive tract. This is the human body plan and the body plan of every creature we might think to be conscious – a brain near the mouth and a spinal cord. The brain evolved to control the mouth, guide the head to food, and find prey and/or avoid predators.

Control of the body, perception of the environment, and the ability to predict the outcomes of interaction with the environment were the key evolutionary selection factors that drove the development of the brain and the nervous system. The brain, nervous system, and perceptual apparatus evolved as a product of the feedback required for an organism to interact with its environment – both its non-living and living parts. Initially most of this was hard-wired (“learned” through evolution). With time came more advanced organisms with the ability to learn through experience. Eventually came organisms required to learn through interaction with other conscious entities, especially members of their own species.

Social organisms tend to rank high on the brain/body mass ratio and are the organisms we mostly seem to believe to be more conscious and more intelligent. The “extra” brain developed to serve the needs of the organism’s society. I could also add that for organisms we presume to be more conscious, a key part of the learning is interaction with other conscious beings. In other words, they tend to be social creatures.

Mirror neurons may be a key part of this learning. Mirror neurons are neurons that fire when we take some action and also when we see others take the same action. They were first identified in macaque monkeys and have since been reported in humans. In other words, if I pick up an apple to eat or I see you pick up an apple to eat, the same set of neurons fires.

Mirror neurons might be involved in triggering some increased capacity to rehearse future actions and events in our minds. They might also be a key element in the sense of a body double that occurs in out-of-body and near-death experiences. As such, they could be directly related to religious beliefs in life after death and animistic beliefs in spirits.

Several researchers have cast doubt on whether mirror neurons represent a distinct type of neuron. One theory is that mirror neurons are just ordinary neurons trained by associative learning. This hardly invalidates the core of the argument here. In fact, if mirror neurons were ordinary neurons trained by associative learning, it would be an argument in favor of the radical plasticity thesis of consciousness. This thesis argues that “conscious experience occurs if and only if an information processing system has learned about its own representations of the world. To put this claim even more provocatively: consciousness is the brain’s theory about itself, gained through experience interacting with the world, and, crucially, with itself.”
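As a very loose illustration of what “learning about its own representations” might mean, the toy sketch below builds a fixed first-order mapping from “world” to internal representation and then fits a second-order readout to predict that first-order activity. The architecture, the made-up data, and the least-squares fit are arbitrary choices of mine for illustration; nothing here is a validated model of the radical plasticity thesis.

```python
# Toy sketch: a "second-order" readout learns to predict a "first-order"
# system's own internal representations, loosely in the spirit of the radical
# plasticity thesis. Everything here is an arbitrary illustration.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 10, 5

# First-order system: a fixed mapping from the "world" to internal states
# (standing in for perception shaped earlier by evolution or learning).
W_first = rng.normal(size=(n_inputs, n_hidden))

def first_order(x):
    return np.tanh(x @ W_first)

# "Experience": made-up inputs, plus what the first-order system did with them.
X = rng.normal(size=(200, n_inputs))
H = first_order(X)

# Second-order system: a linear self-model, fit by least squares, that predicts
# the first-order activity -- a representation of the system's own representations.
W_second, *_ = np.linalg.lstsq(X, H, rcond=None)

def second_order(x):
    return x @ W_second

X_test = rng.normal(size=(50, n_inputs))
err = np.mean((second_order(X_test) - first_order(X_test)) ** 2)
print(f"Self-model mean squared error: {err:.4f}")  # small = it has "learned itself"
```

The interesting part of the thesis is not this mechanical fit, of course, but the claim that something like this self-modeling, carried far enough, is what conscious experience is.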

If we couple the radical plasticity theory with the observation that more conscious organisms seem to be more social, it may be that our subjective sense of self-awareness is learned through interaction with other conscious entities, especially those of our own species. We recognize consciousness in other organisms because we learned our own consciousness through interaction with them. Consciousness, while it may require some threshold of information storage and processing capacity, may not actually be directly dependent on it.

The radical plasticity theory, when tied to the observation that much of the learning of the more conscious species is social learning, explains a lot:

  1. Social organisms will have larger brain to body weight ratios.
  2. Social organisms will be more conscious.
  3. Our subjective sense of self-awareness is learned through interaction with other conscious entities.
  4. We recognize consciousness in other organisms because we learned our own consciousness through interaction with them.
  5. Most of what the brain does is unconscious (Freud was right after all!). This includes developing mathematical theorems. It also conforms to the observational evidence from brain scans that neurons involved in recognition or decision-making fire before we become consciously aware of the corresponding thought.
  6. In learning something new we use a great deal of consciousness, but once it is learned it requires less consciousness. I concentrated a lot learning to ride a bike. Now I ride and think about consciousness.

In the end, consciousness may be an evolutionary product of the natural selection advantages provided by increased control of the body, better perception of the environment, and increased ability to predict the outcomes of interaction with the environment. The final development of what we might more properly think of as consciousness occurred when we needed to predict the outcomes of interactions with other members of our species. The development of consciousness may be more or less equivalent in the grand scheme of things to the evolution of eyes or ears. Quite remarkable and amazing, but not something we should regard as sui generis.

This is a follow-up to my last post and a response to some of the comments various readers made on it. I would like to thank cogitatingduck for some challenging questions, oiscarey for a link to radical plasticity theory, as well as other comments by Wes Hansen, Jeff, and Clifford.

 


15 Responses to Consciousness: Much Ado About (Almost) Nothing?

  1. jeff says:

    great read.
    i have written and often lecture that men are but worms with fancy endoskeletons and attending organs, with the fundamental illustrative model being that of an inchworm first feeling out (projecting its forward end) possible situations and then finally reaching out to draw itself into its next situation through a general process I have named ‘self-abduction’ …
    loved the post.


  2. Red-walker says:

    Hello James,
    I’ve been reading your blog for a bit now and found this post interesting in that it seems your views on consciousness have shifted. Am I wrong? Your post also seems not to address the “hard problem” at all and takes an approach similar to Massimo Pigliucci or Daniel Dennett, that it does not really exist. Is this correct? I ask this humbly, trying to understand what seems to be a shift in your view that goes against your previous posts. Thank you for your time.


    • James Cross says:

      When I started this blog, I think I said I could change my mind and I may not be completely consistent. These are speculations after all.

      Regarding the “hard problem”, are you saying that I seem to suggest that it does not exist or that consciousness does not exist?

      Of course, nobody can really address the “hard problem”. It is much like the Chinese finger trap – the harder we pull the tighter it becomes.

      I do think consciousness exists but that it is probably only possible to be instantiated in living material. Machines may become increasingly clever but not conscious. This distinction could get blurred at some point in the future.

      How do you think my views have changed?


      • Red-walker says:

        Thank you for your reply.

        I agree, we all reserve the right to change our minds. I’m just curious.

        Regarding the “hard problem”, I meant to say it seems you suggest the problem does not exist. Even if brains and nervous systems become bigger and more complex, the hard problem seems to be why something like qualia would emerge. A brain similar to a computer processor and a mind similar to its software? Yes. Consciousness? Not so much.

        How do I think your views have changed?
        I’m not sure. I can only point to other posts where you seem to take a more grandiose view of consciousness, rather than seeing it as simply a product of evolution. Mind, Life, and Tensegrity would be one; The Intelligent Universe and Animism, Neuroscience, and Information would be others. Again, I’m not sure if your views have changed. It’s hard to get at a person’s mind through a few (although lengthy) sporadic posts on the internet.

        All in all, I really do enjoy your blog and, because of that, I’m trying to understand where you’re coming from so that I can better understand what you’re getting at.


    • James Cross says:

      Thanks for being a reader.

      I probably have been a little more (and too) grandiose in some of my views. However, I think you can see some hints of what I think you are seeing in this post in some of the other posts.

      For example in the Intelligent Universe, I end with this:

      “We could describe the universe as intelligent but it might be better to understand that intelligence might not be really what it seems. We may not be the originators of our own intelligence as much as we are agents of an algorithmic principle working at the quantum level. Our intelligence would be a reflection of some deep physical principle that guides the evolution of the universe and life. In the end, our intelligence and that of the slime mold may be more closely linked than we might think.”

      In Mind, Life, and Tensegrity, I wrote this:

      “My position on mind and consciousness is a materialist position. A materialist believes that everything is matter, that there is nothing above or beyond matter, that mind and consciousness are matter or derivable from matter and physical laws. By “matter”, I mean “matter” in the very broad sense that matter and energy, wave and particle, are all material. The strict materialist position must be that there is nothing that is not matter. So even an emergent property such as mind or consciousness would also need to be material.”

      And this:

      “Could we also take the argument forward to understand how mind and consciousness came from matter? Could consciousness be a sort of dynamic balancing and optimal organization of something (matter?) in space and time? Could it in a sense be just another structure such as fullerenes, cytoskeletons, and bones are?”

      Anyway. Keep asking the questions and making the comments.

      Always appreciated.


      • Red-walker says:

        Interesting. Thank you for your reply!

        I’m thinking I’m starting to see what you’re getting at. One more quick question, if I may?

        When reading your comments on Steve Garcia’s post on Reincarnation, you seemed to take a neutral monist view of consciousness and propose an Internet analogy similar to reincarnation. How does that fit into this post? Does it?

        Sorry for the barrage of questions and thank you for all your time.


    • James Cross says:

      Yeah, I am not sure I would write that comment that I made on the Garcia blog today.

      But let me try to give a more consistent approach to this.

      This is the most speculative part (and one I could easily abandon). I think probably there is something inherently intelligent in the universe. I think this is what causes complexity to arise. I don’t mean this in any sort of Creationist sense. I think this is physical law(s) at work. This intelligence seems to be most recognized in life. The catch is that intelligence is not exactly what we think it is. Remember this quote: We could describe the universe as intelligent but it might be better to understand that intelligence might not be really what it seems. We may not be the originators of our own intelligence as much as we are agents of an algorithmic principle working at the quantum level.

      Now when we bring up intelligence, maybe people think of consciousness, and for some the two may almost mean the same thing. However, I think the two are distinct but perhaps complementary.

      Look at this example from Nature:

      http://www.nature.com/news/how-brainless-slime-molds-redefine-intelligence-1.11811

      It describes how slime molds “can solve mazes, mimic the layout of man-made transportation networks and choose the healthiest food from a diverse menu—and all this without a brain or nervous system.”

      Are slime molds conscious? I don’t think so. It seems to me that life has a sort of neural network (in the computer sense of the term) foundation that enables memory, decision making, and what appears to be purposive behavior. Much of this has developed through evolution and does not involve consciousness. Our own intelligence may be built on this basis, which is why I have argued that even human works of science and art derive initially from unconscious processes being brought into consciousness.

      However, it might be that as intelligence evolved brains and nervous systems, it began to develop consciousness. Now this consciousness is somewhat illusory, but as it looks at itself and at the whole, we realize that it is a part of the greater whole that evolved it.


      • Red-walker says:

        Interesting. Okay, I’m definitely understanding your point. It makes a lot of sense, but it also seems to make consciousness superfluous. Would we need to have consciousness if intelligence alone would suffice in being evolutionarily beneficial? Unless, maybe, those intelligent laws had the creation of consciousness “programmed” into them? Maybe something similar to the teleology discussed in Nagel’s Mind and Cosmos? Not sure. Just my “broad speculations”. 🙂


    • James Cross says:

      Maybe not superfluous any more than eyes and ears are superfluous. Just not as special as we sometimes think.


  3. Tienzen (Jeh-Tween) Gong says:

    Excellent article. Yet, I do find a hang-up here.

    James Cross: “Most of what the brain does is unconscious (Freud was right after all!). This includes developing math theorems. It also conforms to observational evidence in brain scans that neurons involved in recognition or decision-making fire before consciousness becomes aware of the corresponding thought. … Yet when we look at where species fall out on the Dubois line, species below the line really do seem to be less conscious and species above it seem to be more conscious. … What about the information in that portion of the brain unavailable for consciousness? Is it not very well-integrated? Is it integrated but not conscious? Wouldn’t that be enough to question the theory?”

    Among your statements above, the term ‘consciousness’ encompasses a few different semantic meanings. This might not only confuse your readers but, most importantly, might also confuse you yourself.

    Is ‘conscious’ only a ‘state’? You are no longer conscious of the ‘balance’ after you have mastered the art of riding a bike.

    Is ‘conscious’ a quantity? A lake of water is water. A gallon of water is not much water. A water molecule is no longer water.

    Is ‘conscious’ a capability? When an entity can distinguish itself from all others, it is conscious.

    It is very important to make the definition 100% clear and precise. Only with a precise definition can we then address the issue.

    James Cross: “I argued that living matter itself possesses integrated information and the difference between living matter that possesses little or no consciousness and organisms with greater consciousness is primarily the degree to which the living material can operate on information in near real-time. In this view consciousness is a potential property of living material not something that can be instantiated in any matter.”

    This could be good reasoning if the definition of consciousness were precise. As your concept of consciousness is not precisely defined, it becomes a preconceived conclusion which prevents you from dealing with the issue intelligently.

    Consciousness and intelligence are two ‘empirical’ traits of humans. Thus, there are two issues about them.

    One, what are they? Are they the same or different? Instead of giving them perfect definitions, we could first point out their ‘key’ features.
    For intelligence, its key feature is to process ‘information’. Thus, its key component is having a computing device (counting straws, an abacus, or a Turing machine).

    For consciousness, its key feature is to distinguish the self from all others. Thus, its key requirement is that all entities are uniquely identifiable.

    Of course, the two are somewhat entangled. Without a uniquely identifiable mechanism, the ‘information’ will be in a big mess, and that will make information processing utterly difficult. But nonetheless the two are totally different. Thus, Max Tegmark’s and Tononi’s Integrated Information Theory (IIT) is totally off the target, as they do not know the difference between the two (consciousness and intelligence).

    Two, there are two ways to answer these issues.
    Way one, Intelligent Design: they both are the result of a single stroke of divine design. This is a great answer for many who have no ability and no desire to get at the true truth. Any made-up story is good as long as they accept it. Obviously, you and I are not those people, amen!

    Way two, they ‘grow’ into these visible states. “Grow” implies that there are ‘seeds’. And, we do know the ‘key’ part for each seed.
    For intelligence, there must be a computing (counting) device.
    For consciousness, all entities in this physics-universe must be uniquely identifiable.

    Of course, that where these seeds sit (at the bottom of physics or being ‘inserted’ at bio-level processes) will be a different issue. This is a big issue, and thus I will not go into it here. However, I have discussed it ‘briefly’ at many other places (see, http://scientiasalon.wordpress.com/2014/09/01/the-intuitional-problem-of-consciousness/comment-page-1/#comment-7091 ).


    • James Cross says:

      Thanks for commenting.

      I am probably guilty of (mis)using the term “consciousness” in a confusing manner in this post and probably to an even greater extent over all of my posts. I am in the company of many on this topic.

      In this post I am primarily arguing for the radical plasticity theory:

      This theory argues that “conscious experience occurs if and only if an information processing system has learned about its own representations of the world. To put this claim even more provocatively: consciousness is the brain’s theory about itself, gained through experience interacting with the world, and, crucially, with itself.”

      That is what I think you are implying when you wrote this:

      “For consciousness, its key feature is to distinguish the self from all others.”

      I would argue that consciousness begins with the ability of a living organism to control itself and distinguish itself from its environment, its prey, and its predators. It expands with social organisms distinguishing themselves in their interactions with others of their own type and with other more conscious organisms.

      Intelligence, at least in my current view, is much bigger than consciousness. Consciousness is a manifestation of intelligence, not the cause of it. So there could be intelligence without consciousness. Take a look at this link:

      Click to access PhysRevLett_110-168702.pdf

      I quote the abstract:

      “Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human “cognitive niche”—tool use and social cooperation—to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.”


  4. dondeg says:

    Hi James

    Thanks for pointing me to this post. I have familiarity with, but not in-depth knowledge of Tononi’s and Tegmark’s ideas. To pick up where I left off on my comment to you in my blog, let’s continue with considering the implications of the split between mind and matter initiated by Descartes. As I spell out in Yogic View of Consciousness (YVC), this ultimately led to a divorce between Western science and philosophy.

    We can frame the issue as follows. Recall the book 1984. The idea here was to limit vocabulary and the meaning of words in such a way that people could not even frame substantial political thoughts. I would suggest that an analogous effect has occurred because of the divorce of science and philosophy, which has limited the pool of ideas on which the current crop of scientists draw.

    A big chunk of the YVC is intended to expose the losses suffered by modern science by its wholesale rejection of philosophy (or I can say alternatively, its arbitrary and subjective use of philosophy). At least philosophy, as you noted above, has covered the ground of logical possibilities about the mind-matter problem. No technical framework, whether information theory or biology, can supersede the logical categories that it took Western philosophy centuries to work out. On this basis, I consider work like Tononi or Tegmark, or similar efforts, to be philosophically naive and have a hard time taking them seriously.

    But the greater point of YVC is that the vocabulary and meaning-base of Western philosophy is itself too limited. YVC confines itself only to Patanjali’s Yoga Sutras (YS) as an additional source of information on the mind-body problem. This, of course, is only one piece in a tradition of literature that goes back at least 3000 years. Nonetheless, YS is a good anchor point as it is a systematic expression of a cogent world-view.

    The world-view described by Patanjali (which I call ‘yogic cosmology” in YVC) provides a vocabulary and meaning-base that is larger and more general than that of even Western philosophy. One of the central notions of YS is the idea of “vritti”, which is often translated as a “wave” or “disturbance” within the mind. Modern occultists may call this a “thought-form”. What the notion of “vritti” does is clearly distinguish the contents of the mind from the medium in which these contents occur. The medium is consciousness itself, consciousness per se. Because of all the possible meanings of the word “consciousness” in English, I prefer to use the Hindu word “drisimatrah”. Baars’ global workspace model is surprisingly consistent with this view.

    Work like Tononi’s and Tegmark’s is, in my view, best interpreted as an attempt to describe the behavior of vrittis within consciousness, drisimatrah, but it does not touch drisimatrah at all. It cannot be touched. However, it can be experienced. It is the base of our experience. To think drisimatrah can be explained in terms of any vritti or set of vrittis is to confuse the medium and the message (to use a familiar structuralist metaphor).

    Our being cannot be explained; certainly not by the intellect. Our being simply is. If we can get over this hump, then we free ourselves to explore the structure of the vrittis. Here we will find seemingly infinite patterns that are so awesome in their brilliance and diversity that it simply overwhelms the intellect. It is the study of the patterns within consciousness where the reconciliations of mind and matter, of science and philosophy, will occur.

    Therefore, the efforts of people like Tononi and Tegmark are of value for providing possible new understandings of the vrittis. However, they will never touch drisimatrah. If we overcome the expectation that consciousness can be explained by the intellect, then I think we can extract some good from all this.

    Again, sorry for the longish reply, and I will close here. Once again, thank you for the most thought provoking conversation.

    Very best wishes,

    Don


    • James Cross says:

      No problem with the longish replies.

      As you probably have gathered, I agree with you about Tononi and Tegmark.

      I really intend to spend a lot more time on your site reading through what you have written.

      The mind-body/mind-matter split you talk about may be more than just philosophy. By that, I do not mean it is in any sense real or more real than alternative views. One of the ideas I have tossed around, perhaps more in comments than in actual posts, is that consciousness is learned or, at least, that a significant portion of it is learned. If that is the case, then the philosophical split itself may be a reflection of some long-term movement in Western culture in how we learn consciousness. The good news is that if consciousness is learned, it can be learned in more than one way (compare the worldviews of indigenous peoples or traditional Eastern cultures before extensive contact to Western worldviews), and that would mean historical and cultural forces could be driving us to new approaches now.


      • dondeg says:

        Just to make clear: vrittis are the contents of the mind. Some are learned, some are innate. They are the patterns that give content to consciousness. Consciousness per se just is. It is much greater than our human mind, although our human mind is one way it expresses itself. I am totally panpsychist: everything is an expression of consciousness. My overall framework is the yogic one, as opposed to the various Western intellectual frameworks. Thanks for being patient with my long-windedness! Best wishes, Don

