Thinking About Thinking

The Edge question for 2015 is “What do you think about machines that think?”

The first two responses I read were from Sean Carroll and Nick Bostrom. I have been a longtime follower of Sean Carroll’s blog Preposterous Universe and have also recently read Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

Carroll took the opportunity to revisit yet again his long-running atheist argument that we can explain everything with physical law by erasing the distinction between us and machines. His response, entitled “We Are All Machines That Think”, really misses the point of the question, I think.

Bostrom, on the other hand, in his response “A Difficult Topic”, immediately begins discussing intelligence, which may or may not be the same as thinking. I had planned to write a separate post on Bostrom’s book but will touch on my issues with it here.

Not surprisingly, most open-ended questions with imprecise definitions of terms generate answers using different definitions. The terms in this question with imprecise definitions are “machines” and “thinking”. I will be curious to see whether other respondents even care to define the terms as they provide their answers. I will explore my thoughts on those terms and what they mean for the answer to the Edge question.

Let’s begin with the easier of the terms: “machine”.

The Wikipedia definition of machine is “a tool containing one or more parts that uses energy to perform an intended action”. This is mostly what we mean when we use the word in normal language. To consider us machines by this definition is a considerable stretch. We may be physical entities consisting of one or more parts that use energy, but we certainly are not tools, unless you want to consider us the tools of our genes, whose intention is to propagate themselves. The notion of intention in the definition also carries with it the idea that a machine is constructed, or perhaps chosen from the natural world in the case of very primitive tools. By this definition, only humans and some other intelligent species make or choose tools. So a machine that thinks would have to be constructed by us or by another intelligent biological species, unless a machine so constructed itself develops the ability to create another machine that thinks.

We can hyperbolize this definition of machine, as Sean Carroll does, to make other points. His point is that we and our ability to think can be explained by physics, probably classical physics, without resorting to gods, souls, or élan vital. I happen to agree with this (although I am not so sure whether classical or even current physics will suffice) but, for the purpose of the Edge question, I think the Wikipedia definition is the most appropriate one and the one that will yield the most interesting insights and new questions.

Now for the difficult term: “thinking”.

“Thinking” is an incredibly imprecise term. Thinking seems intimately involved with consciousness and intelligence but in support of those we often need to involve perception, cognition, pattern recognition, and more. Even Wikipedia says “there is no generally accepted agreement as to what thought is or how it is created.”

If consciousness and intelligence are at the core of thinking, then is consciousness a requirement for intelligence?

Where Nick Bostrom’s Edge answer runs aground is in its failure to address this question. He immediately begins discussing intelligence and super-intelligence as if intelligence were the same as thinking. We can easily see that machines are, or soon will be, intelligent. Deep Blue, the chess computer, beat Garry Kasparov, the reigning chess champion (although Kasparov claimed IBM cheated). So Deep Blue was clearly intelligent in the narrow realm of chess strategy. Whether Deep Blue was thinking is another matter. The confusion becomes even more pronounced in Bostrom’s book, where he (along with many others who have written on this topic) heavily anthropomorphizes AI, ascribing human intentions, mostly of the worst kind, to machines.

How does intention derive from intelligence? For that matter, what is intelligence? I can’t find a clear definition of it in his book on the topic.

I am going to attempt a definition. Intelligence is a physical process that attempts to maximize the diversity and/or utility of future outcomes in order to achieve optimal solutions. We see this in the operation of Deep Blue, where the machine developed sufficiently optimal strategies to defeat Garry Kasparov. We see this in slime molds that “can solve mazes, mimic the layout of man-made transportation networks and choose the healthiest food from a diverse menu—and all this without a brain or nervous system.” We may even see this in the operation of evolution itself. Leslie Valiant argues in Probably Approximately Correct that evolution itself is computational and best understood as a natural neural network similar in many respects to the neural networks used in computer learning. He calls these algorithms ecorithms, and they are the actual mechanism by which natural selection operates.
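To make the idea of intelligence as selection-driven optimization concrete, here is a minimal, purely illustrative sketch in Python. It is my own toy example, not Valiant’s formalism and not anything Deep Blue or a slime mold actually runs; the fitness, mutate, and evolve names and the bit-string target are hypothetical. The point is only that random variation plus selection maximizes the utility of future outcomes with no awareness anywhere in the loop.

```python
# Toy sketch (illustrative only): variation plus selection maximizing a
# simple utility function, with no consciousness involved anywhere.
import random

def fitness(candidate):
    # Hypothetical utility function: how closely a bit string matches a target.
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(1 for a, b in zip(candidate, target) if a == b)

def mutate(candidate, rate=0.1):
    # Random variation: flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(generations=200, offspring=10):
    parent = [random.randint(0, 1) for _ in range(8)]
    for _ in range(generations):
        # Selection: keep whichever variant scores best on the utility function.
        children = [mutate(parent) for _ in range(offspring)]
        parent = max(children + [parent], key=fitness)
    return parent, fitness(parent)

if __name__ == "__main__":
    best, score = evolve()
    print(best, score)
```

Nothing in this loop perceives or intends anything, yet over time it reliably converges on the target: a blind, physical process that nonetheless looks intelligent by the definition above.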

Intelligence, then, is a physical process and does not require consciousness. Intelligence may be something built into the very lowest levels of life and could perhaps have played a role in its origin. The basis of this intelligence may not be classical but may arise from the quantum level. I won’t go into this argument too much at this time, but Paul Davies in “Does quantum mechanics play a non-trivial role in life?” argues that quantum mechanical processes may be at work as a source of mutation, in the accelerated reaction rates of enzymes, in the genetic code, which may be optimized for quantum search algorithms, and in microtubules, which play a role in neural firings and which some believe to be the root of consciousness.

In my view, intelligence precedes consciousness and created consciousness through natural selection to “maximize the diversity and/or utility of future outcomes to achieve optimal solutions”, to quote myself. Our ability to think is a biological product of evolutionary algorithms.

Machines do not think any more than slime molds think. They appear to us to be capable of thought because they are driven by algorithmic intelligence similar to that which created the capability of thought in biological organisms, but of a depth and complexity that we are far from understanding. I suspect that when or if we approach such an understanding we might be able to create something that thinks, but it will more likely resemble life than anything we currently recognize as a machine.


7 Responses to Thinking About Thinking

  1. dondeg says:

    Interesting thoughts but you run into many of the paradoxes that have been gone over again and again in classical philosophy. There is really little that is new in the current dialogues except the metaphors of computers and brains.

    It is valid to separate out intelligence from consciousness, although your terminology could be tightened up a bit. Intelligence is process, dynamics, pattern, and thus applies to galaxies, atoms, brains and everything else.

    But when you begin questioning the genesis or ultimate fate of such processes, or whether they do or do not link to consciousness, then you are in the same quicksand that has been there since Plato and Aristotle, since Newton and Leibniz, since Swedenborg and Laplace, since Einstein and Bohr. Every generation thinks it has finally been able to solve what all the greats before it could not.

    As long as people stare only at the shadows on the cave wall, they will see only shadows.

    Thanks for posting the provocative article!

    Best wishes,

    Don

    • James Cross says:

      Sorry to take so long to get back to you.

      Thanks for your comments. Of course, we can probably never quite escape Plato and perhaps other predecessors whose names have been lost to history. Philosophy never really progresses, whereas science evolves toward what, if anything, we cannot quite be sure.

      I’ve added your website PlaneTalk to my blogroll and would encourage readers to check it out.

  2. dondeg says:

    Dear James

    Nice to hear from you. Thank you for replying and also for adding my site to your blogroll. Yes, poor philosophy. I just did a post (part 3 of the yogic view of consciousness) in which poor Philosophy went “mad as a hatter” when her lifelong soul mate Science left her for greener pastures. Hehe, it’s nice to have some fun with all this. Again, very nice to hear from you. Best wishes, -Don

  3. jjhiii24 says:

    Hey Jim,

    Just noticed our similar threads in recent blog posts, and I wanted to reciprocate with a link that I think you may find interesting. One of the most interesting philosophers of consciousness is David Chalmers, a professor at the Australian National University and one of the co-founders of the annual “Science of Consciousness” conference, which takes place most often at the University of Arizona in Tucson. David recently gave a TED talk that addresses this subject in an interesting way.

    I think your position that intelligence precedes consciousness is somewhat controversial, but I think you probably could say a great deal more about it by addressing some of the individual components of your ideas separately. It’s a complex subject and deserves the attention.

    I am far less optimistic than you about our ability to produce machines that “think,” as opposed to either mimicking thought or processing information in a way that resembles thought in humans, but as you so correctly pointed out, we aren’t even really sure about our definition of thinking, so it is problematic on several levels.

    It has always been my contention that consciousness may be possible to describe as a fundamental force like space, time, mass, and charge, or that everything has some degree of consciousness, with the most complex structures like brains having the highest degree and elementary particles having only a fraction or a near-zero amount of consciousness, but there is still a great deal of work and thought to be done in order to arrive at something more substantial to support such ideas.

    Looking forward to reading more of your ideas as you are able to contribute more…John H.

    • James Cross says:

      Thanks for commenting, but I am not sure where you think I am optimistic about being able to create machines that think. I think my viewpoint is almost the exact opposite.

      Actually I think Evan Thompson captures my view almost perfectly and I was planning on using this quote when I eventually post something on his book:

      “According to this way of thinking, sentience depends fundamentally on electrochemical processes of excitable living cells while consciousness depends fundamentally on neuroelectrical processes of the brain. Consciousness isn’t an abstract informational property, such as Giulio Tononi’s “integrated information”, it’s a concrete bioelectrical phenomenon. It follows – as John Searle has long argued – that consciousness can’t be instantiated in an artificial system simply by giving the system the right kind of computer program, for consciousness depends fundamentally on specific kinds of electrochemical processes, that is, on a specific kind of biological hardware. This view predicts that only artificial systems having the right kind of electrochemical constitution would be able to be conscious.”

      In other words, the only machine capable of thinking would be one constructed from the chemicals of life. Computers and other systems might be intelligent, but they are not really thinking. Thinking and consciousness do have a physical basis, but it lies in the specific bioelectrical properties of brains, neurons, and maybe the microtubules of cells. Intelligence is a computational process that can be instantiated on many types of substrates.

  4. James Cross says:

    Asking why there is subjective experience is like asking why water is wet.

    Water is wet because of the chemical properties of the molecule. Consciousness is the way it is because of the underlying bioelectrical properties of brains and neurons. The problem is hard only in the sense that the molecules of brains and neurons are trillions of times more varied, more dynamic, and more complex than the simple water molecule. So the science is not simple. Understanding will not be accomplished in one giant leap. Consciousness cannot be simulated or replicated on anything we regard today as a machine.

    The why of the evolutionary development of consciousness can, I think, be explained in part by the radical plasticity theory, which argues that “conscious experience occurs if and only if an information processing system has learned about its own representations of the world. To put this claim even more provocatively: consciousness is the brain’s theory about itself, gained through experience interacting with the world, and, crucially, with itself.”

    I posted this a while back:

    If we couple the radical plasticity theory with the observation that more conscious organisms seem to be more social, it might be that our subjective sense of self-awareness is learned through interaction with other conscious entities, especially those of our own species. We recognize consciousness in other organisms because we learned our own consciousness through interaction with them. Consciousness, while it may require some threshold of information storage/processing capacity, may not actually be directly dependent on it.

    The radical plasticity theory, when tied to the observation that much of the learning of the more conscious species is social learning, explains a lot:

    1-Social organisms will have larger brain to body weight ratios.
    2-Social organisms will be more conscious.
    3-Our subjective sense of self-awareness is learned through interaction with other conscious entities.
    4-We recognize consciousness in other organisms because we learned our own consciousness through interaction with them.
    5-Most of what the brain does is unconscious (Freud was right after all!). This includes developing math theorems. It also conforms to observational evidence in brain scans that neurons involved in recognition or decision-making fire before consciousness becomes aware of the corresponding thought.
    6-In learning something new we use a great deal of consciousness, but once it is learned it requires less consciousness. I concentrated a lot when learning to ride a bike. Now I ride and think about consciousness.

    In the end, consciousness may be an evolutionary product of the natural selection advantages provided by increased control of the body, better perception of the environment, and an increased ability to predict the outcomes of interaction with the environment. The final development of what we might more properly think of as consciousness occurred when we needed to predict the outcomes of interactions with other members of our own species. The development of consciousness may be more or less equivalent in the grand scheme of things to the evolution of eyes or ears. Quite remarkable and amazing, but not something we should regard as sui generis.

    https://broadspeculations.com/2014/06/01/consciousness-much-ado-about-almost-nothing/

    See also:

    http://srsc.ulb.ac.be/axcwww/papers/pdf/07-PBR.pdf
