Thinking About Thinking

The Edge question for 2015 is “What do you think about machines that think?”

The first two responses I read were from Sean Carroll and Nick Bostrom. I have been a longtime follower of Sean Carroll’s blog Preposterous Universe and have also recently read Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

Carroll took the opportunity to revisit yet again his long-running atheist argument that we can explain everything with physical law by erasing the distinction between us and machines. His response, entitled “We Are All Machines That Think,” really misses the point of the question, I think.

Bostrom, on the other hand, in his response “A Difficult Topic,” immediately begins discussing intelligence, which may or may not be the same thing as thinking. I had planned to write a separate post on Bostrom’s book but will touch on my issues with it here.

Not surprisingly, most open-ended questions with imprecise definitions of terms generate answers using different definitions. The terms in this question with imprecise definitions are “machines” and “thinking”. I will be curious to see whether other respondents even care to define the terms as they provide their answers. I will explore my thoughts on those terms and what they mean for the answer to the Edge question.

Let’s begin with the easier of the terms: “machine”.

The Wikipedia definition of a machine is “a tool containing one or more parts that uses energy to perform an intended action”. This is mostly what we mean when we use the word in ordinary language. To consider us machines by this definition is a considerable stretch. We may be physical entities consisting of one or more parts that use energy, but we certainly are not tools, unless you want to consider us the tools of our genes, with our genes having the intention of propagating themselves. The notion of intention in the definition also carries with it the idea that a machine is constructed, or perhaps chosen from the natural world in the case of very primitive tools. By this reckoning, only humans and some other intelligent species make or choose tools. So a machine that thinks would have to be constructed by us or by another intelligent biological species, unless a machine so constructed develops the ability to create, for its own use, another machine that thinks.

We can hyperbolize this definition of machine, as Sean Carroll does, to make other points. His point is that we and our ability to think can be explained by physics, probably classical physics, without resorting to gods, souls, or élan vital. I happen to agree with this point (although I am not so sure whether classical, or even current, physics will suffice), but for the purposes of the Edge question I think the Wikipedia definition is the most appropriate one and the one that will yield the most interesting insights and new questions.

Now for the difficult term: “thinking”.

“Thinking” is an incredibly imprecise term. Thinking seems intimately involved with consciousness and intelligence, but in support of those we often need to invoke perception, cognition, pattern recognition, and more. Even Wikipedia says “there is no generally accepted agreement as to what thought is or how it is created.”

If consciousness and intelligence are at the core of thinking, then is consciousness a requirement for intelligence?

Where Nick Bostrom’s Edge answer runs aground is in its failure to address this question. He immediately begins discussing intelligence and superintelligence as if intelligence were the same as thinking. We can easily see that machines are, or soon will be, intelligent. Deep Blue, the chess computer, beat Garry Kasparov, the reigning chess champion (although Kasparov claimed IBM cheated). So Deep Blue was clearly intelligent in the narrow realm of chess strategy. Whether Deep Blue was thinking is another matter. The confusion becomes even more pronounced in Bostrom’s book, where he (along with many others who have written on this topic) heavily anthropomorphizes AI, ascribing human intentions, mostly of the worst kind, to machines.

How does intention derive from intelligence? For that matter, what is intelligence? I can’t find a clear definition of it in his book on the topic.

I am going to attempt a definition. Intelligence is a physical process that attempts to maximize the diversity and/or utility of future outcomes to achieve optimal solutions. We see this in the operation of Deep Blue, where the machine developed sufficiently optimal strategies to defeat Garry Kasparov. We see this in slime molds that “can solve mazes, mimic the layout of man-made transportation networks and choose the healthiest food from a diverse menu—and all this without a brain or nervous system.” We may even see this in the operation of evolution itself. Leslie Valiant argues in Probably Approximately Correct that evolution is computational and best understood as a natural neural network similar in many respects to the neural networks used in computer learning. He calls these algorithms “ecorithms” and argues that they are the actual mechanism by which natural selection operates.
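This definition can be made concrete with a toy sketch of my own (purely illustrative, and not from Valiant, IBM, or anyone else cited here): an agent on a small grid scores each possible move by how many distinct cells remain reachable within a short horizon, then takes the move that keeps the most futures open. It is a crude cousin of Deep Blue’s lookahead over chess positions and of the slime mold’s option-preserving exploration.

```python
# Toy sketch: "intelligence" as maximizing the diversity of future outcomes.
# The grid and all names here are hypothetical illustrations.

GRID = [
    "#########",
    "##......#",   # a corridor with a dead end on the left...
    "######.##",   # ...and a side branch on the right
    "######.##",
    "#########",
]

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def open_cell(pos):
    r, c = pos
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def reachable(start, horizon):
    """Count distinct cells reachable from `start` within `horizon` steps."""
    frontier, seen = {start}, {start}
    for _ in range(horizon):
        frontier = {
            (r + dr, c + dc)
            for r, c in frontier
            for dr, dc in MOVES
            if open_cell((r + dr, c + dc))
        } - seen
        seen |= frontier
    return len(seen)

def best_move(pos, horizon=4):
    """Take the step that leaves the greatest diversity of future states."""
    candidates = [
        (pos[0] + dr, pos[1] + dc)
        for dr, dc in MOVES
        if open_cell((pos[0] + dr, pos[1] + dc))
    ]
    return max(candidates, key=lambda p: reachable(p, horizon))

print(best_move((1, 4)))  # -> (1, 5): away from the dead end, toward the branch
```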

Intelligence thus is a physical process and does not require consciousness. Intelligence may be something built into the very lowest levels of life and could perhaps have played a role in its origin. The basis of this intelligence may not be classical but may arise from the quantum level. I won’t go into this argument too much at this time, but Paul Davies in “Does quantum mechanics play a non-trivial role in life?” argues that quantum mechanical processes may be at work as a source of mutation, in the accelerated reaction rates of enzymes, in the genetic code (which may be optimized for quantum search algorithms), and in microtubules, which play a role in neural firings and which some believe to be the root of consciousness.

In my view, intelligence precedes consciousness and created consciousness through natural selection to “maximize the diversity and/or utility of future outcomes to achieve optimal solutions,” to quote myself. Our ability to think is a biological product of evolutionary algorithms.
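That last sentence can be illustrated in a few lines of code. The sketch below is a generic toy genetic algorithm of my own devising, not Valiant’s actual PAC-learning formalism: it evolves random bit strings toward a fixed target using nothing but mutation and selection, and, like a learning algorithm, the population converges on the right answer from fitness feedback alone.

```python
import random

random.seed(42)  # reproducible toy run

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # the "environment" being adapted to

def fitness(genome):
    """How many positions match the environment's demands."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copy a genome with occasional bit flips, the only source of variation."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population and let selection do the "learning".
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"fully adapted after {generation} generations")
        break
    survivors = population[:10]                     # selection
    population = [mutate(random.choice(survivors))  # reproduction with variation
                  for _ in range(50)]
```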

Machines do not think any more than slime molds think. They appear to us to be capable of thought because they are driven by algorithmic intelligence similar to that which created the capability of thought in biological organisms, but of a depth and complexity that we are far from understanding. I suspect that when or if we approach such an understanding, we might be able to create something that thinks, but it will more likely resemble life than anything we currently recognize as a machine.

This entry was posted in Consciousness, Intelligence, Origin of Life, Quantum Mechanics, Transhumanism.

19 Responses to Thinking About Thinking

  1. dondeg says:

    Interesting thoughts but you run into many of the paradoxes that have been gone over again and again in classical philosophy. There is really little that is new in the current dialogues except the metaphors of computers and brains.

    It is valid to separate out intelligence from consciousness, although your terminology could be tightened up a bit. Intelligence is process, dynamics, pattern, and thus applies to galaxies, atoms, brains and everything else.

    But when you begin questioning the genesis or ultimate fate of such processes, or whether they link or do not link to consciousness, then that is the same quicksand since Plato and Aristotle, since Newton and Leibniz, since Swedenborg and Laplace, since Einstein and Bohr. Every generation thinks they finally have been able to solve what all the greats before them could not.

    As long as people stare only at the shadows on the cave wall, they will see only shadows.

    Thanks for posting the provocative article!

    Best wishes,

    Don


    • James Cross says:

      Sorry to take so long to get back to you.

      Thanks for your comments. Of course, we can probably never quite escape Plato, and perhaps other predecessors whose names have been lost to history. Philosophy never really progresses, whereas science evolves toward what, if anything, we cannot quite be sure.

      I’ve added your website PlaneTalk to my blogroll and would encourage readers to check it out.


  2. dondeg says:

    Dear James

    Nice to hear from you. Thank you for replying and also for adding my site to your blogroll. Yes, poor Philosophy. I just did a post (part 3 of the yogic view of consciousness) where poor Philosophy went “mad as a hatter” when her lifelong soul mate Science left her for greener pastures. Hehe, it’s nice to have some fun with all this. Again, very nice to hear from you. Best wishes, -Don


  3. jjhiii24 says:

    Hey Jim,

    Just noticed our similar threads in recent blog posts, and wanted to reciprocate with a link that I think you may find interesting. One of the most interesting philosophers of consciousness is David Chalmers, a professor at the Australian National University and one of the co-founders of the annual “Science of Consciousness” conference, which takes place most often at the University of Arizona in Tucson. David recently gave a TED talk that addresses this subject in an interesting way.

    I think your position that intelligence precedes consciousness is somewhat controversial, but I think you probably could say a great deal more about it by addressing some of the individual components of your ideas separately. It’s a complex subject and deserves the attention.

    I am far less optimistic than you about our ability to produce machines that “think,” as opposed to either mimicking thought or processing information in a way that resembles thought in humans, but as you so correctly pointed out, we aren’t even really sure about our definition of thinking, so it is problematic on several levels.

    It has always been my contention that consciousness may be describable as something fundamental, like space, time, mass, and charge, or that everything has some degree of consciousness, with the most complex structures like brains having the highest degree and elementary particles having only a fraction or a near-zero amount, but there is still a great deal of work and thought to be done in order to arrive at something more substantial to support such ideas.

    Looking forward to reading more of your ideas as you are able to contribute more…John H.


    • James Cross says:

      Thanks for commenting but I am not sure where you think I am optimistic about being able to create machines that think. I think my viewpoint is almost the exact opposite.

      Actually I think Evan Thompson captures my view almost perfectly and I was planning on using this quote when I eventually post something on his book:

      “According to this way of thinking, sentience depends fundamentally on electrochemical processes of excitable living cells while consciousness depends fundamentally on neuroelectrical processes of the brain. Consciousness isn’t an abstract informational property, such as Giulio Tononi’s “integrated information”, it’s a concrete bioelectrical phenomenon. It follows – as John Searle has long argued – that consciousness can’t be instantiated in an artificial system simply by giving the system the right kind of computer program, for consciousness depends fundamentally on specific kinds of electrochemical processes, that is, on a specific kind of biological hardware. This view predicts that only artificial systems having the right kind of electrochemical constitution would be able to be conscious.”

      In other words, the only machine capable of thinking would be one constructed from the chemicals of life. Computers and other systems might be intelligent, but they are not really thinking. Thinking and consciousness do have a physical basis, but it lies in the specific bioelectrical properties of brains, neurons, and maybe the microtubules of cells. Intelligence is a computational process that can be instantiated on many types of substrates.


  4. James Cross says:

    Asking why there is subjective experience is like asking why water is wet.

    Water is wet because of the chemical properties of the molecule. Consciousness is the way it is because of the underlying bioelectrical properties of brains and neurons. The problem is hard only in the sense that the molecules of brains and neurons are trillions of times more varied, more dynamic, and more complex than the simple water molecule. So the science is not simple. Understanding will not be accomplished in one giant leap. It cannot be simulated or replicated on anything we regard today as a machine.

    The why of the evolutionary development of consciousness can, I think, be explained in part by radical plasticity theory, which argues that “conscious experience occurs if and only if an information processing system has learned about its own representations of the world. To put this claim even more provocatively: consciousness is the brain’s theory about itself, gained through experience interacting with the world, and, crucially, with itself.”

    I posted this a while back:

    If we couple radical plasticity theory with the observation that more conscious organisms seem to be more social, it might be that our subjective sense of self-awareness is learned through interaction with other conscious entities, especially those of our own species. We recognize consciousness in other organisms because we learned our own consciousness through interaction with them. Consciousness, while it may require some threshold of information storage/processing capacity, may not actually be directly dependent on it.

    Radical plasticity theory, when tied to the observation that much of the learning of the more conscious species is social learning, explains a lot:

    1-Social organisms will have larger brain-to-body-weight ratios.
    2-Social organisms will be more conscious.
    3-Our subjective sense of self-awareness is learned through interaction with other conscious entities.
    4-We recognize consciousness in other organisms because we learned our own consciousness through interaction with them.
    5-Most of what the brain does is unconscious (Freud was right after all!). This includes developing math theorems. It also conforms to observational evidence from brain scans that neurons involved in recognition or decision-making fire before we become consciously aware of the corresponding thought.
    6-In learning something new we use a great deal of consciousness, but once it is learned it requires less. I concentrated a lot while learning to ride a bike; now I ride and think about consciousness.

    In the end, consciousness may be an evolutionary product of the natural selection advantages provided by increased control of the body, better perception of the environment, and an increased ability to predict the outcomes of interaction with the environment. The final development of what we might more properly think of as consciousness occurred when we needed to predict the outcomes of interactions with other members of our species. The development of consciousness may be more or less equivalent, in the grand scheme of things, to the evolution of eyes or ears. Quite remarkable and amazing, but not something we should regard as sui generis.

    Consciousness: Much Ado About (Almost) Nothing?

    See also: 07-PBR.pdf
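    As an aside, the bare mechanism in the radical plasticity quote, a system learning about its own representations, can be caricatured in a few lines of code. The sketch below is entirely my own toy, not the model from the paper: a first-order model learns a task, and a second-order model then learns to predict the first-order model’s own errors, a crude stand-in for “the brain’s theory about itself”.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order system: learns a task whose difficulty varies across inputs
# (the noise grows with the first input), so some of its answers are
# reliably worse than others.
X = rng.normal(size=(500, 3))
noise = rng.normal(size=500) * (0.2 + np.abs(X[:, 0]))
y = X @ np.array([1.0, -2.0, 0.5]) + noise
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Second-order system: learns about the first-order system itself,
# predicting the magnitude of its own errors from the same inputs.
errors = np.abs(y - X @ w)
meta_features = np.column_stack([np.abs(X), np.ones(500)])
v, *_ = np.linalg.lstsq(meta_features, errors, rcond=None)
self_estimate = meta_features @ v  # "how wrong am I likely to be here?"

print("correlation of self-estimate with actual error:",
      round(float(np.corrcoef(self_estimate, errors)[0, 1]), 2))
```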


  5. I see that some problems stem from our attempts to look at this mostly from a human viewpoint. Let me illustrate this thesis.

    I would like to go back to the easier of the terms. You have used the Wikipedia definition: a machine “is a tool containing one or more parts that uses energy to perform an intended action”.

    The problem with this definition is the use of the term “tool”. A “tool” is something useful for people. The Wikipedia definition is this: a tool is “an object used to extend the ability of an individual to modify features of the surrounding environment.” [1]
    Let us imagine an intelligent machine that is independent of people (an AI or AGI). If it is independent of people, then it probably does not consider itself a “tool for people”.
    We do not even need to discuss what “considers” means. Whether it could be thinking or not, whether it involves self-recognition or not, and similar questions can be put aside for now. The most important thing here is that, from an independent intelligent machine’s viewpoint, that machine is not a tool used by people. It is an entity. Both humans and independent intelligent machines could satisfy the same definition: “an entity containing one or more parts that uses energy to perform an intended action”.

    1. https://en.wikipedia.org/wiki/Tool


  6. No, it did not. But that point is up for a separate discussion, I think.


  7. Mules have existed for thousands of years. No mule would be born if humans did not intervene. If, for some reason, people stopped bothering about already existing mules, then they would exist independently of people. Of course, in this case, mules could not reproduce. I’m not sure that mules would think (if they think :-)) favorably of people. Even, or maybe especially, because they would not exist without people’s help.


  8. If an intelligent machine could reproduce itself, then the fact of its initial creation by people would matter only for the first generation of those machines.


    • James Cross says:

      You’re assuming humans would make a machine that would want to reproduce itself. But would it need to be endowed with the desire for reproduction at its creation by humans? Or do you see this arising spontaneously? At what point, and how, does the machine move beyond being able to do only what it is programmed to do?

      A lot comes down to what thinking is. If it is just intelligence, then I would acknowledge that machines can and do think. That is true of slime molds and other natural processes too. But my criticism of Bostrom’s answer is that it conflates thinking and intelligence; while thinking involves intelligence, it is also more, because it involves an ability to have subjective experience.


  9. “You’re assuming humans would make a machine that would want to reproduce itself.” Not really. I assume that “humans would make a machine that COULD reproduce itself”. That is already a fact. There are already multiple robot types that can self-replicate. A search for “self-replicating robots” on Google tells us that the first such robots existed in 2005. Whatever we have in this area is just a beginning, and engineers are working hard to improve the technology.

    As for “thinking”, I prefer to exclude it from this discussion altogether. I would suggest looking at the facts that can be verified, namely the output (from humans and from intelligent, self-replicating machines), and comparing them to each other. This also makes sense from a historical point of view. Nobody living on Earth today can know for sure what now-dead people thought in the past. All our considerations about what and how they thought cannot be proved.


    • James Cross says:

      So I guess your answer to the Edge question is the same as mine: machines don’t think, since that can’t be verified.

      Can you verify that you think?


    • James Cross says:

      This is a common argument. “Let’s just look at what we can verify”.

      But there isn’t really anything that can be verified without something, some kind of process, to do the verification, and that process is what you are saying can’t be verified.

      But if we can’t really verify that process itself, then the argument becomes a reductio ad absurdum: nothing can be verified, because I can’t verify that the process doing the verification actually exists.


  10. I was very imprecise in my statement about verification. Verification is a very broad topic. I think absolute verification is a myth.
    However, I believe there could be different grades of verification, which could be measurable and agreed upon. Then, within a framework based on a certain grade of verification, we could have productive discussions.

    I’m not sure if and when brain science will reach such a level that we will be able to decipher, in certain terms, such an uncertain (by current definition) process as “thinking”.
    I doubt that organic matter (like humans) and non-organic matter (like an AI/AGI) have identical processes inside them. Even so, the outputs of those processes in certain circumstances could be similar.


    • James Cross says:

      I don’t think you can go by output only. An electric car and a gasoline car have identical outputs, but inside they are different, although they have some commonalities like gears and drive trains. Conceivably there could be something even more different inside, like sled dogs running on treadmills, and the output, within a range of variation, might be the same. As you point out, we have only indirect access to what is happening inside, but what is inside is what thinking and consciousness are about. Without thinking and consciousness there isn’t any verification, and without mutual verification there isn’t any way of reaching agreement.

      I think we’re stuck with the fact we can’t ignore the inside.

      Having said that, I do think a good deal of progress can be made on what is really inside and how it works, although the final “hard problem” will remain elusive, like the grapes always just out of reach of Tantalus. When we understand the inside sufficiently, maybe we will be able to produce a machine that “thinks” and doesn’t simply mimic thinking. And, if we can, we will know it, because it will have the key structural internal workings our understanding specifies AND produce the same outputs. Or maybe we won’t be able to produce such a machine, depending on what our understanding of the inside turns out to be.

