The Edge question for 2015 is “What do you think about machines that think?”
The first two responses I read were from Sean Carroll and Nick Bostrom. I have been a long-time follower of Sean Carroll’s blog Preposterous Universe and have also recently read Bostrom’s book Superintelligence: Paths, Dangers, Strategies.
Carroll took the opportunity to revisit yet again his long-running atheist argument that we can explain everything with physical law by erasing the distinction between us and machines. His response, titled “We Are All Machines That Think,” really misses the point of the question, I think.
Bostrom, on the other hand, in his response “A Difficult Topic” immediately begins discussing intelligence which may or may not be the same as thinking. I had planned to write a separate post on Bostrom’s book but will touch on my issues with that book here.
Not surprisingly, most open-ended questions with imprecise definitions of terms generate answers using different definitions. The imprecisely defined terms in this question are “machines” and “thinking”. I will be curious to see whether other responders even care to define the terms as they provide their answers. I will explore my thoughts on those terms and what they mean for the answer to the Edge question.
Let’s begin with the easier of the terms: “machine”.
The Wikipedia definition of machine is “a tool containing one or more parts that uses energy to perform an intended action”. This is mostly what we mean when we use the word in ordinary language. To consider us machines by this definition is a considerable stretch. We may be physical entities consisting of one or more parts that use energy, but we certainly are not tools, unless you want to consider us the tools of our genes, which have the intention of propagating themselves. The notion of intention in the definition also carries with it the idea that a machine is constructed, or perhaps chosen from the natural world in the case of very primitive tools. By this definition, only humans and some other intelligent species make or choose tools. So a machine that thinks would have to be constructed by us or by another intelligent biological species, unless a machine so constructed develops the ability to create yet another machine that thinks on its behalf.
We can hyperbolize this definition of machine, as Sean Carroll does, to make other points. His point is that we and our ability to think can be explained by physics, probably classical physics, without resorting to gods, souls, or élan vital. I happen to agree (although I am not so sure whether classical or even current physics will suffice), but for the purpose of the Edge question I think the Wikipedia definition is the most appropriate one and the one that will yield the most interesting insights and new questions.
Now for the difficult term: “thinking”.
“Thinking” is an incredibly imprecise term. Thinking seems intimately involved with consciousness and intelligence, but supporting those in turn seems to require perception, cognition, pattern recognition, and more. Even Wikipedia says “there is no generally accepted agreement as to what thought is or how it is created.”
If consciousness and intelligence are at the core of thinking, then is consciousness a requirement for intelligence?
Where Nick Bostrom’s Edge answer runs aground is in its failure to address this question. He immediately begins discussing intelligence and superintelligence as if intelligence were the same as thinking. We can easily see that machines are, or soon will be, intelligent. Deep Blue, the chess computer, beat Garry Kasparov, the reigning chess champion (although Kasparov claimed IBM cheated). So Deep Blue was clearly intelligent in the narrow realm of chess strategy. Whether Deep Blue was thinking is another matter. The confusion becomes even more pronounced in his book, where he (along with many others who have written on this topic) heavily anthropomorphizes AI, ascribing human intentions, mostly of the worst kind, to machines.
How does intention derive from intelligence? For that matter, what is intelligence? I can’t find a clear definition of it in his book on the topic.
I am going to attempt a definition. Intelligence is a physical process that attempts to maximize the diversity and/or utility of future outcomes in order to achieve optimal solutions. We see this in the operation of Deep Blue, which developed sufficiently optimal strategies to defeat Garry Kasparov. We see it in slime molds, which “can solve mazes, mimic the layout of man-made transportation networks and choose the healthiest food from a diverse menu—and all this without a brain or nervous system.” We may even see it in the operation of evolution itself. Leslie Valiant argues in Probably Approximately Correct that evolution is computational and best understood as a natural learning algorithm similar in many respects to the neural networks used in machine learning. He calls such algorithmic activity an “ecorithm” and argues that it is the actual mechanism by which natural selection operates.
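To make the definition a bit more concrete, here is a toy sketch of one way to operationalize “maximizing the diversity of future outcomes”: an agent on a small grid that, at each step, picks the move leaving it the largest number of distinct reachable states within a fixed horizon. The grid, the moves, and the horizon are all my own illustrative assumptions, not anything from the Edge responses; this is a crude heuristic sketch, not a definition of intelligence.

```python
# Toy grid world: the agent prefers the move that maximizes the number of
# distinct states reachable within `horizon` steps, a crude stand-in for
# "maximizing the diversity of future outcomes". All parameters here are
# illustrative assumptions.

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbors(state, walls, size):
    """Legal one-step moves from `state` on a size x size grid."""
    x, y = state
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in walls:
            yield (nx, ny)

def reachable(state, walls, size, horizon):
    """Set of states reachable from `state` in at most `horizon` moves."""
    frontier, seen = {state}, {state}
    for _ in range(horizon):
        frontier = {n for s in frontier
                    for n in neighbors(s, walls, size)} - seen
        seen |= frontier
    return seen

def best_move(state, walls, size, horizon=3):
    """Pick the move whose resulting state has the most open future."""
    return max(neighbors(state, walls, size),
               key=lambda n: len(reachable(n, walls, size, horizon)))

# An agent boxed into a corner, with a wall nearby, moves toward open space.
print(best_move((0, 0), walls={(1, 1)}, size=5))
```

Even this trivial rule produces behavior that looks purposeful, keeping options open without any representation of goals, which is the spirit of treating intelligence as a physical process rather than a conscious one.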
Intelligence, then, is a physical process and does not require consciousness. Intelligence may be something built into the very lowest levels of life and could perhaps have played a role in its origin. The basis of this intelligence may not be classical but may arise at the quantum level. I won’t go into this argument too much at this time, but Paul Davies, in “Does quantum mechanics play a non-trivial role in life?”, argues that quantum mechanical processes may be at work as a source of mutation, in the accelerated reaction rates of enzymes, in the genetic code (which may be optimized for quantum search algorithms), and in microtubules, which play a role in neural firings and which some believe to be the root of consciousness.
In my view, intelligence precedes consciousness and created consciousness through natural selection to “maximize the diversity and/or utility of future outcomes to achieve optimal solutions,” to quote myself. Our ability to think is a biological product of evolutionary algorithms.
Machines do not think any more than slime molds think. They appear to us to be capable of thought because they are driven by algorithmic intelligence similar to that which created the capability of thought in biological organisms, but of a depth and complexity that we are far from understanding. I suspect that when, or if, we approach such an understanding, we might be able to create something that thinks, but it will more likely resemble life than anything we currently recognize as a machine.