Let’s suppose we decide to create an artificial human.
If you’ve watched the television show Humans, you might imagine what we want to create as something like the synths in the show, except even more human-like. The synths in the show are almost always easily distinguishable from real humans by their eye color, which is usually green but can be other unusual colors. Some synths wear contact lenses to mask their eye color and pass as humans. Aside from eye color, most synths can still be easily detected by a certain artificiality in their facial expressions and emotions. The writers had a good reason for making the synths less than perfect, since part of the show’s intent is to pose the question of what it means to be human. That isn’t a question we are trying to address here. The synths we are trying to make will be almost indistinguishable from humans, except by opening up their insides or following them around long enough to see them hooking up to a power outlet instead of eating dinner.
Let’s assume we have a team of engineers that has solved the engineering problems associated with the artificial human body. For short, I am going to call the artificial human body the shell. The shell has two arms and two legs. It can walk, run, jump, and sit. It can lift a pin with its fingers. The shell looks completely human down to the sexual organs and capabilities, whether it be male, female, or metrosexual. The shell has skin that looks and feels completely human. It has human-like senses: touch sensors over its body, and sensors for hearing, seeing, smelling, and tasting (although it doesn’t swallow). All the senses work in the normal human ranges. The eyes are a human-like color. The face of the shell can form completely human-like expressions. Its voice unit can shriek, cry, and emit every phoneme required for human language. I see no showstopper problems that would prevent this shell from being created mostly with current knowledge. Although there may well need to be some engineering breakthroughs to miniaturize all the parts and package them together, many capabilities of the shell have already been demonstrated in some form in robotics labs.
The next question is what sort of control unit do we put into the shell to run it?
We have had another team of engineers working on that while the first team (Shell Team) was working on the design of the shell. The time comes to go to the control unit team (CUT) and ask what they have for us.
CUT tells us they have developed an interconnected, parallel processing network that can be connected to the sensor feeds the Shell Team has designed and is compact enough to fit into the head of the shell (which was unused empty space anyway, so where better to put it?).
We ask if it will be conscious.
CUT tells us this is version 1.0, and they don’t think it will be conscious. They are not sure exactly what consciousness is but, at any rate, they didn’t attempt to design consciousness as a feature, since it seemed unnecessary. Their only doubt is whether the unit might just be so good that consciousness will emerge anyway even though they didn’t design for it.
Let’s go with version 1.0 and see what happens.
The first prototype with CU 1.0 comes out of the lab and has some problems. It can do the basic stuff – walk, lift things, find its car in the parking lot (a challenge for many real humans). Unfortunately, it breaks into crying jags for no reason. It doesn’t get jokes. It mixes up words, like thinking “bare knuckles” is a reference to a bear. Its facial expressions are a little weird, and it has a strange tic of blinking and head jerking whenever ice cream is mentioned. It also, apparently on a whim, decided to destroy some equipment in the lab before technicians could get to the kill switch.
Well, this is technology. Maybe we should have called that the beta version, but no matter. We send it back to the lab, which in the meantime has been repaired. In a few weeks CUT announces version 2.0. It shows progress but still has problems. We go through a few more iterations. Finally, two years later, CUT announces version 3.1.
The prototype with CU 3.1 comes out of the lab and seems perfect. All the bugs of the previous versions are fixed. The Shell Team has also done some minor fixes. The prototype is put in various situations – grocery stores, shopping malls, waiting in line at the DMV. Nobody seems to notice any issues, except one observer who remarked that the prototype seemed unusually patient at the DMV. We think that could be a problem, but on second thought decide that maybe we have created not only a passable artificial human but a better human. The prototype can even lie. We ask it if it is conscious. It tells us it is. We think that is a lie, but we’re not sure.
We ask CUT if it is conscious. CUT is convinced it is not conscious. But funny you should ask, they say. One of the developers happened to code a debugging routine and ran it while the voice unit was hooked up. The CU started giving marital advice that seemed pretty good and convinced the developer that it had become conscious. They could enhance the debugging routine slightly and make it part of the code base if we wanted.
It takes two more versions to get the consciousness routine working properly.
CUT announces CU 3.3. The prototype is assembled and comes out of the lab.
We run all the tests and it passes with flying colors. The only difference we notice is that now it becomes impatient waiting in the DMV line.
I enjoyed reading this latest entry of yours, and even though I understand that your premise is one of those “let’s suppose we can do this” exercises, it seems to me that you glossed over so many essential steps and challenges that such an endeavor would have to address that it almost argues for the exact opposite conclusion regarding how it might actually turn out.
Supposing that the engineering challenges in creating your “shell” might be overcome with “current knowledge” is perhaps the least tenable supposition. The degree of sophistication available “currently,” as evidenced in the robotics industry’s efforts to create “human-like” machines, while impressive to be sure, is so “un-human-like” in the context of your framework for fooling anyone about how real they might be that it really just falls flat in my view. Even though you generously acknowledge the process shortcomings that would likely occur in attempting to work out the kinks in any such creation, the likely time frame would be much longer than your supposition suggests, and if you check out the most recent information on the actual “current” state of robotics engineering, there really isn’t anyone “supposing” that such a device would be conscious. I liked your suggestion that the engineers wouldn’t include such efforts by design, but that consciousness might “emerge anyway,” even without it being built into the process. This suggests that consciousness may have simply emerged in actual humans, and that is only one of a handful of equally speculative theories about consciousness, and not a leading contender in my view.
It is intriguing to speculate about what sort of entity might be “created” with A.I. and robotics research, and even if we never figure out how to build something even remotely resembling your “human-like” android who gets impatient at the DMV, it will be fascinating to see what does “emerge.”
This was written somewhat in jest so I glossed over a lot of real problems.
I do think the shell could be challenging, but it is certainly something that could be done soon. The CU is another matter entirely.
To expand on that a little.
Let’s say the task is to pick up a pin, carry it across the room, and insert it into a pin cushion. The only question for the shell, as I am envisioning it, is whether it has the dexterity in its mechanical fingers to grab the pin and insert it into the pin cushion, and whether its legs can do the movements to walk across the room. Coordinating all of that is the role of the CU. I think a lot of the limitations of current robotics are in the CU, not the shell.
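To make that division of labor concrete, here is a toy sketch in Python. The `Shell` and `ControlUnit` classes and all of their method names are hypothetical illustrations invented for this example, not any real robotics API: the shell exposes only dumb primitives, and the CU supplies the sequencing.

```python
# A toy sketch of the shell/CU split described above. The class and
# method names are hypothetical illustrations, not a real robotics API.

class Shell:
    """Low-level actuation only: fingers and legs, no planning."""
    def __init__(self):
        self.log = []          # record of primitive actions performed

    def grasp(self, obj):
        self.log.append(f"grasp {obj}")

    def step_toward(self, target):
        self.log.append(f"walk to {target}")

    def insert(self, obj, destination):
        self.log.append(f"insert {obj} into {destination}")


class ControlUnit:
    """Coordination: sequences the shell's primitives into a task."""
    def __init__(self, shell):
        self.shell = shell

    def carry_pin_across_room(self):
        self.shell.grasp("pin")
        self.shell.step_toward("far side of room")
        self.shell.insert("pin", "pin cushion")
        return self.shell.log


task_log = ControlUnit(Shell()).carry_pin_across_room()
```

The point of the sketch is only that the hard part lives in `ControlUnit`: the shell’s primitives are (relatively) tractable engineering, while deciding what to do and when is the open problem.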
And I understood that your post was written somewhat in jest; it’s just that it seemed to suggest that, in theory, we might be able to build something that could convince us it was conscious, fool us about its status as an autonomous human, and “pass the test with flying colors.” This premise, whether offered in jest or not, is so far beyond our current technology as to enter the realm of the absurd. There are a bunch of really smart and well-financed technical experts producing amazing results in building life-like robots that one might charitably describe as “sufficiently humanoid” to prompt us to respond in kind, but our willingness to extend typical social courtesy to such a device says more about us than about the device. Programming a device to perform specific actions is one thing. Producing an android like “Data” from Star Trek: The Next Generation, who can perform autonomously and convincingly like a human, if it ever becomes possible, will likely be well beyond the 23rd Century.
I enjoyed the way you framed the idea and I have no objection to speculating about such things, but to suggest that we are anywhere near being able to produce such a device “currently” just isn’t credible in my view. The larger point of how we might be able to establish whether or not any artificial construct is “conscious,” regardless of how convincing it might be in any other way, is ripe for your keen sense of the subject area, and it clearly is of interest to your readers. The conversation about the nature of consciousness and how we might enhance our understanding of it is of enormous interest to many of us, and promoting that conversation is well worth even venturing into the absurd, if it starts a dialog about these important ideas.
The only thing I am saying is credible in the short term is the shell, not the CU. It would take both of them together to pass for a human. However, I’m not sure the CU is 200 years away. It is probably closer than you think, unless it is impossible. And I don’t think it is impossible. I wouldn’t try to guess when it could happen, but I don’t think it would be conscious. It would just be a darn good imitation of a human.
“and convinced the developer that it had become conscious.”
In the end, the only measure of consciousness we’re going to get. Even if we think we have an objective measurement of it, it will have to be validated using this method, or one similar to it.
Certainly it is the only measure we have now, and it is hard to imagine any other measure we could come up with in the future.
To me that is what makes the ultimate questions unanswerable by science and in the realm of philosophy. That is not to say that a lot of good science can’t be done. Just that the big question can’t be answered by it. If you can’t measure it, it isn’t really science.
Ultimately all science can do is examine which attributes and capabilities reliably trigger our intuition of consciousness and then study those. Unfortunately, our intuitions aren’t consistent, so getting agreement on which ones matter is itself a problem.
I think this issue, in and of itself, tells us something important, that there’s no fact of the matter.
Consciousness is an attribution, part of an empathic model we make of others, and of ourselves. If that’s right, we shouldn’t ever expect to find any objective thing that is consciousness, just the model.
“no fact of the matter”
Or “no matter in fact” as would say an idealist. 🙂
“In the end, the only measure of consciousness we’re going to get. Even if we think we have an objective measurement of it, it will have to be validated using this method, or one similar to it.”
Following this to its logical conclusion, you can only measure consciousness with another consciousness.
Put another way, consciousness only exists relative to other consciousnesses.
On idealism, the thing about it, if it’s true, its rules and consequences are identical to the objective physical world. If reality is a mental illusion, then that illusion is our reality, and we’re stuck in the game.
“On idealism, the thing about it, if it’s true, its rules and consequences are identical to the objective physical world. ”
I think some idealists, like Kastrup, would agree with you on that point. And it is somewhat the same point I’ve made in various places: any monism – everything mind or everything matter – still results in the same world. But I’m not sure that means monism is wrong. It might be that this expresses the truth best:
1. The world is matter.
2. The world is mind.
3. The world is both matter and mind.
4. The world is neither matter nor mind.
In other words, we don’t know what the hell it is and it might be beyond any of our simplistic notions about it.
Hmm. I’m not usually up to date with the whole A.I., or consciousness-in-A.I., thing and whatnot.
But I recently saw your No More Secrets post, and your talking about computers and A.I. reminded me of this DOS model, developed by Parag Jasani. He claims to completely explain consciousness by looking at the brain as an ‘integrated system’, something “no scientist/neuroscientist has ever done before”:
To me, it looks like a computational, functional, and eliminative view all in one. Even though this guy is not really the “only one with a causal model of consciousness” (someone else called him out on that elsewhere), it’s interesting. He has many minisites and has written a book about it.
That link doesn’t work and most attempts to Google him lead to teasers that point to his book.
Looks like that one is dead now, but not the others. Try this.
I am not seeing much that is unique in any of his explanations. It just seems like sort of standard neuroscience type of stuff with maybe a few invented terms like “Awareness Buffer”, which I guess is something like access consciousness.
I agree. What’s funny is that, on the now-dead consciousness-explained site, he had a $1,000 US challenge to refute his DOS model.
But I’m not all that impressed with DOS. I don’t understand why he says he’s the only one to discover this sort of stuff.