Let’s suppose we decide to create an artificial human.
If you’ve watched the television show Humans, you might imagine what we want to create as something like the synths in the show, except even more human-like. The synths in the show are almost always easily distinguishable from real humans by their eye color, which is usually green but can be some other unusual color. Some synths wear contact lenses to mask their eye color and pass as human. Aside from the eye color, most synths can still be easily detected by a certain artificiality in their facial expressions and emotions. The writers had a good reason for making the synths less than perfect, since part of the show’s intent is to pose the question of what it means to be human. That isn’t a question we are trying to address here. The synths we are trying to make will be almost indistinguishable from humans except by opening up their insides or following them around long enough to see them plugging into a power outlet instead of eating dinner.
Let’s assume we have a team of engineers that has solved the engineering problems associated with the artificial human body. For short, I am going to call the artificial human body the shell. The shell has two arms and two legs. It can walk, run, jump, and sit. It can pick up a pin with its fingers. The shell looks completely human down to the sexual organs and capabilities, whether it be male, female, or metrosexual. The shell has skin that looks and feels completely human. It has human-like senses: touch sensors over its body, and sensors for hearing, seeing, smelling, and tasting (although it doesn’t swallow). All the senses work in the normal human ranges. The eyes are a human-like color. The face of the shell can form completely human-like expressions. Its voice unit can shriek, cry, and emit every phoneme required for human language. I see no showstopper problems that would prevent this shell from being created mostly with current knowledge. Although some engineering breakthroughs may be needed to miniaturize all the parts and package them together, many capabilities of the shell have already been demonstrated in some form in robotics labs.
The next question is what sort of control unit do we put into the shell to run it?
We have had another team of engineers working on that while the first team (Shell Team) was working on the design of the shell. The time comes to go to the control unit team (CUT) and ask what they have for us.
CUT tells us they have developed an interconnected, parallel processing network that can be connected to the sensor feeds the Shell Team has designed and is compact enough to fit into the head of the shell (which was unused empty space anyway, so where better to put it?).
We ask if it will be conscious.
CUT tells us this is version 1.0, and they don’t think it will be conscious. They are not sure exactly what consciousness is but, at any rate, they didn’t attempt to design consciousness as a feature, since it seemed unnecessary. Their only doubt is whether the unit might just be so good that consciousness will emerge anyway even though they didn’t design for it.
Let’s go with version 1.0 and see what happens.
The first prototype with CU 1.0 comes out of the lab and has some problems. It can do the basic stuff – walk, lift things, find its car in the parking lot (a challenge for many real humans). Unfortunately, it breaks into crying jags for no reason. It doesn’t get jokes. It mixes up words, thinking, for example, that “bare knuckles” is a reference to a bear. Its facial expressions are a little weird, and it has a strange tic of blinking and head-jerking whenever ice cream is mentioned. It also, apparently on a whim, decided to destroy some equipment in the lab before technicians could get to the kill switch.
Well, this is technology. Maybe we should have called that the beta version, but no matter. We send it back to the lab, which we have had repaired in the meantime. In a few weeks, CUT announces version 2.0. It shows progress but still has problems. We go through a few more iterations. Finally, two years later, CUT announces version 3.1.
The prototype with CU 3.1 comes out of the lab and seems perfect. All the bugs of the previous versions are fixed. The Shell Team has also made some minor fixes. The prototype is put in various situations – grocery stores, shopping malls, waiting in line at the DMV. Nobody seems to notice any issues, except one observer remarked that the prototype seemed unusually patient at the DMV. We think that could be a problem but on second thought decide that maybe we have created not only a passable artificial human but a better human. The prototype can even lie. We ask it if it is conscious. It tells us it is. We think that is a lie, but we’re not sure.
We ask CUT if it is conscious. CUT is convinced it is not. But funny you should ask, they say. One of the developers happened to write a debugging routine and ran it while the voice unit was hooked up. The CU started giving marital advice that seemed pretty good and convinced the developer that it had become conscious. They could enhance the debugging routine slightly and make it part of the code base if we wanted.
It takes two more versions to get the consciousness routine working properly.
CUT announces CU 3.3. The prototype is assembled and comes out of the lab.
We run all the tests, and it passes with flying colors. The only difference we notice is that now it becomes impatient waiting in the DMV line.