Artificial Consciousness?

This is a very brief post relating to the possibility of artificial consciousness on a silicon substrate. I’ve been skeptical of this and remain so, but I know that both McFadden and Pockett believe that consciousness is identical with certain types of electromagnetic fields and that, therefore, the substrate these fields are created with is not relevant.

I came upon this in the Tam Hunt interview with McFadden.

To quote McFadden in response to a question about EM fields and computer design:

They have to be designed to be sensitive to, rather than insulated from, the EM field. Remarkably, an experiment performed more than 20 years ago by the COGS group of the University of Sussex actually achieved such a feat. They evolved, rather than designed, an electrical circuit (on FPGA chips) to perform a simple task. The evolved circuit used EM interactions, as well as wired connections, to perform that task. Perhaps they inadvertently built the first artificial conscious mind. However, as far as I know, nobody has yet attempted to repeat and extend this approach, but I would be very interested to hear from anyone who would be willing, and would have the resources, to give it a go.

Then I found an article, Evolving A Conscious Machine, from Discover Magazine, June 1998.

To quote from that article about an attempt to understand why the chip seemed to perform the way it did:

Thompson gradually narrowed the possible explanations down to a handful of phenomena. The most likely is known as electromagnetic coupling, which means the cells on the chip are so close to each other that they could, in effect, broadcast radio signals between themselves without sending current down the interconnecting wires. Chip designers, aware of the potential for electromagnetic coupling between adjacent components on their chips, go out of their way to design their circuits so that it won’t affect the performance. In Thompson’s case, evolution seems to have discovered the phenomenon and put it to work.

It was also possible that the cells were communicating through the power-supply wiring. Each cell was hooked independently to the power supply; a rapidly changing voltage in one cell would subtly affect the power supply, which might feed back to another cell. And the cells may have been communicating through the silicon substrate on which the circuit is laid down. The circuit is a very thin layer on top of a thicker piece of silicon, Thompson explains, where the transistors are diffused into just the top surface part. It’s just possible that there’s an interaction through the substrate, if they’re doing something very strange. But the point is, they are doing something really strange, and evolution is using all of it, all these weird effects as part of its system.
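
For readers unfamiliar with how a circuit gets “evolved rather than designed”, here is a minimal toy sketch of the genetic-algorithm loop involved. It is only an illustration under assumptions of my own: the candidate circuit is abstracted as a bitstring (standing in for FPGA configuration bits), and fitness is an artificial matching score, whereas Thompson evaluated each candidate’s actual behavior on the physical chip.

```python
import random

GENOME_LEN = 32
# Stand-in for "performs the task"; on real hardware, fitness came
# from measuring the chip's output, not from comparing bit patterns.
TARGET = [1, 0] * (GENOME_LEN // 2)

def fitness(genome):
    """Count positions where the candidate matches the desired behaviour."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each configuration bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]       # keep the fittest half
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children            # next generation
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "out of", GENOME_LEN)
```

The essential point the sketch captures is that selection rewards only the outcome, not the method; on real hardware, that freedom is what let evolution exploit electromagnetic coupling and other unintended physical effects.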

This entry was posted in Consciousness, Electromagnetism, Human Evolution, Robotics.

37 Responses to Artificial Consciousness?

  1. It seems to me that if physics is responsible for consciousness in life, then that same physics ought to also be possible in non-life. And this should certainly be the case if certain EM fields happen to be responsible.

    I see from the Tam Hunt interview that McFadden is extremely opposed to his ideas being associated with panpsychism. Furthermore, he ridicules notions of quantum consciousness — hot, wet brains simply don’t seem conducive to it. That’s all fine with me.

    What I’m not on board with is his defense of a higher order conception. If advanced consciousness needed to evolve, or any functional form of it, shouldn’t it have done so from less advanced or even non-functional iterations? If EM fields happen to be responsible, then surely there must have been brains producing phenomenal experience that had no ability to implement the desires of a thusly produced epiphenomenal experiencer. So I’d say that he ought to rethink this position. Functionless consciousness (by means of EM fields or whatever) should have evolved into basic forms, and they in turn should have evolved into the human form. I don’t understand why so many intelligent naturalists nevertheless place us apart from what they believe we evolved from.

    I do like how he referred to the non-conscious brain as something that functions in parallel, while consciousness seems to only function serially. As he noted, we can’t have a conversation with someone while doing a crossword puzzle — each task must be taken in turn by means of a single conscious processor. The brain architecture which I’ve developed also details this.

    • James Cross says:

      He’s written a good book on quantum biology but, as you write, definitely doesn’t think it is needed or used for consciousness.

      I may need to reread the parts relating to “higher order conception”. All I thought he was saying was that human consciousness – possibly because of brain size and architecture – has taken a major leap in its ability to control the underlying unconscious circuits and this accounts for unique human capabilities. That same control ability is present to a lesser degree in many species, probably any species capable of learning anything more sophisticated than simple conditioning. It has just advanced enormously in humans.

      • All I thought he was saying was that human consciousness – possibly because of brain size and architecture – has taken a major leap in its ability to control the underlying unconscious circuits and this accounts for unique human capabilities.

        You could be right James. I found it somewhat of a “we can’t prove anything is conscious unless it talks with us” kind of perspective, which to me seems too limiting. I think this gets into Mike’s criticism below as well. And it may be that he went this route because Hunt and others seem so interested in taking his theory over to their panpsychism club — as if everything is at least a little conscious given their wave properties (which I suppose gets quantum again).

        I should try not to be a hypocrite here, since I have my own interests to promote as well. If I could talk with him I’d suggest that he theorize that certain EM fields exist as a functionless phenomenal dynamic in general, though this could potentially provide a sufficiently evolved brain with agency-based feedback. Here the brain may be analogized as a vast computer that creates and uses the conscious entity for autonomous function in more open environments, where “If…then…else…” logic operations do not seem sufficient. However many calculations the highly parallel brain does, the tiny serial, agency-based form (or “thought”) should do less than a thousandth of one percent as many.

        Furthermore I’d have him ponder this second form of function in terms of a phenomenal dynamic (theoretically the waves themselves), an informational dynamic that this experiencer has (commonly referred to as “access”), and a memory dynamic, since the neurons which have fired to create these waves have a far greater tendency to fire again when prompted. Each of these would serve as input to a tiny wave-based processor, or entity with personal incentive to interpret them and construct scenarios about what to do, or once again, “thought”.


    • James Cross says:


      You comment:

      “This matches my own view that if EM fields are relevant beyond simply enhancing the peak of waves, it amounts to an alternate substrate for information processing.”

      I don’t disagree that information processing is involved.

      The problem with the other theories (GWT, IIT, etc.) by themselves is that they don’t clearly provide any sort of mechanism for how consciousness comes from neurons firing, or from computations being done, because neurons are firing (and computations being done) all over the brain pretty much all the time. Missing is an explanation of which circuits cause something to rise into consciousness and which remain background processing.

      Hence, in my view, the fruitless effort to find where in the brain consciousness is happening.

      By appending cemi to GWT and/or IIT, which might both be right in their ways, we can view consciousness as a property of the whole (or most of the) brain and explain the apparent integrated experience of consciousness. We can possibly correlate what rises to consciousness with the number of neurons firing and their relationship to the strength of the EM field generated. And, most importantly, we can explain the evolutionary advantage of consciousness by the ability of the EM field to act in feedback with the neural circuits, which are by themselves unconscious.


  2. I was struck by how much McFadden emphasized the requirement for the field to be complex enough to convey information. This matches my own view that if EM fields are relevant beyond simply enhancing the peak of waves, it amounts to an alternate substrate for information processing.

    However, he loses me when he then rules out alternate substrates, that the fields in and of themselves are required. So which is it? Is consciousness the field? He seems to reject this, saying that EM fields without the requisite information are not conscious. So is consciousness then the information in the field? He implies this, but then seems to undercut it when he says computers won’t be conscious until they too make use of fields.

    Or does consciousness require both the information and the field? If so, why? Why can’t we get by with just the information processing? What specifically does the field provide?

    His reasoning about the differences between primates and other animals, which is what I think Eric meant by “defense of higher order conception”, seems anthropocentric. Why would only primate brains generate the requisite EM field? What evidence is there for that distinction?

    I will say McFadden is more grounded than I expected. (The issues Hunt disliked are actually strengths in my book.)

    • James Cross says:

      My understanding is that consciousness is the field but not every EM field is conscious. It gets back in a way to Pockett’s idea that specific forms of waves are conscious; otherwise, every electrical wire in your house would have a wave of consciousness surrounding it. My own view is that it is the waveforms that are generated by neurons and can in turn influence neurons, so they would need to have forms congruent in some fashion with the biology of the nervous system. Hence, my skepticism about other substrates. In any case, it needs to have the ability to influence and generate action, so it must carry information in the wave form.

      I don’t think he limits it to primates but mentions here or elsewhere other organisms like rats. Pockett, of course, speculates that even insects might have some consciousness based on EM waves associated with odor sensing in locusts. For McFadden it seems to be related to the degree of control the EM field has over the neurons.

      I was working on a longer post relating to some of these questions when I encountered this. I think we might find the evolutionary origins of consciousness in learning (particularly motor skills) and that the EM field plays a key role in entraining neural circuits. More later on this.

    • Mike,
      I can see how the informational component of EM fields associated with neuron function would pique your interest here. Apparently the last letter in cemi does stand for “information”. Clearly my Bluetooth keyboard provides EM information to my phone in order for my words to be recorded, for example. But I’d like to emphasize that this sort of brain communication is not what’s being theorized here. The theory (or at least in my own terms), is that certain kinds of neuron function create EM waves that in themselves constitute phenomenal experience. Here they aren’t simply transmitting information to other parts of the brain (or not exclusively), but rather exist as output of brain function — phenomenal experience itself. So it wouldn’t matter what information alone any given system produces — without the proper EM fields, no phenomenal dynamic shall exist for the brain to potentially receive feedback from.

      I say this somewhat defensively because I can see how someone might simply add this to a list of the processed information required to produce phenomenal experience, and so still fall under the domain of “look up tables”. But you might already have grasped this as a physics based mechanism beyond any theorized output virtues of processed information alone. In that case I should say no more…

      • Eric,
        “The theory (or at least in my own terms), is that certain kinds of neuron function create EM waves that in themselves constitute phenomenal experience.”

        This is always the moment of truth in any theory, where we attempt to cross the line from the objective to the subjective. The question is, what about an EM wave makes it phenomenal experience? The answer always involves an IOU of sorts; the question is how large the IOU needs to be.

        If you describe an experience to me, I can usually reduce it to information, such as a toothache indicating damage, the taste of chocolate conveying high energy, or the vividness of red calling attention to something, probably a firing of circuitry evolved to recognize ripe fruit. You may not accept these as a full accounting of the experience, but it does at least reduce the IOU. Can a similar accounting of these experiences be provided in terms of the EM wave?

    • Mike,
      I do admit that an IOU is being made here. It isn’t quite mine, but that’s fine. My ideas concern the architecture of brain function rather than its engineering. This is the truly important question as I see it. Note that here psychologists should be able to help actual people, not someday perhaps build other conscious beings (or whatever). But if everyone is so focused upon the engineering side that they can’t grasp my architecture, that does cause this engineering question to also be a concern for me.

      If I were a talented “sales” type of person, then I’d pretend to go along with the mainstream information processing side. My dual computers model does work just fine in it, so with sufficient skill I should be able to help people understand its nature and thus receive good feedback. I’m a bad liar however, so it should be difficult for me to successfully pretend that I’m okay with a position that I consider to reflect substance dualism. I may as well remain genuine and so try to interest some of the outsiders. Or at least the non-goofy ones!

      Is there anything inherent to the sensation of a toothache that indicates damage, the taste of chocolate that indicates high energy, or red that indicates vividness? Of course not. This observation seems to sacrifice your side’s proposed reduction in IOUs. Over the course of evolution it simply should have been adaptive for certain types of punishing to rewarding input to motivate appropriate behavior. It should be more adaptive for chocolate eating to feel good rather than horrible, for example.

      Here you could say “But that’s all caused by means of generic information processing — perhaps ‘brain based’, or theoretically even ‘Chinese room’ based”. If so however, then what would be a second example of something that exists as an output of information processing alone — no other mechanisms required? Beyond externalities such as heat or entropy, I don’t know what information processing alone can be said to “do”.

      It’s from this position that I’d have you consider the potential for mechanisms to be involved beyond information processing alone. If there are non-conscious brain mechanisms which create something else that has the potential to feel bad/good in a vast number of ways (by means of EM fields or something else), then here something could potentially evolve to provide feedback for the brain to use. Just as a standard computer will process input for output to a computer screen, the brain would process input for output to phenomenal EM fields. Note that a computer should produce no screen image without a screen to animate. Similarly a brain should produce no phenomenal experience without EM fields (or something) to animate, and regardless of the magnitude of the information processed.

    • James Cross says:

      This is some good discussion.

      To reiterate a little bit on my view: I think there will always be something of a subjective gap, but it might be primarily a philosophical problem. By that I mean that, the way the hard problem is framed, there can be no answer. If there is an objective/subjective distinction, we can never explain the subjective by the objective. So I don’t see this as a problem that science should even concern itself with addressing.

      If we want to venture in the philosophical realm, I think a pragmatic panpsychism, or a pragmatic dualism might be a better term, is the best approach. There is a world we refer to as physical and material which we cannot experience. There is a mental world which is all we can experience. Whether both worlds are the same stuff (neutral monism) we can’t really know so we should just make the best of it.

      • I agree that the hard problem will never be solved by science, at least not to the satisfaction of those troubled by it. The meta problem does seem solvable, but it’s becoming clear to me that even the existence of the meta problem, much less possible solutions, won’t be accepted by many in that group.

        I could see the case for an epistemic dualism, since our models of the mental and our models of the physical will never be intuitively relatable. And I can see the case for a deflationary version of panpsychism, which simply accepts that consciousness is composed of physics that aren’t special to the brain.

        The problem is that people never seem willing to just leave it at that. They always try to take that beachhead and bring in other theoretical commitments. Epistemic dualism turns into some variant of ontological dualism (substance, property, etc), and panpsychism leads to talk of the experience of protons. All of which ends up clouding evaluation of scientific theories.

        For EM fields to factor in consciousness, they first need to be established as part of the functionality of the nervous system. That at least is an empirical question. If it’s reality, neuroscience will eventually find evidence for it.

  3. James,
    I get the sense that I’m interpreting your position a bit differently than Mike is, so let us know.

    The way I was considering your “pragmatic panpsychism”, is that there is a mental world which I experience, and therefore everything in it shall reflect “consciousness”, or “subjective existence”. But then I also presume that there is an objective world outside of my mere subjective experience which I can’t directly experience. So here there will be two different realms of existence, and thus a “pragmatic dualism” distinction could be asserted. Is that about what you meant by “panpsychism” and “dualism”?

    If so however, I think it could be a bit dangerous to use such extreme terms to reference this sort of thing — misunderstandings are quite standard. I’m not sure that I’ve done much better though. I call myself an “epistemic solipsist” for essentially the same reason, though some will naturally take this term ontologically, as if I arrogantly believe I’m all that exists! As always, we must be careful when terms may be interpreted in very different ways.


    I could see the case for an epistemic dualism, since our models of the mental and our models of the physical will never be intuitively relatable.

    Though they are fundamentally different realms of existence, I do think that I’m able to effectively relate the two. Imagine a perfectly objective realm of existence; nothing will be good or bad for anything that exists in it — by definition no subjective experience shall be present here. But if there were causal dynamics of this realm (such as certain EM fields) which are able to create “functionless” sentient existence by means of these properties, then this sort of physics would relate objective existence to subjective existence. This is what I suspect to be the case in our own realm of existence, but let me also discuss what I consider to have thus evolved.

    If certain living systems were to evolve with rule-based central organism processors, then these machines might not instruct organisms all that productively under more “open” circumstances — the number of rules might grow too extensive for much success. Thus it could be that rule-based organism processors would harness and implement the physics which creates subjective existence, and so create an auxiliary agency-based organism processor. This is to say that while the objective processor should remain rule-based, it might also evolve a sentient variety, objectively monitor the subjective desires of that processor, and so gain effective agency-based function from which to better deal with more open circumstances.

    • Eric,
      I’m not sure about the panpsychism part, but on pragmatic dualism, I think we’re saying the same thing. What you describe is pretty much what I mean by “epistemic dualism”. And I agree that the language is problematic. In order to avoid confusion, someone using the word that way would be obligated to either consistently prefix it, or periodically remind their audience what they mean by it.

      Of course, an unfortunate tactic in some philosophical circles is to do that conflation on purpose, and hide behind the ambiguity. Define the term explicitly in a way that can’t be disproven, but then implicitly use it in the stronger and less defensible sense.

      On bridging the mental and physical, note that I stipulated “intuitively”. I do think they can be related logically, and eventually even empirically. But it won’t ever feel right, which causes many people to reject all attempts.

      But to do that relation successfully requires that the subjective be reduced to the objective, the mental to the physical, at least in principle.
      What I perceive you to have said:
      objective realm + causal dynamics = sentience = subjective realm
      That’s true as far as it goes, but it covers any naturalistic theory of consciousness. The question is how to describe those causal dynamics. Saying “EM fields” is simply naming a speculative substrate for the dynamics, but not speaking to what the dynamics themselves are.

      • James Cross says:

        “Saying “EM fields” is simply naming a speculative substrate for the dynamics, but not speaking to what the dynamics themselves are.”

        Not 100% sure what you are saying with that.

        Axel Cleeremans writes in The Radical Plasticity Thesis: How the Brain Learns to be Conscious:

        “Conscious experience occurs if and only if an information-processing system has learned about its own representations of the world in such a way that these representations have acquired value for it. To put this claim even more provocatively: Consciousness is the brain’s (emphatically non-conceptual) theory about itself, gained through experience interacting with the world, with other agents, and, crucially, with itself. I call this claim the “Radical Plasticity Thesis,” for its core is the notion that learning is what makes us conscious.”

        His theory has a lot in common with GWT and higher order theories.

        The problem with these theories for me has been to see how it actually works. But I think EM field(s) might be how the brain creates its representations, learns about itself, and is able to exert causal control over itself.

        • The idea that we have to learn how to be conscious does seem pretty radical. I’d need pretty compelling evidence to buy it. It sounds too blank-slate to me, or perhaps Julian Jaynes / Bicameral Mind territory. I could see an argument that in-utero development involves more learning than we currently conceive, but that puts us in the grey area between learning and environmentally contingent development.

          “But I think EM field(s) might be how the brain creates its representations, learns about itself, and is able to exert causal control over itself.”

          My question is, what do the fields provide that is lacking in the normal understanding of neural processing? As far as I can see, the brain forms representations automatically. The patterns of activation in the retina, for example, are topographically mapped in the wiring to the LGN and then V1, forming the initial bases for all other visual representations. And exerting control over itself happens with the frontal lobes receiving, evaluating, and sending recurrent signals to the other regions.

          If the various regions of the cortico-thalamic system weren’t heavily interconnected, I might see what EM fields bring to the table, but given that those connections do exist, the fields seem like a solution in search of a problem. That doesn’t mean they’re not a factor, just that I don’t see any logical or empirical necessity for their role yet.

      • James Cross says:

        “My question is, what do the fields provide that is lacking in the normal understanding of neural processing?”

        Integration of and feedback to the circuits that are otherwise unconscious.

        Or we could ask the other side of your question. Why is not all neural processing conscious?

      • We’re in general agreement Mike. And like you, as science progresses I can’t say that any physical bridging of the object-to-subject divide will feel right to me. For what it’s worth however, it does seem that the more science teaches me, the more things tend to make sense to me. Even if this object/subject thing ends up being an exception, let’s stick with causal explanations. This is to say that here we should search for something like EM fields rather than hoped-for information-based dynamics exclusively. For example, I reject notions such as “Eating chocolate ought to feel good because higher energy information exists here”. Specific physics should be required regardless of such “information”, even if we don’t quite understand what’s going on. Someday something might “understand”, even if it is well beyond the human. But nothing should ever need to understand reality in order for reality to be real.

        • Eric,
          In truth, I think our intuitions are modifiable over time. People can get used to powerfully counter-intuitive ideas, such as heliocentrism, evolution, relativity, etc. But to get to that point, it first means being willing to conceive that the intuition might be wrong, something we’re usually willing to do when we’re young, but apparently become increasingly resistant to as we age.

          On causal dynamics, physics, and information, I assume your stance is not one of faith. At this point, it’s sometimes productive to note the things that could change our mind. I could be convinced by some evidence of trans-informational physics in the brain. Can you cite what could change your mind?

        • James Cross says:

          “This is to say that here we should search for something like EM fields rather than hoped for information based dynamics exclusively.”

          I’m with Mike, however, that the EM fields likely are informational, just not digital. They may also be more than just informational. If the theory is right, or even partially so, there will still be a lot more needed before we understand it.

      • Right Mike, my belief regarding the production of affect is based upon non-faith intuition, and non-faith intuitions such as this one can change. In fact I think that these sorts of positions commonly do change when inquisitive people like us learn more about a given subject. I have a rough grasp of history, chemistry, biology, and other fields. Therefore I should have certain intuitions about various associated topics. But if I were to explore these fields harder, then surely some of my intuitions would need to be altered or abandoned as I learn. That’s fine with me!

        Why is my intuition so strong that information processing alone should not be sufficient to create phenomenal experience? Somewhat because I consider this stuff as reality’s most amazing dynamic — consciousness itself (though potentially in non-functional forms as well). How could reality’s most amazing stuff exist as generic processed information alone?

        But yes, I think that I could change my intuition here if need be. I don’t quite require direct evidence yet, but rather ask for observations of other varieties of computer based output (obviously beyond externalities such as heat or entropy), that information processing alone is able to produce. If the processing of the computer that I’m now typing on can produce various things by means of its information processing alone, or without physics based output mechanisms to animate, then I suppose that for me this would open the door for phenomenal experience to potentially exist in such a way as well.

        I don’t know if the mechanics of EM fields serve this function in the brain, though I consider it an extremely interesting thought given that we know these fields are produced by neurons. Here we have a culprit that seems to be at the right place at the right time, and is armed with an instrument which spans distances across the brain so that various brain areas might contribute in the production of a unified conscious experience. To me this seems pretty remarkable!

        This is not evidence of trans-informational physics happening in the brain associated with phenomenal experience. As an architect of brain function, that’s not my thing. But if most everyone in the field is convinced that information processing alone is sufficient to produce this, then they shouldn’t tend to explore mechanism based potentials such as EM fields, and even if they don’t have evidence that their converse intuition is right. Observe how unhappy people have been with John Searle’s Chinese room thought experiment. Beyond yourself, does anyone on your side accept its premise? Pinker?

        Anyway I’ve provided my logic, as well as a source of evidence which I’d like to see in order to at least question my intuition here. If you can provide no such evidence, and also grasp my logic, then I’d hope for you to consider mechanism based phenomenal experience in a stronger way.

        • Ok Eric. I’m not going to rehash these topics yet again. The essential difference here seems to be that you see phenomenal experience as an output of some type from the brain, while I see it as neural processing. For me to accept the proposition that it is output, I’d need to see evidence for it, and that the output is something more than just “externalities such as heat or entropy”, that it actually has something to do with experience.

          Not sure why you think I accept the Chinese room argument after all the ways I’ve criticized it.

        • James Cross says:


          You might be interested in this by Mark Solms: “The Conscious Id” (PDF: The%20Conscious%20Id.pdf).

          Depending upon how comfortable you are with psychoanalytic terminology, it might be difficult to see anything of value in it. It reverses a lot of common conceptions about where consciousness originates and what it does. I don’t want to get too deeply into it now, but it may become a part of what I keep promising to post. There really are some counter-intuitive notions in it.

      • Mike,
        I consider phenomenal experience as an output of the brain, because the brain seems to produce it. It seems to me that the images that I experience, or the pains that I feel, or the words that I think, may effectively be referred to as a product of my brain. And if so, it seems to me that there should be associated mechanisms by which such production occurs, whether EM fields or something else. I’m saying that these fields would exist as what I’m phenomenally experiencing right now. But if the information processing of my brain itself happens to be what creates this (no “output” I guess), isn’t it reasonable to ask for other examples of what information processing alone is able to create? Shall this be considered an exclusive case which requires no associated mechanisms beyond just the processing?

        I wasn’t implying that you support John Searle’s conclusion — you’ve even written a post discussing your disagreement with it. Instead I was noting that in the past you’ve agreed with me about the premise of his thought experiment — that such a Chinese room would thus understand Chinese. Or from my own “thumb pain” version, that it would feel what I do when my thumb gets whacked. I consider this a mark of your integrity — information processing alone is what creates the phenomenal. Furthermore I was wondering if Pinker or any other prominent person on that side has taken this step?

        • Eric,
          “But if the information processing of my brain itself happens to be what creates this (no “output” I guess), isn’t it reasonable to ask for other examples of what information processing alone is able to create?”

          You’re still positing an output of some type. The only output would be information, to be consumed by other processes. That’s happening right now with the device you’re using to read this. It’s holding data structures of this website, as well as connection data for whatever network connection it has, probably information on your various accounts, and a lot of other stuff. It likely has neural networks for various purposes running in it. It’s just information processing. Granted, the information processing involved for consciousness is far more complicated, but in the end that’s all that’s happening, even if parts of it are happening via electromagnetic fields.

          On the Chinese room, you should remember that I noted that the scenario would require centuries or millennia for an answer. That’s effectively impossible. So saying I buy into the premise is stretching things a bit. As far as Pinker, I don’t think he really takes philosophical thought experiments very seriously, but here’s what he said in How the Mind Works:

          Similarly, Searle has slowed down the mental computation to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster). By trusting our intuitions in the thought experiment, we falsely conclude that rapid computation cannot be understanding, either. But if a speeded-up version of Searle’s preposterous story could come true, and we met a person who seemed to converse intelligently in Chinese but was really deploying millions of memorized rules in fractions of a second, it is not so clear that we would deny that he understood Chinese.

          Pinker, Steven. How the Mind Works (p. 95). W. W. Norton & Company. Kindle Edition.

      • Mike,
        I guess I wasn’t clear by what I meant by “computer processing with no output mechanisms”. For example, let’s say that the computer you’re now using is tasked with finding the 5,000,000th digit of Pi, though not to display an answer on your screen, or record it to a website, or even put it in memory for a potential instantaneous future answer if this comes up again. Imagine it correctly making this calculation, and thus producing associated heat and entropy, though here the answer itself becomes lost since it never goes anywhere beyond that processing itself. That’s what I mean by computer processing without output mechanisms. If you’re saying that when my thumb gets whacked, neural information goes to my brain that’s processed, and this processing itself constitutes the pain which I thus feel, then I ask for other examples of information processing alone which does something in this world sans output mechanisms. I know of nothing like this.
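
        (As a toy illustration of that no-output scenario, here is a hypothetical Python sketch — not anyone’s actual proposal, and the function names are made up. It computes a decimal digit of Pi with Machin’s formula and then simply drops the result, so the calculation happens, heat and entropy and all, but the answer goes nowhere.)

        ```python
        def arctan_inv(x, one):
            """Fixed-point arctan(1/x), scaled by `one`, via the Taylor series."""
            power = one // x          # current term: one / x^(2k+1)
            total = power
            x_sq = x * x
            divisor = 3               # 3, 5, 7, ... in the series denominators
            sign = -1                 # terms alternate in sign
            while power:
                power //= x_sq
                total += sign * (power // divisor)
                sign = -sign
                divisor += 2
            return total

        def nth_pi_digit(n):
            """Return the n-th decimal digit of pi (counting the leading 3 as digit 1)."""
            one = 10 ** (n + 10)      # ten guard digits of working precision
            # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
            pi_fixed = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
            return str(pi_fixed)[n - 1]

        # The thought-experiment version: compute the digit, then discard it.
        _ = nth_pi_digit(10)
        ```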

        Thus instead I’m proposing that when my thumb gets whacked, not only is associated neural information processed, but that this information animates mechanisms which produce what I end up feeling. This could be the EM waves associated with certain neuron firing, or it could be something else. I’m pretty sure it’s not just computer processing alone however, since computer processing seems to always need associated output mechanisms in order for associated realization to occur.

        I suspect that back in 1980 Searle came along and rained on everyone’s parade with his thought experiment, though failed to grasp the positive, or that the brain must have mechanisms in it for producing phenomenal experience. Thus he became the bad guy, and so now people consider it justifiable to treat his mere thought experiment as if it were a flawed real experiment. All that should be needed to fix this however, is to look for one more step in the process. When neurons fire in a way that evokes phenomenal experience, this should be happening because mechanisms are being activated which cause that experience. EM waves appear to be a strong candidate, though something must be responsible, given that computer processing seems to always require output mechanisms in order for realization to occur.

        • Eric,
          You’ve answered your own question. If we assume no output in both cases, the data structures holding the current status of the Pi calculation and the initial registering of thumb pain are both intermediate processing states. One is certainly more complex than the other, but they’re both information.

          I said “initial” above because usually with affects there’s a physiological feedback aspect (changes in heart rate, blood pressure, etc), but if we imagine someone locked in with no motor control, hormonal output, or even Vagus nerve output, then we have an equivalent situation.

      • Wow Mike, if you agree that I’ve answered my own question here, then I’m a bit worried about what comes next. Would you thus say that beyond the information processing that your brain does when you get whacked in the thumb (which shouldn’t in itself produce thumb pain since this concerns processing rather than output), that this information should tend to animate phenomenal experience mechanisms regarding what you then feel? Or have I missed something?

        • Eric,
          Not sure what you’re asking here. I’ll just note that I think “phenomenal experience mechanisms” are information processing, or, if you want to look at it at a lower level of abstraction, neural firing patterns.

      • Mike,
        So you consider patterns themselves to constitute phenomena, with the substrate incidental? I’m partial to both given my own conception of physics, but that’s fine. And indeed, I can see how accepting the title of “self aware substrate patterns” would be extra cumbersome! 😬

        The main positive here as I see it is that you aren’t disregarding the notion of phenomenal experience itself. It seems to me that Pinker did with that line about “a person who seemed to converse intelligently in Chinese but was really deploying millions of memorized rules in fractions of a second”. That merely sounds like standard computation. There should be good reason that evolution didn’t create philosophical zombies — non-teleological things seem unable to effectively deal with more open circumstances, such as grasping and intelligently replying to human speech.

        To me the big picture is that consciousness needs to be thought of as a lower order idea of phenomena itself (even if pattern based beyond the substrate), which became increasingly functional for the non-conscious brain to take feedback from as the phenomenal entity gained new sources of affect, sense, and memory information from which to think. At least Feinberg and Mallatt take an evolutionary approach (not that they’re quite armed with my own brain architecture). Conversely higher order conceptions of consciousness seem merely convenient given that the human is higher order, and so should be doomed as standard displays of anthropocentric bias. Note how often we begin from human-centric ideas, though progress in science ends up relieving us of such notions.

        But hold the phone! Your new post came up just before submitting.
        Here you agree with GWTers that access consciousness is the real show, while phenomenal consciousness is merely what access consciousness feels like from the inside. Conversely my own ideas essentially go the opposite way. Well I guess this at least helps explain why it’s been hard for you to grasp my ideas well enough to predict my perspective on a host of issues. This has surely been mutual.

        • Eric,
          On patterns, it’s worth noting that, strictly speaking, a substrate is just another pattern. A particular substrate will be composed of substances, composed of molecules, atoms, elementary particles, field excitations, etc. If you accept that, then your question reduces to: do I consider higher level patterns constitutive of phenomena, with the lower level patterns incidental?

          The answer is, it depends. We can build a computer with nothing but mechanical switches. However, even if we could make one large enough and attach the right communication equipment and peripherals to it, it would make a useless smartphone. You’re not getting a useful smartphone without a substrate capable of thousands of MIPS and miniaturized differentiation and manipulation.

          But the question is, how necessary is the specific neural substrate of biological nervous systems for consciousness? Well, we already know that many neural functions can be reproduced in alternate substrates. A computer chip can be connected to a mechanical device and send signals causing that device to perform work, similar to the signals from the brain to the neuromuscular junctions. And computers perform a wide range of tasks that decades ago required human cognition.

          So the real question is, do we, at some level of functionality, reach a point where only carbon based biology will suffice? I’m open to that possibility, but only if someone can provide a functional reason, or evidence of such a limit. Until then, the above successes make me skeptical we’ll hit that kind of limit.

          That said, I wouldn’t say that the substrate is completely irrelevant. As with the mechanical switches, there are many practical issues with most possible substrates, making only a few effective. It might be that complex cognition is only practical in technology with some form of neuromorphic hardware.

          Thanks for linking to the GWT post. Why don’t we discuss it over there?

      • Mike,
        I’ve been taking some extra time with this. In truth I keep starting and then decide, “Hey, that’s not going to help”. And I’m not sure discussion at your site about it will help right now either. That puts me in the role of “bad guy” in front of lots of your friends, and so you in the role of “defender”. If you keep posting for the rest of your life, I plan to be there for the rest of mine, regardless of any differences in our positions, so no worries there. But we’re all human, and so feel what we’re caused to feel.

        I had been thinking about giving up challenging the “consciousness exists as software” position, since theoretically my ideas do still work under that format. But it’s hit me now how strong a hold Global Workspace Theory has taken on you. It’s like one thing leads to another. So I’ll say a couple of things that may at least give you pause regarding this path.

        “But the question is, how necessary is the specific neural substrate of biological nervous systems for consciousness?”

        I do wish it were that simple, though we’re simply not opposed here. I’ve never been about “only living stuff can potentially be conscious”. Non-living stuff with the right physics must be conscious as well, given that physics. I don’t know if EM waves happen to be responsible, but life certainly has no monopoly on its production.

        I’ve decided that you don’t believe that the right symbols, properly processed into other symbols, are what create something which feels thumb pain. That simply can’t be your perception of the physics which create affective states. Even if you tell me that you do, I’ll still believe that you don’t.

        Under the Chinese room situation I’ve decided that Pinker is free to claim that a massive lookup table is all that’s needed for computational understanding of Chinese, and thus consciousness. This is because the thought experiment grants that the system speaks as a human does. So functionalism should give him that right. The thought experiment itself should simply be wrong that something with a lookup table could indeed converse as a human does. My own version leaves no room for this sort of failure however — either the system can feel what I do when my thumb gets whacked, and thus it is conscious, or it can’t.

        On GWT, and indeed all top down theories which relate consciousness in terms of human function, it seems to me that they harbor at least one fatal flaw. As I understand it they say that “there is something it is like to exist” only for the human, primates, and possibly some exceptions depending upon the version. Ah, so we’re expected to believe that beating a dog to its gory death shouldn’t be any more “evil” than beating a vacuum? That’s a position which simply will not fly. Under this challenge there should be reason for proponents to open things up, though that sacrifices the significance of treasured human brain structures. And of course proponents could decide that some things can be sentient though not conscious, which means changing the word to something quite different from what people in general mean by it.

        • Eric,
          Never be afraid to challenge me on my site. It’s one of the reasons I blog, specifically to get those kinds of challenges. If you haven’t noticed, a lot of our friends have no compunction about doing so, either there or on their own blogs. Or if you’re not comfortable challenging me in front of others, feel free to do so by email.

          “I had been thinking about giving up challenging the “consciousness exists as software” position, since theoretically my ideas do still work under that format.”

          I’d say don’t compromise as long as you believe it, although I would urge you to carefully consider why you believe it.

          “Even if you tell me that you do, I’ll still believe that you don’t.”

          That might make discussion a bit difficult.

          “As I understand it they say that “there is something it is like to exist”, only for the human, primates, and possibly some exceptions depending upon the version.”

          GWT doesn’t imply that. If you think it does, then you don’t understand the theory yet. It does allow for varying levels of consciousness, but it gives no reason not to suspect that many species have at least primary consciousness. Baars himself sees all vertebrates as being conscious. Dehaene sees it in mammals and possibly birds. Peter Carruthers takes a narrower view, but he’s really an outlier. My own view is closer to Baars, but with the caveat that I think the consciousness of fish and amphibians is limited.

          In any case, none of them say harming animals is justified. Even Joseph LeDoux, a HOT advocate, who seems pretty skeptical of animal intelligence, doesn’t say animals aren’t moral subjects.

          That said, if those theories did imply that, it wouldn’t be a valid reason to reject them if they otherwise made successful predictions. Like you’ve said many times, morality can’t be the arbiter of scientific truth. But luckily, GWT doesn’t put that conundrum in front of us. Some HOTs might, depending on the specific theory, but I think those theories have problems aside from animals.

      • Mike,
        If I weren’t around, would you explain to others that one of the implications of your beliefs about consciousness, is that if certain symbols on paper were properly processed into other symbols, then the result of this would be to create an entity which experiences what we commonly know of as “thumb pain”? Of course you wouldn’t. This is not something that you actually believe, but rather is something that I’ve goaded you into saying that you believe in order to support a cherished position. (Perhaps a better name for that position would be “softwarism”, since even people like John Searle consider the brain to computer analogy effective.)

        The story of Pinocchio may be said to exist on any medium which is able to convey this human language based tale, and so could be considered in a similar way. The difference however is that without something like a human to interpret the story, any given example will not actually exist as such. That’s not the case for softwarism however. Here we’re proposing that the information in itself (no output mechanisms) causes there to be observer independent phenomenal experience in a given form of life. Since you and I know of no other variety of computer output which transpires without associated mechanisms, this belief seems to concern a second kind of stuff, or substance dualism.

        My own ideas do still apply under dualistic notions — even a wooden boy that’s animated by magic — so I should be able to tolerate softwarism. The problem is that when phenomenal experience becomes classified as merely a software based dynamic, people tend to overlook its role in favor of mechanism based sense and memory dynamics. Here phenomena becomes secondary, as in HOT and GWT. My own ideas put things the opposite way however, so I’ve decided to continue fighting the apparent dualism associated with softwarism.

        If Joseph LeDoux believes that beating a dog doesn’t feel bad to the dog any more than it would to a vacuum, and yet opposes dog beating rather than vacuum beating, then something is askew here. Either he doesn’t believe his own ideas, or he does believe them but won’t publicly stand behind their implications. I consider either reprehensible for a theorist.

        There is one thing that I’d like to ask LeDoux. We know that a doctor could numb my toe by injecting a given substance, and so take off the nail without causing me too much discomfort. Because LeDoux believes that “dog discomfort” does not exist, I presume he believes the apparent discomfort displayed by a dog whose toe is being worked on is merely an instinctive reaction. But if that’s the case, then why is it that the same substance which numbs a human toe also seems to have this effect for a dog?

        You’re right that I haven’t looked very hard at GWT, and GWTers may not be quite as bad as HOTers in terms of my above common sense observation. I wouldn’t say that you’ve quite exonerated them about this however.


    • James Cross says:


      I think you are understanding the pragmatic [whatever] correctly. I’m not sure whether panpsychism or dualism or something else is the better term. Basically I am asserting there are two different sorts of things but underneath they may be (probably are) the same. The dilemma we have is that all we know and experience is one kind of thing and the other kind of thing is something we can’t directly ever know or experience. The “pragmatic” aspect comes into the picture when operating on the working assumption that the mental comes from the physical.

      “But if there were causal dynamics of this realm (such as certain EM fields) which are able to create “functionless” sentient existence by means of these properties, then this sort of physics would relate objective existence to subjective existence. This is what I suspect to be the case in our own realm of existence, but let me also discuss what I consider to have thus evolved.”

      Yes. I see the mental arising from the physical through evolution and survival advantages from the ability to learn and deal with uncertainty (maybe what you call “open circumstances”). As I said somewhere, prediction only takes you so far not just because it can’t be perfect but also because an organism always has the problem of determining if the present circumstances match closely enough the circumstances for which the prediction is valid.

      • James,
        It does sound like we’re beginning from about the same place. I’ve taken to using the “open” and “closed” environment terms rather than “certainty”, because I consider this idea to be a bit more objective. For example we might say that there is more uncertainty in the game of football than the game of chess (given the virtually unlimited moves that can occur in football versus chess). But to get away from a conscious perception I’ve instead come to say that the environment in football is relatively more “open”. While I consider certainty to exist as a function of conscious grasp, more and less open environments should go beyond any such understandings. Note that a computer may be programmed to direct the movements of chess pieces in an effective way, but should not be able to do so for instruments which play the game of football — the circumstances here should be too open for such instruction. This is to say that chess is appropriate for rule based dynamics in the “If…then…else…” form, while football playing seems to require purpose based function, also known as teleology.

        I think it can be helpful here to reduce “consciousness” back to an idea that is not inherently functional, or the feeling of good/bad itself. It’s possible that certain types of EM fields effectively exist as an entity which feels good/bad. Why? Well I don’t understand that any better than I understand why mass would attract mass. But surely if so, then each should be explorable by means of science. If sentience is caused this way however, then we needn’t put all of our hopes into the idea that information processing alone creates this. (And since information processing alone doesn’t seem to create anything beyond externalities like heat or entropy, sources such as this should be productive to explore).

        Since we seem aligned here I’ll also try something a bit different. I believe that nothing has any personal significance to anything in this world, except by means of a sentience dynamic. If that is the case then it seems to me that we could relate this to the value of existing as any given sentient (and thus conscious) entity. This is to say that the value of existing as you over the course of a day, year, lifetime, or whatever, will be your positive qualia experienced minus the negative over that specified period. So what is best for you to do in reference to such a period? Whatever makes you feel best over it. I theorize this as a subjective dynamic which is objectively true.

        The point I wish to make is that a fixed dynamic should exist which constitutes the personal value of existing as anything whatsoever. I’m merely intrigued by the thought that certain EM fields constitute this, but something must. My theory here is amoral (since it’s not about rightness and wrongness). It’s also perfectly subject based (since that’s what experiences). And it references whatever causes this stuff (EM fields or otherwise). How good/bad is it to exist as a rock? If EM field based, probably nil. How good/bad is it to exist as you over your lifetime? The EM waves associated with your brain function would constitute this. How good/bad is it to exist as the people of California over 2019? The EM field dynamics of each person over this period would theoretically decide this once aggregated. It’s the social tool of morality, I think, which has prevented science from formally acknowledging this sort of thing. Given natural selfishness it tends to be politically damaging to support hedonistic ideas. From time to time ironies like this do seem to exist.

  4. James,
    I’m pretty anti-Freud, and esoteric academic language isn’t my favorite. But if you do end up incorporating the Solms paper into a future post then I’ll certainly be interested in your take. As I see it the entire brain functions non-consciously, though it may potentially produce consciousness (perhaps somewhat as light may be produced by means of electric current running through a light bulb). Conversely I’m aware that many people who study brain function like to argue about which parts of the brain are conscious. I consider this misguided. I’d at least like them to argue which portions of the brain create consciousness, not which portions happen to exist in such a way. It may be that this is what they actually mean, but if so then I’d like this formally stated at least occasionally.

    I will say that I’m not for higher order theories of phenomenal experience. Yes the human brand of this is higher order, though I suspect that there were steps leading up to this which probably produced phenomenal experience even back around the Cambrian explosion of life. I’d call this sort of thing “conscious”.

    As for my position on an informational component to EM fields, should they constitute phenomenal experience, I’m certainly on board with that! I’m saying that perhaps some of the waves associated with the electrical nature of neurons produce an associated phenomenal entity. Thus something that doesn’t otherwise exist would be created when it feels good/bad on the basis of that production. If something that doesn’t otherwise exist were to be produced to feel bad in some way for example, then it should be “informed” of this by means of that feeling.

    Note that while the computer that I’m typing on “understands” nothing, even a non-functional conscious entity that’s caused to feel bad should in some sense “understand” this variety of information — the phenomenal feeling itself should exist this way. I consider this as the beginning to a functional form of consciousness. While the brain seems to do things by means of logic based operations without understanding, the phenomenal form of function should occur in another way, or through agency based logic. If it feels good then do more, and if bad then do less.

    I suspect that originally, phenomenal experience existed epiphenomenally. This should have become functional however when the central nervous system started taking feedback from that entity’s desires to do certain things, such as move a muscle given a perception that it might relieve its suffering. Apparently such teleological function helped under certain more open circumstances that couldn’t otherwise be programmed for. So even human consciousness would evolve from such basic phenomenal experience.

    • James Cross says:

      “entire brain functions non-consciously”

      Solms’ view:

      This in turn suggests that the ideal of cognition is to forego representational (and therefore cortical) processing and replace it with associative processing—to shift from episodic to procedural modes of functioning (and therefore, presumably, from cortex to dorsal basal ganglia). It appears that consciousness in cognition is a temporary measure: a compromise. But with reality being what it is—always uncertain and unpredictable, always full of surprises—we will never in our lifetimes actually reach the zombie-like state of Nirvana that we now learn, to our surprise, is what the ego aspires to.
