This is the second of two posts on a paper by Johnjoe McFadden, Synchronous Firing and Its Influence on the Brain’s Electromagnetic Field: Evidence for an Electromagnetic Field Theory of Consciousness.
The paper provides an overview of McFadden’s EM field theory of consciousness (cemi). In the first post, I focused mainly on eight predictions his theory makes. In this post, I want to focus on the parts of his argument about how EM field theory helps to answer some of the difficult problems in consciousness research.
McFadden begins with a discussion of the various ways EM fields might be related to consciousness. He takes what he calls a strong interpretation approach:
I propose here that our thoughts are similarly electromagnetic representations of neuronal information in the brain, and that information is in turn decoded by neurons to generate what we experience as purposeful actions or free will. This circular exchange of information between the neurons and the surrounding em field provides the ‘self-referring loop’ that many cognitive scientists have argued to be an essential feature of consciousness.
A strong interpretation can be contrasted with a weak interpretation that EM fields provide a sort of portal to view the brain’s internal operations or the epiphenomenal view that EM fields are present in the brain but are mostly side-effects of neurons firing that contribute little to consciousness.
McFadden also contrasts his approach with Susan Pockett’s in his emphasis on the informational aspects of EM waves and fields. In this regard he is careful to distinguish between the synchrony of firing neurons that transmits information and excessive synchrony that transmits little information, as in the case of epileptic seizures.
Next he tackles five problems of consciousness research that he believes his theory helps to resolve. As before, the selected content below is quoted directly from the paper. Highlights in bold are mine. I would especially like to call out his critique of the neural identity theory in the binding problem section.
1. The difference between conscious and unconscious information processing
In the cemi field theory, the neural circuits involved in conscious and unconscious actions are proposed to differ in their sensitivity to the brain’s em field. During unconscious driving, the sensory and motor activity responsible for driving the car would have been performed by neurons with membrane potentials far from the critical threshold for firing (either positively or negatively) and thereby insensitive to the brain’s em field. When new or unusual stimuli reach the brain (presence of child on road) the consequent synchronous firing of neurons involved in processing that new information would transmit the information to the brain’s em field, allowing it to reach our conscious awareness. This additional sensory input may shift the membrane potential of some of the neurons involved to near the firing threshold and thereby make the whole neuronal pathway sensitive to augmentation by the brain’s em field. Our conscious mind — the cemi field — does indeed take over.
The cemi field theory also provides a natural explanation for how, in the words of Bernard J. Baars (1993), ‘a serial, integrated and very limited stream of consciousness emerge from a nervous system that is mostly unconscious, distributed, parallel and of enormous capacity’.
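As a toy sketch of my own (not from the paper), the threshold idea can be put this way: a tiny field nudge only matters to a neuron that already sits near its firing threshold. The threshold and nudge values below are invented purely for illustration.

```python
# Toy illustration (mine, not McFadden's). The numbers are made up for the sketch.

THRESHOLD_MV = -55.0   # assumed firing threshold

def fires(membrane_potential_mv, em_nudge_mv=0.5):
    """Fire if the membrane potential plus a tiny field nudge crosses threshold."""
    return membrane_potential_mv + em_nudge_mv >= THRESHOLD_MV

# Unconscious driving: far from threshold, the field nudge changes nothing
print(fires(-70.0), fires(-70.0, em_nudge_mv=0.0))   # False False

# Child on the road: synchronous input pushes the neuron near threshold,
# so the same tiny nudge now tips it into firing
print(fires(-55.3), fires(-55.3, em_nudge_mv=0.0))   # True False
```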
2. The role of consciousness in memory
If the target neurons for em augmentation are connected by Hebbian synapses then the influence of the brain’s em field will tend to become hard-wired into either increased (long-term potentiation, LTP) or decreased (long-term depression, LTD) neural connectivity. After repeated augmentation by the brain’s em field, conscious motor actions will become increasingly independent of em field influences.
Similarly, in the absence of any motor output, the cemi field may be involved in strengthening synapses to ‘hard-wire’ neurons and thereby lay down long-term memories.
3. The nature of free will
Therefore, whereas in agreement with most modern cognitive theory, the cemi theory views conscious will as a deterministic influence on our actions, in contrast to most cognitive theories it does at least provide a physically active role for ‘will’ in driving our conscious actions. In the cemi field theory, we are not simply automatons that happen to be aware of our actions. Our awareness (the global cemi field) plays a causal role in determining our conscious actions.
4. The nature of qualia
However, awareness per se, without any causal influence on the world, cannot have any scientific meaning since it cannot be the cause of any observable effects. In the cemi field theory, consciousness — the cemi field — is distinct from mere awareness in having a causal influence on the world by virtue of its ability to ‘download’ its informational content into motor neurons. It therefore corresponds quite closely to what Ned Block terms (Block, 1995) ‘access consciousness’. How far animals or inanimate informational systems are conscious will depend on whether they possess complex information fields that are capable of having a causal influence on the world. This may well be amenable to experimental testing.
Since, in the cemi field theory, a conscious being is aware of the information contained within the cemi field, qualia — the subjective feel of particular mind states — must correspond to particular configurations of the cemi field. The qualia for the color red will thereby correspond to the em field perturbations that are generated whenever our neurons are responding to red light in our visual field. However, since at the level of the brain’s em field, sensory information may be combined with neuronal information acquired through learning, the ensuing field modulations would be expected to correlate not with the sensory stimuli alone, but with the meaning of particular stimuli. This was indeed what Freeman discovered in his classic experiments on rabbit olfaction (Freeman, 1991).
5. The binding problem
To illustrate the problem, consider two entirely independent neural networks (either biological or artificial): the first is able to recognize green objects, and the second is able to identify round objects. If an apple is presented to both networks then, in a neural identity theory of consciousness, each network would (if sufficiently complex) experience their own particular qualia for roundness or greenness. But now we add a wire (or neuron) that connects the two networks so that the combined assembly is able to recognize objects that are both round and green. A neural identity theory would then predict that the enlarged network should experience qualia in which roundness and greenness are somehow bound together in a way analogous to our own conscious perception of an apple. Yet the only additional input that either network would have received is a single binary digit travelling down the connecting wire from the adjacent network. Neither ‘roundness’ nor ‘greenness’ could be fully described by a single binary digit. To account for the existence of unified qualia that includes the information coming from both networks, the neural identity theory must propose some overarching reality that connects and unifies the two networks. But no such overarching reality exists — at least at the level of matter.
But in the human brain, there is an overarching reality that connects neural networks: the brain’s electromagnetic field. At the field level, and in contrast to the neuronal level, all aspects of the information representing the apple (colour, shape, texture etc.) are physically linked to generate a single physically unified and coherent modulation of cemi field that represents conscious perception.
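To make the “single binary digit” point concrete, here is a minimal sketch of my own (not McFadden’s); the function names and thresholds are just illustrative.

```python
# Minimal sketch of the thought experiment (my illustration, not McFadden's).

def greenness_network(rgb):
    """Network 1 knows only about colour; its entire output is one bit."""
    r, g, b = rgb
    return g > r and g > b

def roundness_network(aspect_ratio):
    """Network 2 knows only about shape; its entire output is one bit."""
    return abs(aspect_ratio - 1.0) < 0.1

def combined_assembly(rgb, aspect_ratio):
    # The connecting "wire": the only thing network 2 ever receives from
    # network 1 is this single binary digit, never its rich colour state,
    # so nowhere at the network level are roundness and greenness bound.
    wire_bit = greenness_network(rgb)
    return roundness_network(aspect_ratio) and wire_bit

# An apple: greenish pixels, near-circular outline
print(combined_assembly((80, 200, 90), 1.05))   # True: "round green object"
```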
The cemi field theory is compatible with many contemporary theories of consciousness. The cemi field can be considered to be a global workspace (Baars, 1988) that distributes information to the huge number of parallel unconscious neural processors that form the rest of the brain. Similarly, the brain’s em field may be considered to be the substrate for Dennett’s multiple drafts model (Dennett, 1991) since its informational content will be continually updated by neuronal input until a field configuration is reached that is capable of generating ‘output’ that is downloaded as motor actions or the laying down of memories. The theory also has much in common with quantum models of consciousness (Penrose, 1995) since both propose a field-level description of consciousness. However in contrast to quantum consciousness models that must propose a physically unrealistic level of quantum coherence between neurons or microtubules within neurons, the cemi field theory has no such requirement.
James,
I get the sense from Wikipedia and his home pages that McFadden isn’t really a “consciousness” guy, but rather is known for ideas in fields such as medicine and quantum mechanics. Sometimes it takes “new eyes” to see things in more effective ways. Still I don’t get the sense that he yet grasps that he’s proposing something which gets around the strong challenges to the status quo presented by John Searle’s Chinese room thought experiment. Furthermore I believe that my own psychology-based “dual computers” brain architecture could improve his perspective a great deal. But if I went through his paper and noted what I’d change, this might look like unstructured criticism. Instead I’d like to present a more general account of my model, which should demonstrate how I’d like to help improve what he’s begun. I’ll probably have something for you tomorrow.
He’s written a good bit on quantum biology, which makes his position on it in relation to consciousness compelling to me. His degrees are in biochemistry, and he is in the School of Biosciences and Medicine at the University of Surrey.
https://www.surrey.ac.uk/people/johnjoe-mcfadden
That final paragraph seems to make McFadden’s theory more of a substrate theory than a high-level cognitive one: an alternative to, or enhancement of, recurrent processing theories. As I’ve noted before, it isn’t discussed in any of the neuroscience books I have electronically. (The closest thing I get when I search for “electromagnetic” are discussions about EEG or how retinal receptors work.)
Christof Koch does briefly refer to electrical fields as possibly conveying information in his 2013 book, but doesn’t seem to in his latest one. In general, the earlier book was a bit more freewheeling in its speculation, but I wonder if there were any empirical results that led to the later omission.
Yep. I think it is a substrate theory, but I guess that is the problem I’ve been having with GWT and such, since they seemed to lack a substrate or actual mechanism. They just seemed to be descriptive. I guess now I can actually like both theories and find them complementary rather than an either/or choice.
I’d been warming to GWT for a while anyway with a number of quotes from Baars in my learning/evolution post.
Koch likes IIT, and I am guessing it doesn’t care what form the information comes in as long as it gets tied together. Also, with his panpsychist bent, he is probably still thinking in terms of some sort of exotic material.
On GWT, global neuronal workspace theory (GNW) gets much more into the neural substrate details, although GNW is more specific than GWT, with both empirical successes and failures, and has required tuning over the years. But GWT itself is more agnostic, mostly just referring to the thalamo-cortical system overall.
Koch in his last book seemed to put a lot of emphasis into structural factors, which makes sense since a lot of IIT’s phi calculation is based on structure. I don’t get the impression he’s into exotic physics (quite the opposite). But he does seem fixated on IIT and the back of the brain, which seems incongruent with a lot of the cognitive neuroscience I see out there.
James, for some reason I’m not getting notifications for comments on this post, despite subscribing to them. Just fyi, in case I seem unresponsive.
I was noticing in the reader that one of my posts was also behaving like some of your posts have behaved in the past. It’s starting to look like there are some WP bugs in some recent releases. I’ve sometimes taken to going straight to your blog and checking it directly rather than relying on the reader or notifications.
I didn’t get an email for your 10:04 comment, but I did for the earlier one in Eric’s sub-thread.
On the posts not showing up in the Reader, one thing I did realize is that posts I save in Drafts, then later publish, are much more likely to have the issue. So I try to either avoid saving drafts, or if I do, copy the verbiage to a fresh post before publishing. Since doing that, I haven’t gotten any complaints.
But WP these days seems much more buggy than it used to be.
Mike,
That you didn’t find anything in your “electromagnetic” search makes sense, that is if it’s widely believed by prominent modern theorists that phenomenal experience exists by means of information alone. Why go adding something when you believe that your ideas already address it? Hard problem, shmard problem!
Things may not be quite as convenient as they suppose, however. Thus as more evidence comes in (or perhaps if things become too embarrassing regarding certain thought experiments?), the ideas of these prominent people may need to be altered or abandoned. It could be that phenomenal experience (as well as all else in our realm) is entirely substrate dependent.
Eric,
“Why go adding something when you believe that your ideas already address it?”
Following Occam’s razor, you don’t, unless the evidence forces it.
Keeping this focused on James’ post, note this point.
The final point about it being compatible with GWT and Multiple Drafts (a variant of GWT) seems to drive this point home. If you’re looking for a super-informational theory, it sounds like you might have to look at Pockett’s version.
Either you misunderstood what I wrote (my bad) or I may be misunderstanding what you wrote or maybe we’re on the same page. 🙂
To be clear: McFadden’s approach has more emphasis on information than Pockett’s.
But I do think there is a possibility of a super-theory emerging combining GWT, IIT, and cemi. Or, if not EM for the substrate, some explanation about how the various consciousness research problems enumerated in this post get solved by neural identity or some other theory. I don’t think the problems can be explained by neural identity, which is what led me to some variant of EM, but I’m open to other options. Occam’s razor only works if the simpler explanation really explains, and not even always then.
BTW you guys, feel free to battle it out if you want as long as it is somewhat related to whatever my topic is.
Ah, ok. Sounds like I might have over interpreted your remarks about GWT and MDM. But to the extent things go super-informational, I’m not sure how meaningful it is to say the result is compatible with those theories.
Can’t say I’m big on identity theories. In my mind, if you’re not explaining functionality, you’re not explaining.
On battling it out, thanks for the permission, but honestly, until there is something new to discuss, my appetite for looping over the same points again isn’t there.
Mike,
Though I appreciate that Pockett rather than McFadden introduced James to cemi, the wiki entry on the topic mentions something that I find quite concerning:
Universal consciousness, that’s epiphenomenal to boot? For a simple naturalist like me, that sort of speculation is far too exotic.
I agree with James that McFadden’s cemi could be used to augment various more established ideas. For example it could be that neuron function must go “global” in order to produce the brain waves that create the phenomena of a given image. Hell if I know! I consider my own role to be more in terms of “architecture” rather than “engineering”. The only reason that I feel so strongly that phenomena must be super-informational is because of my strong naturalism. Causal dynamics of this world must, by definition, be super-informational. As I see it, to propose that there is a single element of reality (phenomenal experience) which is instead exclusively informational is to propose a second kind of stuff which is beyond the causality associated with material dynamics.
I realize that you don’t see it this way, and also that there’s little more for us to discuss in this specific regard. Far better would be to find others who are both willing and able to effectively challenge my position. Though I suspect that many or most of our acquaintances support you rather than me here, they seem hesitant to publicly argue the case. (I do respect that one of our friends is working on software that he believes will (or does) contradict me.) Regardless, if my arguments in this regard were to become prominent enough to challenge the status quo, then I’d take it as a “back door” from which to potentially help fix some tremendous ills that I consider to exist in academia today.
James,
I’m now going to begin illustrating a broad psychology-based brand of brain architecture, and it’s from this perspective that I’d tweak certain elements of McFadden’s proposal.
Consider the human brain, or indeed any central organism processor, as a non-conscious machine that accepts input information, processes it algorithmically through AND, OR, and NOT varieties of neuron function, and so provides organism output function. Similarly I consider genetic material to function in such a way for the cell, though obviously quite differently. Each may be analogized with the computers that we build in the sense that input information is processed for output function. And from the perspective of these three varieties of machine, I’m going to present consciousness as yet another variety (presumably with em radiation for substrate), which exists entirely as an output of a brain. This form of computer functions by means of a punishment/reward dynamic, or what I consider to be reality’s most amazing stuff.
The reason that evolution engineered brains to implement a conscious form of computation, I think, is because while algorithmic function is great for “closed” circumstances, such as the game of chess, computers simply can’t be programmed with enough contingency routines to effectively deal with the sorts of things which teleological entities are capable of; something with purpose seems required. The purpose here is, by definition, to feel as good as it can each moment.
Theoretically when neurons fire in a certain way, this creates something else (such as em radiation) that phenomenally feels good/bad. So given that algorithms would tend to fail under more open circumstances, evolution must have taken this originally epiphenomenal dynamic and given it something to do. It’s essentially an agency-based buffer. Instead of “If [this] then [operation]” it’s “If [this] then [cause a given phenomenal experience]”. Note here that a good to bad experience inherently creates something with opinions, as in “I like that”. I call these opinions “thought”, or the processing element of the conscious entity. In an evolved entity, the theory is that the brain detects these em-based opinions and then algorithmically implements certain things on that basis. So here we have a punishment/reward dynamic input to this entity; it’s inherently processed given that something will thus have opinions about feeling good to bad, and the brain will then detect such opinions for potential algorithmic operation. Note that such function will now reflect the teleology which it formerly lacked.
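A rough sketch of the contrast I mean (just an illustration; the names and numbers are invented, nothing rigorous):

```python
# Purely algorithmic: "If [this] then [operation]". Fine for closed problems
# like chess, but every contingency must be anticipated in advance.
def algorithmic_brain(stimulus):
    lookup = {"red light": "brake", "green light": "accelerate"}
    return lookup.get(stimulus)            # unanticipated input -> nothing useful

# With the agency-based buffer: "If [this] then [cause a phenomenal experience]".
# The punished/rewarded entity forms an opinion, and the non-conscious brain
# detects that opinion and implements it.
def conscious_opinion(valence):
    return "avoid this" if valence < 0 else "pursue this"

def dual_computer_brain(stimulus, valence):
    opinion = conscious_opinion(valence)   # the brain "detects" the em-based opinion
    return f"motor output: {opinion} ({stimulus})"

print(algorithmic_brain("novel situation"))            # None
print(dual_computer_brain("novel situation", -0.7))    # motor output: avoid this (novel situation)
```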
At this point I’ll pause to see if you, Mike, or anyone else has any questions or comments regarding the premise of the brain architecture which I’ve developed. And I’ve got a few chores to do anyway!
I haven’t absorbed your entire comment yet but in glancing at it I did notice one thing. Here is a more complete quote from the paper by McFadden:
In this view, our will — the cemi field influence on neuronal firing — is not ‘free’ in the sense of being an action without a physical cause. It is entirely deterministic … Therefore, whereas in agreement with most modern cognitive theory, the cemi theory views conscious will as a deterministic influence on our actions, in contrast to most cognitive theories it does at least provide a physically active role for ‘will’ in driving our conscious actions.
It is a somewhat more nuanced view not so completely at odds with Pockett’s.
I’ve tried to avoid the causality quagmire and instead talk in terms of chains of causality. Billiard ball one hits two, which hits three, etc. And before one there was something also. The chain could probably be traced back to the Big Bang and forward to whatever we end up with at the end of the universe.
So I think the causal question turns out to be where do you want to place the cause?
I may post some more in relation to this in the near future and comment some more on your comment tomorrow. I like McFadden’s perspective on it more than Pockett’s.
Also, I think that statement regarding Pockett’s “universal consciousness that experiences the sensations, perceptions, thoughts and emotions of every conscious being in the universe” is a vast overstatement of her position. I may have some more on that when I comment tomorrow.
I went back to Pockett’s book and I don’t see anything like “a universal consciousness that experiences the sensations, perceptions, thoughts and emotions of every conscious being in the universe”. In the last chapter, she speculates some about the broader import of EM field theory, but all of that is completely unrelated to her basic theory. She takes seriously the notion that EM fields in the right forms are conscious, so a field manifested in the right form anywhere could be conscious. For the most part, she talks about localized fields in living organisms. She does mention the possibility of a pervasive electromagnetic field in the universe of which the localized conscious fields might be a part. That’s about it. Whether she has said anything else somewhere else, I don’t know.
Eric,
Regarding your more extended comment, I’m not sure I disagree much, although I think about it in a slightly different manner.
This is the way I think about it, although I haven’t given this much deep thought.
On causality, after the Big Bang, the events of the universe began to create systems. A system is a set of related matter/energy with a boundary and internal dynamics that are mostly unaffected by what is outside the boundary. So something may break across the boundary (input), something may issue out (output), but what happens inside has its own dynamics. From a blackbox perspective the system is causative of the output because we cannot always know, based on the input, what the output will be (although if we opened the blackbox and learned all of its rules we might be able to know for sure what the output would be).
Billiard balls on a table could be thought of as a system. (Imagine you can’t actually see all of the balls on the table, to make it something of a blackbox.) An input comes in – I hit the cue ball – balls strike each other internally with their own dynamics, and the eight ball goes into the side pocket – the output. We can see this in cells. The cell wall is the boundary. Food and nutrients pass across it. Internally the food is consumed to create protein and other living material, and eventually a new cell issues out from the one cell, perhaps identical to the original, perhaps not. The brain is like this. However, the brain can be thought of as having a conscious and an unconscious part. I would like to think of these as two separate systems even though they are contained in the same physical unit. So the conscious part is one system with internal dynamics we are beginning to understand, and the unconscious part is another system. They interact and feed back upon each other. Perhaps from the perspective of the entire universe, everything in every system is deterministic to every other thing. From the perspective of the two systems, the conscious system can be causative.
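Here’s a bare-bones sketch of what I mean by a system as a blackbox (just an illustration; the class and rules are invented for the example):

```python
# Bare-bones illustration: from outside the boundary only input and output are
# visible; the internal dynamics stay hidden inside the blackbox.

class System:
    def __init__(self, name, internal_rule):
        self.name = name
        self._rule = internal_rule              # hidden internal dynamics

    def respond(self, stimulus):                # input crosses the boundary...
        return self._rule(stimulus)             # ...and output issues back out

# Two subsystems of the one brain, each a blackbox to the other, with feedback
unconscious = System("unconscious", lambda s: f"habitual reaction to '{s}'")
conscious = System("conscious", lambda s: f"deliberation about '{s}'")

passed_up = unconscious.respond("child on road")    # unconscious output -> conscious input
print(conscious.respond(passed_up))                 # conscious output feeds back toward action
```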
Sounds good on Pockett, James. I figured that Wikipedia must have misrepresented her at least somewhat, if not tremendously. How am I supposed to feel what someone is feeling in China? Brain waves? I don’t think so. Who believes in “universal consciousness” anyway? If my own ideas ever gain any prominence, but people start ascribing bullshit notions like that one to me….
Your McFadden quote seems right to me. Personally I’m not bothered by concerns about freewill/agency needing to subvert causal dynamics though. I don’t consider freewill to exist ontologically, but do consider it to exist in an epistemic capacity. For example, from my own pathetically tiny perspective I feel that I’m free to lift my arm right now. (And there, I just did it.) Let’s call that a personal perspective of freewill. But as a strong naturalist I also believe that I was mandated to do exactly that from the beginning of time. Ultimately I only feel free.
I put the cause at any given moment in time. For example I believe that everything which exists in the universe at this very instant, causes exactly what will exist at the next instant, and that this goes on perpetually. I think we’ve got this about the same, though with “boundary and internal dynamics” you seem to be referencing epistemic entities, such as a human, or a car. We need to do that as well for practical reasons, though in the end the only thing which should exist is a single causal system, or the premise of monism.
Okay, as I left things we have a brain that creates em fields which serve as a conscious entity, which is to say a teleological form of computer. So how does this second variety of computer work? I consider the main input here to be sentience, qualia, affect, or whatever term you like to represent feeling good to bad. Without this to at least some degree, I don’t consider consciousness to exist. It may also be referred to as the fuel which drives this form of function, somewhat like electricity drives the function of the computers that we build.
Through em waves I consider the brain to also provide the conscious entity with an informational form of input, or “senses”. Sight, sound, hearing, smell, taste… each of these provide information about the world, though often mixed up with the sentience input as well. For example a given smell might feel good to you, though through chemical analysis it may also tell you some things about the world, such as someone must be cooking some tasty food. Notice that a pain in your toe doesn’t just punish you (or the sentience input), but it also provides information about where the problem happens to be (or the sense input).
Then the final variety of input that I ascribe to the conscious form of computer is “memory”, or effectively the other two varieties of input returning later in a degraded way. For example you might remember yesterday in some capacity, though that doesn’t mean you experience it to the degree that you experience the present moment. We often tend to remember things that are significant to us, and by means of repetition can willingly try to retain various bits of information from the past. Theoretically memories exist for us because neurons which fire in the present to create the brain waves of a given conscious experience thus have a greater propensity to do so again in a degraded way. Without memory, for example, I wouldn’t realize that I’m writing this to put on your blog.
With the motivation of sentience, the information of senses, and the recollection of memory, how are these three forms of brain-wave-based input processed? I consider the conscious entity to naturally interpret them and construct scenarios about what to do to make itself feel better. Why? Because sentience exists as the fuel which drives this form of function. Does this entity want to suffer? No. Does it want to feel good? Yes. I call the interpretation of such inputs and construction of scenarios “thought”. Note that we’re proposing that the brain waves exist as input, and so thought can only exist to the extent that such input exists. Removing the associated brain waves would theoretically remove consciousness as well.
Then finally there is an output component to consciousness. Note that a thinker may decide to think about all sorts of things, and thoughts which have been recorded through memory may be considered an output of thinking. For example I can plan to do something in a given situation, and this plan will be an output of my thought. But the only non-thought-based output of consciousness that I know of is “muscle operation”. This is to say that the conscious entity can decide to move various muscles in various ways, and the other computer (which is the non-conscious brain) detects these brain-wave-based desires and tends to operate those muscles as instructed. Here is where the circle becomes complete. The brain can’t be programmed well enough to effectively deal with more open environments, so it creates a brain-wave-based sentient entity which is thus motivated to teleologically think about what to do. Then when it decides, the brain moves the associated muscles as instructed.
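If it helps, here is the loop in stripped-down form (an illustration only; the scores and names are invented):

```python
# Stripped-down illustration of the loop: sentience + senses + memory are the
# inputs, "thought" constructs scenarios about what would feel better, and the
# non-conscious brain turns the chosen scenario into muscle operation.

def conscious_entity(sentience, senses, memory):
    """Score a few candidate scenarios and pick the one expected to feel best."""
    scenarios = {
        "keep doing the current thing": sentience,
        "repeat what helped last time": memory.get("what helped last time", -1.0),
        "investigate the new input": 0.5 if senses else -0.5,
    }
    return max(scenarios, key=scenarios.get)       # this choice is the "thought"

def non_conscious_brain(decision):
    """The other computer detects the decision and operates the muscles."""
    return f"muscles executing: {decision}"

# A stubbed toe: strong punishment, a sense report, and a relevant memory
thought = conscious_entity(
    sentience=-0.8,
    senses=["sharp pain, left toe"],
    memory={"what helped last time": 0.6},
)
print(non_conscious_brain(thought))    # muscles executing: repeat what helped last time
```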
I’ve provided an extremely stripped-down model of our essential nature here, though I believe that through it I’m able to account for a wide range of human psychological function. Below is a diagram which may (or may not) be helpful in relating the two computers. Here consciousness is presented as an output of the brain; it goes through the steps that I’ve noted and results in ultimate muscle operation output. (For now disregard the “Learned Line”.) Regardless, I consider the human brain to be an amazingly complex machine that should do more than 100,000 times as many calculations as the conscious machine which constitutes what we humans know of existence. So don’t worry that the diagram doesn’t show all sorts of lines to and from the brain. Theoretically it’s taking in a vast amount of information and is doing all sorts of things.