Two Recent Articles on Neurons

Two recent articles on neurons, taken together, lend some additional support to EM field theories of consciousness. Both begin with some new mystery observation in neurons that seems to call for explanation, yet the observations would be entirely expected under McFadden’s cemi theory.

One article in Quanta Magazine, “Neurons Unexpectedly Encode Information in the Timing of Their Firing”, deals with researchers who claim to have observed for the first time “neurons in the human brain encoding spatial information through the timing, rather than rate, of their firing”. Frankly I thought this had been observed before, and apparently it had been, in rats. The article claims, however, that this is the first time it has been observed in the human brain.

The phenomenon is called phase precession. It’s a relationship between the continuous rhythm of a brain wave — the overall ebb and flow of electrical signaling in an area of the brain — and the specific moments that neurons in that brain area activate. A theta brain wave, for instance, rises and falls in a consistent pattern over time, but neurons fire inconsistently, at different points on the wave’s trajectory. In this way, brain waves act like a clock, said one of the study’s coauthors, Salman Qasim, also of Columbia. They let neurons time their firings precisely so that they’ll land in range of other neurons’ firing — thereby forging connections between neurons.
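As a rough way to picture phase precession (my own toy sketch in Python, with made-up numbers rather than anything from the study): the cell’s preferred firing phase slides earlier on each successive theta cycle as the animal crosses its place field, so the spike times carry information beyond the firing rate.

```python
import numpy as np

theta_freq = 8.0        # Hz, a typical theta rhythm
n_cycles = 8            # theta cycles elapsed while crossing the place field

for cycle in range(n_cycles):
    cycle_start = cycle / theta_freq
    position = cycle / (n_cycles - 1)         # 0 at field entry, 1 at exit
    phase = 2 * np.pi * (1.0 - position)      # preferred phase precesses earlier
    spike_time = cycle_start + phase / (2 * np.pi * theta_freq)
    print(f"cycle {cycle}: position {position:.2f}, "
          f"spike phase {np.degrees(phase):5.1f} deg, spike at {spike_time:.3f} s")
```

Run it and the printed spike phase walks from 360 degrees down to 0; the spikes land progressively earlier within each theta cycle even though the rate (one spike per cycle) never changes.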

The timing of neuron firing, of course, is critical to McFadden’s theory since it is synchronous firing of neurons that generates the EM field his theory posits as the underlying substrate of consciousness. The researchers speculate that this timed firing is critical to learning; hence the finding ties back to various theories that link learning and consciousness.

The other article, in the Atlantic by Ed Yong, is “Neuroscientists Have Discovered a Phenomenon That They Can’t Explain”. Researchers in this article are mystified by the observation that the neurons associated with a specific sensory input change over time. The neurons that fire in response to an odor in mouse brains are different from month to month.

How does the brain know what the nose is smelling or what the eyes are seeing, if the neural responses to smells and sights are continuously changing? One possibility is that it somehow corrects for drift. For example, parts of the brain that are connected to the piriform cortex might be able to gradually update their understanding of what the piriform’s neural activity means. The whole system changes, but it does so together.

Another possibility is that some high-level feature of the firing neurons stays the same, even as the specific active neurons change. As a simple analogy, “individuals in a population can change their mind while maintaining an overall consensus,” Timothy O’Leary, a neuroscientist at the University of Cambridge, told me. “The number of ways of representing the same signal in a large population is also large, so there’s room for the neural code to move.”

The high-level feature that could be staying the same is, of course, the EM wave form that represents the odor. This ties directly to McFadden’s eighth prediction for his theory, that consciousness should demonstrate field-level dynamics: “The cemi field theory thereby predicts that if distinct neuron firing patterns generate the same net field then, at the level of conscious experience, those firing patterns should be indistinguishable”.
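A toy illustration of that prediction (my own sketch, not anything from McFadden’s paper): two firing patterns that differ neuron by neuron can still sum to the same net field, which is exactly the kind of situation representational drift seems to describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-neuron contributions to the field at some point (arbitrary units).
pattern_a = rng.normal(size=100)
# A different firing pattern: the same contributions carried by different neurons.
pattern_b = rng.permutation(pattern_a)

print(np.array_equal(pattern_a, pattern_b))           # False: the patterns differ
print(np.isclose(pattern_a.sum(), pattern_b.sum()))   # True: the net field is the same
```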

Others noticed the connection to McFadden’s theory.

This entry was posted in Consciousness, Electromagnetism, Waves.

63 Responses to Two Recent Articles on Neurons

  1. Steve Ruis says:

    Thanks for this. I saw the article in Quanta but couldn’t drag my sorry, tired brain through it.

    Liked by 1 person

  2. Great work James! To me it seems only a matter of time before McFadden’s ideas in general become validated and improved by means of general scientific exploration. To get there scientists will need to cast aside a funky bias that they’re implicitly taught, namely that subjective experience exists through generic information processing alone. Here John Searle’s work should be validated as well.

    If you missed it you might be amused by some standard Searle misrepresentation that Matti Meikäläinen and I recently corrected. https://schwitzsplinters.blogspot.com/2021/06/new-article-yay-on-how-to-continue-to.html?showComment=1624912466573&m=0#c8981521506923929021

    For now I’ll savor reading the two articles that you’ve found!

    Liked by 1 person

    • James Cross says:

      I like this quote from Searle that represents my position:

      “There is no more a logical obstacle to an artificial brain than there is to an artificial heart. …Because we do not know how real brains do it, we are in a poor position to fabricate an artificial brain that could cause consciousness. … Perhaps it is a feature we could duplicate in silicon or vacuum tubes. At present we just do not know.”

      Of course, if cemi is right, then an artificial (and conscious) brain definitely should be possible, although even then it could be quite difficult to create. At any rate, until we know a lot more about how consciousness is produced, I would be skeptical of any claims of consciousness in non-organic material.

      Liked by 2 people

    • Wyrd,
      Since we’re on a topic that’s pretty close, let me try you on how it’s difficult for me to fathom a more plausible consciousness solution than McFadden’s.

      You and I realized that it doesn’t make sense in a causal world for implemented programming code in itself to create something with subjective experience. Often enough in the past you’ve written about the validity of Searle’s arguments here. But let’s take Searle’s point further. It seems to me that if programming code can only do things in this world by means of various actualization mechanisms, whether as simple as an electrical switch or complex as computer screen function, then consciousness actualization mechanisms of some kind must exist in the head to create human subjective experience.

      I can see how the radiation produced by means of certain synchronous neuron firing could exist as such a consciousness actualization mechanism. That’s a plausible possibility. But I can’t think of a plausible second solution that’s known to exist in the head. Beyond an EM field, what else might consciousness exist as? Surely not sound waves. Can you think of a second plausible possibility?

      Liked by 1 person

      • Wyrd Smythe says:

        By the brain doing what the brain does. Whatever that is, it seems to do it in an entirely holistic (and analog) manner.

        “It seems to me that if programming code can only do things in this world by means of various actualization mechanisms, whether as simple as an electrical switch or complex as computer screen function, then consciousness actualization mechanisms of some kind must exist in the head to create human subjective experience.”

        The brain itself is that mechanism. It may well include EMF or other as yet unknown or not understood influences.

        As far as numerical simulations go, keep in mind there are two broad classes. One type models the physical structure and physics; the other type models the functionality. Note that for most real-world simulations, the second type isn’t an option. Most physical contexts require physical models.

        The difference shows up with something like a calculator, which is an information processing machine with an abstraction that fully describes its operation. When such an abstraction exists, functional models are possible.

        Computationalism (information patternism) is (perhaps somewhat ironically considering it’s decidedly physicalist) based on a dualist notion that brain and mind are different enough that a substrate-free abstract description of mind, a functional description, does in fact exist. Artificial Neural Nets are a (simplified) functional model of a biological neural net.

        In either event, the Turing Test is this: Given two data streams, one describing the behavior of a real person, one describing a simulation, can we tell the difference? Assume we can interact freely with both over a long period, say a month. It would be like chatting with a distant friend.

        As I’ve said before, we can be skeptical about this all we like (and I am; very), but we don’t know, and it’s possible that at least the first type of simulation might work (although I’ve discussed what I see as the theoretical and practical issues at length elsewhere).

        Liked by 3 people

        • I’m certainly not going to disagree with anything there Wyrd — that was about as concise an account as I’ve heard from you on this. Still, despite the countless pages that people on our side have written about the Turing test, Searle’s Chinese room, the possible beyond-abstract existence of certain simulations, and so on, information patternism seems quite entrenched today. I hit this scene with a potentially different perspective, and can hopefully help make a difference.

          To me, founding the conception of “consciousness” on the concept of a machine that can speak like a normal person sets the bar far too high. Thus my thumb pain thought experiment brings the bar down substantially. Only Mike has had the integrity to agree that the status quo position is that if the right information on paper were converted into a proper second set of information on paper, then something here should experience what he does when his thumb gets whacked. Others seem to realize that since I’m a nobody, this observation may effectively be ignored without dealing with this implication of their beliefs.

          Then secondly if consciousness does exist by means of some sort of mechanistic element of the brain, what might that be? Surely it would need to somehow be related to neuronal and synaptic function. Furthermore, every bit of information associated with neuron firing is not simply manifested in various “hardwired” connections, but theoretically also in the form of EM field perturbations. So qualia might exist in some variety of this field. Among the countless ideas on the market today, I don’t know of any which harbor such a characteristic. What else has this? So instead of just Searle’s “It ain’t merely an abstraction, but we have no idea what it might actually be”, I think we can add, “EM fields may be the answer, and we don’t know of a second plausible solution”. I was asking if you can think of a second such candidate.

          Liked by 2 people

        • James Cross says:

          Of course, I’m with you in thinking the EM field is the best candidate for explanation at this time.

          Sometimes it does occur to me that there could be some point in computation itself without any external field where digital computation takes on wave-like, analog properties. Frankly I don’t see how, but it’s a possibility.

          The other possibility is that there is simply some other sort of field or force we just don’t know about now. This seems unlikely but I don’t think it can be entirely ruled out.

          Liked by 1 person

        • James Cross says:

          “Sometimes it does occur to me that there could be some point in computation itself without any external field where digital computation takes on wave-like, analog properties. Frankly I don’t see how, but it’s a possibility”.

          Actually it occurs to me that temporal encoding itself does have wave-like, analog properties, since it could consist of firings of varying strength occurring at varying intervals over a defined time span.
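          To make that concrete (a toy sketch with made-up spike times and strengths, nothing measured): take a handful of spikes of varying strength and timing and smooth them with a short kernel, and the result reads as a continuous, analog-looking waveform.

          ```python
          import numpy as np

          dt = 0.001
          t = np.arange(0.0, 0.5, dt)                        # a 500 ms window
          spike_times = [0.050, 0.120, 0.135, 0.300, 0.410]  # seconds (made up)
          spike_strengths = [1.0, 0.6, 0.9, 0.3, 1.2]        # arbitrary units (made up)

          train = np.zeros_like(t)
          for st, w in zip(spike_times, spike_strengths):
              train[int(round(st / dt))] += w

          # A ~10 ms Gaussian kernel turns the discrete events into a continuous signal.
          kernel_t = np.arange(-0.05, 0.05, dt)
          kernel = np.exp(-0.5 * (kernel_t / 0.01) ** 2)
          analog = np.convolve(train, kernel, mode="same")

          print(f"waveform peaks at t = {analog.argmax() * dt:.3f} s, value {analog.max():.2f}")
          ```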

          Like

        • Wyrd Smythe says:

          Getting into a little information patternism, there, James? 😀

          Liked by 1 person

        • James Cross says:

          Yes. But the pattern could be generating a wave. 🙂

          Liked by 1 person

        • James Cross says:

          BTW I think probably even McFadden might say there are information patterns involved; it is just a question of whether patterns in information movement/neural firings by themselves are a sufficient explanation for conscious experience without adding anything more. That is where I see the problem arising.

          Like

        • Wyrd Smythe says:

          Agree. Another necessary, but not sufficient, condition. Same is true of, say, IIT, I think.

          Liked by 1 person

        • Wyrd Smythe says:

          “…information patternism seems quite entrenched today.”

          Because it might be right and we might be wrong. Both sides of the argument have weaknesses and strengths.

          “Others seem to realize that since I’m a nobody, this observation may effectively be ignored without dealing with this implication of their beliefs.”

          I’d guess someone is misunderstanding something, because computationalism fully supporting the computation of (virtual) thumb pain is kind of a given. The Million Monks really could compute a Mind — that’s a fundamental assertion of information patternism.

          “Then secondly if consciousness does exist by means of some sort of mechanistic element of the brain, what might that be?”

          The brain itself is that mechanism. That may well (and I think there’s a good chance it does) incorporate EMF fields, but they aren’t magic sauce that separately enables consciousness. (For one thing, any effect they have is pretty subtle or we’d have discovered its importance.) What seems most likely to me is that the EM field provides some synchronization and remote connectivity between parts of the brain. There may be resonance effects within the skull. The brain evolved in its own EMF bath, and nature tends to leverage everything. Even quantum effects might be somehow involved in consciousness. The decay of individual atoms in a radioactive sample somehow conspires to perfectly preserve the sample’s overall half-life. In a way we don’t fathom, the sample somehow acts as a whole. Because of the self-similarity of structure, the brain might be somehow holistic at a quantum level.

          So our understanding of EMF is just part of what there is to explore about neuron firing timing, synapses, possible quantum effects, and much else about how the brain works. I’d call it the most complicated thing nature has produced, orders more complicated than trying to understand, say, every single aspect of the city of Tokyo (which is maybe the most complicated thing humans have produced).

          Liked by 2 people

        • James,
          Right, it makes sense to keep an open mind wherever things remain highly speculative, and that’s certainly true here. Furthermore there should be political ground to gain with that openness.

          Wyrd,
          If it’s a given to Dennett and information patternists in general that they believe something will experience thumb pain when certain inscribed paper is properly converted into another such set, then that’s definitely my bad. What I generally perceive from them are vague statements which aren’t all that incriminating given the number of ways that they may be interpreted. I’ll take my hat off to any with the integrity to plainly acknowledge the sorts of things that their position implies. At least for me that would be refreshing.

          Of course I agree that the brain may be considered a mechanism in itself, but like the computers that we use there should be sub mechanisms that make up this mechanism. For example our computers often have video screens. Regardless of the various mechanisms of the brain responsible for creating consciousness (obviously neuronal and more), I’m interested in the medium through which this exists. What is qualia, or subjectivity, or consciousness… made of? We know that videos are made up of light producing video screen stuff, and that these screens are animated by means of associated computer function. Conversely consciousness involves a famously “hard problem”. I can’t think of a plausible medium for it beyond the EM field.

          To test this theory I propose we use transmitters in the head which synchronously fire charges in a way that’s similar to how neurons fire. Unlike neurons however, this firing wouldn’t be synaptically wired up to the brain. Because waves of a given kind are known to affect other waves of that kind, whether by amplification or cancellation, cemi holds that the right firing sequences should at least affect a test subject’s current subjective experience, even if this firing doesn’t create a field advanced enough to constitute a standard experience such as “pain” in itself. So a highly monitored and fully aware test subject would sit down with researchers and tell them if anything seemed phenomenally strange while countless sequences were tried.

          If there were various phenomenally interesting results that could be replicated, it seems to me that it might eventually be concluded that consciousness exists entirely by means of an EM field substrate. Since this in itself shouldn’t tell us why subjectivity results, it also shouldn’t solve “the hard problem of consciousness”. I suspect that’ll never be clear to us even if tremendous progress is made here. Nevertheless a paradigm shift should thus occur that rivals the significance of any other.

          Liked by 1 person

        • Wyrd Smythe says:

          “What I generally perceive from them are vague statements which aren’t all that incriminating given the number of ways that they may be interpreted.”

          “Incriminating” is a suggestive way to put it. 😉 Always remember: we may be in the wrong here, and they may turn out to be right. Both sides have strengths and weaknesses. (Although on the internet one isn’t supposed to ever admit that, I know.)

          “Of course I agree that the brain may be considered a mechanism in itself, but like the computers that we use there should be sub mechanisms that make up this mechanism. For example our computers often have video screens.”

          Not “may be considered” but actually is. The brain is a mechanism. Period.

          You seem to be trying to shoehorn the brain into a “computer” hole. You keep comparing the brain to computers and asking why it doesn’t have sub-mechanisms like they do. But, the brain isn’t a computer, so comparisons ultimately fail. (FWIW, the brain does have analogs of computer peripherals: eyes, ears, muscles, nerves, etc. The brain also has various sub-mechanisms within itself.)

          As to consciousness — let’s loosely define it here as subjectively experiencing qualia — we don’t know why it exists or how it happens. Because brains are so complicated and we have no comparison, we can’t rule out that sufficiently complicated mechanisms experience qualia. It may be that there is something it is like to be a sufficiently complicated mechanism. Period. That’s just how reality works.

          The EMF might be a byproduct, like waste heat, or might be an aspect of how the mechanism works. The thing to keep in mind is that, given what we do know about how the brain works, if EMF plays a role it’s not an obvious and immediately apparent one (such as, say, with synapses).

          “To test this theory I propose we use transmitters in the head which synchronously fire charges in a way that’s similar to how neurons fire.”

          There are formidable ethical issues, but even so haven’t we done a lot of this sort of thing in various ways? Stuff like ECT or magnetic fields or probes into the brain? Seems like we’ve been messing around with that stuff for a long time. (Didn’t Elon Musk have something going about brain-computer interfacing?)

          “…it might eventually be concluded that consciousness exists entirely by means of an EM field substrate.”

          I would find that very surprising. To me, it would almost be a form of panpsychism. At the very least it’s a strongly dualist point of view. Why would consciousness exist separately from our brains?

          I’d say subjectivity and consciousness are the same thing, and they are the “hard problem” — hard both because it’s such a mystery and because it’s the only science problem with an inside.

          Liked by 1 person

        • Wyrd,
          Apparently I wrongly assumed that since James and I have been fascinated by McFadden’s cemi since late 2019, you must have picked up how it works from us. I shouldn’t have. I’ll now go through a rough introduction and stand by for any questions or concerns that you might have.

          McFadden’s primary work actually involves potential quantum elements associated with biology. In the following interview he notes various puzzling elements of the function of life that seem far easier to explain through the tunneling, superposition, and entanglement of teeny tiny things. Furthermore, at minute 53 he gets into how the Penrose and Hameroff consciousness proposal doesn’t make quantum sense at the scale of the brain, though an electromagnetic explanation might. He seems to discuss the sorts of things that I’ve noticed you to be particularly interested in.
          https://directory.libsyn.com/episode/index/show/senseandscience/id/15007949
          In any case note that your consciousness is a unified dynamic each moment — right now you should have all sorts of notions and feelings that are somehow all combined together into an individual experience. How might a brain bind all this stuff together each moment? This is of course known as “the combination problem”.

          So then how do various musical notes and instruments combine together to produce integrated music? Note that sound waves are reasonably integrated, though not truly given how slow sound waves happen to travel. This is to say that music isn’t instantly all one thing in sound wave form, though at least with waves as opposed to particles the theme does seem to be going in the right direction. (Of course if a tree falls in the forest and no one is there to hear it, phenomenal “sound” is not created, just waves.)

          And when ear parts thusly vibrate and this is translated into neural firing, true integration still should not exist. Regardless of how complex, such firing should still just be various individual occurrences. For an EM field created by that neuron firing however, true combination does seem possible — the speed of light should create an informational field that’s instantly a unified thing. Or at least that’s his theory. But apparently the synchronous firing which this depends upon is the best neural correlate for consciousness found so far.

          McFadden also gets into the physics of this. The big sticking point for many is why power lines, for example, don’t interfere with such an apparently fragile field. From his 2002 paper entitled “Synchronous Firing and Its Influence on the Brain’s Electromagnetic Field”:

          Prediction 6. The high conductivity of the cerebral fluid and fluid within the brain ventricles creates an effective ‘Faraday cage’ that insulates the brain from most natural exogenous electric fields. A constant external electric field will thereby induce almost no field at all in the brain (Adair, 1991). Alternating currents from technological devices (power lines, mobile phones, etc.) will generate an alternating induced field, but its magnitude will be very weak. For example, a 60 Hz electrical field of 1000 V/m (typical of a powerline) will generate a tissue field of only 40 μV/m inside the head (Adair, 1991), clearly much weaker than either the endogenous em field or the field caused by thermal noise in cell membranes. Magnetic fields do penetrate tissue much more readily than electric fields but most naturally encountered magnetic fields, and also those experienced during nuclear magnetic resonance (NMR) scanning, are static (changing only the direction of moving charges) and are thereby unlikely to have physiological effects. Changing magnetic fields will penetrate the skull and induce electric currents in the brain. However, there is abundant evidence (from, e.g., TMS studies as outlined above) that these do modify brain activity. Indeed, repetitive TMS is subject to strict safety guidelines to prevent inducing seizures in normal subjects (Hallett, 2000) through field effects.

          My take is that the brain seems to be relatively insulated from standard EM radiation. The static nature of the MRI variety that it isn’t insulated from leaves that exposure relatively benign. And the alternating fields associated with Transcranial Magnetic Stimulation will of course have physiological effects, since here electric current is induced in specific parts of the brain for the purpose of causing neural firing in those areas.
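          For a sense of scale, the two figures quoted from Adair (1991) above imply an attenuation factor of roughly twenty-five million:

          ```python
          external_field = 1000.0   # V/m, a typical 60 Hz field near a power line (from the quote)
          tissue_field = 40e-6      # V/m, the induced field inside the head (from the quote)

          print(f"attenuation factor: {external_field / tissue_field:.1e}")   # ~2.5e7
          ```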

          My embedded transmitters proposal would seem to get around all this by creating the same sort of radiation that groups of neurons produce, right in the head. But unlike “wired in” neurons, these transmitters would only produce EM radiation. The point would be to see if such radiation could alter a theorized EM field that exists as someone’s consciousness, given that waves tend to interfere with similar waves. Here someone might feel something unexpected and say “Well that was strange. Try it again.” Thus perhaps science would learn about the parameters of consciousness as an EM field.

          I don’t know of anything unethical about such an experiment. Note that here a subject would be able to yell out things like “Hey stop! That hurts!” If this isn’t a good way to test McFadden’s theory, then I’d love to know why.

          The reason that consciousness seems not to exist as “brain” itself, is because while the brain functions non-subjectively and is massively parallel, consciousness is subjective and serial. And indeed, how could the singularity of the subjective dynamic in us ever function in parallel anyway? There’s only one each moment!

          Dualism? Yes, that’s what McFadden calls his theory. It’s Einstein’s dualism however, where one side of the equation deals with energy and the other with mass.

          Liked by 1 person

        • James Cross says:

          Electrons (matter) moving generate an EM field (force, energy) that is separate. But there isn’t anything mysterious or supernatural about it. Matter can become energy and energy matter. Again nothing supernatural. The EM field in the theory is like a separate manifestation of the physical brain activity.

          In another sense, they aren’t all that separate – they are more like two sides of the same coin.

          On a broader note, I can appreciate your skepticism. I maintain some degree of skepticism too, and the theory, even if mostly correct, is still missing some critical details. Ultimately it would be nice to be able to measure the EM field in the brain in enough detail that we could actually “mind read” from it. Or modify the field and induce an experience. Or do something like they did in the movie Brainstorm (I always thought that clear tape they used to record the brain was neat). Until the theory can drill down to a deeper level like that it remains a theory, perhaps a promising one, but still a theory.

          Like

        • Wyrd Smythe says:

          @Eric:

          Thanks, but to be honest, I’m not all that interested in neuroscience details (just because time). Another part of it is that so much of it is speculative, and I tend to focus on more concrete topics. So I’m limited on how deep into the weeds I want to get with McFadden, is my point. 🙂

          “How might a brain bind all this stuff together each moment? This is of course known as ‘the combination problem’.”

          (Pssst: I think you mean the binding problem? The “combination problem” is usually the term for the issue panpsychism faces in explaining how conscious pieces combine to a unified conscious mind.)

          FWIW, my answer to the binding problem is simply that the brain is a unified organ that evolved to do one thing: produce a mind. I think the binding is just a natural consequence of that unified operation.

          My overall attitude towards all of this is wait and see. I do think EMF may (may!) play a role, but, as I said, I don’t think it’s a grossly obvious one.

          “I don’t know of anything unethical about such an experiment.”

          Well, experimenting on humans always raises ethical questions. Doesn’t mean they can’t be answered, but they do have to be asked.

          “The reason that consciousness seems not to exist as “brain” itself, is because while the brain functions non-subjectively and is massively parallel, consciousness is subjective and serial.”

          And yet you just said:

          “…right now you should have all sorts of notions and feelings that are somehow all combined together into an individual experience.”

          Which sounds awfully parallel to me. Consider also that multiple muscle fibers work together to produce a unified movement. When I play piano, I use multiple fingers and notes to produce one song. I don’t see any logic that parallel elements can’t produce something unified.

          @James:

          “The EM field in the theory is like a separate manifestation of the physical brain activity.”

          Right, and it might be nothing more than a byproduct, like waste heat. The combined field is likely to be extremely complex given all the contributing neurons, and I wonder how much variation it really has if I’m, say, thinking about a dog versus cat. How about just two different dogs?

          Given that each brain is individual, what would the variation be between you and I both thinking about a dog, versus one of us thinking about a dog or a cat?

          I know we can spot major differences easily, sleeping versus awake, but how much variation does the whole field show for much more subtle differences in thought?

          Liked by 2 people

        • James Cross says:

          My guess would be a lot of differences between us and over time in each of us. Just a guess. Probably the principles are the same but the actual details of implementations are different varying over time and between us.

          Like

        • Wyrd Smythe says:

          I think so, too. (I’ve always been dubious about accurate detailed mind-reading or telepathy because of those differences.)

          Liked by 1 person

        • James Cross says:

          Of course, what to make of this?

          https://www.sciencealert.com/scientists-have-converted-a-paralyzed-man-s-brain-waves-to-speech

          Notice they only have 75% effectiveness but there is another issue. It’s been shown that people can learn to control neuron firings down to the single neuron level with feedback. I wonder if what is really happening is the person with feedback from the device is effectively teaching the device how to understand his own neuron firings. This would be somewhat different from the natural neuron firings that might occur in a brain with no feedback from an external device.

          Like

        • Wyrd Smythe says:

          I noticed the paragraph, “Over the next several months, the team recorded his neural activity as he attempted to say the 50 words, and used artificial intelligence to distinguish subtle patterns in the data and tie them to words.”

          As you said feedback, “teaching the device how to understand his own neuron firings.”

          Exactly. It would be interesting to put that electrode over someone else’s speech motor cortex and see what the system produces. I would guess the signals from the brain would be different enough that it wouldn’t work. Every individual would have to train their own system.
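          As a toy sketch of that point (simulated numbers only, nothing to do with the study’s actual methods): a decoder fit to one simulated subject’s firing-pattern features decodes that subject well but drops to chance on a second subject whose patterns for the same words differ.

          ```python
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(1)
          n_trials, n_features, n_words = 200, 50, 5

          # Subject A: each "word" has its own firing-pattern template plus noise.
          templates_a = rng.normal(size=(n_words, n_features))
          labels = rng.integers(0, n_words, size=n_trials)
          data_a = templates_a[labels] + 0.5 * rng.normal(size=(n_trials, n_features))

          # Subject B: the same words, but an unrelated set of templates.
          templates_b = rng.normal(size=(n_words, n_features))
          data_b = templates_b[labels] + 0.5 * rng.normal(size=(n_trials, n_features))

          decoder = LogisticRegression(max_iter=1000).fit(data_a, labels)
          print("accuracy on subject A:", decoder.score(data_a, labels))   # near 1.0
          print("accuracy on subject B:", decoder.score(data_b, labels))   # near chance (~0.2)
          ```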

          I do imagine such devices will come along, though. “Siri, are you thinking what I’m thinking?”

          Like

        • I can appreciate not being all that interested in delving into the weeds of neuroscience, Wyrd. I’m actually more of a big picture psychology guy. I’ve only ventured this far here because I’m interested in experimental progress regarding subjectivity. It seems to me that most of the theories out there are unfalsifiable. Without a substrate to worry about, information patternists can always claim that a given failed experiment must not have achieved the right information pattern. Cemi is not only falsifiable, but I may have developed a reasonable way to effectively test it.

          I was originally thinking “binding problem” too, but before publishing my comment I noticed the same Wiki article that you directed me to. It breaks binding into segregation and combination varieties. It’s specifically the combination part that I was referring to rather than the segregation. How might certain kinds of brain stuff get unified into fully integrated experiences? That’s a hard problem. But perhaps a variety of wave which travels at the speed of light could thus phenomenally unify various elements within, unlike slower varieties of wave that should only exist sequentially? My proposed experiment seems to test this notion anyway.

          Panpsychists have a combination problem too, which they largely ignore. And if everything is conscious, how can they believe that anesthesia can render a person non-conscious? Oh, well that’s different they say. Apparently they aren’t actually talking about human subjectivity, or at least not when there’s something to explain. In any case for anesthesia they say that subjectivity remains, though not the evolved human variety. It’s more of a phenomenal white noise that probably doesn’t hurt and can’t be remembered. Whatever.

          It seems to me that brains don’t inherently create a subjective dynamic. Lots of creatures have brains that may never provide them with any subjectivity. As I recall you doubt that fish brains foster subjectivity in them. Perhaps ant brains make them more like biological robots than experiencers? In any case brains don’t always render the human conscious. Thus there seems to be non-autonomous brain function that often creates an autonomous subjective form. This second form seems to help the first form with autonomy. Our robots for example tend to fail under more open circumstances given that they have no subjective purpose helping them out, other than us that is.

          The reason that you should be able to play the piano and do all sorts of other parallel things, is because here you’ve essentially taught the non-conscious robot to do your bidding. But is the music that you create unified in sound wave form? Perhaps not until it’s integrated into a phenomenal experience. We can of course wait to see what the experts find, that is unless they’re on the wrong track. I’d like to help move them away from information patternism.

          Like

        • Wyrd Smythe says:

          “But perhaps a variety of wave which travels at the speed of light…”

          You’ve mentioned that before, so I’ll comment. Keep in mind that photons travel at c in a vacuum. Interacting with matter slows them down. Visible light, for instance, moves more slowly in water or glass. The EMF field inside the brain would likewise be slowed. And filtered. Some frequencies would penetrate brain tissue better than others. I’d guess most of the high frequency stuff would be absorbed at short distances. Stuff in the radio range would go further.

          That isn’t meant to contest your program with EMF. It’s just that the field isn’t quite light speed. It would still be extremely fast, though.

          “It seems to me that brains don’t inherently create a subjective dynamic.”

          I know that’s your opinion, but nothing we know so far rules out that a sufficiently complex system would experience subjectivity. I think there are good odds in favor, given the data we have.

          “As I recall you doubt that fish brains foster subjectivity in them. Perhaps ant brains make them more like biological robots than experiencers?”

          Subjectivity, sentience, and consciousness, are all slightly different things, and their differences come out, I think, when we get down to fish, lizards, and insects. I do think all three fade out as brains become less complex. It’s a fuzzy line and I draw it somewhere among insects, lizards, and fish. I think brain structure is a critical factor.

          “But is the music that you create unified in sound wave form?”

          You mentioned this before, too. As stated, absolutely, because the wave form is a single energy level over time. That’s pretty unified — all the sound at any given instant boiled down to a single energy level at that moment. Likewise, the speaker cones reproducing the music and your eardrum — they vibrate in a way that unifies (superposes) all the component frequencies.
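          (A trivial sketch of that superposition, with made-up note frequencies: at every instant the chord is just one number.)

          ```python
          import numpy as np

          t = np.linspace(0.0, 0.01, 1000)    # 10 ms of audio time
          notes = [261.6, 329.6, 392.0]       # C, E, G in Hz: a C major chord
          combined = sum(np.sin(2 * np.pi * f * t) for f in notes)

          print(combined.shape)   # (1000,): one pressure value per instant
          print(combined[:3])     # all three notes superposed into a single waveform
          ```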

          Our inner ear breaks that unified sound into frequencies which our brain processes and re-unifies and then experiences. I think it’s that last bit you’re referring to by unified.

          This is a more sophisticated version of tree in the forest. That one boils down to what one considers “sound”. If it’s what a brain decodes and experiences, then no. If it’s physical vibrations in the “sound” range, then absolutely. This one depends on what you mean by “unified” music.

          “I’d like to help move them away from information patternism.”

          That only happens through exploring it. Our intuitions are just a starting point. We’re a very long way away from experiments that can prove or disprove it. Very likely not in our lifetimes. And remember, it might turn out to be right.

          Liked by 2 people

        • James Cross says:

          ” so far rules out that a sufficiently complex system would experience subjectivity”

          That doesn’t mean much without adding in what kind of system. Maybe you are implying something or omitting something, but weather is a complex system. I don’t think you are meaning any complex system. But if you mean a complex system of neurons I would probably agree. It might not even need to be all that complex. I wouldn’t reject out of hand the idea that a spider’s brain might have subjectivity of some form.

          Like

        • Wyrd Smythe says:

          I don’t (reject it out of hand) and it may very well be.

          I was referring to either a complex set of neurons or something that is structurally and functionally isomorphic. (So I was omitting something for brevity. Sorry. Given my oft-stated views, I just assumed you knew what I meant.)

          Liked by 1 person

        • James Cross says:

          Your original reply was to Eric, who maybe knew what you were implying.

          I assumed something like that was what you meant. 🙂

          But the “isomorphic” aspect raises issues that I think we have gone through before. The problem is how isomorphic must it be and in what aspects. In neurons, for example, signals are generated by sodium and potassium ions flowing through ion gates of biological material in a structure with axons, dendrites, and cell body. If we assume neurons have something to do with subjectivity in humans, then how isomorphic do we need to be to the structures of neurons for it to work? Can we replace the cell body with a chip? Can we use copper wires with electrons instead of ions? Do we have to be isomorphic with axons and dendrites or does some other form of wiring work? The “isomorphic” argument doesn’t really provide us much insight without further elaboration.

          If McFadden’s and Pockett’s core argument is right, then you could produce consciousness simply with isomorphic EM fields, no matter how they were generated.

          Like

        • Wyrd Smythe says:

          “The problem is how isomorphic must it be and in what aspects.”

          Absolutely. I’ll say what I probably said last time we discussed it: We just don’t know how far down in organizational level the isomorphism needs to be. I’m certain it’s lower than just the neural net (although ANNs have been effective tools), and I’m pretty certain it’s lower than neurons and synapses. Something at the nano level. I tend to think biology is not a necessity, but I think any alternatives won’t be too far off.

          “If McFadden’s and Pockett’s core argument is right, then you could produce consciousness simply with isomorphic EM fields, no matter how they were generated.”

          I suspect the “antenna” and driver for such would essentially be a brain. It would be the only way to produce such a complex field. As I mentioned to Eric, there is also that the field exists in a medium (brain tissue), and that might be an important factor.

          Liked by 1 person

  3. Wyrd Smythe says:

    FWIW, what I recall about neuron firing timing (something, like you, I’d heard before) had to do with the rise and fall times of the pulses. The idea was it encoded information. (It’s not a sharp memory, but that’s what I seem to recall.) What’s described in the quote sounds possibly different. If so, just one more indication about how holistic and analog brains are.

    Liked by 2 people

    • James Cross says:

      If you read the Wikipedia on neural coding, there are all sorts of schemes that have either been proposed or identified.

      https://en.wikipedia.org/wiki/Neural_coding

      It is hard for me to tell if what is being talked about here is different from those mentioned in the Wikipedia article.

      Liked by 1 person

      • Wyrd Smythe says:

        This all does make me wonder about the supposed “Jennifer Aniston neuron” — a notion I’ve always looked at a bit askance.

        Liked by 1 person

        • James Cross says:

          It could be there is a Jennifer Aniston neuron but it might not be the same neuron from one month to the next.

          On the other hand, it is also possible that there is some (small?) subset of neurons that do remain constant even though most of the others are constantly changing.

          If I set out for the mall here in Atlanta, I have a number of different routes I could follow. Depending upon traffic conditions, side errands, or maybe even which lot I want to park in, I might choose a different route. After setting out, I might discover the city has blocked off my chosen route because of a water main break. Eventually I end up at the mall but I could get there in many different ways.

          Since connections are always changing between neurons, the best route to the destination might be also constantly changing so the neurons associated with the route might constantly change.

          The mystery of this explanation would be that something would almost need to know the destination before setting out on the trip.

          Like

        • James Cross says:

          Of course, if sensory information comes in through the sensory neurons like a hash value (analogy, of course), it’s possible the initial processing of sensory information might actually directly map to a physical location of neurons or group of neurons. In that case, it would be kind of like knowing the destination. The other neurons firing might be like reactions to the primary mapping.

          Like

        • Wyrd Smythe says:

          “It could be there is a Jennifer Aniston neuron but it might not be the same neuron from one month to the next.”

          Exactly. It’s a temporary coincidental focal point of a much larger structure that almost necessarily evolves, especially upon more inputs related to Ms Aniston (a connection that could be quite remote; a thought about an old friend who talked about Friends a lot might shift the structure slightly).

          “The mystery of this explanation would be that something would almost need to know the destination before setting out on the trip.”

          It could just involve systems seeking lowest energy local minima. The way rainfall etches a landscape over time, for instance. Each part of the rain making a small change based purely on local dynamics and structure.

          “Of course, if sensory information comes in through the sensory neurons like a hash value (analogy, of course),…”

          Understanding it’s an analogy, question: Hash value as in compressed lossy generally unique abstract of the data or as in generally unique index tag to larger actual data? From what I know, the first is almost certainly true. Our initial sensory perceptions are a kind of wireframe version of actual reality. Generally uniquely representative (of something real), but still representative.

          Liked by 1 person

        • James Cross says:

          I’m thinking of it as a compressed lossy generally unique abstract of the data.

          I keep going back to the squirrel monkeys, where they modified the light cones in the eyes to react to red (a color the normal squirrel monkey doesn’t detect), and then the monkeys could suddenly see red without any modification to their brain. It seems to me to strongly indicate that some critical aspect of sensory qualia resides in the sensory neurons themselves.

          https://www.livescience.com/21275-color-red-blue-scientists.html

          So neurons in their communications (whatever coding is going on) might effectively be passing something like the equivalent of hash values. After the cone modifications, a new set of hash values (perhaps a new and unique range of values) started to flow from the eyes. This might even apply to communications between upstream neurons. What’s more, it could map directly to some place in a network of neurons, but that place might not be totally fixed. It might be somewhat dynamic based upon the state of the network at the time. Of course, where living material differs from machines (or any that exist today) is that it can physically modify itself extensively on the fly, perhaps in some manner that maintains continuity at the higher level conscious functions.
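          (To pin down the two senses of “hash” in play here, a toy bit of code, obviously not a model of neurons: an index-style hash is a unique tag that says nothing about the content, while the lossy abstract still resembles the content.)

          ```python
          import hashlib
          import numpy as np

          stimulus = np.linspace(0.0, 1.0, 1000)   # stand-in for a raw sensory input

          # Index-style hash: a unique tag that reveals nothing about the content itself.
          tag = hashlib.sha256(stimulus.tobytes()).hexdigest()[:12]

          # Lossy abstract: a compressed summary that still resembles the content.
          abstract = stimulus.reshape(10, 100).mean(axis=1)

          print("index-style tag:", tag)
          print("lossy abstract:", np.round(abstract, 2))
          ```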

          Anyway, just some thoughts.

          Like

        • Wyrd Smythe says:

          Yeah, I think we’re on the same page. The perceptive sensory framework begins with the receptors. The neurons in the retina of the eye are a good example. They do a lot of pre-processing of visual data.

          AIUI, the squirrel monkeys had existing “green” cones modified to react to red. That means a population of cones that used to register green now register when looking at red. Meanwhile, other cones that still register green would continue to do so. That should result in new sensations for the monkeys, a split in inputs that used to be unified. They’d see a weird new shade of green (?) when those previously green, now red, cones fired. Since other green cones don’t fire, it’s hard to guess what the mind would make of it. Would it see it as some odd dark green (assuming fewer cones respond to red than to green)? Would it synthesize the sensation of a new color?

          We’ll have to wait until it’s tried on humans. I’d volunteer to have the color perception of a mantis shrimp!

          Liked by 1 person

  4. The Yong article is interesting. My takeaway from it is that it demonstrates that the implicit assumption many in neuroscience seem to make, that mental images somehow map cleanly to firing patterns in early sensory regions, is not a good one.

    As Yong indicates, the shifts seem likely to be the result of learning. And if we regard mental imagery and concepts as a galaxy of associations that exist well past the early sensory regions, then they aren’t problematic, merely early conclusions and dispositions shifting as the rest of the system becomes better at predicting what’s there and different error correction patterns are needed.

    Liked by 2 people

    • James Cross says:

      How much do mice learn about a smell in a month that would cause most of neurons associated with the smell to change?

      Like

    • James Cross says:

      This also ties to the Duke Study on MRIs relating to performing tasks. So it isn’t just sensory related.

      “For six out of seven measures of brain function, the correlation between tests taken about four months apart with the same person was weak. The seventh measure studied, language processing, was only a fair correlation, not good or excellent.”

      Duke Study May Confirm McFadden Prediction

      Liked by 1 person

      • On learning, if we view the early sensory processing as essentially error correction for higher order predictions, then I can see it changing a great deal. And I think seeing mental imagery distributed throughout large sections of cortex is compatible with the other stuff you cite.

        Incidentally, this seems like it can be true whether or not CEMI is true.

        Like

        • James Cross says:

          How do you square your argument for learning with this comment from the article?

          “Daily sniffs can slow the speed of that drift, but they don’t eliminate it. Nor, bizarrely, does learning: If the mice associated a smell with a mild electric shock, the neurons representing that scent would still completely change even though the mice continued to avoid it”.

          Like

        • James Cross says:

          This doesn’t look like this happens only with sensory processing. Although more research needs to be done, the researchers think this might be prevalent all throughout the brain.

          In general, learning is thought to strengthen neural connections so that the same neurons fire, but in this case, even “learned” connections seem to drift over time.

          “We have a hunch that this should be the rule rather than the exception,” Schoonover said. “The onus now becomes finding the places where it doesn’t happen.” And in places where it does happen, “it’s the three F’s,” Fink added. “How fast does it go? How far does it get? And … how bad is it?”

          Like

        • James, all I can do is reiterate the point I’ve already made. I think this makes a lot more sense if perceptions are dispositions all the way down with no sharp boundary between images and dispositions. That, and that it’s mostly a top-down predictive process rather than a feed-forward one coming in from the sensory regions. In that case, higher order processing would be expected to have a powerful effect on lower order processing.

          Or not. We have to let the research continue and see what happens. I don’t think jumping to conclusions will serve us well.

          Liked by 1 person

        • James Cross says:

          I don’t see how what you are stating (which I mostly agree with) has any relationship to neuronal drift, especially to the magnitude it seems to be occurring with smell.

          What I would expect with improving predictions is tweaking of the neurons, not wholesale replacement of them.

          There is something fundamental being missed, I think.

          Liked by 1 person

        • James,
          It could be that what’s missing for you here is an explicit statement of how subjective experience arises under non-McFadden proposals. So let me try to fill in this detail.

          It’s thought that certain neurons fire in a way that creates the mediumless qualia of a scent for a mouse. So here neurons create a higher level subjective domain. Apparently Mike is saying that this domain, which is serviced by the non-conscious neuronal domain, can progressively change the neuronal domain that services it no less than in McFadden’s EM scenario. Right Mike?

          But here’s an issue with that proposal. If qualia can exist here without medium, then there shouldn’t be anything to prevent this account from conforming with virtually any evidence. I’m saying that this appears to be an unfalsifiable proposal. Thus the only way that “information patternism” should ever lose popularity in academia, would be for a falsifiable proposal such as McFadden’s to become experimentally validated in enough ways.

          Liked by 1 person

        • James Cross says:

          I like the expression “information patternism”.

          One problem with this study is its focus on the olfactory sense. Since this is a relatively old and primitive sense (it even follows a different path in humans from other sensory input), it might be that conclusions drawn from it will not extrapolate to other senses.

          I’m not sure Mike will agree with your interpretation of his position.

          However, if learning and prediction are directly correlated with neuronal firing patterns, then I don’t see why we would expect to see continual change in the patterns and turnover in the neurons.

          Liked by 1 person

        • Try this James. McFadden’s theory is quite consistent with neuronal drift in the sense that it’s not certain specific neuronal firing that creates a given subjective dynamic, but rather a resulting EM field. So different neurons theoretically should be able to create a given subjective experience to the extent that they’re able to replicate a suitable field.

          Similarly from the information patternism premise it’s not that specific neurons must fire properly to create a subjective dynamic, or indeed, any neurons at all. All that should matter here is the resulting informational pattern that’s created, and whether neuronal or theoretically even by means of inscribed paper that’s properly processed into more inscribed paper.

          On “information patternism”, I’ve long considered the standard “computationalism” title far too generous. Thus I’ve instead rebranded this as “informationism”. Recently Wyrd told me that this reminded him of Susan Schneider’s “information patternism” term, which I’ve adopted given the added “pattern” component. I don’t know much about her however or if she means it as I do. Perhaps not.

          For learning and prediction, consider what my dual computers model suggests (which actually doesn’t favor cemi over information patternism). The proposal is that a massively parallel non-conscious and algorithmic brain creates an entirely different purpose based, or sentient form of “computer”. Theoretically this second computer should do less than 1000th of a percent as many operations as the vast supercomputer which creates it. The thought is that algorithmic instruction alone fails a creature under more open circumstances, and so the vast supercomputer creates the simple sentient one to base some of its decisions upon to help overcome such failure. Here the supercomputer tends to fool the tiny sentient computer into the belief that it’s in control.

          In any case the sentient being should thus have perpetual incentive to interpret its qualia input and construct scenarios about what to do to make itself feel better from moment to moment. Here “learning” may be interpreted as a subjectively successful heuristic for the experiencer. Then “prediction” seems to fall under scenario construction. In any case they should not directly be correlated with the firing patterns of specific neurons. For cemi the EM field should be the determinate, while for information patternism all that should matter is the pattern produced.

          Liked by 1 person

        • James Cross says:

          I think McFadden’s theory is consistent with it but it doesn’t explain why it occurs.

          Like

        • Wyrd Smythe says:

          Confirming I picked up the term “information patternism” from Susan Schneider, who does use it to mean what is often termed “computationalism” (FWIW, what I’ve termed strong computationalism).

          I like the term (as she does) because it strikes to the heart of the matter: whether consciousness can be found in the patterns of information — regardless of substrate — or whether process is important. Does consciousness arise in consequence of the physical process of the brain (like laser light arises from the physical process of lasing material), or is it an abstraction found in the pattern of information (per the title of friend Mike Smith’s blog, Self Aware Patterns)?

          (My canonical example has been the million monks with abacuses and millions of acolytes running messages between them. Per strong computationalism — information patternism — they could over millions of years calculate a virtual conscious mind (with a virtual body) enjoying some virtual environment. Their output — lists of numbers — would be identical to a similar numerical output describing a real person doing the same thing. Under computationalism, this is possible.)

  5. Lee Roetcisoender says:

    These are all interesting musings. However, unless or until the scientific and/or academic communities are able to develop and/or accept a term that effectively expresses what matter is in and of itself, a term that can be universally canonized by the community, they might as well “remain silent and keep calculating” and refrain from writing fiction novels like MWI.

    The fundamental issue that has to be resolved before any progress can be made is this: WHAT IS MATTER IN AND OF ITSELF? Clearly, matter is a “representation” of some thing that is universal; and if that some “thing” is universal, then that some “thing” therefore becomes fundamental to all physical systems across the entire spectrum of the physical universe, from the mystery of the quantum realm to the mystery of our own consciousness (a localized field of experience that is multi-faceted).

    Sorry to any and all of you idealists who might read this comment: That some “thing” is NOT consciousness…….

    From my own perspective, what I see as the most formidable obstacle for the progression of science, aside from our own personal confirmation biases, is the influence that the ideology of idealism has on the science of metaphysics itself, be it objective idealism and/or subjective idealism. Idealism is an ancient religious artifact that is a huge distraction for the scientific and academic communities, and it appears that highly educated people and human beings in general are unable to get past this impediment.

    Keep blogging friends…..

  6. I’ll now get a bit more into the weeds on testing cemi, or at least as far as I’m able to. Hopefully this will spark some interest.

    As I understand it individual neurons fire all the time in standard brain function, and the net effect of the EM radiation effectively cancels out as an uninteresting waste product. A number of synchronously firing neurons, however, should produce a more distinctive field, given that their combined energies make this less typical. These fields are still minuscule, though amplified fields should at least be able to affect neurons further away given the combined energies. Apparently when neurons are already close to firing, such an EM field will sometimes incite them to fire as well, and to do so synchronously, adding to this theme. As I understand it, epileptic seizures are essentially an uncontrolled example of synchronous neuron firing induced in this way.
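    To make that cancellation point concrete, here is a toy numerical sketch (not a neural model; the source count, frequency, and unit amplitudes are all made-up assumptions). Summing N identical sinusoidal sources in phase gives a peak that grows like N, while summing them with random phases gives a peak that grows only like the square root of N, which is roughly why scattered firing washes out while synchronized firing stands out:

        import numpy as np

        # Toy illustration only: N unit-amplitude sinusoidal sources summed
        # either in phase ("synchronous") or with random phases ("asynchronous").
        rng = np.random.default_rng(0)
        N = 1_000                         # assumed number of sources
        t = np.linspace(0.0, 1.0, 1_000)  # one second of samples
        f = 8.0                           # assumed theta-band frequency, Hz

        sync_phases = np.zeros((N, 1))                     # all in phase
        rand_phases = rng.uniform(0.0, 2 * np.pi, (N, 1))  # scattered phases

        sync_sum = np.sin(2 * np.pi * f * t + sync_phases).sum(axis=0)
        rand_sum = np.sin(2 * np.pi * f * t + rand_phases).sum(axis=0)

        print(f"synchronous peak  ~ {np.abs(sync_sum).max():.0f}")   # about N
        print(f"asynchronous peak ~ {np.abs(rand_sum).max():.0f}")   # about sqrt(N)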

    Cemi holds that when light enters our eyes to ultimately become a phenomenal image, various neurons synchronously fire to produce an EM field which is this experienced image in itself. It’s the theorized substrate of consciousness. The same goes for all that we feel / think. As far as I can tell nothing “of this world” exists without a substrate, which is essentially why I consider information patternism to not be “of this world”.

    So if all that we feel / think happens to exist as the electromagnetic product of certain synchronous firings, then how might we test that proposal? For this I’d love to know the number of synchronously fired neurons which is typical for brain function. Or better still, when scientists notice synchronous firing as someone finally becomes able to recognize what they’re seeing, for example, how many neurons do that firing?

    In any case if we were to serially wire an appropriate number of transmitters inside the head that each had firing energies similar to neurons, then we should be able to test McFadden’s cemi. If after many years of trying to affect a person’s theorized existing EM consciousness for self report we couldn’t, then the theory would grow more suspect. If true, why would we continually fail to get in the zone? But if we were to develop firing patterns that could reliably affect someone’s phenomenal experience in a number of ways, then it seems to me that this theory could become generally accepted. It wouldn’t quite answer the “hard problem”, but it would at least get science on the right track here.

    In October of 2020 I wrote to McFadden, suggesting that perhaps he has been too polite versus the many non-worldly but popular consciousness proposals on the market, and offered my thumb pain thought experiment to help validate his proposal versus theirs. He sent me a lovely short reply on the relevance of Searle’s work and made sure that I hadn’t overlooked his 2020 paper, which of course I hadn’t.

    Then in April 2021 I added to our correspondence by suggesting the above mentioned way to potentially test his theory. Unfortunately I didn’t hear back from him on that however. Is there anything wrong with what I’m proposing here?

    • James Cross says:

      I don’t know.

      How many transmitters? Where do you put them? What are characteristics of the EM field they emit?

      If they only produce a small field, then their effect might not be different from single neurons and be almost unnoticeable. If the field is large, then they might simply create a phenomenal chaos. If the field doesn’t “speak” the same language as the brain, they might produce chaos or nothing or something in between.

      Neurons in the brain also have synaptic connections to help synchronize their behavior. Unless the transmitters are tied to the synaptic network, they might transmit out of sync with the rest of brain.

      I haven’t even addressed the risk to the subjects of transmitters in the brain as well as risks associated with putting them in the brain and then later removing them. Some sort of surgery would be required, wouldn’t it? Would you volunteer for brain surgery to prove the theory?

      • Thanks for the feedback James. I do think I have some reasonable answers for those particular questions, though let me know. Others probably dismiss my proposal given those issues and more, but have been too timid to say anything. I’ll start with your final concern. Wyrd mentioned the human risk element to such testing as well, which I didn’t quite get into. Now I will.

        I think it might be productive to handle the human risk element by offering qualified people who already have scheduled brain surgery the option of also being compensated to include an implanted transmitter apparatus along with their scheduled procedure. Here the patient would be provided with no-strings-attached money (maybe $25,000?). Then if things went well, at a later date the person could choose whether or not to be clinically tested for hourly compensation (maybe $100/hour?). I presume the transmitter system could be built to be quite benign and left in the head for life. Greater risk would warrant greater compensation however. Note that less healthy and older people should tend to be less worried about the long term health effects of such an implant, and so might be interested enough in the money to gladly volunteer.

        Essentially what I imagine is an electrical cord that’s at least positioned under the skull, though perhaps some of it would also extend deeper if associated brain injury can be avoided. This cord should consist of one or more circuits of transmitters. Each circuit should be wired in series so that every transmitter in it fires at the same time. I don’t know how many transmitters would be implemented in a given circuit because (as I mentioned above), for standard brain function I don’t know how many neurons typically fire in synchrony. Whether tens, hundreds, or millions, that’s what I’m proposing the experiment would require. I suppose from the outside there’d be some terminals to connect to.

        As I see it each individual transmitter should fire a charge that’s typical of individual neural firing, and so such synchronous firing should create something that doesn’t get canceled out, since such combined energy levels shouldn’t be standard. The point would be to get as close as we can to typical brain firing synchrony. The theory is that if some of the brain’s synchronous firing creates, for example, the EM field of vision, then if we could produce waves that amplify and cancel certain elements of that specific field, a person should notice when their vision gets funky in various ways. Researchers should then see if such reports correlate with the auxiliary synchronous firing. Theoretically they’d learn about certain parameters of consciousness as an electromagnetic field, that is if consciousness does happen to exist by means of this medium.

        The point would not be to integrate this with brain function, but rather to disrupt standard phenomenal experience by means of nothing more than the right EM radiation produced in the right place. Initially I suppose synchronous firing would be initiated randomly, with a tremendous number of sequences tried very quickly to see if anything strange is noticed by the subject. And if something did ever seem to alter standard phenomenal experience in some manner, then there would be those conditions to base future attempts upon. Would such sequences generally produce that effect again, or would “one off coincidences” tend to be found? Note that scientists would monitor the subject’s existing brain activity for clues about what’s going on, so that certain conditions would trigger their firing machine. A rough sketch of that search procedure follows below.
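        Here is that sketch in Python. It’s purely hypothetical: both helper functions are stand-ins for whatever hardware interface and subject-report protocol a real experiment would use, and all the numbers are arbitrary. The structure just captures the idea above: generate random stimulation sequences, keep any that the subject reports as altering experience, then re-test those candidates to filter out one-off coincidences.

            import random

            def random_sequence(n_pulses=20):
                """Generate a random list of (delay_ms, amplitude) pulse commands."""
                return [(random.randint(1, 50), random.uniform(0.5, 1.0))
                        for _ in range(n_pulses)]

            def subject_reports_change(sequence):
                """Placeholder for driving the transmitters and asking the subject
                whether anything about their experience changed."""
                return random.random() < 0.01  # simulate rare chance reports

            def search(n_trials=10_000, n_retests=10):
                # First pass: flag any sequence that coincides with a reported change.
                candidates = [seq for seq in (random_sequence() for _ in range(n_trials))
                              if subject_reports_change(seq)]
                # Second pass: keep only sequences whose effect reproduces reliably,
                # filtering out the "one off coincidences".
                return [seq for seq in candidates
                        if sum(subject_reports_change(seq) for _ in range(n_retests))
                        >= 0.8 * n_retests]

            print(len(search()), "reliably effective sequences found")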

        It seems to me that whether successful or not, this should give associated scientists a taste of what it’s like to explore a falsifiable consciousness proposal. Mediumless theories seem unfalsifiable because science shouldn’t be able to test that which lacks a unique medium (such as electromagnetic radiation) in which to exist. So this kind of experiment might help right some fundamental misunderstandings that seem to exist in the field currently. I see this as the difference between worldly and non-worldly proposals.

        • James Cross says:

          Still not sure you would get many takers.

          Another issue that occurred to me later relates to determining definitively whether the effects are the result of the EM field or just of neurons firing.

          Since the artificial EM field will result in neurons firing, how would you know any alteration in subjective experience is caused by the EM field or just caused by the neurons firing?

          Since neuron firing and the EM field are interrelated, how to untangle the effects of each by itself?

        • James Cross says:

          BTW, regarding number of neurons and firing rates. I found this:

          quote

          But, if you press me for a back-of-the-envelope calculation, I’d say the best way to estimate the firing rate of a neuron is to come up with a potential range. Now, there’s probably been a bunch of research on the distribution of firing rates within various cell populations, and quite frankly, I’d only really believe that rate in the context of a particular activity you are interested in (rates can change dramatically between passive sitting and active participation in a task). But generally, the range for a “typical” neuron is probably from <1 Hz (1 spike per second) to ~200 Hz (200 spikes per second).

          To ruthlessly simplify, treating all 86 billion neurons in the human brain as copies of that single “typical” neuron, ignoring all of the glorious cellular specificity that characterizes the brain, we’re left with a range of 86 billion to 17.2 trillion action potentials per second.

          Let’s go back to the question of synaptic firing rates. Even though an action potential produced in a neuron is not guaranteed to produce release of neurotransmitter at a synapse, let’s ignore that point and assume the opposite. I’ve seen people quote a minimum number of synapses as 100 trillion (although I’m not clear where that number came from). So, let’s do our math again. 100 trillion synapses, each with an independent firing rate range of <1 Hz to ~200 Hz. So a range of 100 trillion to 20 quadrillion.

          Again, and I really cannot stress this enough, these numbers do not reflect what actually goes on in a human brain in any given second. The actual firing rate depends so much on what the brain is doing at that moment that back-of-the-envelope calculations such as the ones I just wrote down are (in my opinion) absolutely meaningless. But for what it’s worth, there they are. And if these numbers at least give us a range, you can imagine the sheer computational power that will be required to record all the neurons in the human brain.

          quote

          http://www.neuwritewest.org/blog/4541

          This doesn’t address how many are firing at the same time, but given the enormity of the numbers I am guessing the number is at least in the millions for most awake activities.
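          For anyone who wants to check the quoted arithmetic, the same back-of-the-envelope ranges fall out of a few lines of Python (the inputs are just the quote’s own assumed figures, not measurements):

              # Quoted assumptions, not measurements.
              NEURONS   = 86e9     # neurons in a human brain
              SYNAPSES  = 100e12   # oft-quoted minimum synapse count
              RATE_LOW  = 1.0      # Hz, lower bound for a "typical" neuron
              RATE_HIGH = 200.0    # Hz, upper bound for a "typical" neuron

              print(f"action potentials/s: {NEURONS * RATE_LOW:.3g} to {NEURONS * RATE_HIGH:.3g}")
              # -> 8.6e+10 to 1.72e+13   (86 billion to 17.2 trillion)
              print(f"synaptic events/s:   {SYNAPSES * RATE_LOW:.3g} to {SYNAPSES * RATE_HIGH:.3g}")
              # -> 1e+14 to 2e+16        (100 trillion to 20 quadrillion)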

        • Wyrd Smythe says:

          Indeed! I’ve pointed out numbers like this myself, and they are a key obstacle when it comes to brain uploading, simulating, or even just storing. (In the Amazon Prime show, Upload, digital human minds are transported on hardware drives as big as a large purse.)

          The fact that, at least with any computing hardware we can imagine now, simulations take so much more time, energy, and computing power than what they simulate strikes me as a good argument against the Virtual Reality Hypothesis. Simulating just the Earth and everyone and everything (let alone the universe) might take a computer as big as the Earth.

        • James Cross says:

          Also, I think I’ve referenced some research on insects before.

          But just for olfactory processing in locusts,

          “Odor-driven circuit interactions coordinate these neurons into widespread oscillatory synchrony [1], and transform representations of any given odor into specific, reliable, exuberant and temporally complex patterns of action potentials [3] distributed across the majority of the 830 or so projection neurons”

          These synapse on 50,000 Kenyon cells in which only a few react to any particular odor.

          https://www.sciencedirect.com/science/article/pii/S0960982207011207

          So even with just olfaction, and in insects, you’re up to something around a thousand neurons.

        • James,
          Given the number of people who require brain surgery each year, even a low rate of takers for this experiment would probably leave us with far more volunteers than we’d need or agree to use. Only the very most qualified of these should have any opportunity for such surgery and potential study.

          Regarding an artificial EM field potentially causing the firing of neurons, and that firing residing as a competing account for experimental results…

          Well firstly, here you’re proposing a potential issue to deal with only given some manner of successful experiment. As in “Our favored theory predicted that this sort of experiment would yield these sorts of otherwise strange results, so we seem to be validated so far”. Given the vast paradigm-shifting implications of experimental validation for McFadden’s theory, I’d expect all manner of critics vying to otherwise explain such findings based upon various technical details of what the initial results happened to be. Too much has been invested elsewhere for professionals in general to lie down given experimental validation for competing theories. If there were only one test subject initially, you can bet that there’d soon be dozens fitted with various types of sub-cranial transmitter apparatuses. I don’t for a second think it would be quick or simple for positive test results to be achieved given how complex brain function happens to be, but come what may.

          I’m sure that some would say that the artificial EM field was probably just causing other neurons to fire, and that coincidentally it was actually that firing which caused whatever phenomenal dynamic the subject happened to notice. But if it were continually found that various types of exogenous EM fields could tamper with various types of subjective experience, such explanations should progressively be dismissed as highly unparsimonious. How often would we be expected to believe that the field we create in the head inadvertently makes various other neurons do what we were proposing that the field itself might? In any case there’d be all sorts of challenges to contend with given initial success, though the job of experimental science is to come up with answers that seem most plausible.

          I’d expect neuroscientists who have studied how recognizing something in a picture (for example) tends to be associated with neural firing synchrony, to have a pretty good idea of how many neurons tend to do so. Surely McFadden knows. In any case such information would interest me very much in terms of the logistics of my proposed cemi test.

        • James Cross says:

          BTW, I happened to read an idea from Susan Pockett about how to test the theory.

          Basically it involves determining the wave form that is associated with pain, then transmitting the inverse of that wave form to a brain to cancel the sensation of pain. This could actually be done, I think, somewhat by trial and error. The wave form, assuming we are somewhat in the ballpark of the correct form, could simply be altered in some systematic way until a person in pain feels no pain.
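          The cancellation idea itself is just destructive interference, which is easy to sketch numerically. Here is a minimal illustration, assuming an entirely made-up “pain” wave form: adding the exact inverse cancels it completely, while a mistuned inverse (wrong amplitude and phase) only partially cancels, which is what would leave room for the kind of systematic trial-and-error tuning described above.

              import numpy as np

              # Invented wave form standing in for whatever field pain might correspond to.
              t = np.linspace(0.0, 1.0, 1_000)
              pain = np.sin(2 * np.pi * 40 * t) + 0.3 * np.sin(2 * np.pi * 90 * t)

              exact_inverse    = -pain                                    # perfect knowledge
              mistuned_inverse = -0.8 * np.sin(2 * np.pi * 40 * t + 0.2)  # rough first guess

              print(f"residual with exact inverse:    {np.abs(pain + exact_inverse).max():.3f}")
              print(f"residual with mistuned inverse: {np.abs(pain + mistuned_inverse).max():.3f}")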

        • I appreciate that Pockett introduced you, and through you me, to the idea of EMF consciousness. In practice however I’ve tended not to associate myself with her ideas, and often forget why. The current Wikipedia article on EM consciousness provides a reminder.
          https://en.m.wikipedia.org/wiki/Electromagnetic_theories_of_consciousness
          Fortunately it mainly goes through McFadden’s theory, to which I find myself repeating “yep… yep… yep…”. At the end however it notes that Pockett considers all consciousness to be the same consciousness, and that she doesn’t consider it causal for our actions. So to me she seems “exotic”, to be polite. Didn’t she once tell you that she’s been on a crusade to stop 5G phone technology?

          Anyway yes, if something exists as a wave then theoretically there should be the possibility for the opposite variety of wave to cancel it out, whether in the form of water, sound, light or whatever. I presume however that even if pain exists in EMF form, there is no unique signature of “pain”. Before we could cancel out any given variety, it seems to me that we’d first need to detect it in the head as such. That should be trouble given all the neural firing that goes on. Then from that point, if that specific field occurred in the person again, we’d need to be able to produce its exact opposite and get this energy in the right place. Remember that the skull exists as a quite effective Faraday cage, so the proper EM transmissions would need to be formed right in the head given the tiny energies involved. I’m not sure that Pockett acknowledges this constraint.

        • James Cross says:

          But your objections to her proposal are objections to your proposal. How are the transmitters supposed to generate an experience if we do not know the proper wave forms? In either case, we have to understand the existing wave forms to be able to interact correctly.

          At least focusing on pain cancellation has some big pluses.

          1- Volunteers who are in pain would be more likely to volunteer.
          2- Pain is probably (can’t say for sure) a simpler experience than something complicated like a complete visual or auditory scene.
          3- As long as we produce even a slight reduction in pain, the wave form could be tuned.

        • Well I’m certainly not objecting to my proposal. I’m saying that if someone is in pain, I doubt we’ll be able to identify that specific element of their EM field should the theory be true. That’s the main problem here. We’ve got no clue what’s what. I get around this by providing a method from which to potentially get a clue. If she has such a plan to help science figure this out, then I’m not aware of it.

          Let’s say that under my proposal we have a rigged-up volunteer in pain whom we hope to relieve. But instead we seem to create field effects that alter the person’s vision. If validated this would be an amazing success! And what if we instead cause the person to feel more pain? That should once again help teach us the parameters of consciousness as an EM field, if further testing demonstrates that we’ve altered the field to make this person feel worse. Maybe this person had thumb pain and we add toe pain. Success!
