Duke Study May Confirm McFadden Prediction

A study out of Duke threatens to throw into chaos the last decade or more of fMRI studies that correlate consciousness with brain activity.

Hundreds of published studies over the last decade have claimed it’s possible to predict an individual’s patterns of thoughts and feelings by scanning their brain in an MRI machine as they perform some mental tasks.

But a new analysis by some of the researchers who have done the most work in this area finds that those measurements are highly suspect when it comes to drawing conclusions about any individual person’s brain.

They also examined data from the brain-scanning Human Connectome Project — “Our field’s Bible at the moment,” Hariri called it — and looked at test/retest results for 45 individuals. For six out of seven measures of brain function, the correlation between tests taken about four months apart with the same person was weak. The seventh measure studied, language processing, was only a fair correlation, not good or excellent.

Finally they looked at data they collected through the Dunedin Multidisciplinary Health and Development Study in New Zealand, in which 20 individuals were put through task-based fMRI twice, two or three months apart. Again, they found poor correlation from one test to the next in an individual.

McFadden made eight predictions for his cemi theory of consciousness. Prediction number 8 was:

The last prediction of the cemi theory — that consciousness should demonstrate field-level dynamics — is perhaps the most interesting, but also the most difficult to approach experimentally. In principle it should be possible to distinguish a wave-mechanical (em field) model of consciousness from a digital (neuronal) model. Although neurons and the fields generated by neurons hold the same information, the form of that information is not equivalent. For instance, although a complete description of neuron firing patterns would completely specify the associated field, the reverse is not true: a particular configuration of the brain’s em field could not be used to ‘reverse engineer’ the neuron firing patterns that generated that field. This is because any complex wave may be ‘decomposed’ into a superposition of many different component waves: a particular field configuration (state of consciousness) may be the product of many distinct neuron-firing patterns.
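McFadden's decomposition point can be illustrated with a toy numerical example (my own sketch, not his mathematics): two distinct sets of component waveforms can superpose to exactly the same field, so the summed field alone cannot identify which set of sources produced it.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000)

# Two different sets of "source" waveforms (stand-ins for firing patterns)...
sources_a = [np.sin(t), 0.8 * np.sin(2 * t)]
sources_b = [np.sin(t) + 0.3 * np.sin(2 * t), 0.5 * np.sin(2 * t)]

# ...that superpose to exactly the same summed field
field_a = sum(sources_a)
field_b = sum(sources_b)
assert np.allclose(field_a, field_b)  # identical fields, distinct sources
```

Going from sources to field is a simple sum; going from field back to sources is underdetermined, which is the asymmetry the prediction turns on.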

The Duke study suggests that McFadden’s prediction may be confirmed, and that brain-mapping projects associating particular circuits with specific states of consciousness or activities may be somewhat misguided.

This entry was posted in Consciousness, Electromagnetism.

11 Responses to Duke Study May Confirm McFadden Prediction

  1. Steve Ruis says:

    If anyone were to find this study, I am not surprised it is you! Thanks for this.

    Actual progress in this field has only come as we developed the tools to measure the brain’s functions. But we first have to figure out what we are measuring and what it corresponds to … and that is not easy (e.g. quantum mechanics).


  2. The limitations of brain scanning technologies are well known. Read about them in a neuroscience textbook and they’ll be readily acknowledged. There may be some people claiming they can predict the actions of an individual with such scanning, but they seem few and far between. It seems like the article is mostly attacking a strawman.

    But the functional localization you often read about in neuroscience literature didn’t start with brain scans. It started with observing the disabilities of those with brain injuries, then correlating them with the location of brain lesions in post-mortem examinations. fMRI technology has just confirmed and sharpened the picture started by those decades of post-mortem examinations.

    In any case, I don’t think we’ll ever get a neuron by neuron account, not to mention a synaptic one, from scanning alone. That will likely require nanorobotics technology that may not exist for a long while.


    • James Cross says:

      I don’t know that it’s a strawman if you can’t get the readings to match for the same person, same activity, different times.


      • I think it’s a strawman because that’s a false standard. It’s well known that the brain is a stateful system. Processing changes its state, which in turn alters future processing. Not all circuits that get excited on a new task are going to be excited on the repeat, and other regions might get more excited as associations strengthen or atrophy. And that’s before we get into all the endogenous activity always going on.


        • James Cross says:

          This is about biomarkers for disease risk.

          “Identifying brain biomarkers of disease risk and treatment response is a growing priority in neuroscience. The ability to identify meaningful biomarkers is fundamentally limited by measurement reliability; measures that do not yield reliable values are unsuitable as biomarkers to predict clinical outcomes”.


        • I think he’s right that the technology isn’t up to those kinds of determinations. But I don’t know that most scientists working with it would have said it was.


        • James Cross says:

          It could be there is just a lot of noise in the fMRI. If we’re running an fMRI on somebody looking at a screen projecting birds flying, the fMRI may be picking up everything from the person running the experiment, how the seat feels, what clothes were worn, and what the person ate for lunch. If you run the same experiment at a later time, nothing is going to match because most of what the fMRI was showing was noise. It might not even match in the visual areas if there is a lot of cross-contamination of other senses and other factors with what is happening in the visual area itself. It could be that even when doing a primarily visual task the brain pulls into the visual area aspects of the entire set and setting to create the integrated experience.


        • I think that’s right. There are also inherent limitations in the way fMRI works, using blood flow, where there’s always a temporal delay of a few seconds. This provides excellent spatial resolution, down to a voxel, but poor temporal resolution.

          EEG / MEG is typically the opposite, registering changes in milliseconds, but with spatial resolution only as accurate as the number of electrodes that can be fitted on the scalp. (Although in surgical cases, they have been inserted into a single neuron before.)

          fMRI is still useful for finding which brain regions are involved in certain tasks, but it has to be averaged across numerous people and sessions, with an awareness of the types of factors you mention.


  3. My sympathies lie with Mike’s take on this, though I doubt there’s any true straw-manning here. It seems more likely that Duke professor Ahmad Hariri has been misled. I’ll explain.

    Karl Leif Bates of Duke Today says: “Hundreds of published studies over the last decade have claimed it’s possible to predict an individual’s patterns of thoughts and feelings by scanning their brain in an MRI machine as they perform some mental tasks.”

    Do they state this explicitly? Mike implies that this isn’t actually believed, and I do hope he’s right. Professor Hariri implies not only that he used to believe this, but that it’s at least implicitly believed in the field in general. Thus, with a new study debunking that perspective, he says that he’s “throwing himself under the bus”. So either just a few professionals have been misled in this way, which is still not good, or it’s worse.

    Beyond the provided study, why should we not believe that fMRI monitoring of brain function would ever “predict an individual’s patterns of thoughts and feelings”? Not only should we expect such blood flow data to be too coarse to gauge specific neuron firing patterns, but when a person under such monitoring is asked to do a cognitive task, there will be an assortment of ways to go about it. For example, when we are asked to evaluate whether one shape is bigger than another, there’s not just one algorithmic process that’s taken, but any number could be tried at a given point in time. (I personally would be inclined to cheat and pull out a tape measure.)

    So anyway, it seems to me that one moral of this story is that neuroscience today is in need of guidance from fields which supervene upon it, such as psychology, to work out what it is that neuroscientists should be exploring.

    But wait… Isn’t Hariri both a neuroscientist and a psychologist? Exactly. Because the field of psychology has not yet developed an effective broad general theory regarding our nature, it hasn’t yet been able to support neuroscientific ventures into consciousness. Thus we get one joke after another. I’m all for McFadden’s cemi, but given support from the psychology based model that I’ve developed.


    • James Cross says:

      I agree with a lot of what you are saying, but I can’t see how it is all that sympathetic to Mike. Functional MRIs are used extensively in all sorts of studies. Even if we take claims about reading minds with them with a grain of salt, they are used ostensibly because they are showing us something useful. But with this study it is unclear exactly what they are showing us, or how useful and reliable it is. If the circuits for the same task for the same person are not consistent, then the relationship between the circuits and what is going on with consciousness and behavior is much more problematic. Keep in mind that this study analyzed the retests on the Human Connectome Project, which is supposed to map connectivity within the healthy human brain, and found the same problems there.

      My claim about this validating the McFadden prediction may be a stretch but this study is certainly pointing to there being something fundamental missing in current approaches. Maybe that broad general theory you speak of will begin to fill in the gaps.


      • James,
        Though Mike may have gone too far with that “straw man” accusation, when I say that my sympathies are with him, I just mean that fMRI should be considered a rough form of measurement regarding brain function. So if considered from the proper context, fMRI should indeed have its uses. That’s of course the case for all forms of measurement.

        Apparently Hariri and some professionals have considered fMRI to be much more than it is. So a study concluding that we’ll never use it to understand what someone is thinking shouldn’t be too much of an epiphany for most. I guess it depends upon how many professionals thought otherwise. But if Mike considers the number low, then to me that does at least seem like it’s probably the case.

        (As I see it, connectome work shouldn’t be too misguided, that is if taken from a broad enough context. The “Jennifer Aniston neuron”, for example, is obviously just one of countless conditioned placeholders. Such function should never be mapped out for a person with too much detail, though there should be an increasing grasp of which parts of the brain tend to do what, along with certain associations for specific neurons.)

        Where you and I agree, and Mike is still hesitant, is that the medium by which existence is experienced (or “consciousness”) shouldn’t be considered “brain software”. I met a UK professor by the name of Mark Bishop over at Mike’s in October, and he’s a strong supporter of John Searle. Actually, Mike did a post on a TEDx talk that he gave back in 2016. https://selfawarepatterns.com/2016/02/04/panpsychism-and-definitions-of-consciousness/

        With this presentation Mark essentially went over people’s heads. Instead of his intended “reductio ad absurdum”, apparently he was taken as an actual panpsychist! Of course panpsychism has gotten more and more trendy, so this miss is understandable for people who didn’t already know Mark.

        Anyway he and I got on pretty well in Mike’s October post on mind uploading. Thus I’ve been thinking about sending him an email to potentially gain his interest on the latest iteration of my “thumb pain” thought experiment. It seems to me that he and his embodied cognition friends will need to make their arguments more “in your face” rather than continue depending upon subtle academic inferences. If the right people had my thought experiment at their disposal, then it could be that a serious breach in this element of the status quo would finally emerge.

        So that’s something that I’ve been fantasizing about in recent months. The chances may seem long, but science should eventually get to my position if my ideas do happen to be as good as I suspect they are, and with or without me. I’d of course like to speed this process along, and so be included! If nothing else, I do continue to have fun with this stuff.

