Making certain that a new airplane's wing and fuselage design works as expected is critical before the airplane ever flies. It also isn't easy. The dynamics of air flow are complicated, and the number of variables is so enormous that computing the fluid dynamics for a plane has been beyond the capacity of most computers. Until recently, simulations of this sort generally had to be run on government supercomputers. The Wright Brothers solved the problem by flying models of their airplane tethered to the ground, over many years and many designs, until they perfected the wings used in the first powered flight in 1903. Even before the Wright Brothers, in 1871, the British engineer Frank Wenham invented the first wind tunnel to perform the same kind of testing at a small scale: a 3.7-meter-long square tube that sent air through it at speeds up to forty miles per hour. A miniature model of the plane or wing could be placed in the tunnel and measurements made of its aerodynamic performance. A smoke gun could also color the air and make the air flows visible. The wind tunnel became the go-to method for testing wing and airplane aerodynamics for almost every plane designed and flown since. Even with advances in computational fluid dynamics and computer power, wind tunnels are still in use. NASA has one over 430 meters long.
Aeronautical engineers, without computers and with only a partial understanding of aerodynamics, solved the problem of designing planes that could fly by creating a model. It was an exceedingly practical approach to a complex problem. Instead of doing basic science until achieving a full understanding of fluid dynamics, then inventing supercomputers to do the calculations, engineers tried their models out to see how they worked. Could the evolution of consciousness be a similarly practical solution to a complex problem – the problem of surviving and thriving in a complicated world?
Computational models are a major paradigm for understanding how brains work. The brain itself, with its network of neurons, has in turn become a model for attempts to simulate intelligence with computers. The comparison between brains and computers is more than superficial. Neurons fire and trigger other neurons based upon inputs from other neurons. In a crude sense, neurons function like digital switches and their impulses like the 1’s and 0’s of digital computation. A neuron is either on or off. It fires or it does not. A key part of human brain activity is taking information from the environment in the form of sensory impressions and processing it. We learn to separate cars from bicycles and lions from mice, and we take actions based on our perceptions. We might try to run from a lion that is not in a cage. We might shriek if we see a mouse, but more likely we will ignore it or make a note to pick up some mousetraps at the hardware store. Artificial neural networks can be fed training data and learn to perform impressive feats of image recognition and prediction even though it is not exactly understood how they work. Claims of sentience for sophisticated artificial neural networks might be disputed but are no longer dismissed out of hand. A Google engineer had to be put on leave after claiming the company’s AI chatbot had become sentient. While most experts in the field of artificial intelligence dismissed the claim, many of those same experts, though disagreeing about when, expect there will eventually be an AI sophisticated and powerful enough to be declared conscious.
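The neuron-as-digital-switch idea can be sketched as a simple threshold unit. This is a minimal, hypothetical illustration, not a claim about real neurons, which integrate inputs over time and are vastly more complex:

```python
# A minimal threshold "neuron": it fires (1) if the weighted sum of its
# inputs exceeds a threshold, otherwise it stays silent (0).
def fires(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two excitatory inputs and one inhibitory input (weights are made up).
print(fires([1, 1, 0], [0.6, 0.6, -1.0], 1.0))  # → 1 (fires)
print(fires([1, 1, 1], [0.6, 0.6, -1.0], 1.0))  # → 0 (inhibition wins)
```

Chain enough of these units together, with weights adjusted by training data, and you have the essence of an artificial neural network.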
What we can observe happening in brains is clearly different in major ways from what we observe happening in digital computers. Silicon chips do not look like brain matter. In outward behavior, however, a Super AI might be indistinguishable from a human being. It could pass the Turing Test. For every “trick” question we might devise for the Super AI, its training data and layers of neurons could produce a response we would accept as one a conscious human could make. That the Super AI should be declared “conscious” based on outward behavior alone rests largely on the naive idea that consciousness is just a list of functions: once all the boxes are checked, the Super AI can be declared conscious. The only real answer to the consciousness question for Super AI is that the question is unanswerable. We cannot share the conscious experience of a Super AI, just as we cannot share the consciousness of another human being.
The more interesting question might be how human brains do now what Super AI eventually will do. Human brains process information, but they do it with complex organic chemicals and ions. They are powered by glucose from blood pumped by hearts. Human brains, unlike AIs, do not perform a small set of computing tasks. Brains have a large role in running the entire organism, from idle thoughts to deciding what to eat for dinner. They do it without any circuits connected to a power grid. The human brain uses an estimated 20 watts of power. IBM’s supercomputer Watson runs on 20,000 watts, and nobody is claiming it is sentient. If we extend sentience to animals with smaller brains than humans, like crows, cats, dogs, and chimpanzees, the power usage and neuron count for living brains is even less and the difference with AI more dramatic. What is more, neuron firing rates, the presumed information transmittal mechanism, are actually very slow compared to instruction processing in digital computers. Even rapidly firing neurons spike only around a hundred times a second, and many neurons seldom fire or take seconds to fire.
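The power gap can be made concrete with a back-of-envelope calculation. The 20 W and 20,000 W figures are from the text; the roughly 86 billion neuron count is a commonly cited estimate and is assumed here:

```python
# Rough power comparison between a human brain and IBM's Watson.
BRAIN_WATTS = 20          # estimated brain power draw (from the text)
WATSON_WATTS = 20_000     # Watson's power draw (from the text)
NEURONS = 86e9            # commonly cited neuron count (assumption)

print(WATSON_WATTS / BRAIN_WATTS)   # → 1000.0, Watson draws 1,000x the power
print(BRAIN_WATTS / NEURONS)        # a fraction of a nanowatt per neuron
```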
How does the brain do it?
One explanation for these observations is parallelism. The human brain is sometimes described as massively parallel. That means it may be slow, but it makes up for it by doing a lot of different things at the same time. While this could account for some of what the brain does, parallelism has its own problems, especially when implemented in digital computing. Strictly speaking, a digital computer is always sequential. It executes instructions one by one in time, with the results of previous steps available to following steps. The best it can achieve is simulated parallelism through the creation of parallel threads, with each thread executing sequentially. Not all computational problems lend themselves to this type of parallelization. Generally, only computations that are completely independent, or that can be divided into independent steps whose results are later combined, gain from parallelism. Information sharing among the threads is another problem. Coordinating the parallel threads, so that threads dependent on each other can signal when their computations are complete, is yet another overhead. In data processing, there are frequently diminishing returns in adding new threads to a computing task.
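The diminishing returns from adding threads are captured by Amdahl's law, the standard formula for this effect (the 90% parallel fraction below is an assumed example, not a figure from the text):

```python
# Amdahl's law: overall speedup from n threads when only a fraction p
# of the work can run in parallel; the serial remainder caps the gain.
def speedup(p, n):
    return 1 / ((1 - p) + p / n)

# With 90% of a task parallelizable, the speedup can never exceed 10x,
# no matter how many threads are added.
for n in (1, 2, 8, 64, 1024):
    print(f"{n:>5} threads: {speedup(0.9, n):.2f}x")
```

Going from 64 threads to 1,024 buys almost nothing here, which is exactly the diminishing return described above.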
Another explanation is that the brain may be computing in a different way from digital supercomputers, even if it also does a significant amount of simulated parallel processing. Or, at least, it may be computing in more than one way and can successfully hybridize multiple techniques into a whole. A simple organism with a light sensor and a reaction circuit that makes it swim towards the light might be a successful adaptation implemented with a neural-like circuit best categorized as digital. Similarly, many functions of the human brain, especially reflexes and automatic responses, may also be done with simple digital-like circuits. The question is how much this mechanism scales up as organisms become more complex, their number of senses and demand for acuity grow, and the demands increase for more nuanced perceptions and behavior in a competitive environment with similar organisms. The demand for more energy for more neurons and faster firing rates had to be balanced against any competitive advantage to be gained from additional processing power. The digestive tract itself limits the amount of energy that can be expended by an organism and, thereby, also limits the amount of energy that can be expended by the brain.
Evolution, instead of adding more neurons to a digital network, must have figured out a different sort of solution to the energy crunch/computing problem. Information processing is not all of one sort. Classical computing works with distinct states: ones and zeroes, on or off. Non-classical computing is analog and works with quantities that are not discrete. They can be infinitely divided, like the real numbers between zero and one. Non-classical computing is also distributed, probabilistic, and parallel. It is truly parallel, not simulated parallel, so the problems with digital parallelism are avoided. While non-classical computing has become associated with quantum computing in modern science, there are other forms of non-classical computing that are not quantum based. A model is an example of non-classical computing. We can change the speed of air passing by the airplane model in the wind tunnel over infinite gradations of speed and see immediately the changes in turbulence and air flow. No high-powered computing is required.
I am suggesting evolution built a model, but it is not just any kind of model. A statistical simulation or abstract model would run up against the same issues airplane designers modeling airflow would have had before computers. Not only were the equations too complex and fluid dynamics not completely understood, but the model would also require additional computing resources that did not exist. The solution for airplane designers was to make a small version of the plane and blow air past it to see how the design worked. The solution for evolution was to build a physical model too, but the genius is in what and how it is being modeled. It models reality external and internal to the organism, including the physical body, its own self, and its relation to other objects in space and time. It is a model that is self-modifying – it grows with experience – and it has connections to the motor systems of the body to control interactions with the world. The model the brain creates is not a miniature model of the world like the miniature airplane in the wind tunnel. There is not a miniature tree that forms in your brain when you look at a tree. The model is a representative model that behaves sufficiently like the actual world that it can be, and almost always is, treated as the real thing.
The evidence for this model is hiding in plain sight. We need not point to brain scans, physics, or information theory. The primary evidence is your own conscious experience itself. Your conscious experience is the model. It may be difficult to appreciate how literally this is meant. Consciousness is a model of the world with yourself in it. That the model projects a reality externally from the body (and the brain) adds to the illusion that what we see, hear, and experience is exactly what is there instead of a model of what is there.
The biggest hurdle to understanding this concept is understanding that you, your body, your room, the tree outside your window, the clouds in the blue sky at a distance, all you remember about the past or imagine about the future, are parts of the model. You are not seeing a tree outside your window. You are seeing in your model a representation of a tree outside your window (which is also part of the model). The thumb that hurts if you accidentally hit it with a hammer is not your real thumb. It is your model of the thumb. What is more, it is not just the tree or the thumb that is in the model, it is the “you” seeing the tree or feeling the thumb. Our self is embedded in the model, not separate from it.
Your model of reality is not immutable or fixed. It is built on the physical structure of the brain and senses, and possibly on subtle differences in patterns of neural firing. Training and experience can modify the model. A sufficiently trained sense of taste and smell might permit identification of the vineyard a wine comes from, whereas untrained senses might not distinguish a French wine from a California one. Ingestion of psychedelics and other psychoactive drugs will change the model. Aging can drop or distort memories from the model. A genetic abnormality that makes the cone cells in the eyes unresponsive to color will mean the model will not contain color. Damage to the visual cortex might drop all sight from the model; the brain will adapt and create a new one, likely developing a degree of echolocation ability. Losing hearing as well might mean an even more impoverished model, but it will be a model nonetheless, based on the best, most salient information available. The model will bend the available information into a coherent whole because that is its function: to assemble a picture of the world so concerted action can be taken in it.
That our model of the world is dependent on learning, experience, and the physical structure of the brain and senses may seem a trivial observation. It may also mean that even slight, almost undetectable differences in brain structure could result in world models that are significantly different. Even if my “blue” is the same as your “blue” (something that some scientists dispute), other aspects of our consciousness might be quite different even if we both have all our senses and are normal psychologically and physically. The fact that our own models can change significantly with age, experience, and ingestion of drugs suggests that we may each live in unique islands of experience even while behaving outwardly in ways that reflect a common consensus on reality.
This probably should not be a surprise, since we can readily observe different approaches to the world in others simply from their behavior and expressed beliefs. Some believe in God or life after death, or see a world guided by spirits and hidden forces. Others are stone-cold realists. Physicists cannot even agree on how to interpret quantum mechanics. The models of some may drive them to pick up a rifle and gun down innocent people. The models of others may lead them to immolate themselves in protest at the Supreme Court. Such divergence suggests there could be differences even in the basics of how the world and its objects look and seem to our conscious selves.
One mystery is how brains evolved from simple digital processing to create a model of reality, and the answer to that might partially answer another mystery: how brains physically generate the model. This second mystery, of course, is equivalent to Chalmers’s famous “hard problem” of consciousness.
If speed of processing and economizing power were driving forces in the evolution of the brain, placing neurons closer together would generally be preferable to farther apart. That is why the neurons for consciousness are concentrated in the brain rather than spread randomly throughout the body. Placing relatively active cells, like neurons with their complex structure of dendrites and axons transmitting impulses, near each other could have side effects. These side effects include not only ionic and chemical effects but also electrical effects and small electromagnetic fields. Not only the distance between neurons, dendrites, and axons but also their physical arrangement and the temporal relationship of neuron firings to each other could produce effects. Just as the molecules of air striking each other in the wind tunnel produce patterns of turbulence, neurons placed close together might in effect generate patterns of turbulence. This side effect might be turned to evolutionary advantage if the patterns could effectively carry additional information to perform non-classical computing. The computing task that would offer an organism the most generality would be to generate a model of spacetime and the organism’s place in it. The spatiotemporal relations in the brain model spacetime. The turbulent patterns of information could, in effect, be the model that is consciousness.
If consciousness is at least in part a product of the spatiotemporal relationships of neurons, their parts, and their firings, then we know consciousness can be changed. Dendritic reorganization is a regular occurrence in the brain and has been associated with learning. Significant reorganization in neural structure occurs after traumatic injury or experience, during intense learning such as occurs during childhood, and after transformative experiences like mystical or near-death experiences. The model that picked up the rifle and killed a dozen people could become a different model, but understanding how it got to be the way it was, and how to change it, is going to be complicated.
Woodford, Chris, “Wind Tunnels,” accessed July 8, 2022, https://www.explainthatstuff.com/windtunnel.html.
Unattributed, “Google Engineer Put on Leave after Saying AI Chatbot Has Become Sentient,” accessed July 9, 2022, https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine.
DiCarlo, James J., “To Advance Artificial Intelligence, Reverse-Engineer the Brain,” n.d., https://science.mit.edu/reverse-engineer-the-brain/.
Unattributed, “Neuron Firing Rates in Humans,” n.d., https://aiimpacts.org/rate-of-neuron-firing/.
Anderson, David Leech, “Computer Types: Classical vs. Non-Classical,” accessed July 10, 2022, https://mind.ilstu.edu/curriculum/nature_of_computers/computer_types.html.
Wolchover, Natalie, “Your Color Red Really Could Be My Blue,” accessed July 9, 2022, https://www.livescience.com/21275-color-red-blue-scientists.html.
Cameron, Chris, “Climate Activist Dies After Setting Himself on Fire at Supreme Court,” accessed July 10, 2022, https://www.nytimes.com/2022/04/24/us/politics/climate-activist-self-immolation-supreme-court.html.
I think we are all learning how to fly by the seat of our pants right now. Is that a fair comparison?
I like it.
30 years ago Francis Crick got beyond brains speculating about brains speculating about brains, by formulating the problem in concrete terms that could be tested in primates (Crick, “Astonishing Hypothesis”, 1994). Subsequently he and Christof Koch (2003) and many others have explored this practical approach, and a large literature has been generated, complete with discrete models which are being tested (Boly et al, 2017). Certainly the introspective approach pioneered by Descartes and others is still interesting, but speaking frankly, it has not produced much. The experiential approach that Francis Crick pioneered has led to a large number of scientists working on the problem, complete with dedicated research projects that have already brought more real progress than the previous 400 years, as evidenced by a large and growing literature.
Unfortunately I think Crick got hung up looking for the neural correlates of consciousness which will never be found because there isn’t one or even a few parts of the brain where consciousness happens.
I follow the work of Christine Koch and others, and it looks to me like they are making progress. I’m a materialist, so my money is on Crick’s original intuition. It won’t be solved in my lifetime, but in the next 100 years I would guess.
Do you mean Christof Koch?
If you understand my view then you know it is also materialist. I think I emphasized quite clearly that consciousness is a physical model.
James Cross. So you understand the scientific method better than Francis Crick? Cummon? He received a Nobel Prize for his efforts on the structure of DNA. Earned another one for his efforts in figuring out the code for amino acids, they just didn’t give it to him.
Your assertion that ” I think Crick got hung up looking for the neural correlates of consciousness which will never be found because there isn’t one or even a few parts of the brain where consciousness happens.” reflects your lack of understanding of the Scientific Method. He spent the rest of his life searching for the NCC because he understood that posing a hypothesis is just the first step. The real work begins when you set out to disprove it. He did not succeed, but inspired a legion of scientists to continue the quest, which is ongoing. If you have in fact disproven his hypothesis, please point me to your published papers showing that.
It isn’t my job to disprove the hypothesis that there are particular places in the brain where consciousness happens. It was Crick’s job to prove it and he did not. Show me the paper where he or anyone else did prove it.
Sorry, try reading Karl Popptater. Nothing can be proven, but things can sometimes be disproven. Popper’s Conjectures and Refutations is my favorite.
Is a Popptater a hybrid between a potato and a poppy? 🙂
BTW, don’t know if you know this but Karl Popper actually subscribed to an early variation of electromagnetic theories of consciousness. And some form of duality that seems even weird to me.
Yes but Popper was very clear that you cannot prove theories, they remain tentative pending new observations. But you can disprove them. This is particularly clear for consciousness.
I think I agree with that Santacruzred — successful theories must forever survive new evidence. Conversely unsuccessful theories may simply be dismissed where evidence is shown not to support them. Still it seems to me that there’s something else which generally goes on for popular consciousness proposals. They position themselves to be unfalsifiable.
One example would be theistic proposals. Given the theorized magic that’s involved, how might worldly evidence ever be used to demonstrate that consciousness isn’t god based? It’s the same for straight substance dualism proposals I think. If various popular people on the consciousness circuit (like David Chalmers) want to believe that consciousness arises through magic then fine, though I’d say that science ought to be defended from such spookiness. So we should institute both “natural” as well as a “natural +” varieties of science so that true naturalists could disregard such notions as well as have a place to banish any in their club who stray.
Then there’s the now quite popular panpsychism faction. Some think they’re able to circumvent any “hard problem” by asserting that all elements of reality subjectively experience their existence, even if quite minimally. It’s an unwarranted faith that’s impossible to counter through science. One thing I’ve asked them to observe is that the effects of certain drugs on the brain render us non-conscious for a while. From here some are forced to admit that human consciousness is different from the theorized “experiential white noise” that they propose all elements of reality to have. Thus their unfalsifiable consciousness solution answers nothing.
I’d say that the greatest failure here today however is a spooky notion often referred to as “computationalism”. Apparently John Searle’s Chinese room was not effective enough and so the perspective of charismatic figures like Daniel Dennett have now become entrenched. The position is that our brains create consciousness by means of information processing that needn’t animate the function of any consciousness output mechanisms at all. Thus here we’re ultimately just running streams of code and so might just as well be uploaded to a vast human made computer. It’s a magical position because in a natural world computer information processing requires output mechanisms in order for it to do anything, such as create a subjective experiencer. Furthermore it’s inherently unfalsifiable because with any failed attempt to create a subjective experiencer this way, one could always claim that we got the information processing wrong.
Then beyond all these unfalsifiable/otherworldly consciousness proposals, there is Johnjoe McFadden’s proposal that the brain creates a subjective experiencer by means of the proper neuron produced electromagnetic radiation. This is falsifiable because here a physics based dynamic exists to potentially test. If your consciousness exists as such, it should then be possible for scientists to tamper with your consciousness by means of outside EM radiation of the right parameters that they produce, since waves of one variety tend to be altered by other waves of that variety. Thus you should be able to report this and so validate his theory. But if scientists were to try very hard to do so and yet always fail to tamper with someone’s consciousness this way for oral report, then consciousness should not exist by means of neuron produced EM radiation.
Theories can only be disproven if they are falsifiable. Not all science is falsifiable. A falsifiable theory might be preferred but not all science lends itself to generating such theories. We can have theories that can’t be disproven but yet might still be superseded if a better or more parsimonious theory comes along.
The problem with Crick’s search for the NCC is that no evidence has been found for it. I think consciousness arises when a critical mass of neurons begins to fire in an oscillating, cooperative manner. The number of neurons, the types of sensory input to the neurons, and the ability of the neurons to generate actions in the world control the nature of the consciousness. So, in my view, consciousness could be found in many sizes and types of brain, but it would vary with the size and type of brain. It wouldn’t be found in a particular part of the brain like the claustrum, as Crick once suggested.
I’m onboard with most of this. Of course, I do have a couple of quibbles. 🙂
“You are not seeing a tree outside your window. You are seeing in your model a representation of a tree outside your window (which is also part of the model). The thumb that hurts if you accidentally hit it with a hammer is not your real thumb. It is your model of the thumb.”
I think a better way to describe this is you are seeing the tree outside your window and your real thumb is hurting, but the models are the mechanism that is the seeing or the hurting. Saying we see the model implies there’s a separate seer. But if so, then what is the mechanism by which the seer sees? With an obvious danger of regress here, which we avoid if we just see the models themselves as the seeing or hurting.
“Natalie Wolchover, “Your Color Red Really Could Be My Blue,””
I don’t really understand how Wolchover reaches that conclusion. When we discussed this experiment a few years ago and I read the actual paper, I came away with the opposite impression. There is only the signaling in relation to the world and the effects it generates in us. To say that my red and your red are different, even though all the effects are the same, I strongly suspect, is meaningless. This seems especially true if we learn colors. Although I’ll admit that the intuition that there must be some fact of the matter is very powerful. But if your red can be different from mine, then what prevents your red from one part of your fovea being different from your red in another part? How is it that all parts of our visual field perceive the same color under the same circumstances?
I’m still not onboard with the electrical field business, but who knows what evidence might come up.
Interesting post James!
With your first quibble I think I will stick with my description. The object viewed can’t be disconnected from the model. Whether the object is even seen or not is dependent on the model. I’m not denying there is something maybe like a tree (or maybe not so much) out there.
See change blindness for example.
I’m not sure whether Wolchover reached that conclusion or whether she was just reporting what the researchers thought. Ultimately like the consciousness of Super AI, it isn’t answerable. However, we certainly know that people see (or don’t see) colors in different ways from the varieties of color blindness which can include reduced sensitivity to red, green, or occasionally blue light. We also have the rare condition where no colors are seen. It would certainly make sense that there could be more subtle differences between apparently normal individuals.
“I think a better way to describe this is you are seeing the tree outside your window and your real thumb is hurting, but the models are the mechanism that is the seeing or the hurting. Saying we see the model implies there’s a separate seer. But if so, then what is the mechanism by which the seer sees? With an obvious danger of regress here, which we avoid if we just see the models themselves as the seeing or hurting”.
To be clear I explicitly said the “you” is part of the model not separate from it.
What you seem to be doing is separating the model from the experience as if consciousness is something generated by the model. Whereas, my argument is that consciousness is the model.
We both start with something external that our eyes see that will become a phenomenal tree. From there, if I understand your approach, we take different paths.
In yours, neurons execute a set of algorithms that constitute the model, and the phenomenal tree is the output of the model. The algorithms are abstract and can be implemented on any computing device. In this approach, there is no evolutionary advantage to be gained from outputting anything, because the model has already determined it is seeing a tree. This also falls deep into the “hard problem” because there is no physical spot where the experience takes place.
In mine, neurons create the model and the tree is in it (as well as our bodily relationship to it). The phenomenal is not the output of the model but is the model. The model is its own physical implementation in the brain that appears to us as consciousness. It is not something that can be abstracted from the brain. The phenomenal tree is the physical tree in the physical model of consciousness.
Change blindness seems like a special case of inattentional blindness. Unattended things don’t seem to get the full modeling treatment, at least not in any sustained manner long enough to have causal effects in memory or the executive and language centers. But I guess a lot depends on how we define “model” and “see”.
I am a computationalist who thinks there’s no barrier in principle to a machine eventually having functional consciousness. But other than that I can’t really recognize my position in your second reply.
Wolchover does quote Neitz, one of the study authors, as saying that the common emotional effects of particular colors don’t necessarily mean we perceive them the same way. Although to me it seems like more of a cautionary remark.
The paper itself only addresses color experience briefly in this interesting snippet:
Explain where phenomenality happens in your position? What is its physical basis?
Your functional position has no place or need for phenomenality.
Of course, a machine could and eventually will have functional consciousness. Just like we can probably now do away with wind tunnels in favor of computers and wave dynamics equations. We can do exactly what a wind tunnel does on a computer without building the tunnel or the model.
That may be a question that interests you but it is only of minor interest to me. The question, as I said, is how do brains do it.
I’m not sure why you are obsessing over Wolchover since it was an extremely minor part in a clause in a parenthesis where even I acknowledged we might see the same or similar blues.
The snippet you quote is mostly the same as my own understanding. It seems the brain can learn new colors. Does everybody learn them in the same way?
I haven’t read this yet, but could it be “the answer, my friend, is blowin’ in the wind?”
My bet is that at some point in time the Spiritual teachers will have the answers that the logical mind cannot figure out. Life will get truly interesting when humanity has to use both sides of its brain to accomplish the goals of the future. Seeking answers from either the left or right brain will not be enough.
A good bit of this I think is implicit from The User Illusion book you recommended a few years ago. I happened to dig that book out recently. Thanks for that recommendation.
My basic paradigm for all of this is the obvious evolutionary advantage of being able to remember things. Seeing a lion eat your brother and not remembering it means you are more likely to be eaten by a lion. So, a capacity to remember is a vast, vast evolutionary advantage. We see that a great many species possess this capacity.
After memories, the next big advantage came to be imagination. Imagination can be considered to be the ability to create memories that are speculative, untrue, however you might want to construe that. Being able to imagine scenarios and then see how they play out in your mind allows us to create multiple pathways through the future and to control the risks we take moving forward. Of course, some ability to make choices, to analyze the options is also needed, but getting from memories, to false memories, to analytical abilities makes for much smaller jumps in capacity than from dumb animal to fully conscious human being.
Our imaginations are your wind tunnels.
Excellent essay Jim, I think you are spot on…… Good work!
After I wrote this, some of your comments about subject/object fell into place. Both the external world (object) and our internal world (subject) are just part of the same model. They appear differently but at core they are the same substance. Effectively I am suggesting a sort of limited solipsism (although you may have a better term for it than that). There is a reality-itself but effectively we cannot know it because the only world we know is our model of it. As a species we can cross-check our models with others of our species, so our models seem to tend towards a limited consensus. But we still could have (and likely do have) species-specific blind spots.
For our own experience, a limited solipsism is a good way to express it. Plus, it fits well with Carlo Rovelli's RQM because every system without exception lives in a quasi-solipsistic state. So maybe instead of labeling it a limited solipsism one could express it as quasi-solipsism, because it appears to be solipsistic and yet it isn't, since our understanding is limited by our blind spots as a species.
I know you’re not a metaphysics kind of guy but, SOM is a metaphysics that by nature dominates our world view whereas like you said:
“There is a reality-itself but effectively we cannot know it because the only world we know is our model of it.”
In principle, this statement corresponds with reality/appearance metaphysics (RAM), where our model is the appearance and not the reality-itself. SOM itself is a huge blind spot, one that cannot be overstated simply because it is the prism through which we view the world. It's not like SOM is some set of rules that we follow or obey. SOM was codified by Plato, Aristotle and their Greek cronies; and as a way of rationalizing, SOM is intrinsic to our nature as a quasi-solipsistic system.
Great work Jim…….
If I had to choose between “limited” and “quasi” I would probably go with “limited” but both terms approach what I was trying to get at.
“Just like the molecules of air striking each other in the wind tunnel produce patterns of turbulence, neurons placed close together might in effect generate patterns of turbulence. This side effect might be turned to evolutionary advantage if the patterns could effectively carry additional information to perform non-classical computing. A computing task for an organism that would offer the most generality would be to generate a model of spacetime and the organism’s place in it. The spatiotemporal relations in the brain model spacetime. The turbulent patterns of information could, in effect, be the model that is consciousness.”
This is an interesting analogy; however, just like with any analogy, it’s difficult to assert that what we know about fluid dynamics can even be close to the dynamics that occur in something as complex as the brain. In this hierarchy of increasing complexity coextensive with energy conservation, I think a quantum field would be the best choice for the model that is consciousness. And likewise, neurons placed close together would contribute to and sustain the coherence of that quantum field in a warm moist environment.
So for me, asserting that consciousness is a classical physics dynamic is just like idealism asserting that everything is mental. Both story lines have a certain amount of truth in them, but that limited amount of truth is only a small part of a much larger narrative. Furthermore, if one is going to stick to a materialistic framework for an explanation, I’m convinced that quantum physics is where the answer resides. Unfortunately, due to the measurement problem the frontier of quantum physics is currently and will continue to be off limits for science.
It is just an analogy but it is also clear that the actual detectable patterns of brain activity are themselves wave-like and that those patterns don’t require any sort of quantum effects.
I tried to leave out in this article the actual details of how this model might be physically implemented. Probably my previous posts have focused more on that.
I still think that possibly there could be something in the fifth dimension. Let me note a few things.
– Neuron firings do produce small EM fields
– Theodor Kaluza showed that electromagnetism could be unified with gravity by adding a fifth dimension
– If consciousness models spacetime or is a model of spacetime for the organism, then electromagnetism could be the mechanism
– A paper I discussed a while back addressed this:
The link between general relativity and electromagnetism becomes clear by assuming that the so-called four-potential of electromagnetism directly determines the metrical properties of the spacetime. In particular, our research shows how electromagnetism is an inherent property of spacetime itself. In a way, spacetime itself is therefore the aether. Electric and magnetic fields represent certain local tensions or twists in the spacetime fabric.
It means that the material world always corresponds to some geometric structures of spacetime. Tensions in spacetime manifest themselves as electric and magnetic fields. Moreover, electric charge relates to some compressibility properties of spacetime. Electric current seems to be a re-balancing object, which transports charge in order to keep the spacetime manifold Ricci-flat.
“If consciousness models spacetime or is a model of spacetime for the organism, then electromagnetism could be the mechanism…”
Certainly, but that’s a big “IF” right? On the other hand, if spacetime itself is nothing more than a part of the model that is consciousness, one has to accept that we could be wrong about spacetime and that both space and time are nothing more than useful constructs that help us navigate and therefore do not reflect the true nature of reality.
This rendition corresponds with Rovelli’s RQM in which he asserts that what we observe as reality is a “flash ontology”, one that does not require spacetime. And this “flash ontology” is intrinsic to both the quantum and classical realms which could also explain how our own experience of consciousness itself is a “flash ontology” which consists of moment to moment dynamics that are in a constant state of flux and never fixed or still.
Definitely a big “if” but neurons do produce EM fields so that part isn’t in doubt.
I don’t think I’ve mentioned these articles but they have been in the back of my mind for a while. It appears the sort of grid mappings in the brain that represent spacetime may carry over into abstract ideas. Some odd hexagonal patterns of neurons are involved in it.
And time is tracked in a somewhat spatial fashion.
I would think that the primary, original point of the spacetime model (consciousness) is to control locomotion and physical interactions with the world. So the techniques used to do that could have carried over to mapping memories and abstract concepts.
I am 100% NOT saying spacetime is fundamental or anything more than a useful construct. It might be, but I don’t know whether that makes any difference to the argument. Even if not fundamental, it could still have been the foundation upon which everything else in the conscious model is built. We may never actually know what is fundamental anyway.
“I would think that the primary, original point of the spacetime model (consciousness) is to control locomotion and physical interactions with the world. So the techniques used to do that could have carried over to mapping memories and abstract concepts.”
Definitely; so it makes sense that these techniques would be responsible for constituting our blind spots as well. And I do agree with your assessment that the concept of spacetime “is” the foundation upon which everything else in the conscious model is built. So yeah, I think these ideas of yours are both insightful and quite revolutionary.
Will we ever actually know what is fundamental? One thing is for certain; if a fundamental reality doesn’t correspond to the spacetime model of consciousness, we won’t recognize it even if it were handed to us on a silver platter. I remember a statement the theoretical physicist Sylvester James Gates Jr. once made:
Maybe a fundamental reality is so simple that it is sitting right under our noses, and because of its simplicity and its close proximity to what we are as a system we just can’t see it.
There’s a lot going on in this post so it has taken a while for me to find something reasonable to say. I guess one way in is to observe something that I’m sure you agree with James, though wasn’t explicitly mentioned. To do what it does evolution doesn’t need to use computer simulations, or even model testing, given that it never understands anything. It’s “blind”. It merely functions because genetic mutations that succeed better tend to get passed on. To us its success simply makes it appear to understand given that we’re understanders. So we should try to moderate our natural biases in this regard. Similarly I know you don’t believe that the more we program a computer to seem like it understands, or seems like it’s in pain, the more that it will understand, or will be in pain. That should be a natural bias to guard against.
On the brain computing with only 20 watts while our computers use tremendously more, to me this makes sense. The conservation of power has not been economic for us since we have plenty to spare. Not so for an organism. Brains had to earn their consumption of calories by functioning efficiently. I don’t think we grasp the efficiencies of brain based computation yet, and even without presuming any quantum element. Quantum biologists in general would love to add brain function if they could, though haven’t been able to so far.
The big question is, how did evolution create a brain that creates an understander? It must have come across this serendipitously rather than through planning, and then cultivated it through a vast number of iterations over time. Initially however I don’t think it would have understood anything, but rather phenomenally experienced its existence. Then with more advancement it should have been able to effectively understand on that basis and plan. This mandates that for it, perceptions would inherently exist as a simulation of reality — perceptions would exist through its consciousness.
There is quite a bit of evolutionary advantage to being able to move about and to identify food, mates, or predators from a distance. The better the information available to an organism through its senses, and the better the information can be integrated, the greater the advantage. The dilemma was getting all of these advantages without busting the energy budget. That is what I am suggesting drove the evolution of brains and consciousness.
The “understander”, I think, likely began as a model of the body with its location in spacetime. For us, it encompasses our memories and an imagined future.
Right James, evolution should generally be about efficient function given the energy constraints associated with gene propagation. I have a somewhat more specific conception of that larger theme, though.
Consider life without brains, as in plants, fungus, and microorganisms. As consistent with your brainless low energy theme, plants and fungus don’t do much moving. Microorganisms do tend to move within a given medium, but in very limited ways since such a cell or cells will not have central organism instruction (beyond genetic instruction itself I suppose). Brainless life may be analogized with any of our non-computational machines, like various dated televisions or water heaters, though obviously far more complex.
Next consider life which evolved to have central organism processors, which is to say non-conscious brain function. These would essentially be like the computer controlled machines that we build. Thus at this point organisms should have had a single place in the body to process sense information algorithmically for output function. I think it’s telling that the vast majority of our computer controlled machines are stationary, and the ones that do move must generally be in closed environments so that their programming can effectively deal with what arises. Regardless, now sense based robotic life should not only have feasted upon microbial life, but feasted upon each other. Thus brain algorithms should have taken on strategies to eat and/or to not be eaten.
Blind algorithm alone must have topped out however since under more open circumstances there must have been too many contingencies to contend with and so even evolution couldn’t effectively program these biological machines well enough in a general sense. From here theoretically a functionless phenomenal experiencer emerged as a product of certain brain function, and probably in the form of neuron produced electromagnetic fields. Though extremely primitive, a sentient entity should have existed that would in some sense feel anywhere from good to bad on the basis of brain function, and even if initially functionless.
Note that these biological robots should already have all sorts of sense data from which to run their algorithms in the form of light wave information (though not phenomenal vision), pressure wave information (though not phenomenal sound), chemical signals (though not phenomenal taste or smell), tactile information (though not phenomenal touch), and more. And yet they shouldn’t have been able to deal with novel situations well enough given that they’d lack overriding purpose-based function, or agency, from which to reprogram non-conscious algorithms for many novel contingencies.
Thus with an originally functionless phenomenal experiencer in the form of certain electromagnetic fields, I’m suggesting that this was cultivated to become an effective agent. This purpose based experiencer would have been given the opportunity to decide certain things (through ephaptic coupling), and some iterations must have succeeded well enough such that new iterations were given more and more resources. Here phenomenal senses and memory from which to more effectively think and decide how to promote sentient based interests should have evolved. This is to say a potentially vast brain computer that generally functions in parallel, would tend to service and be modified by a brain produced phenomenal computer that functions in series. In any case, yes energy would matter, though it may be that no amount of energy alone can instruct life to function effectively under more open environments. Thus the emergence of purpose based function, or sentient agency, and probably in the form of certain electromagnetic fields.
I think I agree with most of what you are saying if I understand you correctly.
“Next consider life which evolved to have central organism processors, which is to say non-conscious brain function.”
Yes, there could be some really simple organisms like that. At what point, however, does consciousness begin to play a role? There is some evidence, for example, that insects feel pain.
This paper suggests that some insects have a model of their body much like the human does in my smashed thumb example.
Other studies have also found oscillatory neural patterns in insects associated with learning.
So, it might be a relatively small number of neurons required for consciousness. It is also possible there may be a sort of fragmented consciousness in these lower organisms.
I’ve been meaning to get back to you on this James, though my mind has been wandering. I’m always looking many moves beyond the small stuff to potentially find ways to help academia out of the horrible rut that I perceive it to be in — a rut partly headed by popular figures like Dennett and Frankish, though certainly Chalmers as well.
I agree that there is good reason to believe that brain based creatures in general today tend to have sentient components to them. This has probably served autonomous function in general. The main difference between life today and life a half billion years ago is that modern genes should now have an amazing amount of evolved engineering to them that past life largely couldn’t have. Thus we may underestimate the potential depth of an ant for example given its mere 250,000 neurons. If or when it becomes established that these tiny pests can suffer horribly, this might be considered quite inconvenient to us. Is our treatment of insects ultimately many orders more deplorable than our treatment of livestock? Given that reality seems to have no regard for our convenience, perhaps.
In any case I’m both troubled and inspired by the question of how various status quo interests might be put on trial for the various ridiculous ideas that they’ve effectively promoted. I’d like their positions to be simplified in ways that more effectively illustrate the problematic components of their positions so that sensible alternatives might instead take hold. So far I have two components in mind.
The first is to illustrate that what’s popularly known as “computationalism” is founded upon a supernatural premise. My thumb pain thought experiment should help illustrate this. The point is that in a causal world algorithms can only exist as such in respect to the mechanisms that they causally affect. Thus algorithms alone should not create thumb pain (or even exist as “algorithms”), but rather only when they animate associated mechanisms. With effective promotion I believe this thought experiment could “go viral”.
Then once these supposed naturalists are illustrated as unwitting supernaturalists, I’d like to help people grasp the components of a truly naturalistic proposal. As you know I favor McFadden’s falsifiable proposal. Perhaps general dialogue could be created about how to effectively test his theory in conclusive ways. Least speculative should be to induce an electromagnetic field in a volunteer’s head near the parameters of those created by synchronous neuron firing to see if the person would notice anything phenomenally strange (given that waves of a certain variety tend to be affected by other waves of that variety). Apparently even McFadden seems to not grasp the potential for such testing to validate or refute his theory. When I’ve asked him about this directly he has failed to respond.
In any case I thought I’d throw this out there for any thoughts you might have on how progress might be made.
We’ve been through these “how do we test it” issues before.
We don’t understand exactly the “language” of neurons or the “language” of the EM fields they produce. Any wave-forms external to the ones generated by the brain would have to “speak the language” of the brain or they could be treated as garbage that might simply get discarded. The effect of one wave-form on one part of the brain might not be the same as the effect on a different part. The effect on one individual might not be the same as on another individual.
Then there are the technical difficulties of performing the test and measuring the results. How many little EM devices do we need to implant? Who do we do the experiment on? I’m not sure I would sign on to having devices put into my brain. Trying to apply fields externally probably can only affect the surface of the brain and leave out deeper parts. That could also produce fragmentary, inconclusive results.
Have you addressed any of these concerns?
It’s good to assess your current concerns about my test. Without such assessments there’s little chance that something effective might get figured out. Fortunately I’ve also been talking with Mike Arnautov about this for a while over here: https://selfawarepatterns.com/2022/05/29/susan-blackmores-illusionism/#comment-161189
I think I may have a less complex conception of how McFadden’s theory works than you do right now. As I see it the light information that’s accepted by your eyes for example, gets processed such that certain neurons then fire synchronously in a way that imparts every bit of the detail associated with what you end up seeing right inside an amazingly complex EM field. This neuron produced field would theoretically be you as the phenomenal experiencer of vision and all other elements of your phenomenal experience. Therefore if a transmitter were put in your head that emits an EM field somewhere in the range of the field associated with your vision, then there ought to be some disturbance between the two fields to thus distort what you see for oral report. Of course we don’t have such EM field parameters mapped out right now since it’s just a theory. But the point of the experiment would first be to run through a vast assortment of EM fields that seem common for synchronous neuron firing, and then if established that certain ones do correlate with altered phenomenal experience (which might become the most transformative discovery ever achieved in science), then work should begin mapping out which EM field parameters are associated with various standard phenomenal elements of existence.
From this perspective I don’t understand the sense in which we’d need to parse any neural or EM field languages in order for our fields to not be ejected as garbage. That simply doesn’t come into play given my understanding of his theory. I don’t perceive some kind of coding firewall to negotiate before an exogenous EM field is then permitted to exist as such in the head. The physics says that either we would be able to tamper with an existing phenomenal experiencer through certain EM fields for oral report given wave dynamics, or otherwise failure might be because consciousness does not exist as a neuron-produced field of electromagnetic radiation.
Regarding different effects in different parts of the brain, yes we might need to place a transmitter in several different areas of the brain in order to figure out what’s effective. There should only be one such experiencer however (as in there is only one “you” each moment), so it should just be one brain EM field that would need alteration. It could be that different people would be phenomenally affected by the same exogenous field differently, so that’s out there to potentially learn about. In that case however McFadden’s theory would have been validated and we’d just be trying to figure out various details.
Regarding potential testing difficulties, I was just saying to Mike A that I’ve recently simplified my proposed test. Instead of implanting individual transmitters that fire about like neurons, I think we should build an exterior laboratory machine that does the job and then use a single skull-implanted transmitter to put such a created field into the brain of a given subject. Furthermore, regarding safety, he got me thinking that we should first try this out on non-human animals, both to check organism safety and perhaps even to help narrow down more and less effective EM parameters to impart in a human brain. I doubt many specialists would be all that worried about the effects of transmitting extremely low energy EM fields in a brain, however. Furthermore, when done on a physiologically monitored person we should have evidence of any harmful effects. Observe that if there is pain then an alert human subject will be instructed to say so. It’s not like this would be testing the possible long-term effects of a drug, for example. Beyond implantation it seems to me that this sort of test should be relatively benign. I’d expect certain qualified people who are already scheduled for brain surgery to volunteer, specifically because they’d decide it would be better to be so compensated rather than not.
“Therefore if a transmitter were put in your head that emits an EM field somewhere in the range of the field associated with your vision”
Keeping in mind the field needs to be weak to be compatible with the brain, where would you put the transmitter?
McFadden’s theory depends on the synchronous firing of many neurons, so I don’t see how you achieve that with just one transmitter. In vision, we have not only the visual cortex (left and right) with multiple components (V1, V2, V3) but also the lateral geniculate nucleus. I would imagine in McFadden’s theory we would be looking at several hundred neurons firing at once at a minimum, but I don’t know how you trigger that with one transmitter. It would be like one neuron.
As far as where to inject such an exogenous EM field, the main thing should simply be that it’s within the cerebral-fluid-encased brain, which effectively serves as a Faraday cage protecting an endogenous brain field from outside tampering. Maybe certain emission points would work better than others, however, so this would be a variable to figure out. Still, we’re talking about affecting a single field in the head that has field strength associated with perhaps hundreds of synchronously fired neurons. Thus emission source location may not need to be all that specific. It’s something to work out, as I’ve said.
Regarding how to effectively add such an EM field to the brain, this is not a settled question. I’m currently proposing that a laboratory machine be built that fires charges about the way the brain synchronously does. Could such an outside EM field not then be transmitted to a place inside the brain? Apparently the EM fields of radio waves are transmitted by means of an antenna. Could such a field be generated for brain injection through an antenna?
If an apparatus outside the brain cannot create an appropriate field inside the brain, then apparently it would be back to my original plan of directly firing charges from within. Even in that case however there should still be shortcuts to potentially take. As I understand it the synchrony in neural firing is essentially just needed to get an associated EM field to the proper volt/meter strength. So whether 50 or 500 fire together, a single stronger charge ought to be able to mimic such firing to thus simplify what’s implanted.
In the end the message here to neuro-tech people is that if consciousness exists as an electromagnetic field associated with certain synchronous neuron firing, then we’ll need them to build something that’s able to tamper with those EM fields for oral report. This should either provide evidence that the theory is true or, with enough failed testing, that it is false. If validated, however, this should become the most transformative bit of evidence that science has ever achieved. I doubt that many of these people would tell us that such testing would be too difficult for them to achieve. I’d expect virtually all to think that such testing was not only possible, but that each of them would have ideas of their own about how to effectively do it.
For me, since science says humans are only using 20% of their brains, the larger question would not be a scientific one but rather how we can increase the level of brain usage in humans so they behave as if connection with each other, as well as with the larger conscious Universe, is part of our creative process and our birthright. Until humans wake up, science or technology created to create or even manipulate consciousness really won’t have an effect in the larger picture.
As you can see I do not possess a scientific mind but I am concerned about humanity and moving consciousness forward in human beings.