“Human beings are organizations of – do not let us use the philosophically tendentious word ‘matter’, but rather the neutral and philosophically non-committal term translated from the German Weltstoff – the universal ‘world stuff’. But our organization has two aspects: a material aspect when looked at objectively from the outside, and a mental aspect when experienced subjectively from the inside. We are simultaneously and indissolubly both matter and mind.” – Julian Huxley
I used the quote from Julian Huxley in a post years ago. I’ve had good reason to think about it recently on encountering the ideas of Donald Hoffman. Hoffman is a professor of cognitive science at the University of California and has spent years studying perception and the brain, particularly in the area of visual perception. Hoffman’s main conclusion after these years of research and study is exactly the opposite of what we might expect. Instead of “brain activity creates consciousness”, the usual and safe scientific view, Hoffman’s radical view, which he calls conscious realism, is that “consciousness creates brain activity, and indeed creates all objects and properties of the physical world.”
We might expect this coming from a mystic, or maybe an insane person, but not from a scientist. An interview with Hoffman in Quanta Magazine excited a firestorm of comments, mostly dismissive, with some threatening to cancel their subscriptions. Let’s take a look.
Before getting to the radical view of conscious realism, I want to follow through Hoffman’s arguments that lead up to it. I also want to correct some of the misunderstandings many would have when first exposed to the unadulterated form of it. For a single paper by Hoffman that sums up his ideas, see Conscious Realism and the Mind-Body Problem, published in 2008. I will be following the arguments in that paper, and the Hoffman quotes that follow are from it.
We begin with the view that Hoffman calls the hypothesis of faithful depiction (HFD). HFD is the common sense view. There is a world external to us. We perceive the world. What we perceive may be unlike the world in many ways but is a reasonable approximation of it. There may be a time delay in perceptions and there may be some infilling of missing data but in general what we see is what there is. If our perceptions were not accurate, we would not survive individually or as a species. Evolution would not permit the development of faulty perceptions.
Sounds almost beyond question.
Hoffman at one point accepted HFD but he writes: “I now think HFD is false. Our perceptual systems do not try to approximate properties of an objective physical world. Moreover evolutionary considerations, properly understood, do not support HFD, but require its rejection.”
In place of HFD he proposes the multimodal user interface (MUI). In MUI, perception presents us with a radically simplified representation of the world. The word “representation” is key. The table of our perceptions is not like the real table but is a representation of it, tailored to how we need to interact with it. The table as we perceive it need not resemble the actual table at all.
Hoffman gives a compelling reason why what we see need not be anything like reality in his computer desktop analogy. When we delete a file by dragging an icon to the recycle bin, we are initiating a series of actions in the CPU, memory, and disk of the computer. The icon and the action of dragging simplify the underlying reality of what happens in the computer and operating system, so the complexities are hidden from us. In the same way, our subjective experience hides from us the underlying complexities of the world so we can interact efficiently with it. Our representation of reality no more needs to resemble how reality actually is than the pixels of an icon need to resemble the bytes of the underlying file. The computer desktop is an interface, and its success depends on how simple and obvious it is to interact with. If it were complicated (I have seen some really cluttered desktops and I have no idea how people work with them) and non-obvious, then computer users would be losing or deleting critical files all the time. Simplified perception is far more critical to our overall evolutionary success than representation of the “real” world. Accuracy in representation is less important than how well the representation guides action.
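The desktop analogy can be put in code. Here is a toy Python sketch (the names Disk and FileIcon are hypothetical, invented purely for illustration) showing how one simple gesture on an icon stands in for, and hides, a more complicated underlying process:

```python
# Toy illustration of the desktop analogy: the user manipulates a simple
# icon while the underlying "reality" (raw blocks on a disk) stays hidden.
# Disk and FileIcon are hypothetical names, invented for this sketch.

class Disk:
    """The hidden underlying reality: raw blocks and an allocation index."""
    def __init__(self):
        self.blocks = {}   # block_id -> bytes
        self.index = {}    # filename -> list of block_ids

    def store(self, name, data):
        ids = []
        for i in range(0, len(data), 4):   # tiny 4-byte "blocks"
            block_id = len(self.blocks)
            self.blocks[block_id] = data[i:i + 4]
            ids.append(block_id)
        self.index[name] = ids

    def erase(self, name):
        for block_id in self.index.pop(name):
            del self.blocks[block_id]

class FileIcon:
    """The interface: one simple object standing in for hidden complexity."""
    def __init__(self, disk, name):
        self.disk, self.name = disk, name

    def drag_to_recycle_bin(self):
        # One simple gesture triggers many hidden operations.
        self.disk.erase(self.name)

disk = Disk()
disk.store("essay.txt", b"conscious realism")
icon = FileIcon(disk, "essay.txt")
icon.drag_to_recycle_bin()
print(len(disk.blocks))   # prints 0
```

The user of `FileIcon` never sees blocks or the index, just as, on Hoffman’s view, the perceiver never sees what the table icon stands for.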
We can see how this could be related to the unconscious nature of mental function. We have certain things presented to our consciousness, but underlying that is a robust series of processes that never reach consciousness (perhaps as much as 90–95% of mental functioning). It makes a lot of sense that what is presented to consciousness need have little relation to the objects in the world or to the complex mental processes that generated it, as long as what is presented allows good enough evolutionary decision-making that the organism survives. Presenting less, but only the essential, information would in many cases fit the requirement nicely and with the greatest efficiency.
To the extent that a user interface succeeds in providing friendly formatting, concealed causality, and clued conduct, it will also offer ostensible objectivity. Usually the user can act as if the interface is the total reality of the computer. Indeed some users are fooled; we hear humorous stories of a child or grandparent who wondered why an unwieldy box was attached to the screen. Only for more sophisticated purposes, such as debugging a program or repairing hardware, does dissolution of this illusion become essential.
Hoffman sums it up: “The conscious perceptual experiences of an agent are a multimodal user interface between that agent and an objective world.”
Notice in particular the term “objective world”. One of the first mistakes made by many commenters on the Quanta Magazine article was the assertion that Hoffman thinks the world is all subjective.
He explains this more fully:
If you think that this train thundering down the tracks is just an icon of your user interface, and does not exist when you do not perceive it, then why don’t you step in front of it? You will soon find out that it is more than an icon. And I will see, after you are gone, that it still exists. This argument confuses taking something literally and taking it seriously. If your MUI functions properly, you should take its icons seriously, but not literally. The point of the icons is to inform your behavior in your niche. Creatures that do not take their well-adapted icons seriously have a pathetic habit of going extinct. The train icon usefully informs your behaviors, including such laudable behaviors as staying off of train-track icons. The MUI theorist is careful about stepping before trains for the same reason that computer users are careful about dragging file icons to the recycle bin.
Hoffman regards conscious realism as a view that complements MUI, but it is possible to accept MUI and not accept conscious realism. He defines it: “Conscious realism asserts that the objective world, i.e., the world whose existence does not depend on the perceptions of a particular observer, consists entirely of conscious agents.”
I must admit that I find this a somewhat labored formulation.
What I think it is saying (and I might be misinterpreting) is that all we can interact with in the world are the icons of the MUI and that conscious agents (we ourselves) are what create the icons.
According to conscious realism, when I see a table, I interact with a system, or systems, of conscious agents, and represent that interaction in my conscious experience as a table icon. Admittedly, the table gives me little insight into those conscious agents and their dynamics. The table is a dumbed-down icon, adapted to my needs as a member of a species in a particular niche, but not necessarily adapted to give me insight into the true nature of the objective world that triggers my construction of the table icon.
Hoffman explicitly says it does not mean he thinks the tables we perceive are conscious. This is not panpsychism.
Conscious realism, together with MUI theory, claims that tables and chairs are icons in the MUIs of conscious agents, and thus that they are conscious experiences of those agents. It does not claim, nor entail, that tables and chairs are conscious or conscious agents.
The rabbit hole goes deeper. It is not just tables and chairs that are icons of our MUI. Our science and beliefs about the world – even the particles and waves of physics – these are more icons in our MUI. We cannot escape the MUI to touch “reality”. All we have is the MUI. So when he says “consciousness creates brain activity, and indeed creates all objects and properties of the physical world” he is saying the MUI has constructed the world (a world with a “brain activity” icon) we perceive and that is all we can perceive.
This is, indeed, the world stuff of Julian Huxley, but it is a world that seems in so many ways unsatisfying.
Hinduism and Buddhism both point to the illusory nature of the world we perceive. Tenzin Wangyal Rinpoche in The Tibetan Yogas of Dream and Sleep writes: “All of our experience, including dream, arises from ignorance. This is a rather startling statement to make in the West…It is ignorance of our true nature and the true nature of the world… that results in entanglement with the delusions of the dualistic mind.” I would like to think it might be possible to escape the delusions.
Very interesting indeed! So, how would conscious realism apply to an ‘artificial’ intelligence? The icon/table explanation might be easier to grasp for a robot/AI but would it necessarily be the same architecture/model as ours? (For that matter, is it even the same for two different people?)
There is some mathematics behind the theory that wasn’t included in the main paper I was discussing. There might be some implications for AI in that but I haven’t looked too deeply into it.
Another thing relates to interaction of conscious agents. Hoffman seems to be saying at various points that our consciousness is not a single agent but a multitude of agents interacting. Also, I have noted elsewhere that species that seem more conscious or advanced to us are usually social, meaning that their consciousness has developed to some degree in the context of interaction with conscious entities.
So one thing that has occurred to me is whether AI could be enhanced if it were implemented as multiple interacting agents. Someone may already be exploring that, but the details are not known to me.
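To make the multiple-agents idea concrete, here is a minimal toy sketch – loosely inspired by the notion of a heterarchy of interacting agents, but emphatically not Hoffman’s mathematical model. Every name and number in it is a hypothetical choice for illustration:

```python
# A toy of "multiple interacting agents": each agent holds a scalar belief,
# broadcasts it, and nudges itself toward a blend of the group's mean
# broadcast and an external stimulus. NOT Hoffman's formal model; all
# names and constants here are invented for illustration.

class Agent:
    def __init__(self, name, belief=0.0):
        self.name = name
        self.belief = belief      # a single scalar "opinion"

    def perceive(self, signal):
        # Nudge the belief halfway toward the incoming signal.
        self.belief += 0.5 * (signal - self.belief)

    def act(self):
        # What the agent broadcasts back to the others.
        return self.belief

def interact(agents, stimulus, steps):
    """Agents repeatedly perceive a blend of the group's mean action and
    an external stimulus; initial disagreement decays toward consensus."""
    for _ in range(steps):
        mean_action = sum(a.act() for a in agents) / len(agents)
        for a in agents:
            a.perceive(0.5 * mean_action + 0.5 * stimulus)
    return [a.belief for a in agents]

agents = [Agent(f"a{i}") for i in range(3)]
agents[0].belief = 1.0            # start from disagreement
final = interact(agents, stimulus=0.4, steps=20)
print([round(b, 3) for b in final])
```

The point of the sketch is only that behavior at the group level (settling on a shared value) emerges from interaction among simple agents, none of which computes it alone.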
It seems to me like conscious realism would not make any assumptions about how things are represented to other conscious entities. That would be one answer.
If we step back from conscious realism a little to MUI (a position I feel more comfortable with), then we would probably say that representations are similar from person to person, and possibly to a degree across species to the extent they share the same brain architecture, since evolution would likely conserve representations that work. Certainly when we look across intelligent species we find similar sensory apparatus – eyes, ears, senses of smell and touch – so we might suspect things are similar internally too. But of course there is no way to know.
My own view is consciousness is a biological phenomenon so it cannot be replicated in circuits. Consciousness, as someone has said, is consciousness of being in a body and that would require a physical and biological body.
Thanks for commenting.
What puzzles me about consciousness is this: Why am I now consciously aware of being in my body seeing through my eyes, but not in another body seeing through other eyes?
Any human body/brain is apparently the physical embodiment of a consciousness. Why do I inhabit this body and not another one?
If I answered as I think Hoffman might I would say that your body and the entire sense of being in it is a product of consciousness. The whole sense of a “world out there” and you being a body is generated by consciousness.
I think Hoffman would say there is something about this representation that is of evolutionary value, and that there are things outside of it, but what they are or could be may not be well represented by what we are sensing.
I thought I was following this argument, but I became confused after the mention of “conscious agents.” At first I thought the world consisted of stuff that is essentially unknowable, and that our knowledge of it is a representation generated by our kind of consciousness – a kind of neo-Kantianism, as it were, in which perception itself is governed by certain categories of the mind which are a priori to the perception. But the notion that the objective world consists of “conscious agents” suggests that my first formulation is incorrect. Yet the nature of conscious agents is never revealed in a way that clarifies what they are or how they might be the objective reality which is not revealed through perception. Any comments would be appreciated.
I understand your perplexity.
Let me requote myself.
” I must admit that I find this a somewhat labored formulation.”
Then my interpretation:
“It is not just table and chairs that are icons of our MUI. Our science and beliefs about the world – even the particles and waves of physics – these are more icons in our MUI. We cannot escape the MUI to touch “reality”. All we have is the MUI.”
Certainly very similar to Kant but Hoffman claims this is science, not philosophy. For now I will leave this at that but I may comment again in a day or two in this regard.
Thanks for your insightful comment.
This lecture by Hoffman might help. A little after 9:30 he discusses conscious agents. It seems that conscious agents are a sort of interaction between the external objective world, our perception of the world, and the actions we take based on that perception.
Thanks. I’ll take a look.
How are HFD and MUI supposed to be inconsistent with each other?
To quote Hoffman on MUI:
“Our perceptual systems do not try to approximate properties of an objective physical world.”
I suppose you could try to say that HFD would work as well as MUI from an evolutionary perspective but actually Hoffman says MUI systems out-survive HFD in simulations. The reason, I think, is there is too much overhead in trying to create an approximate representation of the world when the icons of MUI work better.
But HFD was defined by “What we perceive may be unlike the world in many ways but is a reasonable approximation of it.” That would presumably allow for a lot of details going unrepresented. Also, the very name “faithful depiction” suggests only accuracy, not completeness. If “reasonable approximation” really means “nearly complete isomorphism” then I think “reasonable” has gotten lost.
I think Hoffman means “reasonable approximation” as exactly that. It might be simplified with details missing.
In MUI, I think Hoffman officially makes no claim about the relationship between the icons and the objective world. It is in a sense unknowable, but it could be (and likely is) somewhat arbitrary, much like the letters “cat” have no intrinsic relationship to the real-world object they stand for in the English language.
One other note. I do not think Hoffman has ever drawn this comparison, and I am not sure he would actually agree with it, but here is one way of thinking about it.
Imagine that our view, perception, and understanding of the world are like a language. A language has rules associated with it, but it is a symbolic system independent of the world it describes; it allows us to interact with that world without being in any way a reasonable approximation of it. We might extend the analogy to note that American English has a history and incorporates words from many older languages, including Latin, Greek, and the Germanic dialects of the peoples who invaded Britain. So too our perceptual apparatus has an evolutionary history and may share elements with other species, certainly apes but perhaps even birds and bats. This evolutionary history has been a sort of proving ground for the efficacy of the arbitrary symbolic system, but it has not made it any less arbitrary.
All of what you just said makes perfect sense. I just don’t see how it generates any conflict between MUI and HFD. It’s like saying, “this thing is square, therefore it can’t be red.” Huh, what? I think there’s some unstated assumption about what “faithful depiction” means. A goalpost-moving, or maybe just very weird, conception of how percepts and thoughts would relate to reality if the former were “faithful depictions” of the latter.
“So, in particular, epiphysicalism entails that the brain has no causal powers. The brain does not cause conscious experience; instead, certain conscious agents, when so triggered by interactions with certain other systems of conscious agents, construct brains (and the rest of human anatomy) as complex icons of their MUIs. The neural correlates of consciousness are many and systematic not because brains cause consciousness, but because brains are useful icons in the MUIs of certain conscious agents. According to conscious realism, you are not just one conscious agent, but a complex heterarchy of interacting conscious agents, which can be called your instantiation (Bennett et al. 1989 give a mathematical treatment). One complex symbol, created when certain conscious agents within this instantiation observe the instantiation, is a brain.”
According to Hoffman, the brain, the moon, the sun, the atoms (in short, the entire space-time universe) don’t exist in themselves. Hoffman asserts that all of these things are icons or symbols in a MUI created by hierarchies of conscious agents. There is no public physical moon, only my moon icon and your moon icon and Hoffman’s moon icon, etc. But what about the phenomenon of the ocean tides? Whose moon does that?