Frequently, the lack of direct access to the thoughts and experiences of others is cited as a problem for explaining consciousness. What if that problem could be solved?
Imagine this scenario.
Subject A has a silicon brain implant with an external access port that can capture the neural activity of his brain.
Subject B has a similar implant in her brain.
If the two access ports were plugged into each other, would Subjects A and B have direct access to the thoughts and experiences of the other person?
Would each subject have two fields of vision?
Could Subject A remember experiences from Subject B’s childhood?
This experiment certainly isn’t a total impossibility. Implants, including Musk’s Neuralink, are being developed to capture brain activity. How long before somebody plugs the implants from two different brains together? The 1983 movie Brainstorm envisioned this possibility, complete with a mechanism for recording brain activity on spools of tape.
Would the other brain seem familiar or foreign?
Would Subject A see the same or similar colors from Subject B’s brain?
Would Subject A and Subject B merge into one consciousness?
Or, would nothing different happen in the consciousness of the two subjects?
Just asking.
Apparently neuroscientists can now figure out through brain scans what subjects are looking at. Are we not close to the holy grail of answering “what were they thinking?”
If you are debating somebody who believes otherwise, I think this is one of the best arguments that brain activity really does cause consciousness.
The problem is that there is a steep learning curve for the software that figures this out and the actual brain activity patterns might be a lot more idiosyncratic than we would expect. So, we are still missing something in the translation.
We actually already have cases like the Hogan twins conjoined at the head, who can share tastes, feelings, some sensory input, and motor control. They remain separate people, but that probably depends on the degree of connections. Of course, in these cases, we’re talking about individuals who developed together.
In the case of an engineered interface, the designers would probably have to make “translation” decisions to make it work, and that would likely lead to speculation that it’s really the interface circuitry each person is experiencing. And unlike the twins, these would be people who (presumably) developed independently, so their sense of self would probably be more resilient. Although connect enough regions and leave the connections in place long enough, who knows what’ll happen.
The question somewhat becomes: what is the minimum amount of connectivity required for sharing experiences?
Too little connectivity, no sharing. Too much connectivity, the consciousnesses merge. Maybe?
There are also the split-brain observations, which produce phenomena subject to multiple interpretations.
Eric would, no doubt, argue that wired connectivity wouldn’t be sufficient because there wouldn’t be a shared EM field. Thinking on that now, this actually might be a good test for the EM field theory. At least, it might be able to falsify the theory.
My own feeling, which I didn’t really express in the post, is that the connected brains might experience something like an extended blindsight. Each brain would have the sense of something happening in the other brain, but it would be borderline or below the level of actual experience because of insufficient integration. There might be some minimum level of integration required for experience.
My view is that there’s no strict fact of the matter on these thresholds. For instance, Ogi Ogas and Sai Gaddam argue in their book “Journey of the Mind” that we’re all part of multiple superminds due to the “qualia sharing” enabled by language. They’d see something like this as just increasing speed and throughput of already existing connections.
“superminds due to the ‘qualia sharing’ enabled by language”
That doesn’t make a lot of sense to me. Have you written anything on that?
I’ve said that if neither of us had ever seen blue, and then we both saw blue and agreed to call it blue, there would be some level of objective agreement on blue even if we each saw different blues. But the naming of “blue”, I would think, happens in a different part of the brain from the actual color; otherwise animals wouldn’t have color vision, since they don’t have language.
If “multiple superminds” just means fragmented consciousness, it would make sense, but I don’t see how that leads to “qualia sharing” between different brains.
Maybe you can clarify what a “supermind” is?
I did a post on their book a couple of years ago: https://selfawarepatterns.com/2022/04/23/from-molecule-minds-to-superminds/
Anytime someone uses the word “qualia”, it pays to focus on what they mean by it.
Here’s a quote from the book on superminds.
Thanks. Sounds kinda mystical the way they put it.
Honestly, it sounds like they are just talking about culture. I’ve often thought of this as the exteriorization of consciousness, which includes language, soft culture, and hard culture or technology.
But I’m seeing more confusion than gain from attaching a term that “characterizes the inner world of our subjective experience” to something exterior to the inner world.
It also sounds similar to the concept of the “word virus” from William Burroughs, and to some of Iain McGilchrist’s right-left brain stuff.
https://en.wikipedia.org/wiki/The_Master_and_His_Emissary
I tried to read McGilchrist, but his book seemed so overwrought with broad generalizations that I couldn’t finish it.
When I have a little time, I’ll take a look at your post on it that I either forgot about or missed.
instead of simply having an honest conversation? You are suggesting humans don’t know their own minds.
Of course we don’t know our own minds. Thinking you know your mind is a key indicator that you don’t know it.
Jorge Luis Borges: “Doubt is one of the names of intelligence.”
Shakespeare: “The fool doth think he is wise, but the wise man knows himself to be a fool.”
Through 35 years of therapy of all kinds, both physical and mental, I have to say it is hard to work on yourself, but it is possible to reach a state where you know your mind and heart well enough to make good decisions in your life. I don’t think an implant is necessary to be honest and loving in our relationships, not only with others but also with ourselves.
I can see the dilemma you raise, but my concern with AI is: when will I (or a government or corporation, for that matter) be able to do anything better than you (or your government and corporations) simply because I have the next best installment of ChatGPT (or the equivalent technology) to guide me to succeed resolutely above any other? It would be fail-safe. It could also become impossible for humans to discern whether videos and pictures are real, as we’ve already seen in many TikTok and Facebook snippets. On an individual level, meritocracy could be dead in little time. The average C-plus student with a bent for such technology could subtly acquire the skills to let the technology write their answers for them, not in a way that makes him or her a plagiarist, but a student advancing, and in their subsequent correspondence appear a struggling but agreeable student worthy of support.
Another example: a composer of classical music who seeks fame in their field could use such AI to compose in the style of their preferred artists. Or a talentless songwriter who adores Nick Cave could start writing like Nick, or at least writing at such an advanced level that he could be deemed as “great” as Nick, with no one the wiser except perhaps Nick.
Even on a collective level, a regime or government could learn to harness such technology to implement the policies the program advocates (the part of the Overton window within policy range) to win more votes in the subsequent election. On a local level, instead of having “focus groups”, the AI could do the focusing for them and arrive at better outcomes than they intended because, to put it frankly, the AI knows what is assured to succeed.
In Scott Aaronson’s blog, I made this comment:
The more insidious risk is the slow integration and hybridization of humans and AIs. We may never understand how the brain creates consciousness, but I see little reason to suspect we won’t eventually be able to control the human brain. Neuralink is one start in that direction. It will be promoted with the good intention of helping people with disabilities. Soon it will become a silicon “limitless” AI implant, turning anyone who can afford it into a genius. Once we hook the brain into the network, we will become the AI.
Remember the Krell!
https://en.wikipedia.org/wiki/Krell
How does all this play out in terms of McFadden’s theory? Theoretically everything that someone sees, hears, smells, thinks, and so on could be detected from their brain’s EM field and so interpreted by a computer. Thus someone’s visual field might be displayed on a computer screen to some approximation. Similarly what a person hears might be recreated with a loudspeaker. For taste or smell the computer might display associated chemical data for what’s being tasted or smelled. English words that are thought might be displayed on a computer screen. If someone is in pain there might be a number quantifying it in terms of some range of severity. Things like this would be possible should McFadden’s theory be true.
If we tapped into two different people’s consciousnesses and then joined the lines, this shouldn’t create people with double consciousnesses. Theoretically, each person’s EM field constitutes their own consciousness, and joining the lines shouldn’t change either person’s EM field.
If a person weren’t just rigged up with EMF detection, however, but also with EMF transmitters, then hypothetically something could work. For example, let’s say EMF detection were good enough to reasonably recreate someone’s visual field for display on a computer screen. Also let’s say an EMF transmission device were in someone else’s brain, and this device could recreate, in the second person’s brain, the visual EMF detected in the first person’s brain. In that case the second person should see what the first person does, to the degree that the same EMF is created and might just as well have been created by his or her own neurons. Of course, that person’s own neurons should also be creating something visually, so the two should smear and possibly interfere constructively and destructively. I don’t know if closed eyes would be sufficient to counteract this effect. Regardless, all elements of consciousness could theoretically be caused by means of exogenous EMF transmissions.
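To make the smearing and interference point concrete, here is a toy numerical sketch. It is my own caricature with invented waveforms, not McFadden’s actual model: it just shows that two superposed “field” contributions cancel when out of phase and reinforce when in phase.

```python
import numpy as np

# Toy caricature of field superposition; the waveforms are invented
# for illustration and have nothing to do with real neural EM fields.
t = np.linspace(0.0, 1.0, 1000)

native = np.sin(2 * np.pi * 10 * t)                 # the second brain's own "field"
exogenous_out = np.sin(2 * np.pi * 10 * t + np.pi)  # transmitted field, out of phase
exogenous_in = np.sin(2 * np.pi * 10 * t)           # transmitted field, in phase

# Out-of-phase superposition cancels (destructive interference).
print(np.allclose(native + exogenous_out, 0.0))     # True

# In-phase superposition reinforces (constructive interference).
print(round(np.max(np.abs(native + exogenous_in)), 1))  # 2.0
```

Real signals would be somewhere between these two extremes, which is why the result would presumably be a smear rather than clean cancellation or reinforcement.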
Actually, I initially thought this thought experiment was somewhat trivial, but the more I think about it, the more complicated the possible outcomes look, and they could be really revealing about how the brain works.
My original thinking when I came up with the experiment was that the brain might have low and high bandwidth signals. The low bandwidth signals might be fast but contain minimal information: for example, “something moved” in the blindsight example. These signals might be preconscious holdovers from the earliest brains, but serve a role in preparing other parts of the brain for the high bandwidth signals. So, in normal sight, there would be the “something moved” signal, followed by a signal carrying, in effect, the “image” of what moved. In the thought experiment, then, only the low bandwidth signals would be transmitted from one implant to the other, and there would be only a marginal awareness of the thoughts of the other subject.
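The low versus high bandwidth idea can be caricatured in code. All the names and the “frame” representation here are my own invention, purely illustrative, not neuroscience:

```python
# Toy caricature: a low-bandwidth channel carries one bit ("something
# moved?"), while a high-bandwidth channel carries the full "image".

def low_bandwidth_signal(frame_prev, frame_curr):
    """Fast, minimal-information channel: just 'did anything change?'"""
    changed = sum(1 for a, b in zip(frame_prev, frame_curr) if a != b)
    return changed > 0  # a single bit

def high_bandwidth_signal(frame_curr):
    """Slow, rich channel: the full content of what is there now."""
    return list(frame_curr)

prev = [0, 0, 0, 0]
curr = [0, 1, 1, 0]

# In the thought experiment, only the low-bandwidth bit crosses the
# implant link, so the other brain gets bare awareness, not content.
print(low_bandwidth_signal(prev, curr))  # True: "something moved"
print(high_bandwidth_signal(curr))       # [0, 1, 1, 0]: stays local
```

The point of the caricature is just the asymmetry: the one-bit channel is cheap to transmit but tells the receiving brain almost nothing about what actually happened.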
However, there are other possibilities.
For example, it could be that signals from Subject A would be processed by Subject B in such a manner that Subject B generates her own conscious experience (through an EM field or however) via her own processing. But in that case, Subject B would be generating her own interpretation of the signals, somewhat like the visual cortex does with the signals from the eyes. Subject B’s brain would be treating the signals from Subject A as if the other brain were an additional sense organ. With enough training, the two subjects might be able to recreate each other’s thoughts without actually sharing them.
Interestingly, the octopus has brain-like neural masses in its arms that might be a natural analog to the experiment.
The second possibility aligns somewhat with the fragmented consciousness model in which different parts of the brain are really like different brains generating their own interpretations of the signals received from the other parts.