Why Algorithmic AI Will Never Be Conscious (for materialists only)

The Treachery of Images ("This is not a pipe"): https://en.wikipedia.org/wiki/The_Treachery_of_Images

1- Representations are unreal because they do not occupy a place in spacetime.

2- Algorithms are manipulations of representations.

3- Consciousness is real because it has a location in spacetime.

4- Therefore, algorithms cannot be conscious.

Elaboration

Okay, somebody will want to say representations are real. After all, we can see the pipe in the Magritte painting. But even if the representation is real, it is not real in the same way as the actual object that it represents.

A representation only has meaning because it is a pointer to something else that is real. That is why representations are totally portable. A representation could be rendered in innumerable substrates – pencil drawing, painting, computer monitor, photograph. Each of these representations would occupy a place in spacetime, but what each represented would not be instantiated in that place. A charcoal drawing of the pipe from the painting would consist only of charcoal on paper at its place in spacetime. The original pipe would be either in Magritte’s atelier or, if it were a pipe the artist imagined, only in the original painting. We can have thousands of pointers to or representations of the original pipe, but only one original pipe.

Algorithms are manipulations of representations. A simulation of rain is not wet. What is real are the events occurring while the simulation is running – in current technology, basically fluctuations of electrical charge in chips and circuits.
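As a toy illustration (a deliberately trivial sketch, not any real weather model), consider what a "rain simulation" amounts to at the level of the machine:

    import random

    def simulate_rain(steps=5, drops=3):
        # Each "drop" is just a float we choose to read as a height in metres.
        heights = [10.0] * drops
        for t in range(steps):
            # "Falling" is merely subtraction performed on stored numbers.
            heights = [max(0.0, h - random.uniform(0.5, 2.0)) for h in heights]
            print(f"t={t}: heights={[round(h, 2) for h in heights]}")

    simulate_rain()

Run it and the screen shows numbers decreasing toward zero; at no point does anything get wet.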

Consciousness is located in the brain. It exists in spacetime. Only a dualist or idealist would dispute this.

Manipulations of representations cannot be conscious.


16 Responses to Why Algorithmic AI Will Never Be Conscious (for materialists only)

  1. One of my employees, who is remote this week, had his office PC isolated from the network today because it has malware on it. I’m pretty sure the representations on it he can’t currently access have spatiotemporal location. Hopefully he has backups in other spatiotemporal locations so their causal roles can be preserved. If they’re not real, he might be very relieved.


    • James Cross says:

Not the representations; the instantiations of the representations. What he had on his PC was metal, circuits, and such, which you have just proved by the existence of backups in other locations.


So the instantiations of the representations are real, but not the representations themselves? Ok, I’m not a platonist, so I guess I’d agree with that. Except it seems like any instantiation of an AI we’d be tempted to think is conscious is going to be working with instantiations of the representations, all of which will have spatiotemporal locations and extent, as well as causal efficacy.


        • James Cross says:

If you don’t like the term “real”, then substitute your own. I am saying that, if it is real, it is real in a different way.

          To use a different example:

A three is a III is a 3 wherever you find it, however it is written or instantiated, but there is no actual 3 in the world. 3 is an abstract mathematical concept.
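          To put the same point in computational terms, here is a trivial sketch (only an illustration) of several concrete encodings of the one abstract number:

              # Four different physical instantiations that we all read as "three".
              encodings = {
                  "decimal numeral": "3",
                  "Roman numeral": "III",
                  "binary literal": 0b11,
                  "unary tally": "|||",
              }
              for name, rep in encodings.items():
                  print(f"{name}: {rep!r}")

          Each entry occupies its own bit of storage; the number three itself occupies none.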

There is only one dining room table in the world that is in my dining room. Even an identical copy of it would not be the same because it would occupy a different place in spacetime and have a different past and future.

A photo of my table would be like the instantiated 3, a representation of the table, except it would be pointing to an actual object rather than an abstract concept. There can be multiple copies of the representation, including the identical copy, which could function as a representation or as an actual but different table, but there is only one original table in my dining room.

Algorithms deal with things like 3’s and photos, instantiated for sure in the metal or electric charge of computer memory. Consciousness is more like the dining room table. It is an actual physical object (or objects), neither an abstraction nor a representation of an actual physical object.

The idea that algorithmic AI could be conscious is equivalent to Cartesian dualism, or to a kind of mathematical idealism à la Max Tegmark.

          So, if you are a materialist, you can’t believe algorithmic AI can be conscious.


        • I was using “real” in the same sense as your argument (or at least that was my intent).

          But I think the relevant point is that the representations in an AI would be real in the same sense as the representation in my head of my coffee cup. And an AI is a physical system, with all its representations physical and real in the sense of having spatiotemporal locations and extent. (I added causality above, but we don’t need it to see the problems here.)

So, sorry, if consciousness isn’t algorithmic, I can’t see that this argument gets out of the gate in establishing it.


        • James Cross says:

          What kind of AI are you talking about?

          Purely algorithmic AI has no real spatiotemporal location. That is why it can be ported from one substrate to another. If it were real, it could only exist at one spatiotemporal location at a time. It can produce simulated consciousness but not real consciousness.
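          As a crude sketch of that portability (a toy example; the file name and numbers are arbitrary), the same program state can be written out and re-instantiated anywhere:

              import pickle

              weights = [0.12, -0.98, 0.33]        # stand-in for some trained model

              blob = pickle.dumps(weights)          # one instantiation: bytes in RAM
              with open("model.bin", "wb") as f:    # another: charge states in flash
                  f.write(blob)

              with open("model.bin", "rb") as f:    # a third, perhaps on another machine
                  restored = pickle.loads(f.read())

              assert restored == weights            # the "same" algorithm, a new location

          Each copy has its own spatiotemporal location and history; what gets "ported" is the pattern, not any particular physical thing.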

I made this comment on my previous post. What clouds the issue of consciousness is the complicating factor that our brain can not only have direct sensory perceptions of the world but can also generate and manipulate abstract representations and symbols. So we are fooled, especially when we try to analyze philosophically, into thinking consciousness itself must be abstract representations and symbols.

You have written a good deal about consciousness being an illusion and introspection being unreliable. What we believe in our heads to be abstract representations are not. They are concrete, physical representations expressed in our brains in a manner we do not fully understand yet. The coffee cup in your head isn’t an abstract representation of a coffee cup. It is actually much, much closer to being an actual coffee cup.

That is why I’ve been calling it a physical model or analog or maybe a proxy, although all of those words have limitations.


        • James Cross says:

          Let me try another analogy.

Let’s say we want to simulate Mt. St. Helens pre- and post-eruption on a computer. We run the simulation on a supercomputer, and it creates a topographical rendering remarkably faithful to the actual eruption. Few people would argue that a real geological process took place on the computer.

Yet this is exactly what the argument for conscious algorithmic AI is saying about consciousness and the brain.

On the other hand, if we dug a hole, put a heater under the bottom of the hole, filled the hole with rock, heated the rock, and generated an eruption, it would be entirely different. Even if the result were far inferior to the computer simulation, it could be argued that a real geological process took place. The same is so with AI. We must be working with real materials with the properties to generate consciousness to make an artificially conscious AI. Simply running a computer simulation doesn’t cut it.


        • “What kind of AI are you talking about?”

I was talking about any AI anyone actually builds and uses. They’re all 100% physical 100% of the time. Even when not in use, the code exists as patterns of transistor states in SSDs, magnetized regions on disk platters, or whatever other tech is in use. And when executing, they’re part of the physical causal structure of whatever computer system is running at the time.

I’m not sure what you mean by “purely algorithmic AI”, but unless we’re talking about some platonic object (which I don’t think exists), or a supernatural spirit AI or something, we’re talking about physical systems.


        • James Cross says:

Yep, and the ones they actually build are not conscious and will not be. They also don’t need to be conscious to fully simulate human beings. Yes, they are physical. So what? A big rock is physical too. They are exactly as you describe: much like very sophisticated adding machines hooked up to actuators that can do things when the right number pops up on the machine.

The basis of my question was whether you were talking about algorithmic AI consciousness or another form that might actually be conscious. As I hinted with the volcano analogy, we need to understand the bare minimum of materials and forces that produce consciousness in the real world and assemble them correctly; then we might have conscious AI, even if the result were inferior to simulated AI on most criteria.

          Consciousness studies is like genetics before the discovery of DNA. But that doesn’t mean we won’t find the DNA eventually.


  2. Stephen C. Pedersen says:

Hmm… I’m really out of practice, but I have a metaphor that might be of use. If I draw a map, is it conscious? The map is a representation, but the place it points to is real? Is this relevantly similar? Who is the intelligent one, the map or the person who drew it? So at the two sides of the argument here, the place and the inventor, all we get is a static shadow on a cave wall. Am I anywhere close? When I read this, the Mona Lisa instantly came to mind, in conjunction with the map metaphor: Da Vinci being an artist (map creator?), Mona being the person, and the painting being a mere representation. Now AIs can create any image, but are these images just an elaborate game of fetch, where it’s a map (the AI) creating an image governed by its own rules? Given Gödel’s incompleteness theorem, it can never truly be an original artist, because it can never venture outside its own parameters, because it has no life, no lived experiences. Apologies if I went on an unrelated diatribe; your argument was great!


    • James Cross says:

      The map isn’t conscious. Current AI is just algorithms manipulating symbols.

      I’m still working through some additional ideas related to your comments that I may comment on later or in a different post.

      Thanks for commenting.


  3. jimoeba says:

“But even if the representation is real, it is not real in the same way as the actual object that it represents.” Is consciousness in the brain, or is the brain a representation of consciousness? The brain can be reduced, whereas consciousness cannot, so which is fundamental, of primal origin?


My apologies for being so late here, James. I’ve been preoccupied. In any case, I certainly agree with your four-step argument. It doesn’t really matter if I agree, however. It would seem that hundreds of arguments like this have been made for decades, and yet computationalists/functionalists/illusionists are ironically still perceived as academia’s strongest causalists. Make no mistake, our club is vastly outnumbered and losing ground. We’ll need arguments simple enough to make it difficult for them to interpret our terms uncharitably (such as positing that there are indeed spatiotemporal locations for “representations”, simply by using that term in a different way than you were using it). And if we’re right that they’ve fooled themselves into selling magic, we’ll need to plainly display that magic so that people in general might grasp that element of their platform. Though only strong empirical evidence of wrongness should dissuade the most devout, perhaps extremely simple logic and plain illustrations of what happens when such logic is violated could at least slow the spread of their popular position?

There’s something that I’ve been working on in this regard as well. Consider a three-step process for any functional cycle of computation. It begins with (1) any form of potential information (for example, pressing a key on my computer keyboard). This potential information will become actual information if it (2) becomes algorithmically processed into new potential information (and certainly my computer does tend to process that sort of thing into new information). Our opposition posits the existence of consciousness by means of some form of this step. (From your post I see this as “algorithms of representations”.) Thus, if paper with the right marks on it were algorithmically processed into more paper with the right marks on it, then their position holds that something here should experience what you do when your thumb gets whacked.

Apparently we need to help people in general understand specifically why this is wrong. My answer is that in a causal world processed information can never exist as such in itself, but only potentially so, in respect to its informing something appropriate. So I observe that a third step will be required for any complete computational cycle. Similarly, fuel will not inherently exist as “fuel” in itself, but only in respect to what it’s causally set up to fuel. So, for example, if the processed information from my key press were to inform my screen by displaying an appropriate letter, then that would reflect a full computation in that regard. Or this processed information might simply inform my computer such that future processing appropriately becomes altered.
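    A toy sketch of the three steps (only an illustration; the function names are mine, not any standard API):

        def key_press():                     # (1) potential information
            return "a"

        def process(key):                    # (2) algorithmic processing
            return key.upper()               # ...into new potential information

        def inform_screen(char):             # (3) informing something appropriate
            print(char)                      # only here is the cycle complete

        inform_screen(process(key_press()))  # displays "A"

    On this view, step (2) alone never amounts to a finished computation; the processed information must inform something, here the screen, before the cycle closes.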

What this observation mandates is that certain potential processed brain information must be informing something to become actual processed brain information, such that what’s informed resides as consciousness itself. As you know, I suspect this to be a neuron-produced electromagnetic field. I’d love to consider anything else that processed brain information might inform to exist as consciousness, though I haven’t been able to think of a single reasonable alternative.


    • James Cross says:

“Apparently we need to help people in general understand specifically why this is wrong.”

      My analogy has been to marks in the sand instead of marks on paper. A dedicated computationalist would have to argue that the beach itself was conscious if the right marks were made. Medium matters.


Right, that’s exactly what they must argue. And apparently people in general don’t understand how ridiculous that happens to be. So I’m saying that we must directly explain why it’s non-causal. First, observe that marked sand is not the wrong stuff inherently, but rather that marks in sand, as potential information, would need to inform something appropriate in order to become actual information in any given respect. So if marks in sand were to inform the right sort of thing, then this informed thing would reside as something conscious.

Then from here let’s go back to the brain itself. What in the brain might Boolean neural function be informing to exist as consciousness? We know that every time neurons fire, they also produce an extremely small electromagnetic disturbance. It’s of course these disturbances that let us know about neuron firing through things like EEG monitoring. So I can conceive of neural potential information informing an EM field which exists as consciousness. Similarly, if the image of the right marks in sand were scanned into a computer that went on to create the right EM field based upon that information, then theoretically that informed field would reside as a consciousness that might feel what you do when your thumb gets whacked. I don’t know of anything else reasonable that processed brain information might inform to exist as consciousness, however. What else might brain information be informing to exist as consciousness?

