Dr. V tackles the question of AI and consciousness, primarily through a Platonic-Kantian-Husserlian lens. He emphasizes the unity of consciousness: a unitary substrate that holds past sensory phenomena together such that we can perceive things like melodies, which are not mere successions of notes and chords but wholes grasped in their entirety. He doesn't see how AI, at least in its current form, possesses that unity.
Suppose my mental state passes from one that is pleasurable to one that is painful. Observing a beautiful Arizona sunset, my reverie is suddenly broken by the piercing noise of a smoke detector. Not only is the painful state painful, the transition from the pleasurable state to the painful one is itself painful. The fact that the transition is painful shows that it is directly perceived. It is not as if there is merely a succession of consciousnesses (conscious states), one pleasurable the other painful; there is in addition a consciousness of their succession. A succession of consciousnesses need not amount to a consciousness of succession.
In the example given, there is a consciousness of succession. For there is a consciousness of the transition from the pleasant state to the painful state, a consciousness that embraces both of the states, and so cannot be reductively analyzed into them. A consciousness of their succession is a consciousness of their succession in one subject, in one unity of consciousness. It is a consciousness of the numerical identity of the self through the transition from the pleasurable state to the painful one. Passing from a pleasurable state to a painful one, there is not only an awareness of a pleasant state followed by an awareness of a painful one, but also an awareness that the one who was in a pleasurable state is strictly and numerically the same as the one who is now in a painful state. This sameness is phenomenologically given, although our access to this phenomenon is easily blocked by inappropriate models taken from the physical world. Without the consciousness of sameness, there would be no consciousness of transition.
[ ... ]
May we conclude from the phenomenology of the situation that there is a simple, immaterial, meta-physical substance that each one of us is and that is the ontological support of the phenomenologically given unity of consciousness? Maybe not! This is a further step that needs to be carefully considered. The further step takes us from the phenomenologically given unity of consciousness to an underlying immaterial soul substance which is the ‘seat’ of consciousness and the ultimate subject of consciousness. I don’t rule this move out, but I also don’t rule it in. I don’t need to take the further step for my present purpose, which is merely to show that a computing machine, no matter how complex or how fast its processing, cannot be conscious. This is because no material system can be conscious.
Another example is provided by the hearing of a melody. To hear the melody Do-Re-Mi, it does not suffice that there be a hearing of Do, followed by a hearing of Re, followed by a hearing of Mi. For those three acts of hearing could occur in that sequence in three distinct subjects, in which case they would not add up to the hearing of a melody. (Tom, Dick, and Harry can divide up the task of loading a truck, but not the ‘task’ of hearing a melody, or that of understanding a sentence.) But now suppose the acts of hearing occur in the same subject, but that this subject is not a unitary and self-same individual but just the bundle of these three acts, call them A1, A2, and A3. When A1 ceases, A2 begins, and when A2 ceases, A3 begins: they do not overlap. In which act is the hearing of the melody? A3 is the only likely candidate, but surely it cannot be a hearing of the melody.
This is because the awareness of a melody involves the awareness of the (musical not temporal) intervals between the notes, and to apprehend these intervals there must be a retention (to use Husserl’s term) in the present act A3 of the past acts A2 and A1. Without this phenomenological presence of the past acts in the present act, there would be no awareness in the present of the melody. This implies that the self cannot be a mere bundle of perceptions externally related to each other, but must be a peculiarly intimate unity of perceptions in which the present perception A3 includes the immediately past ones A2 and A1 as temporally past but also as phenomenologically present in the mode of retention. The fact that we hear melodies thus shows that there must be a self-same and unitary self through the period of time between the onset of the melody and its completion. This unitary self is neither identical to the sum or collection of A1, A2, and A3, nor is it identical to something wholly distinct from them. Nor of course is it identical to any one of them or any two of them. This unitary self is co-given whenever one hears a melody. (This seems to imply that all consciousness is at least implicitly self-consciousness. This is a topic for a later post.)
What I find interesting is how Dr. V makes his argument without ever once using the word memory. But has Dr. V inadvertently made an argument for how the tune-identifying app Shazam works? Plenty of software these days can handle chronologically sequential data, like musical notes (or medical data, or economic trends), and find patterns in that chronological sequence. Granted, Shazam isn't conscious of anything and therefore has no unity of consciousness, but it is nevertheless capable of identifying many tunes after only a few notes of music.
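To make the point concrete, here's a minimal sketch of transposition-invariant tune matching. (A toy illustration, not Shazam's actual method: Shazam fingerprints raw audio rather than symbolic notes, and the note encoding and tune database below are invented for the example.)

```python
# Toy, transposition-invariant tune matching. Notes are encoded as semitone
# numbers (C=0, D=2, E=4, ...); the tune database is invented for illustration.
TUNES = {
    "Do-Re-Mi":        [0, 2, 4],        # C D E
    "Ode to Joy":      [4, 4, 5, 7, 7],  # E E F G G
    "Twinkle Twinkle": [0, 0, 7, 7, 9],  # C C G G A
}

def intervals(notes):
    """Reduce a note sequence to its successive musical intervals,
    so a melody can be recognized in any key."""
    return [b - a for a, b in zip(notes, notes[1:])]

def identify(heard):
    """Return the tunes whose opening intervals match what's been heard so far."""
    pattern = intervals(heard)
    return [name for name, tune in TUNES.items()
            if intervals(tune)[:len(pattern)] == pattern]

# Three notes, transposed up a fourth (F G A instead of C D E), are enough.
print(identify([5, 7, 9]))  # ['Do-Re-Mi']
```

Note the detail doing the work: the matcher operates on the intervals between notes, exactly the feature Dr. V says requires retention. Here, "retention" is nothing more than a buffer of past notes.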
A philosopher of mind named John Searle (1932-2025) once made an argument now known as "the Chinese room." Searle's point was to show how consciousness (or at least understanding) can be simulated without actually being there. In this scenario, a person fluent in Chinese walks up to a closed room, writes a question or statement on a piece of paper, and slips it under the room's door. What he doesn't know is that, inside the room, there's a guy who doesn't know any Chinese, but who has a rulebook, a program in effect, that maps the patterns of Chinese characters slipped under the door to appropriate responses. The guy inside the room looks up the incoming pattern, copies out the prescribed response, and slips it back under the door. From the perspective of the guy outside the room, it seems that "the room" is conscious and has successfully communicated with him, but we observers know that "the room" (i.e., the guy inside it) has no knowledge of Chinese.
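The mechanics of the room can be written down in a few lines. Here's a toy rendering; the two-entry rulebook is obviously an invented stand-in for Searle's vastly larger one:

```python
# The Chinese room as a lookup table: pattern in, pattern out, with no
# understanding anywhere in the loop. The rulebook entries are invented
# placeholders for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def room(slip_of_paper: str) -> str:
    """Match the incoming characters against the rulebook and copy out
    the prescribed response. Nothing here 'knows' any Chinese."""
    return RULEBOOK.get(slip_of_paper, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你会说中文吗？"))  # 当然会。 ("Of course.")
```

Real conversational systems replace the literal lookup table with statistical pattern matching, but the structure of Searle's worry is the same: rule-following without comprehension.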
This thought experiment, Searle and Searlians contend, shows that consciousness can be simulated without actually being consciousness. But there are at least two responses to this: (1) What if the Chinese room proves too much? What if it proves that we ourselves aren't conscious at all but merely "zombies" that process reality in a way that we label "consciousness"?* And (2) Going in a completely different direction: Couldn't it be that the Chinese-room scenario is indirectly harboring consciousness in the form of the rulebook for Chinese? Some conscious minds had to create the rulebook, right? So consciousness was at least indirectly involved in the outside man's interaction with the inside man.
If we follow argument (1), well, that's the Shazam argument. You don't need consciousness or a unity of consciousness to process auditory data and find patterns. If Shazam can do that without being conscious, why should our ability to perceive melody be seen as proof of consciousness? If we follow argument (2), we see the sticky situation we're in today, the same one the rebuttal to Searle's Chinese room points to: consciousness has already been smuggled into the scenario. People can say with assurance that "AI isn't conscious" (a statement I agree with), but in truth, it's people's consciousnesses that are helping to form AI. Consciousness factors into everything as we inevitably assume the godlike role of creating things (eventually, beings?) in our own image and likeness. We ourselves can do parkour-style standing backflips. We now have robots that can do the same, and they can recover from the flip with an inbuilt "sense" of balance. Robots can cook, clean, etc., and soon they'll be able to do all of the physical labor that we can do, just better, because the robots will be faster, stronger, and unable to experience fatigue, frustration, and anger. (The extent to which we can "program emotion" is an interesting question for a different discussion.)
So—is AI conscious? Not yet, and probably not for a long time. But we are using consciousness to build aspects of consciousness into the machines we're creating: balance, dexterity, pattern-recognition, reactions to changing environments—these are all features of consciousness. It won't be long, I fear, before the answer to the question Is it conscious? is no longer a simple yes or no.
* * *
As an unrelated side note, did you see the grammar error here?
Passing from a pleasurable state to a painful one, there is not only an awareness of a pleasant state followed by an awareness of a painful one, but also an awareness that the one who was in a pleasurable state is strictly and numerically the same as the one who is now in a painful state.
__________
*The pro-consciousness response to this is that I have access to my own consciousness, which is proof enough to me that I am a conscious being. That may be sufficient for philosophers who take first-person subjective insights to be evidential or probative, but scientific standards of proof operate in a third-person, objective context, where that sort of first-person evidence carries no weight. And unfortunately for you (and me), your consciousness is directly available only to you. So, what's been proved, objectively speaking?