So right now I am thinking about a dog, but not about any presently existing dog. My thinking has intentional content. It is an instance of what philosophers call intentionality. My act of thinking takes an object, or has an accusative. It exhibits aboutness or of-ness in the way a pain quale does not exhibit aboutness or of-ness. It is important to realize that my thinking is intrinsically such as to be about a dog: the aboutness is not parasitic upon an external relation to an actual dog. That is why I rigged the example the way I did. My thinking is object-directed despite there being no object in existence to which I am externally related. This blocks attempts to explain intentionality in terms of causation. Such attempts fail in any case. See my post on Representation and Causation.
The question is whether the Martian scientist (call him Marty) can determine what that intentional content is by monitoring my neural states during the time I am thinking about a dog. The content before my mind has various subcontents: hairy critter, mammal, barking animal, man's best friend . . . . But none of this content will be discernible to the Martian neuroscientist on the basis of complete knowledge of my neural states, their relations to each other, and their relations to sensory input and behavioral output. To strengthen the argument we may stipulate that Marty lacks the very concept dog. Therefore, there is more to the mind than what can be known by even a completed neuroscience. Physicalism (materialism) is false.
The argument is this:
1. Marty knows all the physical and functional facts about my body and brain during the time I am thinking about a dog.
2. That I am thinking about a dog is a fact.
3. Marty does not know that I am thinking about a dog.
Therefore
4. Marty does not know all the facts about me and my mental activity.
Therefore
5. There are mental facts that are not physical or functional facts, and physicalism is false.
But as I wrote on February 13 of this year:
Substance dualists like to say that there's no clear connection between brain states and subjectivity. This divide between first- and third-person ontology (more simply, subjective and objective reality) is at present inexplicable, and presents itself to scientists and philosophers as "the hard problem" of human consciousness: why do we experience? One of the substance dualists' favorite arguments is that, when one thinks of a horse, no actual horse appears inside the human brain: no horse-shaped electrical pattern, no homunculus-like horseling running around inside the gray matter, nothing recognizable as (and relatable to) a horse.
[...]
And yet, despite the continued existence of the hard problem, the dualists' "no horse in my head" argument strikes me as a weak objection to physicalism. Look at a Blu-ray disc, for example. All you see is a disc that, when you tilt it in different directions, seems to reflect a shifting rainbow pattern. Yet you know that, in conjunction with a TV and a Blu-ray player and all the proper settings and connections, that disc is a key component in displaying a movie: sight, sound, director's and actors' commentaries, etc. When I look at the disc, I see no movie, no director, no actors, and yet I know that the information corresponding to those concepts is contained on the disc.
Why, then, should we be troubled about the absence of literal horse-images in our brains? As the Blu-ray shows us, other non-conscious phenomena have similar properties: they contain information that isn't evident until an array of devices makes it so.
I take the issue up again here, where I quote a spirited exchange between two noted philosophers of mind: Arnold Schwarzenegger and Britney Spears.
UPDATE: In a subsequent post, I anticipate a possible objection to the above.
_
I think that's exactly the right objection, Kevin. You have to know more than just the state of the brain; you have to know the full context of how it came to be that a certain state of the brain represents that dog.
In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter muses on the relationship between records and players. Where is the information? You can push almost all of it into the player if you like.
In Bill's example he isn't thinking of a dog ex nihilo; he is using a brain that already has a mapping between a certain state and the concept of "dog". You don't have a full accounting of the system until you know how that mapping was established. That he insists he is not thinking of any particular dog just now is a sneaky move, because he's just re-using a representational mapping that was put in place earlier - either by his own experience or by way of his genome.