Sunday, September 18, 2005

toward a Hominidal theory of mind: redux

Having posted yet another of my longish comments over at Dr. Vallicella's fine blog (see here), I'm reposting my essay on the nature of mind here, to allow newcomers to comment on it without having to email me. On the assumption that people of above-average intelligence will want to read this essay of mine, I won't waste time explaining to the big-brained how to append comments: figger it out your damn self!

Haw haw.

A quick word of thanks to Dr. Vallicella and the various commenters on his blog who have helped me begin to organize my own thoughts on the age-old "mind-body problem."

I title this post "toward a theory" because by no means will it satisfactorily establish anything about the nature of mind. I'm not a philosopher of mind, nor am I an accomplished meditator experienced at plumbing the depths of my own consciousness, nor am I well-versed in neuroscience, psychology, or other cognate fields.

With all that going against me, I now sally forth to lay out my own tentative position on what the mind is.


This essay deals not so much with the question of a mind's particular properties and functions as with its basic nature. Nevertheless, it might be a good idea for me to lay out a brief sketch of what I consider to be the properties and functions of the mind.

The mind is the source of mental phenomena. This means the mind is the source of feelings, ideas, sense-impressions, and memories (we might also include language). It's also the source of reason and other forms of cogitation. It's the source of things like attentiveness and focus. In the everyday world, we often hear that "the body follows the mind." This could be taken to mean that the mind is the source of behavior, which in one sense means it's the source of one's will, one's desires. But some behaviors that result from the mind's burblings aren't necessarily traceable to will. The mind is, arguably, the source of those unwilled behaviors as well. Notice that I didn't say the mind is the same as the brain-- an issue I'll touch on briefly near the end of this essay.

Now-- onward.


There are several competing theories of mind, but of the greatest interest to me are those rival theories that, on the one hand, declare mental phenomena to be non-material, and on the other, declare mental phenomena to be entirely a function of matter. The general terms for these two rival positions are substance dualism and materialism. My own position, which I'm still fleshing out, I'll label as nondualistic materialism or materialistic nondualism. This may correspond roughly with established "-isms" like epiphenomenalism and neutral monism.


The philosopher René Descartes is often cited as one of the main proponents of substance dualism. Descartes posited two major categories of things: res cogitans and res extensa, which we might loosely translate as mental phenomena and physical phenomena. From the Cartesian standpoint, neither is reducible to the other.

One reason, perhaps the strongest reason, for believing this to be the case is the existence of so-called qualia, which might be thought of as elements of personal experience.

In philosophical Taoism, we learn that "The Tao that can be talked about is not the eternal (or true) Tao." There is no discursive approach to (ultimate) reality. To know it, you can't read about it. You can't hear someone else's account of it. You have to experience it for yourself. This is true whether we're talking about the taste of chocolate, the pain of an ear piercing, or a kiss.

The Western philosopher would say that the Taoist is talking about qualia. Qualia are radically subjective (the singular form of the word is quale, pronounced something like "kwah-lay"); some would call them "private." The philosopher would agree with the philosophical Taoist that qualia are essentially intransmissible, ineffable. One can try to communicate qualia linguistically, and our imaginative faculties do provide the ability to "reach out" and try to empathize, but it's rare indeed to capture the reality of an experience just from words alone.

What's it like to experience a nine-gee turn in a fighter jet, for example? We can try to imagine it, but unless we actually have the experience, we can't know what it's like.

So there are objective facts, such as "Chuck Yeager pulled a nine-gee turn in that jet he was testing yesterday," and there are qualia-- only Chuck Yeager and other pilots who've pulled nine-gee turns can know what that's like.

I said qualia are radically subjective, though, and this is important. Of the pilots who have pulled nine-gee turns, can we say that they all experienced the turns in exactly the same way? We can't, and because we all have different sets of accumulated experience, there's good reason to believe our experience of the same phenomena can never be exactly the same.

So-- with qualia being radically subjective, ineffable mental phenomena... it's possible that facts and qualia are substantially different things. The substance dualist would say that they are, and this is one important reason why the substance dualist rejects the idea that mind has a material basis.


I think there are major problems with the argument from qualia. First, I question their radically subjective nature. Second, I think that, if we agree that qualia are radically subjective, we can then do nothing useful with the concept. Third, there's the question of the trustworthiness of qualia. Let me discuss these objections in turn.

First objection: I have reason to believe that qualia contain an objective element because people, as a matter of everyday discourse, "relate to" each other. We've often heard it said after someone's expressed a problem: "Oh, I can relate to that," or "I know how that feels." This isn't accomplished through telepathy; it's accomplished, I think, through the fact that experience, far from being merely subjective, has something of the objective about it.

If qualia were absolutely unique from person to person, how would we be able to say "I can relate to that"? Emotions like sympathy and empathy would make no sense in the face of my qualia's supposed radical subjectivity. It would be impossible to ask anyone to "put themselves in my shoes" if we were certain that our own experience was absolutely unique.

While I don't want to deny the uniqueness of anyone's perspective, I think it might be better to conceive of qualia in terms of Wittgenstein's "family resemblances." By this I mean that, while my experience of stubbing my toe won't be exactly the same as yours (even if we stub our toes in exactly the same way), the experiences nevertheless contain common, overlapping traits. We all flinch after touching a hot iron because we feel the iron's heat in mostly (not merely roughly) the same way. Despite minor variations in our personal wiring, our qualia will be similar enough that we can speak reliably to each other about our experiences. Similarities in reports of "peak" experiences lend credence to the idea that it's not merely an objective external reality to which we are responding, but also that the human sensorium contains certain more or less "standard" features.

In fact, we usually trust that similar situations produce similar experiences, evoking similar thoughts and emotions. This is how the producers of Coca Cola market their soda's formula to billions of people, and it's how Hollywood relies on plot formulae to churn out its films. If Coca Cola and Hollywood proceeded on the assumption that everyone's internal reality was unfathomably subjective (and by extension, unfathomably diverse), they'd never make a profit off their products. The contention that experience is essentially a private phenomenon fails to explain why we constantly make these sweeping assumptions about others' behavior and inner life.

Imagination also plays a role in experience. How many of us have ever actually ridden on a flying horse? I'd wager none, but does this stop us from imagining that swooping, whooshing experience and perhaps having a small taste of what it would be like? When we read Harry Potter novels, are we all plunged into completely different imaginary worlds? I think not. It's not merely the objective text that prevents this, but something internally standard as well that keeps us on roughly the same plane as we experience the unfolding of JK Rowling's plot.

I don't mean to detract anything from the Taoist's original contention. I agree: experience has to be experienced. But experience isn't as radically subjective as all that.

In fact, I'll end this first objection by noting that the issue of qualia's intransmissibility might not be some sort of metaphysical limitation: it might be a merely technological one. If we are able to develop devices that allow us to "plug into" someone else's mind and experience what they do-- i.e., see what they see, feel what they feel, etc.-- the case for qualia's "privacy" will be severely weakened. (I wrote on this a while ago: see here, and scroll down to the conversation about chocolate.)

Second objection: This is related to the first. If I take seriously the idea that qualia are radically subjective, I can't use qualia to construct arguments applicable to people other than myself. This is, in my view, a major flaw in the argument from qualia, and it hovers dangerously close to out-and-out solipsism.

The substance dualist doesn't really want to go the solipsistic route: he believes there's an objective reality, that there's more than just res cogitans. But if qualia are so intensely private, and there's no way to bridge the gap between "first-person and third-person ontology," then how is substance dualism not, in effect, a kind of idealistic monism or even solipsism? One has plenty of reasons for believing there's something going on inside one's own head, but one can never be sure about anyone or anything else. How practical a stance is this?

Third objection: This is related to the previous two objections. If I am truly locked inside my head, unable even to verify the objective existence of a world beyond my skull, I have nothing against which to check the reliability of my sense data, my qualia.

As Dr. Vallicella and others maintain, a quale's esse is nothing more or less than its percipi: a sense-datum's being is nothing more or less than its being-perceived. Fine. But suppose I'm an amputee who has a recurrent "phantom limb" experience. Such cases are common: a patient reaches over to scratch an itch that, effectively, isn't there. With no recourse to objective reality, how can one differentiate between truth and illusion? In the instant one reaches over to scratch, one is unable to make the distinction. Similarly, hallucinogens, which alter brain states and produce twisted images and perceptions, show that qualia can be chemically manipulated.

The inherent unreliability of qualia makes any argument based on them somewhat suspect, because the substance dualist seems to be at a loss to explain where qualia come from, and he provides us no means by which to substantiate them both to ourselves and to each other. The substance dualist believes there's a res extensa, but can't argue from the inside of his skull to the outside world. The argument from qualia, multiply flawed, leads nowhere.


My own stance is more in line with materialism (also called physicalism). All the properties we associate with mind-- all the things we call "mental activities"-- are epiphenomenally related to matter. We can think of the history of mind as a chart:

Matter > Life > Mind

I'll tell you a story. It's not one that everyone agrees with, but bear with me.

First, there was matter. As matter interacted with itself, certain complex and repeating patterns began to appear. Some of those patterns exhibited emergent behaviors, such as self-replication. Thus it was that life arose epiphenomenally from matter. Life was an emergent phenomenon. Then, as patterns of life gained even greater complexity, mind arose from life. The End.
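The story above leans on the notion of emergence, which is easy to demonstrate in miniature. Here's a toy sketch using Conway's Game of Life (my own illustrative choice, not something from the philosophers discussed here): a handful of dumb local rules applied to a grid of "matter" produces a persistent, self-propagating pattern-- the famous "glider"-- that the rules themselves never mention.

```python
from collections import Counter

# Toy illustration of emergence: Conway's Game of Life. Simple local
# rules applied to inert cells produce a coherent, self-propagating
# pattern that appears nowhere in the rules themselves.

def step(live):
    """Advance one generation. `live` is a set of (x, y) cells."""
    # Count how many live neighbors each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell survives with 2-3 live neighbors; a dead cell is born with exactly 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": after 4 generations the same shape reappears, shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The glider isn't "in" any single cell or rule; it exists only as a pattern at a higher level of description-- which is the sense of "emergent" I'm gesturing at when I say life arose from matter and mind from life.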

Please note that the above chart doesn't make clear that mind is a subset of life, and life is a subset of matter. The chart might give you the mistaken impression that matter disappeared when life arrived, and that life disappeared when mind arose. Perhaps a better illustration might be a three-tiered pyramid, with matter at the bottom, life in the middle, and mind on top. The lowest tier cannot disappear, and the pyramid metaphor makes clear that matter is the basis of life and mind.

Other analogies are illustrative. Just as computer software requires hardware to be run, so it is that mind requires matter to exist. However, the software/hardware analogy doesn't address the epiphenomenal nature of mind, because software didn't evolve out of hardware, whereas I would contend that mind did evolve out of matter. Nevertheless, the analogy points out the most important thing: if there's no matter, there's no mind there. Matter is logically prior to mind: it has to come first. More: matter is necessary for mind.

My stance is informed by the progress we continue to make in science, both with regard to what neuroscience tells us about the brain and body, and to the work being done on artificial intelligence (AI) by luminaries like Ray Kurzweil (with whose thought I'm only now becoming familiar; just a few weeks ago I had little idea who he was).

Science, as yet, can't conclusively prove that mind is inseparable from matter. As I see it, science is building a case for materialism, and is doing so on multiple fronts. Perhaps Kurzweil is right to speak of what is currently happening in the field of AI as "reverse engineering" the human brain.

However, I don't see mind as reducible to matter, if by this one is trying to claim something like "the mind's activity can be comprehensively explained through simple physics." No, it can't be thus explained. In the computer analogy, one can see that software and hardware need each other, but that they operate according to qualitatively different rules. As Robert Pirsig notes in his book Lila, a software specialist doesn't have to know everything about hardware to do his job. The same is true, in reverse, for the hardware specialist.

I would submit that, while mind isn't reducible to matter, it does absolutely depend on matter for its existence. The problem of "interiority," of subjective experience (qualia again) isn't yet solvable, but might be in the future.

I can't prove mind's absolute dependence on matter to you, but I can lay my argument out this way. Science's breakthroughs in the fields of human cognition (neuroscience, psychology, etc.) and in AI are all predicated on materialistic theories of consciousness. It's obvious to scientists that substance dualism is, scientifically speaking, utterly useless. You can't form theories leading to developments in artificial intelligence if you're already convinced that mind isn't material. It's a non-starter.

And as the science advances, each advance represents another piece of evidence for the materialist's case. The substance dualist's arguments, on the other hand, first focus on what hasn't yet been explained, and then posit non-material consciousness (whatever that might mean) as the best, and perhaps only, way to make sense of the inexplicable. For the substance dualist, the case is open and shut. As far as he's concerned, there's simply no way to establish that mind isn't immaterial. It's a strategy that opens the door to a lot of hocus-pocus, in my opinion. This is similar to a "god of the gaps" approach in evolutionary theory: the theory hasn't explained everything, therefore those gaps in the explanation indicate the work of God.

Except the gaps keep getting smaller, don't they? Religion, whenever it tangles with science on science's own ground, always ends up in retreat. History is replete with examples of this. Substance dualism, a stance generally favored by the religious, has done nothing but make claims. Denial and refutation seem to be the only things on offer from the dualist's camp. The only proactive contribution to the discussion has been the positing of immaterial mind.

A scientist, however, would like to see and explore this "mind independent from matter." Is this possible? If the dualist claims that the mind, by its very nature, is not apprehensible by science, then this is little different from Carl Sagan's argument about "the dragon in my garage" from his book The Demon-haunted World. To wit:

Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all? If there's no way to disprove my contention, no conceivable experiment that would count against it, what does it mean to say that my dragon exists? Your inability to invalidate my hypothesis is not at all the same thing as proving it true. Claims that cannot be tested, assertions immune to disproof are veridically worthless, whatever value they may have in inspiring us or in exciting our sense of wonder. What I'm asking you to do comes down to believing, in the absence of evidence, on my say-so. The only thing you've really learned from my insistence that there's a dragon in my garage is that something funny is going on inside my head. You'd wonder, if no physical tests apply, what convinced me. The possibility that it was a dream or a hallucination would certainly enter your mind. But then, why am I taking it so seriously? Maybe I need help. At the least, maybe I've seriously underestimated human fallibility. Imagine that, despite none of the tests being successful, you wish to be scrupulously open-minded. So you don't outright reject the notion that there's a fire-breathing dragon in my garage. You merely put it on hold. Present evidence is strongly against it, but if a new body of data emerge you're prepared to examine it and see if it convinces you. Surely it's unfair of me to be offended at not being believed; or to criticize you for being stodgy and unimaginative -- merely because you rendered the Scottish verdict of "not proved." (italics added)

I think Sagan has a point. The substance dualist walls himself off from even the possibility of having his view empirically contested, insisting that the mind is so radically other, causing physical changes through an incomprehensible-- yet somehow possible-- process, that it is totally immune to scientific proof or disproof. In the meantime, the dualist offers no positive arguments for his thesis. All the while, science continues to reveal the intimate relationship between physical processes in the brain-body suite, and mental/cognitive processes. At the same time, science, unlike substance dualism, remains open to the possibility that its own assumptions about the nature of mind may be quite wrong. Substance dualists, by contrast, too hastily trot out the dangers of mule-headed scientism, but do so while adopting a stance immune to discussion.

As I wrote earlier:

To date, science has not discovered a disembodied consciousness, which makes me lean toward the high likelihood that consciousness is not disembodied. Cut off a person's head-- are they still conscious? In what way? How do we know? Where is their consciousness, if no longer "in" their body? A substance dualist can provide no useful, verifiable answers, whereas a naturalist can say "cessation of physical function entails cessation of consciousness"-- which is likely true, given the millennia-long lack of reanimated corpse and ghost attacks.


I've dealt with substance dualism, so let's move on to materialist critiques of Ray Kurzweil's brand of "strong AI" materialism, a school of thought that considers truly sentient (not merely ostensibly sentient) machinery possible.

One prominent critic of Kurzweil is John Searle, himself a materialist. Searle believes Kurzweil's "strong AI" is founded on a false assumption: that non-biological, functionally equivalent parts can be assembled to produce something conscious. Searle isn't a substance dualist, but he does argue that only biological entities can have minds.

Searle's most famous argument against the possibility of conscious machines is his 1980s-era "Chinese Room" thought experiment. There are several versions of this argument, which generally goes something like this:

Imagine a room in which you find a man. He's got a complicated rule book with him, whose purpose is to help him interpret symbols-- Chinese characters, in fact. The room has two slots, one for input and the other for output. Someone outside the room slips in a piece of paper with Chinese characters on it. The man in the room then consults the book, which tells him to respond to the Chinese characters he receives by writing out a different set of Chinese characters on another slip of paper, and passing these characters out through the output slot.

Searle's point: outside the room, it looks as though an actual, written conversation in Chinese is happening. But inside the room, the man manipulating the slips of paper has no clue what he's reading; he's merely using the rule book (analogous to a computer algorithm) to determine how to respond. At no point is the man (or his rule book) demonstrating anything like true "understanding."

Searle is attacking the syntactic nature of computer programs and contending, via his Chinese Room argument, that consciousness isn't present in this setup, and never can be.
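The purely syntactic character of the room is easy to make concrete. Here's a minimal sketch in which the "rule book" is just a lookup table and the man is a function that blindly applies it; the particular exchanges are my own invented examples, not anything from Searle.

```python
# A minimal sketch of Searle's Chinese Room: the "rule book" is a lookup
# table, and the man in the room blindly applies it. The sample exchanges
# are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "今天天气好吗？": "天气很好。",    # "Nice weather today?" -> "Very nice."
}

def man_in_room(slip: str) -> str:
    """Match the incoming slip against the rule book and pass out the
    listed response. No understanding of Chinese is involved anywhere."""
    return RULE_BOOK.get(slip, "对不起，我不明白。")  # default: "Sorry, I don't understand."

print(man_in_room("你好吗？"))  # 我很好，谢谢。
```

From outside, the slips coming through the output slot look like conversation; inside, there's nothing but symbol-matching. That asymmetry is the whole force of the thought experiment.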

Kurzweil has, of course, responded to this charge, and it's worth quoting here. Here is what he says in the context of a debate about the meaning of IBM's Deep Blue, a computer program that played chess against world champion Garry Kasparov:

It is not at all my view that the simple recursive paradigm of Deep Blue is exemplary of how to build flexible intelligence in a machine. The pattern recognition paradigm of the human brain is that solutions emerge from the chaotic and unpredictable interplay of millions of simultaneous processes. And these pattern recognizers are themselves organized in elaborate and shifting hierarchies. In contrast to today’s computers, the human brain is massively parallel, combines digital and analog methods, and represents knowledge as highly distributed patterns encoded in trillions of neurotransmitter strengths.

A failure to understand that computing processes are capable of being—just like the human brain—chaotic, unpredictable, messy, tentative, and emergent is behind much of the criticism of the prospect of intelligent machines that we hear from Searle and other essentially materialist philosophers. Inevitably, Searle comes back to a criticism of “symbolic” computing: that orderly sequential symbolic processes cannot recreate true thinking. I think that’s true.

But that’s not the only way to build machines, or computers.

So-called computers (and part of the problem is the word “computer” because machines can do more than “compute”) are not limited to symbolic processing. Nonbiological entities can also use the emergent self-organizing paradigm, and indeed that will be one great trend over the next couple of decades, a trend well under way. Computers do not have to use only 0 and 1. They don’t have to be all digital. The human brain combines analog and digital techniques. For example, California Institute of Technology Professor Carver Mead and others have shown that machines can be built by combining digital and analog methods. Machines can be massively parallel. And machines can use chaotic emergent techniques just as the brain does.

My own background is in pattern recognition, and the primary computing techniques that I have used are not symbol manipulation, but rather self-organizing methods such as neural nets, Markov models, and evolutionary (sometimes called genetic) algorithms.

A machine that could really do what Searle describes in the Chinese Room would not be merely “manipulating symbols” because that approach doesn’t work. This is at the heart of the philosophical [sleight] of hand underlying the Chinese Room (but more about the Chinese Room below).

It is not the case that the nature of computing is limited to manipulating symbols. Something is going on in the human brain, and there is nothing that prevents these biological processes from being reverse engineered and replicated in nonbiological entities.
(italics in original)
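To give a flavor of the "self-organizing methods" Kurzweil names, here's a toy evolutionary (genetic) algorithm-- my own minimal sketch, not Kurzweil's code. The goal state (a string of all ones) is never written into the program as a rule; the population simply drifts toward it through blind variation plus selection.

```python
import random

# Toy evolutionary algorithm: evolve a 20-bit genome toward all ones.
# The "answer" emerges from mutation and selection, not from an
# explicit rule that says what the answer is.
random.seed(0)
GENOME_LEN = 20

def fitness(genome):
    return sum(genome)  # number of 1-bits; higher is better

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in genome]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # selection: keep the fitter half
        children = [mutate(random.choice(parents)) for _ in parents]
        pop = parents + children                  # elitist next generation
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward GENOME_LEN as selection proceeds
```

Nothing metaphysically deep happens here, of course, but it shows the contrast Kurzweil is drawing: this is not a rule book being consulted, Chinese Room-style; it's a messy, chaotic process out of which order emerges.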

Kurzweil's "strong AI" tends in the direction you might imagine: the merging of people and machines. His position in the mind-body debate might be described as "functionalist," insofar as he believes that non-biological materials can reproduce brain functions and eventually be their equally conscious equivalent.

Imagine a world in which "wet circuitry" exists. Your brain is severely traumatized, so you're taken to the hospital and a chunk of your brain is scooped out and replaced with this wet circuitry, which has been designed to replicate your brain functions. Voila-- cognitive abilities restored (even though you're missing a bunch of engrams that were lost in the brain trauma)! Is such a world possible? Kurzweil would say yes, and so would I. Will we live to see it? No, of course not. But the important question is whether such circuitry will possess the functional equivalents of mental phenomena like intentionality and so on. Kurzweil and I would say, Why wouldn't it?


Despite my materialist leanings, I don't believe that the mind is exactly the same as the brain. We're still trying to understand the biology of consciousness, and there's reason to believe that consciousness, while primarily rooted in the brain, is also distributed through the body. I'm not wimping out and suggesting that the mind is also something other than brain and body; I'm merely suggesting that the physical elements of mind might include more than just the brain.


There's a lot more to this mind-body debate than I've covered here. My own point of view is close to Kurzweil's, but it's also in agreement with Searle insofar as both Kurzweil and Searle see human consciousness as arising epiphenomenally from matter. All three of us are materialists in that sense. But I side with Kurzweil over Searle in believing that functionalism has merit: non-biological elements can and will be assembled to produce machines that can pass the Turing Test and be perceived as truly conscious. Searle's Chinese Room argument attempts to show how a machine might cheat the Turing Test, but I think cheating the test is impossible, and assume Kurzweil does, too. The substance dualist won't be convinced, of course. A machine could pass the Turing Test, exhibit all the signs of conscious activity, and still the dualist will insist, in the face of strong evidence, that the jury's out. This insistence will sound increasingly puny as progress continues beyond the threshold of human intelligence.*

Substance dualism strikes me as an untenable position for the reasons I've already given. The argument from qualia is unconvincing because of the very nature of qualia: if they are indeed radically subjective, then no meaningful claims can be made about other minds. More than that: I don't think qualia are entirely private and inaccessible, because human beings are wired in similar ways to have similar experiences and modes of experience (taking into account cultural mediation, etc.). We daily assume that we can relate to each other, which undermines contentions about radical subjectivity.

Substance dualism also fails to produce any constructive arguments on its own behalf. It can't present us with an examinable immaterial mind, nor can it do more than make untestable, unprovable claims about this mind. In the meantime, science, proceeding on materialistic assumptions, continues to make progress in fields like neurophysiology and AI. How is this possible if those assumptions are fundamentally wrong? The substance dualist's notion of "mind" truly is little more than Sagan's dragon in the garage: veridically worthless.

I see mind as rooted in and inseparable from matter. No matter, no mind. Cut off a person's head, and that person's no longer conscious. Introduce certain hallucinogenic chemicals into someone, and you give them hallucinations. Watch a person age and experience dementia, and you'll see firsthand how the brain's deterioration exactly parallels the mind's deterioration (see also the story of Phineas Gage, the poor bloke who suffered a massive head injury and effectively became a different person afterward). Mind is built upon matter.

But I also believe that mind retains its own distinctiveness and isn't simply reducible to matter. The hardware/software analogy makes this clear: mind is inseparable from matter, but follows qualitatively different rules from it. And just as an ocean wave relates nondualistically to the ocean, possessing its own distinctiveness while fully integral to the greater whole, so it is that mind relates nondualistically to matter. There's no need to radically dichotomize the two. This, then, is what I'm calling my theory of materialistic nondualism.

*There are solid ethical reasons to be wary of Kurzweil's vision. A good critique is Bill Joy's essay "Why the Future Doesn't Need Us," to be found in the collection Taking the Red Pill: Science, Philosophy, and Religion in the Matrix, edited by Glenn Yeffeth. Joy sees frightening difficulties associated with the three major technologies of our time: genetic engineering, nanotechnology, and robotics. All three technologies can produce catastrophes because they all deal with (potentially) self-replicating products. Joy gives us a spooky quote from George Dyson: "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines." The point being made is blatantly Darwinian: humans will lose if forced to compete with machines for space and resources.

UPDATE: The fusion of people and machines has been happening for a while. Here's one of the latest examples: a man with robotic arms controlled by his mind. Intentionality meets the machine! A qualia-related quote from the article: "Eventually tiny sensors in the fingertips will allow Sullivan to feel texture and temperature." Feel! I'm not sure a substance dualist can explain how this is possible. Make this case into a thought experiment: how intrusive will the prostheses of the future be? Artificial eyes? Artificial optic nerves? Artificial brain neurons? Artificial brains with intentionality?

