
Conversations: a new feature of 21stC

Debate, discussion, and argument are at the heart of academic communication. Knowledge would never change if scholars did not engage each other's ideas and argue for, against, or in spite of them. This constant roiling of ideas--bouncing around, smacking into one another, sometimes attracting, sometimes repelling--is what the academic thought process is all about. Thesis, antithesis, synthesis: that's supposed to be how it works, right?

In this issue, 21stC begins a new feature that invites prominent scholars in different disciplines to discuss a question that each will view from a very different professional perspective, with a concomitant degree of differing opinion and verbal interchange about possible answers. Careful readers of 21stC will recognize this literary form from an earlier discussion of environmental ideas by Professors Melnick, Pearl, and Schama in Issue 1.2. We liked it well enough to adapt it as a new feature that will appear from time to time. Imagine "Conversations" as 21stC's version of a coffee-house conversation in print. The only thing we don't supply is the caffeine.


A CONVERSATION ON CONSCIOUSNESS

Wetware meets software: the mind and its artificial models

With computers outstripping some human abilities and ideas of artificial intelligence in the air, can we still speak of cognition and consciousness as uniquely ours? 21stC assembled a panel of leaders in philosophy, computer science, and psychiatry to discuss what machines might tell us about the mind.

By AKEEL BILGRAMI, JARON LANIER, and HAROLD SACKEIM

21stC: Consider Deep Blue, the chess computer. A human did beat it, but it took the best human chess player alive, and it prompted a lot of speculation about whether our machines are becoming capable of outdoing us--whether they're somehow more complicated information processors than we are.

Jaron Lanier: I think the question as it's been put forward in most of the popular press is a little bit of a red herring. Most of the lay public are not so much worried about whether computers can do certain tasks better than people, because of course they already can. What they're mostly concerned about is more of a philosophical question: the nature of experience and whether there is something about personal subjectivity that might be distinct from mechanism. That concerns people on a spiritual level.

Speaking directly to the question of comparative capability, it's important to understand that the reason people find chess fascinating is that it's something we're not good at doing. It's a task we have been obsessed with for quite a long time precisely because we can identify it as one we're particularly ill-suited for. I think everyone who is familiar with this line of research would agree that a machine that can do everything a human brain can do would probably look different enough from computers as we now understand them that we'd end up coining a different word for it. I think also there's no reason to doubt that some potential mechanism could do all the things we do, but that's very far away.

21stC: If we granted that supposition, would that machine's functions also include sentience? There are those who say there is no felt quality that can't be reduced to algorithms.

Akeel Bilgrami: That's a very controversial subject, and I certainly don't have a settled view on this, but it does seem that if there are felt qualities which remain unreduced and unsimulated in principle, then there's a question as to whether those felt qualities are in any way public items. That is, they're entirely private sensations, and I don't know that anything else could be said about them except that they're there.

Harold Sackeim: I think the question you're posing is, if we saw an artificial being crying, how would we know that the artificial being was depressed? Is it completely private or not? How do we establish that something else is feeling?

Bilgrami: And does it make a difference to anything else that this private phenomenon is there? Is it an idle wheel that spins but makes absolutely no difference to how we attribute other properties to that subject? If it's a difference that makes no other difference, there's a real question as to how we could study it, over and above saying it exists. There's no subject which has just one proposition in it, that something exists. We have to be able to say things about it. So, that's really the question to ask: Are there things we can say about it in detail, and it be such that it's not reducible to anything else?

On a different matter, Kasparov himself said that after the first few games, when he realized what its particular virtues and abilities were, he had to play this machine differently from how he would play a human being. I was wondering if that was significant: Maybe it will do everything that we can do, but in a very different way.

Lanier: If one of the research goals of artificial intelligence is to increase our understanding of what people do in their brains, then certainly we should be concerned with how computers accomplish what they accomplish. Of course, we don't know very much about how brains accomplish what they accomplish, so it could turn out that we happen to have hit upon the same method on some level. There are strong reasons to believe that what the computer is doing is profoundly different from what brains do.

Bilgrami: A lot of other human activities, such as facial recognition, seem to be not done in terms of a sequential system like a digital computer. They're done by parallel distributed processing. Now, that would suggest that the computer analogy is not as interesting as other models, wouldn't it?

Sackeim: I'm not so sure about that. When we get to fairly concrete cognitive acts, it would be wrong on our part to assume that we can't model them through artificial means. Pigeons are excellent at learning, from photographs, which individuals belong to which families. We don't know how the pigeon does it, but I assume we can build a machine that will make those types of discriminations with a fair degree of reliability.

My own view as to where the tension lies has more to do with issues about the nature of consciousness: planning, goal-directedness, affectivity. Part of the reason the chess-playing computer grabbed public attention is that it was learning--not just executing a program but responding to changes in its environment and altering its strategy. The difference between that type of behavior and larger goal-directedness--thinking "I want to produce a novel" or "I want to solve a scientific problem, and I have different problems I can choose from"--is the crux. If you translate that into biologic terms, the question from a neurobiologist becomes "Will computers ever develop frontal lobes?" That's the part of humanity that has distinguished us from other species and has undergone the greatest development.

Sifting the grounds for skepticism

Sackeim: In the neurosciences, pursuing the neurobiology of emotion--what the transmitter systems are within neuroanatomic pathways, how we can alter people's moods--that's something that we do fairly well. I don't know whether any of that work addresses the philosophic questions. We can certainly conceive of a machine that, when physically assaulted, gives some external expression of pain, but in no way would we necessarily say that those are necessary and sufficient conditions for saying that the computer was feeling pain.

Bilgrami: But is that entirely because it's not made of cells, but of chips? Is that the reason why we would be skeptical: that the realizing hardware is different?

Sackeim: In a human, we have circumstances where there can be reactions to painful stimuli when individuals are unconscious, and we don't think that they are experiencing pain. But it's often the case that an awareness of the state, a self-consciousness, is prerequisite. The question then becomes "Under what conditions would we say that a computer had consciousness of some internal state?"

Bilgrami: If awareness is just having these second-order states about its first-order states, then my worry is "What is the philosophical ground for being skeptical?" It's not that I'm not skeptical; I just want to know what the grounds for the skepticism are.

Lanier: I have an argument that also relates to the question of what method is used to design better tools for ourselves. The computer is serving as a central metaphor, it seems, for civilization at this point. It's the only thing everyone seems to be able to agree on: that somehow computers are good and useful and should be around children, should be around all of us. Therefore the question of how we design our computers should be viewed as important. I believe that this question of machine consciousness has a direct impact on how we design computers, and so here's a place, finally, where this question does make a difference.

The original argument for machine consciousness was framed by Alan Turing in the thought experiment called the Turing test: You have a computer and a human in isolation booths, attempting to fool you. The claim Turing makes is, at the point where you can no longer distinguish them, if you still insist on a distinction, it's as if you were repeating the sins of the Catholic Church and condemning Galileo by insisting on some sentimental central status for people that's actually not justified. Here's the critique I would make: All the test really measures is whether you can distinguish a person from a computer. Turing assumes that at the point that you can no longer distinguish them, it must mean that the computer has changed and become more humanlike. But indeed, the opposite is possible, and the human might have become more computerlike. Turing made the error of identifying humanness with smartness. We don't know whether the computer has gotten smarter or the person has gotten stupider.
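
To make the structure Lanier is criticizing concrete, here is a minimal sketch of Turing's imitation game in Python. The respondent functions and the interrogator's strategy are hypothetical stand-ins, not anything Turing specified; the point is only that the protocol scores indistinguishability and nothing else.

    import random

    # Hypothetical stand-ins: in the real test the "human" answers come from a
    # person at a terminal and the "machine" answers from the program under test.
    def human_respondent(question):
        return "Let me think about " + question

    def machine_respondent(question):
        return "Let me think about " + question

    def interrogator_guess(answers_a, answers_b):
        # The interrogator may use any strategy; guessing at random is what
        # genuine indistinguishability would reduce every strategy to.
        return random.choice(["A", "B"])

    def imitation_game(questions, trials=1000):
        correct = 0
        for _ in range(trials):
            # Hide which booth holds the machine.
            booths = {"A": machine_respondent, "B": human_respondent}
            if random.random() < 0.5:
                booths = {"A": human_respondent, "B": machine_respondent}
            answers_a = [booths["A"](q) for q in questions]
            answers_b = [booths["B"](q) for q in questions]
            guess = interrogator_guess(answers_a, answers_b)
            if booths[guess] is machine_respondent:
                correct += 1
        # Turing's criterion: identifying the machine no better than chance.
        return correct / trials

    print(imitation_game(["What is a sonnet?", "Add 34957 to 70764."]))

Nothing in this score says which side moved: a falling identification rate is equally consistent with a machine becoming more humanlike or a human becoming more machinelike, which is exactly the asymmetry Lanier objects to.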

Now, that's an epistemological problem; unfortunately, that problem does translate in a practical sense into our ability to design machines. The idea of artificial intelligence has actually been used already in commercial products. They've all been failures. When you observe people using the products, they start to change themselves, simplifying their patterns of behavior and constraining their lives in order to make the machines appear smart. The Newton has a calendar, which is capable of noticing patterns in one's life. If you call a person regularly, it will start to remind you to call that person on schedule; if you tend to have lunch with a certain person in a certain place, it will pick up on that. People start to narrow themselves and become more like this representation that was written in advance. If users believe a computer is smart, they're more likely to defer to it rather than provide critical feedback. If you perceive a computer as a dumb tool with a user interface and it fails for some reason, then rather than granting it stature as an autonomous being, you're likely to say "This tool is poor" and demand that it be changed. Interrupting that feedback loop by granting the computer autonomy prevents the improvement of tools for the benefit of human users. So, for that reason among others, I think that the idea of artificial intelligence is actually damaging in the practical conduct of computer science.
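
The pattern-noticing Lanier attributes to the Newton's calendar amounts to a simple frequency heuristic. A hypothetical sketch in Python (the names and threshold are illustrative, not the Newton's actual behavior) might look like this:

    from collections import Counter

    # Each logged event is a (weekday, activity) pair, e.g. (0, "call Chris") for a Monday.
    def suggest_reminders(history, min_occurrences=3):
        counts = Counter(history)
        # Anything repeated often enough becomes a standing reminder.
        return [event for event, n in counts.items() if n >= min_occurrences]

    history = [(0, "call Chris"), (0, "call Chris"), (0, "call Chris"),
               (2, "lunch with Pat")]
    print(suggest_reminders(history))   # [(0, 'call Chris')]

The narrowing Lanier describes sets in when users begin scheduling their lives around what a rule this crude can recognize, rather than demanding a better tool.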

Logic, "psycho-logic," and the buzz

Bilgrami: Even if we grant that we are some kind of machines ourselves, just the product of evolution, not a product constructed in the way that computers are constructed--that is to say, our minds are to some extent like software, and they are realized in wetware or hardware--the ultimate description that we get in our cognitive, physiological, and biological sciences [is that] the sciences traffic in the non-normative vocabulary of causation. There's a vocabulary of law: we say this event follows that event, and it does so in virtue of instantiating some law. It may be a fairly hedged law, full of ceteris paribus clauses and so on, but it is nevertheless a statement of that kind, whereas we tend to think that normative ways of speaking, too, are essential to thinking. If you believe all men are mortal, and you believe that Socrates is a man, then you ought to believe Socrates is mortal. Now, that "ought" is not strictly a term that science traffics in at all. These evaluative remarks, it seems to me, are not found under the scientific description of how things are and what explains them by laws. So that's the real duality, not whether we can be machines or not. We are some type of machine, but we're the kind of machine which essentially is subject to our own rational principles, which is not what the world of nature is subject to.

21stC: Red in tooth and claw, and devoid of syllogisms?

Bilgrami: Right. Devoid of the normative aspects. You can say all creatures who believe all men are mortal, and who believe Socrates is a man, in fact do believe this. But that's not the point. The point is, even if they all had to, we also want to say something further: that they ought to.

Sackeim: I think that begs the question, because what I hear you saying is that there's a psycho-logic which is different than logic.

Bilgrami: Yeah, or different than physics or biology.

Sackeim: Right. But the issue becomes "What are the boundaries, and how do we know when something fits within that psycho-logic or not?" It's readily understood among humanity that when you lose something you care about, you are hurt. There's nothing that's given in any logic that that should be the case, but it's a fact about humanity, and it would be an odd creature that wouldn't show that.

21stC: Or a diseased creature, perhaps. We speak of psychopathology where someone loses that insight.

Sackeim: Right. So there are fuzzy boundaries as to where we can define the descriptors, but how do we decide that some alien thing matches those descriptors or not, is inside or outside?

Lanier: The whole problem with this question is that one can't begin to address it without adopting assumptions and methodologies that already answer it. It tends to polarize to extremes, in which conversation is very difficult. I have found that once someone is within the machine-consciousness world, they're incapable of reaching any different conclusions using that vocabulary. Likewise, for someone coming from some subjective origin, phenomenologists, these sorts of people, they're incapable of seeing the other point of view. I think it's impossible to delve into this question without encountering a sense of personal spirituality and things that are truly metaphysical. We try to have sober conversations about it and stay within the bounds of a discipline, and in fact we can't. The folks who end up believing in machine intelligence do so with a weird fundamentalist fervor. There's a religious sensibility to both sides. I think it touches on the deepest sense of human identity, of whether there is something mysterious and transcendental about the existence of a subjective point of view. There's this core question of a difference between subjectivity and objectivity, and that's a very personal question.

Bilgrami: But subjectivity is sort of an omnibus term. It means so many different things. Consciousness is the same way. It seems to me, one notion of consciousness is that one is aware of one's thoughts. It's just the having of higher-order thoughts: metacognition, as they call it. I have a thought, and then I have a thought that I have that thought. Some people say that's all consciousness is, and surely computers can have that second order.
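
The deflationary, higher-order notion Bilgrami mentions is easy to mock up. Here is a minimal sketch in Python, assuming (as the view he attributes to "some people" does) that awareness is nothing more than a second-order record about a first-order state; the class and field names are illustrative only.

    class Agent:
        def __init__(self):
            self.first_order = []    # states about the world
            self.second_order = []   # states about the agent's own states

        def think(self, proposition):
            self.first_order.append(proposition)
            # Metacognition in the thin sense: a thought about the thought just had.
            self.second_order.append(("I am thinking that", proposition))

    a = Agent()
    a.think("it is raining")
    print(a.second_order)   # [('I am thinking that', 'it is raining')]

Whether a record like this is all there is to consciousness, or whether it leaves out the "buzz" Bilgrami goes on to set aside, is precisely what the participants dispute.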

Lanier: I've stopped using the word "consciousness" for that reason, because it's been colonized by the AI community, so I can't use it anymore. It's been destroyed.

Bilgrami: Right. So another notion of consciousness is this much more mysterious thing. There's this buzz that comes on when we get up and seems to go away when we go to sleep, or when we faint. There's just this constant qualitative wakingness. Maybe I'm pretending to be deaf to the issue, but I don't understand what this kind of subjectivity is supposed to be. I know Searle makes a lot of it. But my sympathies are with people like Dennett, because I don't know what this extraordinarily subjective qualitative feel--

Lanier: You might not have it. I mean, there's no reason to think everybody has it. I have it, though.

Bilgrami: Yeah, but you're not supposed to say that! You're supposed to say, "This is what it is like to be human."

First-degree cybercide?

Sackeim: When you think in practical terms of determining when to pull the plug--and that's really what it comes down to on the grossest level--the determination is based on some judgment as to whether that version of consciousness is there. When that fades, it's determined that you're brain-dead.

Bilgrami: But I think "Why should I pull the plug?" means "Why should I take this human life away?" That can be spelled out in all sorts of ways that don't mention this mysterious thing. I understand that if you leave it out, you're leaving out something; I don't even want to deny that there's something there. I want to know, what difference does it make to anything else we say about the person or the subject? I guess I take a very orthodox functionalist view of this: that it's explainable by the functional account we give.

Sackeim: Are you asking what good it does?

Bilgrami: No. I'm saying that it doesn't seem to me to be a tractable question to ask, and therefore not an interesting question to ask: "What is this subjective buzz that we all have?" Well, let's say it's there. The whole point of it is that you can't say anything else about it, because if we do, we have made it less mysterious, and that is precisely what it's not supposed to be. It's supposed to be superlatively mysterious.

Lanier: In popular culture, the immensely divisive debate that mirrors the one we're having now is the debate over abortion. I think, ultimately, it's the same question: how we should understand ourselves. Should we care about what we're able to do? Is functionality a part of our definition? Is there some core sacred element that's mysterious and unreachable that should be a part of it? The inability of the two sides to find common ground is precisely what we see in the more civil discussion of machine consciousness.

Sackeim: I think you're asking, "When will we come to the point when we throw out a computer and it will be called murder?"

Bilgrami: But suppose we ask your question [about] some human beings who we sometimes think have more than one consciousness. Correct me if I'm wrong, since you probably know much more about the clinical practice of this than I do, but I would have thought that the clinical cases are such that a psychiatrist or minister or therapist dealing with such a case finds that one of them seems to have goals, attitudes, hopes, wishes, desires, and beliefs which seem inconsistent with another. Now, I haven't mentioned the word "consciousness." I've talked of beliefs, goals, desires, and so on. "Consciousness" is fine if it's just a term we use to describe these other things, but that's not mysterious enough. Over and above all that, there's supposed to be this buzz. That's the question philosophers get excited about and say, "Well, none of what you said required that." We can talk sensibly, as I'm sure clinicians do, about multiples, and ask "Should they be unified? Should they be different?" We can ask all these questions without raising this superlatively mysterious question that seems by its very nature to be intractable.

Sackeim: But you can turn it around. If you don't have the buzz, can you have beliefs, attitudes, feelings, and so on? Is the buzz the glue that permits that to occur?

Bilgrami: The point I'm making is, you can say it's the glue, but once you've said it, you can't show in any detail, "This glue is working this way with this glue, or this way." It's just some general glue. I don't know what difference it makes to say there is or isn't this glue. So go ahead and say it's the glue, but once you've said it, you've not created a subject. You've not created anything we can talk about, so it's an idle wheel.

Sackeim: One way around that would be to say when we present information in a world where people are conscious of that information, the impact it has on their lives, what they can do with it, is quite different than when they're not. It's not just a unifying piece, but it is something that has impact on behavior. And so it serves a purpose.

Bilgrami: If it has an impact on behavior, then you can study it. You can say it causes this kind of behavior. Now what causes it? Can we understand pain as that which, given a certain input--the puncturing of the skin, let's say--[and] given all the beliefs, desires, etc., leads to certain avoidance behavior or get-help behavior, and so on? That's a relatively unmysterious thing. It is caused by X, and it causes Y. It mediates inputs and outputs. Now, I haven't said anything about consciousness. I've said things we all say--psychologists, philosophers, biologists. Now, you say "No, no, no, but you're missing out on the buzz, or this feeling."
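
Bilgrami's functionalist picture can be put schematically: the state is exhausted by its causal role. A minimal sketch in Python, with illustrative names and no claim about any felt quality:

    # "Pain" defined only by what causes it and what it causes.
    def pain_state(stimulus):
        # Caused by X: a tissue-damaging input.
        return stimulus == "skin puncture"

    def behavior(in_pain, beliefs):
        # Causes Y: avoidance or help-seeking, modulated by background beliefs.
        if not in_pain:
            return "carry on"
        return "seek help" if beliefs.get("help is nearby") else "withdraw"

    beliefs = {"help is nearby": True}
    print(behavior(pain_state("skin puncture"), beliefs))   # seek help

On this account the state is fully specified by its place between inputs and outputs; Sackeim's rejoinder, below, is that the buzz may be what makes that causal role get played at all.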

Sackeim: Without the buzz, you wouldn't avoid, because the buzz is what makes you avoid things.

Bilgrami: Right; so the buzz is always there. I'm not claiming it's not there, but I'm saying, everything else we can say about it comes down to these other things. And then you say it's there. Now, I agree with you: It's there, but that's the only proposition we can make about consciousness.

Lanier: No, I disagree. I think we've already identified a number of differences that consciousness makes. One is, if you don't believe in it, you design poor computer tools. Two is, if you don't believe in it, you write philosophical papers with different outcomes. And I think there is a third one, which has to do with the way we treat computers as metaphors. Because, if we believe that we're solely mechanisms, if we don't hold out for some special buzz, then we're more likely to treat whatever machines exist at the moment as a serious model for ourselves, when perhaps we shouldn't. That, I think, leads to a certain type of aesthetic, which might be called the nerd aesthetic. When you're in an environment where there are a great many people who believe in machine intelligence--for instance, if you visit the MIT Media Lab or Silicon Valley--people stop thinking in terms of a joyous, hedonistic lifestyle. They start to think of everything as a task. Instead of enjoying music, they try to think about how you can model music and have more efficient music generators. There's a turning away from sensuality.

Bilgrami: This is part of the problem of saying consciousness is everything that we think science can't study. I'm not denying that there are things which purely causal to-ings and fro-ings in a machine--or for that matter in our brain--can't capture. Normative things they can't capture. I'm not even denying that consciousness involves the kind of subjectivity that is there in aesthetic enjoyment or anything like that. I'm just saying there's a very specific problem that Dennett and others have raised, which Descartes faced in a particularly excruciating way, and [it's] not the problem of what nerds lack, or what the sciences can't grapple with when they leave out norms. This is a specific, deep philosophical issue, so deep that it doesn't seem to be something we can say anything about. That's how deep it is, so "Whereof one cannot speak . . ." I realize many philosophers would say I'm being way too antisubjectivist.

Lanier: I think the problem isn't so much that nothing can be said, but rather that nothing intermediate can be said, that only extreme positions can be taken. Wittgenstein was more subtle in pointing out the limitations to dialogue, whereas Dennett is simply an extremist on one side, and Searle is an extremist on the other side--as am I, I'll happily admit.

21stC [to Lanier]: It seems that one of the things you've said resembles something William James said about free will. He resolved, as his first act of free will, to believe in free will. Can we substitute "consciousness" for "free will" here?

Bilgrami: Now, that's interesting. I think that that's a much more interesting issue than this buzz version of consciousness. I agree that there is a sense in which what an agent has is a first-person point of view. An agent cannot just see itself as an object; it cannot be a spectator upon itself while remaining an agent. I can see myself as I see you, but when I do that, I'm not seeing myself as an agent, because I'm just seeing myself as I see things that are, from my point of view, another. The first-person or agent's point of view, it seems to me, is simply sui generis. Our very notion of agency turns on the fact that we can't study ourselves entirely in that third-person way and remain free agents. And that strikes me as the way in which we stand outside of what science studies, i.e., nature. All it shows is that determinism is true and freedom's there, which means they've got to be compatible. They're just two different perspectives, the third- and first-person perspectives.

Sackeim: The free will issue is critical to the humanness of machines. Ultimately, when we have machines that do what they want to do, and not what we want them to do, we're going to be more ready to ascribe consciousness to them. That type of criterion is the more stringent one. The core issues have to do with mechanism vs. freedom accounts of our behavior: When do we ascribe free will to a machine?


A crash of symbols

Bilgrami: The digital computer, presumably, is to be understood in terms of manipulating symbols a good deal of the time. But parallel distributed processing seems to emphasize the symbolic level much less. It emphasizes much lower-order neuronal activity. I wonder whether that, in some way, makes the idea that there's this level of representation, symbolic manipulation, much less true to what's going on in parallel processing and those ways of explaining human mentality.
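
The contrast Bilgrami is drawing can be made concrete with a toy example. Below, the same trivial decision is computed twice in Python: once as an explicit symbolic rule, and once by a small network of weighted units in which no single weight corresponds to the rule. This is a hypothetical illustration of the distributed style, not a model of the brain; the weights are hand-picked rather than learned.

    import math

    def symbolic(x1, x2):
        # The rule is written down as an explicit, manipulable symbol structure.
        return bool(x1 and not x2)

    def unit(inputs, weights, bias):
        s = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-s))            # smooth activation

    def distributed(x1, x2):
        # The same function, but the "knowledge" lives only in the whole
        # pattern of weights spread across the units.
        h1 = unit([x1, x2], [6.0, -6.0], -3.0)
        h2 = unit([x1, x2], [-4.0, -4.0], 2.0)
        return unit([h1, h2], [5.0, -5.0], -1.0) > 0.5

    for a in (0, 1):
        for b in (0, 1):
            assert symbolic(a, b) == distributed(a, b)

Both versions compute the same input-output function, which is part of why Lanier predicts, later in the conversation, that the brain may admit equally good symbolic and PDP-style descriptions.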

Lanier: I'll go to the practical level and bring it back to your field, which is to say that within the brain, there does seem to be a differentiation in function between components--if you look at the cerebellum, for instance, you won't see many symbols being processed--and that the brain is not a single kind of machine at all. Right now, our study of it is crude enough that we're just struggling with simple models like neural networks. But as our understanding progresses, we'll cease to see the brain as a unified mechanism.

Sackeim: The most general view is that processing is often extraordinarily widely distributed and involves billions of cells for very simple tasks. I think the tension you see in brain science is that we often go back to a homunculus view, where we have a lot of difficulty explaining how the right software package is being run for the task. As the environment keeps shifting its demands, we seem to be able to shift resources. How does that get accomplished? Does it happen at a local level, where essentially the environment is pushing the brain and, as it pushes, there's an adaptation--or is there a guy that says, "OK, let's stop running the Microsoft program; we'll start running somebody else's program"? That's why we have [terms] like executive activities. But we don't have a very good grasp as to how that gets accomplished. It's likely that it isn't the homunculus. If we have to give up that version of consciousness and say instead that it is locally determined as a function of the variations at hand, how then do we ever envision a machine, a non-biologic organism, which spontaneously reallocates its resources? Spontaneously redetermines what software is running, so to speak. Where is the metacognition? Is there a guy that's thinking, or is metacognition itself a reflection of a parallel distributed network?

Lanier: Well, I'll make a prediction: As the study of the brain continues and we're able to understand and predict the function of every last neuron, we'll discover there are multiple ways of interpreting it, and we can equally well approach it in some sort of symbolic, abstract framework or in a purely continuous PDP-style framework. Our own experience of ourselves, the way we parse ourselves, won't be very relevant to understanding the brain, even when it's utterly exposed in detail. And I think, in a funny way, the study of brain science might come around and make the notion of something special called consciousness more credible.


Related links...

  • Tucson conference on consciousness, April 1996

  • "Toward a science of consciousness," collaborative account of Tucson conference in New Scientist; requires free registration

  • Journal of Consciousness Studies

  • Searle on the Chinese Room Puzzle and the Turing test, Think 2.1 (1993)

  • Dennett mailing list, Thinknet

  • Hayes, P., Harnad, S., Perlis, D. & Block, N. Virtual Symposium on Virtual Mind, Minds and Machines 2:3 (1992): 217-238

  • Brief consciousness bibliography from "No Dogs or Philosophers Allowed," public access cable TV program featuring philosophical debates


    AKEEL BILGRAMI is chairman of Columbia's philosophy department. JARON LANIER, a visiting scholar in computer science at both Columbia and NYU, is an inventor, author, musician, and pioneer in the concepts and hardware of virtual reality. HAROLD SACKEIM is clinical professor of psychiatry at the College of Physicians & Surgeons and the New York State Psychiatric Institute.

    PHOTO CREDIT: Lena Lakoma

