John Searle’s Chinese Room Argument: An Explanation and Defense

As the title of this post suggests, I will explain John Searle’s Chinese Room argument and then defend it against common criticisms.

The explanation will also serve to clear up some common misconceptions about the argument. One misconception that I continue to encounter in print is that the argument is meant to show that we could never build computers that are conscious. It is not. So if you happen to think that is what the argument shows, please read on.

To anticipate, what the Chinese Room argument shows is that running a computer program is not, by itself, sufficient for understanding what the program runs. I will say more on this below, and in particular on how to understand “is not sufficient for understanding what the program runs.” But first, let’s present the argument.

Suppose we have a machine that takes in Chinese characters as input and, following a set of rules for these symbols, produces Chinese characters as output. Suppose further that these rules are so good that the machine behaves just as if it understood Chinese perfectly well. That is, if you enter Chinese characters that express, translated into English, “Hi, how are you?”, it will output Chinese characters that translate into a sensible answer, such as “I’m fine, thanks.”

In other words, the machine passes the Turing test for understanding Chinese: a conversation in Chinese with the machine is indistinguishable from a conversation in Chinese with a native speaker. Indeed, this is hardly even a thought experiment anymore: today’s devices will answer questions, tell you when they do not know an answer, comment about their day, or even make sassy remarks if you question whether they really understand English. And of course there are similar devices in China that respond to Chinese rather than English.

Next, suppose that you are in possession of the rule book that matches Chinese symbols given as input to Chinese symbols to be given as output. You have a pad of paper, slips of paper with Chinese characters are handed to you as input, and, consulting the rule book, you write down and return the Chinese characters that it tells you to write as output.
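To make the setup concrete, here is a deliberately toy sketch, in Python, of what the operator’s job amounts to: looking up uninterpreted input strings and returning the corresponding output strings. The particular entries and the function name are my own inventions for illustration; an actual rule book capable of passing the Turing test would have to be unimaginably more elaborate than a lookup table.

```python
# A toy "rule book": uninterpreted input strings mapped to uninterpreted
# output strings. Nothing in this table represents what the characters mean.
RULE_BOOK = {
    "你好，你好吗？": "我很好，谢谢。",   # "Hi, how are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",       # "What is your name?" -> "My name is Xiaoming."
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever the rule book says to write back for this input."""
    # Pure symbol matching: the operator (or machine) consults the table and
    # copies out the answer, with no grasp of what either side says.
    return RULE_BOOK.get(input_symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好，你好吗？"))  # prints 我很好，谢谢。
```

The point the thought experiment trades on is that you could execute this procedure flawlessly, by hand, without the strings on either side meaning anything to you.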

Here’s the punchline: do you now understand Chinese merely because you can match what is input to what you output via the rule book? The only way you might understand what is being input and output is if you already understood Chinese. Searle puts the thought experiment in first-person terms: when it comes to the question of whether he understands any Chinese just by following these rules, he notes that he does not understand the Chinese symbols at all, because he does not understand Chinese (indeed, he tells us that he cannot even tell Chinese characters from Japanese ones (2)).

The conclusion of the thought experiment is that manipulating symbols via a step-by-step rule book is not sufficient for understanding what the symbols mean. But this is all that computers running programs do: they apply step-by-step rules (i.e., algorithms) to symbols given as input. A computer running a program that passes the Chinese-understanding Turing test therefore understands Chinese no better than John Searle does in the room.

If you’re a fan of deductive-style arguments, and want to see the reasoning at each step clearly, Searle’s thought experiment can be broken down into the following valid argument:

Assumption (roughly what Searle calls “Strong AI”): Running a program is sufficient for whatever runs the program to understand what the program runs, particularly if it passes the Turing test for that understanding.

Premise 1: Running a program is just a matter of applying rules to symbols given as input and returning those symbols to which the rules have been applied as output.

Premise 2: I can myself run a program as described in Premise 1, in a way that passes the Turing test for understanding what I am running.

Premise 3: But I don’t understand what I am running.

Conclusion: The assumption must be false.

This argument is valid: the truth of the premises guarantees the truth of the conclusion. If running a program is just a matter of manipulating symbols via rules, then I have run a program in a way that passes the Turing test for understanding what is running, yet without any understanding on my part, in direct contradiction to the assumption.
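For those who like to see the logical form written out, here is one way (my own regimentation, not Searle’s) to symbolize the reductio, where R(x) means that x runs the program, T(x) that x passes the Turing test for understanding what it runs, and U(x) that x understands what it runs:

```latex
\begin{align*}
\text{Assumption (Strong AI):}\quad & \forall x\,\big(R(x)\land T(x)\rightarrow U(x)\big)\\
\text{Premises 1--2:}\quad & R(\text{me})\land T(\text{me})\\
\text{Premise 3:}\quad & \neg U(\text{me})\\
\text{Conclusion:}\quad & \text{the Assumption is false, since together with Premises 1--2}\\
& \text{it entails } U(\text{me}), \text{ contradicting Premise 3.}
\end{align*}
```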

Before getting to some objections to the Chinese Room argument, I want to point out that it is not meant to show that we could never build a computer that is in fact conscious (or, as a proxy here, that can in fact understand the program it is running). Searle has his own theory of mind, called “Biological Naturalism,” on which the mind is a higher-level feature of neurons and other components of the brain, in the same way that solidity is a higher-level feature of molecular behavior, or digestion a higher-level feature of various lower-level organs, enzymes, and so on. On Searle’s view, (human) brains cause consciousness and, with it, the understanding of language in the sense we have been using. (His view is not that nothing else could cause consciousness: for Searle there might be plenty of other things that cause consciousness, even if running a computer program is, by itself, insufficient. Searle’s motivation is simply to start with what we already know: that the brain plays a primary causal role in, for example, understanding language.)

Given that human brains cause consciousness, and given that the Chinese Room argument is sound, then in order to create a computer that can understand the program it runs, we could give the computer the causal powers that the brain has in causing consciousness. If you ever hear Searle speak on consciousness or the Chinese Room, he will usually mention at least once “the brain-stabbers” over at UCSF, who need to figure out “how the brain does it”: how the brain makes consciousness (1). If you are a scientist trying to discover how consciousness works, this is the research project that logically follows from the insufficiency of a mere program to make consciousness, together with the fact that the brain does cause consciousness. It should go without saying that discovering exactly what that mechanism is would be a tremendous and unprecedented scientific discovery, one that would open avenues toward various other discoveries in cognitive science and psychology. We might, for example, be able to prolong conscious life, transfer it to other bodies, or create tailor-made consciousnesses. Which is just to say that building a robot that not only implements some program but also understands what it implements would probably be far down the list of things we would want to achieve if we discovered such a brain mechanism for consciousness.

With the explanation out of the way, let’s next turn to some common objections to the Chinese Room argument.

The Systems Objection

The Systems objection says that, while it is true that you don’t understand Chinese just by running the program, the entire system, which includes you together with the rule book, the pen and paper, and the notes, understands Chinese.

In response, Searle updates the scenario: suppose he has memorized the rules of the program, so that when he is handed a string of symbols he knows, just by reading it, what to write as output. In this case, the entire system is identical with John Searle himself. And since he still understands nothing of what is input or output, the Systems objection fails.

The Virtual Mind Objection

This objection says that the mind is a virtual part of the running of the program. Take, for instance, a calculator. A physical calculator performs calculations on numbers. But a typical desktop computer also has a virtual calculator, one that is not really a calculator yet does everything a real calculator does. Computers also have virtual folders and files, and we are not bothered that such things are not really calculators, folders, and files. The mind, on this objection, may likewise be a virtual part of the running of the program. (Please feel free to comment if you think I am misunderstanding something about this objection in what follows, or if you can make it clearer.)

I don’t understand this objection very well. For one, I tend to think that the calculator programs on desktop computers really are calculators, which leaves me puzzled about what a “virtual calculator” is supposed to mean. In any case, I find the reply troubling for seeming to multiply entities in the world beyond what is reasonably warranted.

To illustrate, consider again the scenario where you internalize the rule book for speaking Chinese. The Virtual Mind objection seems to be saying that there is a virtual mind that now understands Chinese. But this virtual mind in no way corresponds to your own understanding of Chinese. So the objection immediately raises the question: whose mind gains this understanding? The view seems to be that some new entity, not identical to the one that runs the program, now understands Chinese. But why think there is such an entity? There is absolutely no evidence for such a thing.

Searle’s basic response to the Virtual Mind objection is that it is circular: it just assumes that the system must understand Chinese. (2)

My own problem with the Virtual Mind objection is, first, that the argument looks logically flawed when it is pieced together, and second, that it concerns a form of understanding that is irrelevant to the issue. On the first point, put deductively, the argument seems to say something like this:

Premise 1: Something x runs the program P.

Premise 2: There is a virtual thing y created from running the program P that is part of x.

Premise 3: y gives x no understanding.

Premise 4: But y gives some part of x, a virtual part, understanding.

Conclusion: Therefore, by Premises 1-4, Searle has not shown that running the program produces no understanding.

The problem is that Premise 3 seems to conflict logically with Premise 4: if some part of x gains understanding, it is hard to deny that x thereby gains at least some understanding. To fix this, Premise 3 may be rewritten as “y gives x, as a whole, no understanding.” But put explicitly like this, the premises simply raise the question all over again of what this “virtual” part is that understands.
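To spell out the conflict I have in mind (again, my own regimentation): if understanding on the part of some part of x counts as x having at least some understanding, then Premise 4 contradicts Premise 3:

```latex
\begin{align*}
\text{Premise 4:}\quad & \exists p\,\big(\mathrm{Part}(p,x)\land \mathrm{Understands}(p)\big)\\
\text{Bridge principle (assumed):}\quad & \exists p\,\big(\mathrm{Part}(p,x)\land \mathrm{Understands}(p)\big)\rightarrow \mathrm{Understands}(x)\\
\text{Hence:}\quad & \mathrm{Understands}(x),\ \text{contradicting Premise 3's } \neg\mathrm{Understands}(x).
\end{align*}
```

The repair just mentioned, restricting Premise 3 to x “as a whole,” amounts to rejecting this bridge principle, and the question then becomes what the understanding part is supposed to be.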

Overall, though, I think the Virtual Mind objection is a very weak one. This brings me to the second point: we can sidestep the objection altogether, because we aren’t really concerned with whatever “virtual understanding” it invokes. What we are concerned with is the difference between a typical person’s understanding no Chinese at all and understanding Chinese after a lifetime of learning it. It might have been tempting fifty years ago to think that following a program that passes the Turing test for understanding Chinese would be a way for the thing that runs the program (as a whole) to gain such understanding, but it is not, as Searle’s Chinese Room thought experiment demonstrates.

And so we may sidestep the objection by adding to Searle’s Chinese Room argument a premise along these lines: conferring understanding on some unknown virtual part of x is to confer a sort of understanding that we are entirely unconcerned with. The Chinese Room argument is no worse for it.

Additionally, if we were to build a robot with the Chinese Room program embedded in its software, what good is a response that admits the robot understands no Chinese, but insists that maybe some unknown virtual part of it does? (Again, I might be missing something about this objection, so feel free to comment if you have insight.)

The Brain Simulator Reply

This reply says that we might build an intricate structure that simulates what the brain does in someone who understands Chinese. For example, we might simulate it with water pipes and valves, or with billions of people passing messages around in a way that mimics neurons. In that case, since there is no functional difference between what the brain does and its simulation, the simulation would have to understand Chinese.

The immediate issue is what “simulates what the brain does” amounts to. Suppose that having a certain range of overall electrical current in the brain is necessary for understanding Chinese. If so, then building a brain simulator out of plastic pipes would not replicate what is necessary for understanding. Could such a current nonetheless be simulated? I have trouble discerning what that would mean. You would, in effect, have to replicate all the causal features that having such an electrical current entails. But if the simulation has all the same causal features, how would it be distinguishable from an actual electrical current? (This is why I think notions like “simulation” and “virtual” tread on shaky ontological ground.)

A shorter answer to this reply is that it once again begs the question, by assuming that such a system must be conscious because the simulation is functionally identical to the brain. Saying that a simulation is “functionally identical to the brain” already grants it understanding, because it is already a given, for Searle, that one of the brain’s functions is to cause understanding.

What if Searle is Wrong? Answer: Panpsychism is True

This section is more speculative. First, under what conditions would Searle be wrong? This much is clear: Searle would be wrong if running the Chinese Room program gained me an understanding of Chinese.

But in that case, panpsychism would be true. Panpsychism is the view that everything (and perhaps every relation between things) is conscious. How does panpsychism follow from Searle’s being wrong? Because every natural process can be described as manipulating symbols via a computer program. Take a thermostat: it (typically) displays the temperature of the room, and pressing the down arrow or the up arrow on the box lowers or raises the temperature of the room, respectively. If Searle is wrong, then the thermostat must understand that the room is cold when you press the up arrow.
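To illustrate how easily the thermostat’s behavior can be redescribed as rule-governed symbol manipulation, here is a toy Python sketch (the function name and values are my own inventions, purely for illustration):

```python
# A thermostat redescribed as a symbol-manipulating program: it takes
# "up"/"down" symbols as input and returns a new setpoint symbol as output.
def thermostat_step(setpoint: int, button: str) -> int:
    """Apply a trivial rule to the input symbol and return the result."""
    if button == "up":
        return setpoint + 1
    if button == "down":
        return setpoint - 1
    return setpoint  # unrecognized symbol: leave the setpoint unchanged

setpoint = 20                               # degrees, say
setpoint = thermostat_step(setpoint, "up")  # someone felt cold
print(setpoint)                             # 21, but nothing here grasps "cold"
```

Nothing in this rule application requires, or produces, any grasp of what “cold” is.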

Or take a car. The ignition switch has an “on” position and an “off” position. So the car would have to understand that it is on when it is on, or that it is running when your foot is on the gas.

Or take a pen. Searle himself often gives this as an example of something that can be described as running an algorithm. A pen sitting on a table, for example, is running the algorithm “stay put.” But if the Chinese Room argument were incorrect, then the pen would have to understand that it is staying put. And since a program is just a set of algorithms, this readily leads to the conclusion that everything everywhere is running any number of programs. So if Searle were wrong (which, again, he isn’t, because in the real world you do not come to understand Chinese by running a program), then panpsychism would have to be true. That is, for any program that an object or relation between objects runs, such objects and relations (or, perhaps worse, some virtual part of them we know not of) would understand what that program runs. I do not find this a happy result, not least because I do not think that a pen understands that it is sitting on a table. I suspect that many cognitive scientists would also find such a view unacceptable, and I am at a loss as to how to avoid this form of panpsychism if Searle were wrong.

Therefore, Searle is right.

Resources

(1) “let’s get these brain stabbers to get busy and figure out how it works in the brain. So I went over to UCSF and I talked to all heavy-duty neurologists …” From Searle’s TED Talk (2013): https://www.ted.com/talks/john_searle_our_shared_condition_consciousness?language=en

(2) Wikipedia, “Chinese room,” accessed January 31, 2019. Archived version: https://web.archive.org/web/20190131224347/https://en.wikipedia.org/wiki/Chinese_room Live link: https://en.wikipedia.org/wiki/Chinese_room#Systems_and_virtual_mind_replies:_finding_the_mind

(3) Searle, John. The Rediscovery of the Mind (1992), chapter 9, pp. 200-210. https://books.google.com/books?id=eoh8e52wo_oC&printsec=frontcover&dq=the+rediscovery+of+the+mind&hl=en&sa=X&ved=0ahUKEwiDoqS_wZjgAhWDIDQIHYojAx8Q6AEIKjAA#v=onepage&q=Chinese&f=false

(4) Searle, John. Mind: A Brief Introduction (2004). https://books.google.com/books?id=oSm8JUHJXqcC&q=Chinese+Room&source=gbs_word_cloud_r&cad=6#v=snippet&q=Chinese%20Room&f=false
