The Chinese Room Argument
(Syntax vs Semantics)

Pablo Nogueira

Updated: 6 July 2012

John Searle presented his Chinese Room thought-experiment (an argument involving a hypothetical situation) [1] as a case against the computational theory of mind. I would like to argue that the experiment is an unsatisfactory rendition of a serious objection to the theory.

The Chinese Room argument in short:

A Chinese-illiterate human operator with a rule book is locked inside a room that has two slits in the wall. The operator receives through the input slit pieces of text written in Chinese. The operator follows the book's rules, which specify which Chinese symbols, if any, to write as a reply. The operator slips the text of the reply through the output slit and waits for more text at the input slit.

To a Chinese person outside the room the human operator is carrying on a meaningful conversation in Chinese. However, Searle rightly points out that despite the operator's skilful use of the rule book, the operator doesn't understand Chinese. He only shuffles symbols around as instructed by the rule book.

Searle draws the following analogy: a machine is like the operator in the Chinese Room. It follows rules in a rule book (a program). It doesn't understand what the rules and the symbols are about (their meaning or semantics). It manipulates symbols (syntax) mechanically as instructed by the program. Therefore, syntax manipulation is not sufficient to explain understanding. If a machine passes the (Chinese) Turing Test, it actually doesn't know what it's talking about.
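
To make the analogy concrete, here is a minimal sketch in Python (chosen only for illustration; the rule entries, the replies and the function name are made up) of what ‘manipulating symbols as instructed by a rule book’ amounts to: the program matches an input string against stored patterns and copies out the associated reply, and nothing in it refers to what the symbols mean.

    # A toy 'rule book' as pure symbol manipulation. The entries are made-up
    # placeholders; to the program they are opaque character sequences.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "你会说中文吗？": "会一点。",
    }
    DEFAULT_REPLY = "请再说一遍。"   # emitted when no rule matches

    def operator(input_text: str) -> str:
        """Shuffle symbols as the rule book instructs: look up, copy out."""
        return RULE_BOOK.get(input_text, DEFAULT_REPLY)

    # The 'reply' is just a stored string; no meaning is consulted anywhere.
    print(operator("你好吗？"))

A real program would need something far more sophisticated than a lookup table, but the argument is aimed at any program of this mechanical kind, however elaborate.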

(Searle uses the word ‘semantics’ instead of ‘understanding’ and goes on to say that minds ‘have semantics’ or ‘mental contents’, that brains have ‘causal powers’, etc., but I'm not interested here in those terms nor in that side of the discussion. My understanding is that Searle uses these terms intuitively, so ‘understanding’ refers to our intuitive idea of how we understand something and is related to awareness.)

The trouble with thought experiments is that they introduce noise and opportunities for endless debate about terminology and about the inconsequential inaccuracies or particulars of the setting. Is a rule book an accurate metaphor for a program? How is a single reply chosen from the myriad of possible replies to a complex question? How does the operator look up the right rules? What's the meaning of ‘meaning’ (and of ‘understanding’)? Does ‘semantics’ mean ‘meaning’? etc.

At bottom, Searle is contesting Turing: we can't say that a machine understands Chinese just because it passes the Chinese Turing Test. If we look inside the machine, we see a mechanical syntax manipulator which by itself doesn't explain Chinese understanding.

There's a so-called Biological Objection, but it misses the point: how symbols are physically embodied is immaterial. They can be squiggles on paper or bit patterns in computer memory.

There's a so-called Systems Objection:

The operator doesn't understand Chinese. Understanding happens in the rules.

Searle's reply to the Systems Objection is disappointing:

Suppose the operator memorises the rules. The operator still doesn't understand Chinese.

Certainly, but that's not the point. Programs are stored in machine memory anyway. The question is whether the program understands Chinese. Machines are designed to run programs, a task which doesn't require understanding or intelligence. John Haugeland's concept of an Automatic Formal System—i.e., a formal system which can play its rules by itself and is embodied in a physical device—neatly captures the idea [2]. Technically, the machine is a dispensable character of the play.

We may say by extension or analogy that a machine understands, meaning that a machine running a program that understands, itself understands. But the question is whether the program understands.
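
A small variation on the earlier sketch (again Python, again with made-up placeholder rules) makes Haugeland's point visible: the rule book is plain data, and the executor is a generic loop that knows nothing about Chinese, arithmetic or anything else.

    # The rule book is plain data; the 'operator' is a generic, rule-agnostic
    # loop. A human, a CPU, or another program driving the same rules would
    # produce the same input-output behaviour, which is why the interesting
    # question falls on the rules (the program), not on the executor.
    def run(rule_book, inputs):
        """A dispensable executor: for each input, look up a rule and emit its output."""
        return [rule_book.get(text, "") for text in inputs]

    rule_book = {"ping": "pong"}             # placeholder rules
    print(run(rule_book, ["ping", "xyz"]))   # ['pong', '']

Memorising the rules, as in Searle's reply to the objection, only moves this data into the executor's memory; it doesn't change where the behaviour comes from.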

Do the rules (when used by the operator) understand Chinese? According to Turing, if they convince the Chinese person outside the room then we must concede they do. According to Searle, who looks inside the room, understanding Chinese is not explained by the operator using the rules.

There is an implicit assumption of validation in both replies. It's assumed that the rule book produces valid Chinese replies, i.e., symbols that are meaningful to the Chinese person interpreting them outside the room, not only in the sense that they mean something but that they mean something coherent in the context of the conversation.

The claim ‘if a program speaks Chinese then it understands Chinese’ is only an implication; its premiss hasn't been realised (no such program has actually been written). Moreover, understanding is not an inexorable consequence of valid input-output behaviour, as demonstrated by research in Artificial Intelligence. Maybe it's possible to write a program that passes the Chinese Turing Test but doesn't understand Chinese. When it comes to attesting that the program understands Chinese, Searle wants to look inside the room, and we all should.

The Arithmetic Turing Test and the Arithmetic Room

Two versions of the Chinese Room argument:

  1. A Chinese- and Japanese-illiterate human operator is using Google Translate to translate Chinese text to Japanese text. The operator may cut-and-paste ad nauseam without ever understanding Chinese or Japanese.
  2. A pocket calculator doesn't understand the concepts of addition and subtraction.

According to the Systems Objection, understanding of Chinese and Japanese happens in Google Translate, and understanding of addition and subtraction happens in the rules of arithmetic. Notice that a pocket calculator passes the Arithmetic Turing Test: it convinces any external observer that it knows how to add and subtract, and to someone not familiar with calculators and the programming of arithmetic rules, the calculator gives the impression of understanding what it's doing.

The rules of addition and subtraction are like Chinese to someone who doesn't understand what they mean. The rules themselves don't understand what they're about. ‘Understanding’ is not an applicable concept.
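
To see how thin these rules are, here is a minimal sketch in Python, for illustration only and not how any actual calculator is built, of multi-digit addition as pure symbol rewriting: a lookup table over digit characters plus a carrying rule. The program produces correct answers, yet nothing in it mentions quantities.

    # Addition as symbol shuffling: digits are opaque characters, and the only
    # 'knowledge' is a table of single-digit results and a carrying rule. The
    # table is generated here for brevity, but it could be written out by hand,
    # exactly like a rule book.
    DIGITS = "0123456789"
    TABLE = {(a, b): (DIGITS[(i + j) % 10], DIGITS[(i + j) // 10])
             for i, a in enumerate(DIGITS) for j, b in enumerate(DIGITS)}

    def add_symbols(x: str, y: str) -> str:
        """Column by column, right to left, carrying a symbol to the next column."""
        width = max(len(x), len(y))
        x, y = x.rjust(width, "0"), y.rjust(width, "0")
        result, carry = "", "0"
        for a, b in zip(reversed(x), reversed(y)):
            s, c1 = TABLE[(a, b)]       # add the two column symbols
            s, c2 = TABLE[(s, carry)]   # then add the incoming carry symbol
            carry = "1" if "1" in (c1, c2) else "0"   # at most one carry arises
            result = s + result
        return result if carry == "0" else carry + result

    print(add_symbols("478", "596"))    # prints 1074

Followed with pencil and paper, the same table and carrying rule are just school column addition; nothing in them refers to quantities.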

Something is missing in both the rules of arithmetic and a Chinese-speaking program. Scale is not the issue: both programs are constructed using the same principles, laid out in technical form in the Church-Turing Thesis. We might conjecture powerful programs that behave intelligently, but ‘awareness’ doesn't spring up out of the blue.

Syntax, Semantics, and CToM

[A program is] what turns the general-purpose computer into a special-purpose symbol manipulator … The program is an abstract symbol manipulator which can be turned into a concrete one by supplying a computer to it. After all it is not the purpose of programs to instruct our machines; these days, it is the purpose of our machines to execute our programs. [3]

Any programmer knows his programs aren't aware of what they do. Some programmers may choose to use ‘understanding’ as an intentional idiom. Some like to claim that programs do understand, because they show valid behaviour and the intentional idiom is reduced to, or explained by, the program's implementation.

Perhaps Searle's intention was to pinpoint that by looking at symbols shuffled around according to statically defined rules we don't gain insight into what we commonly call ‘understanding’ or ‘awareness’. To articulate the point with a super-rule book that can understand Chinese is as good a thought-experiment scenario as talking about HAL 9000. It's fiction.

Relatedly, by Turing-testing a machine (or an operator inside a room) we don't gain any insight into what we commonly call ‘intelligence’.

We don't understand how semantics is possible in a purely syntactical world, and so some say semantics is an illusion and syntax is all there is, much like the Behaviourists who said consciousness is an illusion and behaviour is all there is. The question is how to explain, and how to build, understanding from syntax.

Why does the computational theory of mind look reasonable? Because, according to it, understanding Chinese must be explained by looking inside the workings of the brain, where we see electro-chemical processes, electro-magnetic waves, modularly organised neural networks, etc. The brain looks very much like a piece of hardware. Since it's possible in principle to correlate the behaviour of a program on screen with the workings of the hardware, even down to the quantum-mechanical level (in principle, perhaps; in practice our knowledge of all the levels would have to be complete), it should be possible to correlate understanding Chinese with the workings of the brain. Right, but how? And how are understanding and awareness explained in this scenario?

Despite claims to the contrary, nobody knows. The computational theory of mind is not a theory but a hypothesis. Reasonable, yes, but hypothesis nonetheless.

References

[1] Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3, pp. 417-424.
[2] Haugeland, J. (1985). Artificial Intelligence: The Very Idea. MIT Press.
[3] Dijkstra, E. W. (1989). On the cruelty of really teaching computing science. Communications of the ACM 32(12).