The Chinese Room – John Searle

The Chinese Room is a thought experiment raised by John Searle against functionalism and the Turing test for machine intelligence. Searle argues that purely computational processes cannot create a mind (Searle, 1980). He centres his account of the mind on intentionality and understanding, which he sees a solely syntactic entity (a program) as unable to achieve. I contend that while Searle makes a strong argument, he fails to disprove the possibility of artificial intelligence, because his replies to the initial counterarguments to the Chinese Room are ineffective. Additionally, I believe the Chinese Room could not even pass the Turing test: its purely syntactic nature renders it inflexible and unable to grasp concepts easily understood by humans, so the room does not even succeed as a case against “strong AI”.

Turing’s test for artificial intelligence centres on an adapted “imitation game”. In Turing’s test, an interrogator communicates with two players, one a computer and one a human, and attempts to determine which is the human. If the interrogator’s identification is wrong, the computer wins. If the computer can convincingly pass as human, it can be considered a “thinking machine” (Turing, 1950, pp. 433-460).
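To make the structure of the test concrete, here is a minimal sketch of the imitation game in Python. Everything in it, the player functions, the interrogator, and the question, is invented for illustration; Turing of course specifies no implementation.

```python
import random

# Minimal sketch of Turing's imitation game. The players and interrogator
# below are invented stand-ins, not anything specified by Turing.

def human_player(question: str) -> str:
    return "I grew up by the sea."   # stand-in for a real person at a terminal

def machine_player(question: str) -> str:
    return "I grew up by the sea."   # stand-in for a conversational program

def imitation_game(questions, interrogate) -> bool:
    """Return True if the machine wins, i.e. the interrogator misidentifies the human."""
    assignment = {"A": human_player, "B": machine_player}
    if random.random() < 0.5:        # hide which label is the machine
        assignment = {"A": machine_player, "B": human_player}
    transcripts = {label: [(q, player(q)) for q in questions]
                   for label, player in assignment.items()}
    guess = interrogate(transcripts)  # interrogator names "A" or "B" as the human
    return assignment[guess] is not human_player

# An interrogator who cannot tell the transcripts apart must guess at random,
# so the machine wins roughly half the time.
print(imitation_game(["Where did you grow up?"],
                     lambda transcripts: random.choice(list(transcripts))))
```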

In the Chinese Room argument, Searle distinguishes between “strong” and “weak” artificial intelligence, with his argument aimed solely at “strong AI”: the claim that a computer with the correct programming literally possesses a mind (Searle, 1980, p. 417). Searle asks the reader to imagine a man locked in a room with three batches of Chinese symbols, of which he has no understanding, alongside a set of English instructions. The instructions tell the man how to correlate the symbols with one another, and which symbols to send back in response to the third batch. Unbeknownst to him, the three batches are a script, a story and questions, and the symbols he sends back are answers to those questions. To the Chinese speakers outside the room who pose the questions, the man appears to understand Chinese, since he replies perfectly, and yet all he is doing is symbol manipulation; Searle argues that the room operates just as a programmed computer does, and would pass the Turing test (Searle, 1980, pp. 418-419). Searle’s conclusion is that a computer cannot understand: it lacks “intentionality”, which he takes to be essential to a mind. A computer program is purely syntactic, and this lack of semantics means it cannot be a thinking, intelligent being. It is important to note that Searle does not argue against the possibility of a thinking, understanding machine altogether; his argument is that computation alone does not suffice for intelligence (Searle, 1980, p. 422).
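The purely syntactic character of the room can be pictured as a bare lookup table. The sketch below is my own illustration, not Searle’s formalism; its “rulebook” entries are placeholder strings standing in for batches of Chinese characters.

```python
# Illustrative sketch of the room's purely syntactic operation (my own
# construction, not Searle's). The entries are placeholder strings.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # rule: this input shape -> that output shape
    "你叫什么名字？": "我没有名字。",
}

def room(symbols: str) -> str:
    """Return whatever output the rulebook pairs with the input symbols.

    Nothing here encodes what any symbol means: the mapping is pure shape
    matching, which is exactly the situation of the man in the room.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")   # fallback: "please say that again"

print(room("你好吗？"))   # looks like conversation from outside; inside, only lookup
```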

Searle’s paper attempts to rebut a number of responses to the Chinese Room; I argue that two of these counterarguments remain effective. The first, the “many mansions reply”, contends that Searle’s argument assumes artificial intelligence is limited to analogue and digital computation. While this was the contemporary state of technology, future technology could allow for the causal processes vital to intentionality. Searle’s response is that this reply redefines strong artificial intelligence as whatever artificially creates cognition, and he reaffirms that his position is not against the possibility of an understanding machine, only against the ability of computational processes alone to constitute mental processes (Searle, 1980, p. 422). Searle offers little argument against the many mansions reply, partly because he himself believes some kind of future system could have intelligence. I believe the many mansions reply succeeds in defending the possibility of artificial intelligence, because even if such a system did not meet Searle’s criteria for “strong AI”, there is room for debate over how “strong AI” should be defined. If a conscious artificial system were created, what would we call it other than artificial intelligence? It is surely more than “a tool in the study of the mind”, and yet it does not fit Searle’s definition of “strong AI” (Searle, 1980, p. 417). Artificial intelligence is acknowledged to be possible to some degree, and a conscious, man-made machine is surely nothing less than artificially intelligent.

The “systems reply” argues that while the man in the room does not understand Chinese, the room as a whole does, much as a single neuron has no consciousness but an entire brain does. The man is a small part of a wider system, and it is that wider system which understands Chinese. To counter, Searle imagines a scenario in which the man memorises all of the rules and symbols, internalising the system, and then leaves the room. He could maintain the performance he had within the room, and in Searle’s eyes “there isn’t anything in the system that isn’t in him”; since the man comes no closer to understanding Chinese, the system cannot understand either (Searle, 1980, pp. 419-420). Lecours raises a simple but important response: it is not feasible that a normal human could memorise a system of this size. For the Chinese Room to pass the Turing test, it must have a level of intricacy comparable to a genuine human mind, possessing the “formal structure of an entire personality” (Lecours, 2010, p. 16); Searle’s memorisation response is therefore ineffective because it is wildly unrealistic. Lecours instead endorses the virtual mind reply, on which running the system creates an understanding distinct from the system itself. If the memorisation response were feasible, Lecours argues, the man’s brain would then contain the formal structure of something functionally identical, or at least very similar, to another brain; this, combined with consistent, convincing answers, would prove the existence of a mind: not the mind of the individual who memorised the system, but a separate virtual mind residing within his brain (Lecours, 2010, pp. 11-19). This poses a dilemma for Searle: either he acknowledges that someone who can perfectly memorise such a sophisticated system has the brain capacity for two minds, or he concedes that his own example is impossible, in which case his answer to the systems reply collapses (Lecours, 2010, p. 19).

I find this argument convincing. There is some level of understanding within the system, owing both to the understanding it took to write the program or rulebook and to the understanding of English required of the conscious individual in the room; a wider understanding exists through these individual contributions to the system. Searle’s memorisation response is simply not plausible, so he has not genuinely disputed the claims of the systems reply, and has therefore not disproved the possibility of artificial intelligence.

While it may at first seem that the Chinese Room would pass the Turing test, I believe it does not match the standard of a human mind. The room, at least in Searle’s iteration, is entirely pre-set, and its answers would begin to feel robotic and arbitrary. The system fails to grasp concepts and language in a human sense, and so remains unconvincing. Ben-Yami argues that the system would struggle with questions involving concepts such as time and colour. For example, the room could never answer the question “what is the time?” accurately, because of the purely syntactic nature of the system; this casts doubt on the ability of a purely computational system to pass the Turing test at all, since telling the time is generally regarded as a basic human skill (Ben-Yami, 1993, pp. 169-170). The question requires answers that vary with the time of day, so a fixed, arbitrary answer is unlikely to remain convincingly human for long.

Ben-Yami sees two available solutions to this problem. In the first, the individual in the room is given a perfect set of instructions: he is told to read the time and to write the corresponding set of symbols whenever he is asked the question at that time. This defeats Searle’s argument, as the man inside the room would begin to learn Chinese, and his output would come to hinge on his understanding of the concepts involved. Searle has already stressed the importance of the individual’s lack of understanding, so it can be concluded that if the man begins to understand, the system does too. The second resolution is to refer the question to a third party, who provides the answer for the man to pass on. In this case the system and its processes have been extended to the point where the crucial part of the process is being carried out by a Chinese speaker, and so it seems correct to attribute understanding to the process (Ben-Yami, 1993, p. 170).

Ben-Yami asserts that while Searle is right that pure syntax is insufficient for understanding, he is wrong to assume that semantics is unavailable to computers: programs are syntactical, but the computers that run those programs have access to the external, semantic world, as seen in the ability of modern digital computers to read and accurately display the time. This is no different from how human language acquired meaning through contact with our environment (Ben-Yami, 1993, p. 171). Searle does not disprove the possibility of artificial intelligence, because he does not accurately portray computer systems as a whole: he is wrong to believe that computers are limited to formal rules. The Chinese Room, relying on syntactic rules alone, would in reality not pass the Turing test.
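Ben-Yami’s contrast can be sketched in the same style as before: a fixed rulebook has no correct entry for “what is the time?”, whereas a machine that can read its own clock does. The question string and answer format below are, again, invented placeholders, not anything from Ben-Yami.

```python
from datetime import datetime

TIME_QUESTION = "现在几点？"   # placeholder standing in for "what is the time?"

def syntactic_room(symbols: str) -> str:
    # A fixed rulebook can only return a canned answer, wrong at almost
    # every moment of the day: Ben-Yami's objection to the pure room.
    return "三点整。"           # always "exactly three o'clock"

def extended_room(symbols: str) -> str:
    # The machine running the program can consult the external world,
    # here its own clock, and so can answer correctly.
    if symbols == TIME_QUESTION:
        return datetime.now().strftime("%H:%M")
    return syntactic_room(symbols)

print(syntactic_room(TIME_QUESTION))   # canned, usually wrong
print(extended_room(TIME_QUESTION))    # read from the clock, correct
```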

In conclusion, Searle’s Chinese Room argument does not prove that artificial intelligence is impossible. Searle himself does not deny that some kind of thinking machine is possible; he argues that solely computational processes are incapable of understanding, and are therefore unintelligent. Searle correctly asserts that syntax alone cannot produce understanding, but he does not prove that a computational machine is incapable of intelligence, since interaction with the external world can give a computational system the semantics it lacks. The Chinese Room cannot even meet the standard of “strong artificial intelligence” (Searle, 1980, p. 417), failing the Turing test because it cannot simulate basic human understanding of concepts such as time.
