
Essay: John Searle’s Chinese room & Systems / Robot / Brain Simulator Reply


In this essay, I will outline John Searle’s Chinese room thought experiment. I will then address the three major objections raised against his argument, known as the Systems Reply, the Robot Reply, and the Brain Simulator Reply. After discussing each of these, I will consider Searle’s replies to them and state whether I find those replies satisfactory. To finish my paper, I will offer my own view of why I do not find the Chinese room a compelling reason to give up the idea that cognition is best understood as a type of computation.

In his paper “Minds, Brains, and Programs,” Searle distinguishes two positions on artificial intelligence, strong AI and weak AI, and targets the former: the claim that an appropriately programmed computer does, or eventually will, have genuine cognitive states. Searle invokes the Chinese room thought experiment to challenge strong AI and ultimately to conclude that brains cause minds and that syntax is not equivalent to semantics. He asks readers to imagine an English speaker who knows no Chinese locked in a room. In the room are three batches of Chinese writing and symbols, along with a set of rules in English for correlating the batches with one another and for producing new Chinese symbols in response. If a Chinese speaker passes notes written in Chinese characters under the door, the English speaker can use the resources in the room to respond, and these responses are indistinguishable from those of a native Chinese speaker. This scenario is analogous to how a computer performs its programmed functions. With this experiment, Searle aims to show that although a computer may be able to imitate a human well enough to pass the Turing Test, this does not mean the computer is intelligent. He dismisses the case on the grounds that the person is merely following instructions and does not understand the language.
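
To make the analogy to a computer program concrete, here is a minimal, hypothetical sketch of my own in Python (the rule entries are invented and are not from Searle’s paper): a short program that produces Chinese replies purely by looking incoming symbols up in a rule table, just as the person in the room follows the English rule book without attaching any meaning to the symbols.

    # A hypothetical illustration (my own, not Searle's): a program that
    # answers Chinese notes purely by rule lookup, the way the person in
    # the room follows the English rule book without understanding anything.

    # Invented "rule book": incoming symbol strings mapped to canned replies.
    RULE_BOOK = {
        "你好吗": "我很好",          # "How are you?" -> "I am fine"
        "你叫什么名字": "我叫小明",  # "What is your name?" -> "My name is Xiao Ming"
    }

    def chinese_room(note: str) -> str:
        """Return a reply by matching symbols against the rule book.
        Nothing in this function attaches meaning to the symbols."""
        return RULE_BOOK.get(note, "请再说一遍")  # fallback: "Please say that again"

    print(chinese_room("你好吗"))  # prints 我很好, yet no semantics are involved

The sketch is only meant to show that the program’s behaviour, like the room’s, is fixed entirely by syntactic matching; whether that could ever amount to understanding is exactly what Searle denies.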

I will be addressing three main objections to Searle’s case. The first is commonly referred to as the Systems Reply. This reply grants that the individual does not understand the language, but holds that some part of the system does. Essentially, while the individual does not understand the language, he is only one part of an entire system, and the system as a whole, consisting of him, the rule books, and all the given resources, does in fact understand the language. In reply, Searle states that he feels “somewhat embarrassed” to answer such a theory, but responds that the individual could memorize the entire system, rule books and all, and still not understand the language. He concludes that it is ridiculous to say that the subsystems taken together would understand the language when the person alone does not.

The second challenge to the case is dubbed the Robot Reply. We are asked to imagine a computer placed inside a robot. The computer acts as a functioning brain, a camera allows the robot to ‘see,’ and attached arms and legs allow the robot to move about. This computer brain would not merely manipulate symbols to produce output; it would allow the robot to eat, drink, and do other human-like things. The objection argues that such a robot would have “genuine understanding.” Since humans gain understanding of words through experiences and connections with the outside world, it seems plausible that a robot could, too. Applied to the Chinese room, the thought is that a person in the room who could likewise interact with the environment would come to attach meaning to the symbols in the same way. Searle replies that the robot’s worldly interaction is still only syntax, with no semantics. He offers a twist on the objection, asking us to imagine the Chinese room itself placed inside the robot in place of the computer. The Chinese symbols the person manipulates and passes out now drive the robot’s arms and legs. The person has no idea what he is doing at all, and the robot moves only because of its wiring and programming. Since it is this uncomprehending symbol manipulation that sends the signals to the motors, the robot has “no intentional states.”

The final objection I will be addressing is the Brain Simulator Reply. This objection proposes that the Chinese room could be set up to replicate the exact neuron firings and brain functioning of a native Chinese speaker; in other words, the machine can simulate the very responses a brain would produce. Because the brain and the room would then theoretically be operating in the same way, this reply raises the predicament that if we deny that the machine understands the language, we would also have to deny that the native Chinese speaker has any understanding of it. Searle replies that such a simulation still does not imply understanding. He raises the water pipe example, in which the man in the room operates a system of pipes, turning certain valves on or off so that an output emerges at the end of the pipeline. Neither the man nor the pipes need to understand anything for the output to be produced.

After explaining the main challenges and Searle’s replies to them, I find that the majority of his responses are arguably valid. However, I find his response to the Systems Reply somewhat weak. He claims that if the man in the room understands nothing of Chinese, then neither does the system, “because there isn’t anything in the system that isn’t in him” (419). While this argument sounds reasonable, it rests on a bad inference. It is like saying that I do not weigh as much as a bowling ball, and therefore neither does my head, because there is nothing in my head that isn’t in me. Rephrasing his argument in these terms makes it seem somewhat ridiculous. I can see how the response might appear to answer the Systems Reply, but it fails precisely because the same pattern of inference does not hold in other, parallel cases.
