Currently, the rate of technological growth is exponential. Computer processing power doubles roughly every eighteen months, semi-autonomous cars are hitting the roads, and most people carry a built-in “intelligent assistant” that responds to voice commands to operate their cell phones. But even with all of this progress, could a machine ever achieve consciousness? This question is a main focus of the philosophy of artificial intelligence, which addresses it through arguments such as the Turing test and the Chinese room thought experiment. The philosophy of artificial intelligence could be described as determining where “machines” fall with regard to “intelligence” and “consciousness,” all of which depends on how one defines the terms “intelligent” and “conscious.”
In this paper, the Turing test will be the focal point used to highlight both sides of the artificial intelligence argument: strong artificial intelligence (strong A.I.) and weak artificial intelligence (weak A.I.). One of the most prevalent tests of machine intelligence is the Turing test, named after its creator, Alan Turing. As you will see, those who believe in the validity of the Turing test are typically proponents of strong A.I., and therefore also believe that consciousness is achievable in machinery. For a piece of machinery to be labeled “strong A.I.,” it should possess the full range of human cognitive abilities, including self-awareness, sentience, and consciousness. Conversely, there is weak artificial intelligence. This category consists of machines that run digital computer programs but have no conscious states, no mind, and no subjective awareness. Although weak A.I. may exhibit intelligent behavior, its lack of a mind prevents it from experiencing the world qualitatively. All artificial intelligence today is weak artificial intelligence.
With the Turing test, Turing chose not to ask, “Can machines think?” but instead, “When can a machine be mistaken for a real thinking person?” This question permits a more focused discussion because it depends on whether a digital computer can do well in a certain kind of game that Turing calls “The Imitation Game.” Suppose that there is a person, a machine, and an interrogator. (An interrogator is not strictly necessary to administer the Turing test, but the test in its original form includes one, and for purposes of explanation the interrogator is helpful for a basic understanding.) The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person and which is the machine. The interrogator knows the other person and the machine only by the labels “X” and “Y,” without knowing which is which. At the end of the game, the interrogator says either, “X is the person and Y is the machine” or, “X is the machine and Y is the person.” The interrogator is allowed to put questions to both the person and the machine. If the machine answers in a manner that causes the interrogator to conclude, based on the intelligence and sophistication of its replies, that it is in fact a person, then it passes the Turing test. It is a game of “imitation” because when the interrogator cannot distinguish the machine from a human by how it responds, that machine counts as having a mind of its own, and is consequently “conscious” and “intelligent.” What is measured, therefore, is how well a machine can imitate a living being.
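The structure of the game can be made concrete with a short sketch. The following Python is purely illustrative: the respondent functions and the judge are invented placeholders under simplifying assumptions, not anything Turing himself specified.

```python
import random

# A toy sketch of one round of the imitation game. The respondents and the
# judge here are invented stand-ins, not part of Turing's paper.
def person(question):
    return "I would have to think about that."

def machine(question):
    return "I would have to think about that."  # a perfect imitation, by construction

def imitation_game(questions, judge):
    # Hide the identities behind the labels X and Y, in random order, so the
    # judge can rely on nothing but the text of the answers.
    labels = {"X": person, "Y": machine}
    if random.random() < 0.5:
        labels = {"X": machine, "Y": person}
    transcript = {lbl: [(q, resp(q)) for q in questions] for lbl, resp in labels.items()}
    guess = judge(transcript)  # the judge names "X" or "Y" as the person
    return guess == ("X" if labels["X"] is person else "Y")

# A text-only judge does no better than chance against a perfect imitator,
# which is exactly what "passing" the test means over many rounds.
print(imitation_game(["Can you write a sonnet?"], judge=lambda t: "X"))
```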
However, there are problems with the Turing test, and there are arguments against its validity. One such argument concerns the way language skills are used as a proxy for intelligence. In the Turing test, the only interaction with the machine is through language; nothing visual or physical is permitted. On one hand, this method of interaction seems fair, because we should not label machines unintelligent just because they do not look like humans. On the other hand, is language really expressive enough to capture all the types of intelligence that we humans have? Some have proposed testing a combination of skill sets, such as language and motor skills. Although there is some debate as to whether language captures most or all of our intelligence, language is how we test for intelligence in humans, and it makes logical sense to do the same when testing for intelligence in machines. If, one day, we were to find a definite type of intelligence that could not be captured by language, then we may well want to develop a new test for intelligence.
An additional criticism of the Turing test has to do with the nature of the “intelligence” in question. The Turing test captures only human intelligence, since it is administered by a human: the machine competes against an interrogator who is human and against another human. A machine might possess a type of intelligence that does not resemble ours at all, and the Turing test would fail to recognize it. Hence the questions at hand are: could human intelligence really be general intelligence, and would we as humans be able to recognize non-human intelligence, if it even exists?
A further argument is a thought experiment by John Searle called the “Chinese Room.” Searle’s thought experiment consists of a hypothetical situation. Suppose there is a computer that behaves as if it understands Chinese: it takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters as output. Searle then proposes that this computer performs so convincingly that a human Chinese speaker, asking questions of any sort, is convinced the program is itself a live Chinese speaker. In other words, the computer passes the Turing test. Searle then asks: does the machine literally “understand” Chinese, or is it merely simulating the ability to understand Chinese? Searle calls the first position “strong A.I.” and the second “weak A.I.”
Searle contends that there is essentially no difference between the computer and himself if he were handed Chinese characters through a slot in a door and had to return certain Chinese characters as output, with the help of a book that mandated what to output for a given input. If the computer passed the Turing test this way, he could do so as well, just manually. Each simply follows a set of rules, producing behavior that is interpreted as intelligent conversation. However, Searle would not understand the conversation, because he does not speak Chinese. Therefore, he argues, it follows that the computer does not understand the conversation either. The idea is that one cannot get an understanding of the meanings of words from merely manipulating symbols, which, according to Searle, is all the computer is doing. The Chinese room argument holds that a computer’s having a program does not give it a mind, and hence does not give it consciousness, because although it may act intelligently or behave in a human-like way, it ultimately lacks a true understanding of what it is outputting. Searle argues that without “understanding,” we cannot describe what the machine is doing as “thinking,” and since it does not think, it does not have a “mind.” As a result, he concludes that strong A.I. is false.
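The force of the analogy is easy to see in code. The sketch below uses an invented two-entry rule book standing in for Searle’s instructions; it produces fluent-looking Chinese while no part of the program represents what any symbol means.

```python
# A minimal sketch of the room, assuming a rule book small enough to list.
# The entries are invented examples; the point is that no step requires
# knowing what any symbol means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然，说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def room_operator(input_symbols):
    # Match the shapes of the input against the book and copy out the answer.
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # default: "Please say that again."

print(room_operator("你好吗？"))  # fluent-looking output from pure symbol matching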
Some proponents of strong A.I. admit that the man in the room does not understand Chinese, but hold that running the program may nevertheless create something that does understand Chinese. These critics resist the move from the claim that “the man in the room does not understand Chinese” to the conclusion that “no understanding” has been created: there remains the possibility of understanding by a larger, or different, entity. This strategy is known as the Systems Reply and the Virtual Mind Reply. These replies hold that the output of the room reflects understanding of Chinese, but that the understanding is not the room operator’s. Thus Searle’s claim that he does not understand Chinese while running the room is admitted as true, but his claim that there is no understanding is denied.
The problem of utmost importance in this debate is how one defines “consciousness,” which is still highly contested. Most can agree that consciousness rests in the brain, but as of today we do not fully understand the mechanisms that give rise to it. Hence, without even being able to completely understand the embodiment of our own consciousness, it would be very difficult to endow machines with a consciousness of their own.
One could argue that, through sensors, robots and computers can experience, or at the very least detect, stimuli that are interpreted as sensations. For example, a small humanoid robot was built that could understand a question and then recognize its own voice. A group of these robots was told that two of them had been given “dumbing pills” to prevent them from speaking, while one had been given a placebo. In practice, this meant that mute buttons were pressed on two robots, leaving one able to speak; but the robots did not know which of them had retained the ability to speak. The group was asked which two had been given the pills. While all of them tried to answer, only one was able to respond aloud, saying, “I don’t know.” The robot that spoke recognized its own voice and said: “Sorry, I know now. I was able to prove that I was not given a dumbing pill.” Its response demonstrates a basic level of self-awareness. However, this self-awareness is not enough to declare that it is conscious. There are numerous other facets of consciousness, such as inner speech, visual imagery, emotions, and dreams. Both computers and brains use electrical activity to process information, but humans experience the world subjectively. Our first-person perspective is why we have unique, inner qualitative sensations that are accessible only to us. These are all elements we can experience, but machines cannot.
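The inference the speaking robot made can be modeled in a few lines. The sketch below is a toy reconstruction under obvious simplifications; the class and names are invented for illustration and are not the actual experiment’s code.

```python
# A toy reconstruction of the "dumbing pill" test described above. The real
# experiment used physical humanoid robots; everything here is invented.
class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted      # "given the dumbing pill" = mute button pressed
        self.belief = "I don't know which of us can speak."

    def try_to_speak(self, phrase):
        return None if self.muted else phrase  # a muted robot makes no sound

    def hear_own_voice(self, sound):
        # Hearing its own utterance is proof it was not given the pill.
        if sound is not None:
            self.belief = "I was able to prove that I was not given a dumbing pill."

robots = [Robot("A", muted=True), Robot("B", muted=True), Robot("C", muted=False)]
for r in robots:
    r.hear_own_voice(r.try_to_speak("I don't know"))
for r in robots:
    print(r.name, r.belief)  # only C revises its belief about itself
```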
Personally, I believe the Turing test is a test of intelligence, not of consciousness. For a robot to pass the Turing test is an exemplary display of intelligence, since the sophisticated programming required allows it to mimic thought to the degree that it is mistaken for a human; but if the machine is merely mimicking genuine thought, I find that hard to accept as sufficient grounds for labeling it conscious. These programs give a machine the ability to recognize and respond to patterns, but ultimately it is simply responding to commands. Hence its operations are said to be “syntactical,” meaning the machine is only recognizing symbols and not the meaning of those symbols, rather than “semantical,” which would mean the machine understands the words it is saying. This is the prime difference: computers are sensitive to symbols, whereas the brain is capable of semantic understanding.
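This “recognize and respond to patterns” point is exactly what early chatbots did. The sketch below uses invented ELIZA-style rules; it replies by pattern substitution alone, and nothing in it represents what any word means.

```python
import re

# An ELIZA-style sketch (the rules are invented): replies are produced by
# pattern substitution alone, with no representation of meaning anywhere.
PATTERNS = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), r"Why do you feel \1?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), r"Tell me more about your \1."),
]

def respond(utterance):
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return match.expand(template)  # copy matched symbols into the template
    return "Please go on."

print(respond("I feel misunderstood"))  # -> "Why do you feel misunderstood?"
```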
Though computers and robots are more advanced than ever, they're still just tools. They can be useful, particularly for tasks that would either be dangerous to humans or would take too long to complete without computer assistance. But robots and computers are unaware of their own existence and can only perform tasks for which they were programmed.
It does not matter how fast the computer is, how much memory it has, or how complex and high-level the programming language is. The Jeopardy! and chess-playing champions Watson and Deep Blue fundamentally work the same way as your microwave. Put simply, a strict symbol-processing machine can never be a symbol-understanding machine. Searle cleverly depicted this point by analogy in his famous and highly controversial Chinese Room Argument, which has been convincing minds that “syntax is not sufficient for semantics” since it was published in 1980. And although some esoteric rebuttals have been put forth (the most common being the Systems Reply), none successfully bridges the gap between syntax and semantics. But even if one is not fully convinced by the Chinese Room Argument alone, it does not change the fact that Turing machines are symbol-manipulating machines, not thinking machines, a position taken by the physicist Richard Feynman over a decade earlier.
Therefore, I find myself siding with the reasoning behind weak A.I.: both our technology and our understanding of consciousness are simply too limited today to grant such an endowment to artificial intelligence.