Once the computer was successfully developed in the 1950s, cognitive psychology was able to become a dominant approach within the field of psychology. Cognition needs to be modelled to help cognitive scientists understand how the brain works, to predict human behaviour and to create machines that can perform human tasks (Gentner & Forbus). The computer gave cognitive psychologists an analogy with which to understand human cognitive processes. Central to this analogy is the algorithm: a formal, well-defined set of operations used to manipulate and transform representations. Algorithms can be likened to software, which contains instructions for processing data (Friedenberg, 2010), much as human cognition is held to process mental representations. Two types of model are important in the history of computational accounts of cognition: connectionist models and Bayesian models of concept learning. The main thesis here is that humans have a more extensive background of knowledge and learning than computers would be able to match. Boden (2004) agrees, stating that no AI system can learn to speak or behave like a human because it lacks the background knowledge that humans acquire through extensive learning. However, Turing (1963) theorised that a computer could think and act like a human to such an extent that a human observer could not distinguish the performance of the computer from that of a human.
The Turing test became a touchstone for the artificial intelligence (AI) approach, which attempts to build devices that mimic human thought processes (Eysenck, 2008); its methods include developing and testing computer algorithms. The Turing test, otherwise known as the 'imitation game', was criticised by John Searle's Chinese room argument, which holds that an unintelligent machine is capable of passing the test. The Chinese room is a thought experiment in which Searle writes:
“Suppose I am given a rule book in English for matching Chinese symbols with other Chinese symbols. The rules identify the symbols entirely by their shapes and do not require that I understand any of them” (Searle, date #).
This suggests that, as in the Turing test, once the rules have been followed and the correct symbols produced in response, there is no understanding of the Chinese language, only a manipulation of symbols with no meaning attached (LACurtis, 2011). The AI approach was also crystallised in Marr's computational theory.
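Before turning to Marr, the Chinese room's point can be made concrete: the minimal Python sketch below (purely illustrative; the rule-book entries and replies are invented) produces fluent-looking replies by syntactic lookup alone, with no representation of meaning anywhere in the program.

```python
# A minimal, purely illustrative sketch of the Chinese room: replies are
# produced by syntactic lookup alone, with no representation of meaning
# anywhere in the program. The rule-book entries below are invented.
RULE_BOOK = {
    "你好吗": "我很好",            # if this symbol string arrives, copy out that one
    "你叫什么名字": "我叫房间",
}

def chinese_room(input_symbols: str) -> str:
    """Match incoming symbols against the rule book by shape alone."""
    # The 'operator' never interprets the symbols; it only checks whether the
    # string matches an entry and copies out the listed reply.
    return RULE_BOOK.get(input_symbols, "请再说一遍")   # default: 'please say that again'

print(chinese_room("你好吗"))   # fluent-looking output, zero understanding
```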
Marr (1982) was a cognitive scientist whose work drew on connectionist and semantic network ideas. His computational theory treats visual perception as a complex information-processing task (Marr, 1982). The approach is organised as a tri-level hypothesis comprising the computational level, the algorithmic level and the implementational level. Churchland, Koch and Sejnowski (1990) strongly criticised the tri-level hypothesis as "fundamentally simplistic", since each level can be subdivided into further levels. According to Warren, Marr elaborated Gibson's theory of perception in greater detail, but left out all natural constraints. At the algorithmic and implementational levels, Marr failed to appreciate that vision is more complex and harder than the AI approach first theorised (Friedenberg, 2010). Warrington and Taylor (1978) found that brain-damaged patients with object-recognition problems cannot turn viewpoint-dependent 2.5D sketches into 3D object representations. Object recognition is achieved when the viewed image matches a representation of a known object stored in the brain, a top-down process. This supports the computational theory of visual perception, since distinct stages of processing combine to produce the overall account of how object recognition occurs. Lappin et al. (2011) stated that individual differences, such as brain damage, were not taken into consideration, as the approach was only a generalised theory. Dawson (1998) found that information processing occurs in both connectionist and classical systems, which implies that Marr's tri-level hypothesis can be applied equally to both approaches.
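As a worked illustration of the tri-level hypothesis, the hedged sketch below applies it to early edge detection, the kind of task Marr analysed: the computational-level goal is to find intensity discontinuities, one algorithmic-level realisation is to locate zero-crossings of a Laplacian-of-Gaussian filter (Marr and Hildreth's proposal), and the implementational level here is simply a digital CPU rather than retinal and cortical neurons. It is a sketch of the idea, not Marr's full model.

```python
# A hedged sketch (not Marr's full model): the computational-level goal is to
# find intensity discontinuities; one algorithmic-level realisation is to look
# for zero-crossings of a Laplacian-of-Gaussian filter (Marr and Hildreth's
# proposal); the implementational level here is a digital CPU rather than
# retinal and cortical neurons.
import numpy as np
from scipy import ndimage

def edge_map(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Return a boolean map of zero-crossings of the Laplacian of Gaussian."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # A pixel is an edge candidate where the filtered value changes sign
    # relative to a neighbour (a simple zero-crossing test).
    zc = np.zeros_like(log, dtype=bool)
    zc[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
    zc[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])
    return zc

# Example: a synthetic image of a bright square on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
print(edge_map(img).sum(), "edge pixels found")
```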
The connectionist approach differs from classical cognitive science in theorising that knowledge is represented as a pattern of activation distributed across a network, which is more global than a localised symbol. With regard to processing, connectionists suggest that it occurs in parallel, through the simultaneous activation of nodes in the network. Jerry Fodor (1980) argued that connectionism threatened to obliterate the progress made in the classical approach since Alan Turing, namely the view that the brain operates purely on formal operations and follows syntactic rules for their internal manipulation; Searle (1990), for his part, states that syntax by itself is neither constitutive of nor sufficient for semantics. Connectionists therefore focus on learning from environmental stimuli and on storing information in the connections between neurons.
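A minimal sketch (illustrative only, not any published model; the patterns and learning rate are invented) makes both claims concrete: knowledge is stored in a matrix of connection weights rather than as a discrete symbol, and processing updates every output unit simultaneously in a single matrix operation.

```python
# A minimal connectionist sketch (illustrative only, not any published model):
# 'knowledge' lives in a weight matrix connecting 6 input units to 4 output
# units, and processing is one parallel matrix operation, not a symbol lookup.
import numpy as np

W = np.zeros((4, 6))   # connection weights; all learning ends up stored here

def activate(x: np.ndarray) -> np.ndarray:
    """All output units are updated simultaneously (parallel processing)."""
    return 1.0 / (1.0 + np.exp(-(W @ x)))   # logistic activation

def train(x: np.ndarray, target: np.ndarray, lr: float = 0.5, epochs: int = 200) -> None:
    """Delta-rule learning: knowledge accumulates in the connections."""
    global W
    for _ in range(epochs):
        W += lr * np.outer(target - activate(x), x)

# Two distributed input patterns and the outputs they should evoke (invented).
cat = np.array([1, 1, 0, 0, 1, 0]); cat_out = np.array([1, 0, 0, 0])
dog = np.array([0, 1, 1, 0, 0, 1]); dog_out = np.array([0, 1, 0, 0])
train(cat, cat_out); train(dog, dog_out)
print(np.round(activate(cat), 2))   # ~[1, 0, 0, 0]: the activation pattern carries the content
```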
Picard (1997) defined emotion in artificial intelligence with the claim that "affective computing relates to, arises from or deliberately influences emotions". Projects have been based on this, for example the Kismet project. Kismet can express emotions through facial expressions and carry out social interactions (Breazeal, 2002); it has a cognitive system consisting of perception, attention, basic drives and behaviour. Trevarthen (1979) found that expressive cues of the kind Kismet uses are effective only at regulating affective and intersubjective interactions. Sloman and Croucher (1981) argued that in realistic robots, human-like emotions will emerge from various kinds of interaction between different mechanisms rather than from a dedicated emotion mechanism; this differs from James (1890) and Damasio (1994), who claim that emotions develop from sensing patterns in a physiological state, for example blood pressure or hormone levels. However, Fellous and Arbib (p. 239) argue that if a robot were to tell you in great detail why it is upset, you would be more likely to believe it has emotions than if it only showed tears or shook its head in response. There have also been arguments that emotions are independent, because their emergence neither requires nor reduces to cognitive processes (Ackerman, Abe, & Izard, 1998). Kismet also incorporates a programme called FaceSense, which is used as a training tool for autism. What Picard did not theorise is that there are large individual differences in human cognitive processes and in the ability to recognise and portray emotions; people with autism, Asperger's syndrome or Down's syndrome, for example, are neurologically different in their emotional processing compared with neurotypical humans. Emotions are linked to consciousness, to being aware of things. So how could a computer display consciousness? There has been research into computers and consciousness.
Consciousness and emotion are a central interest when developing computers: can consciousness be recreated? In proposing his test, Alan Turing wrote, "I do not wish to give the impression that I think there is no mystery about consciousness… but I do not think these mysteries need to be solved before anyone can answer the question of whether machines can think." Neuroscientists have theorised that consciousness is generated by various components of the brain, known as the neural correlates of consciousness (Aleksander, 1995). However, Bickle (2003) stated that consciousness can only be realised in physical human terms, because consciousness has properties that fundamentally depend on humans. Consciousness has been a major source of debate for cognitive scientists; Chalmers (2011) claimed that the right kinds of computation are sufficient for the possession of a conscious mind. If a computer could be conscious, this would raise ethical complications: if computers were aware and conscious of what we were doing to them, there would have to be ethical rules in place (Franklin, 2001). At Meiji University in Japan, Junichi Takeno is investigating self-awareness in robots; he asserts that he has developed a robot capable of discriminating between its own image in a mirror and an identical image of another robot (Takeno, Inaba and Suzuki, 2005). He also states that he constructed an artificial consciousness by forming relationships between emotions, feelings and reason, connecting these modules in a hierarchy (Igarashi & Takeno, 2007). Takeno further proposed his self-body theory (SBT), stating that humans feel their own mirror image is closer to themselves than an actual part of their own body (Torigoe & Takeno, 2009).
Language is a complex task, and it has been argued that although learning is motivated by social interaction from birth, the process of learning words relies more on the computational ability of the human brain. Hoff and Naigles (2002) found that the greater the input from the maternal caregiver, the faster language was learnt, and that toddlers' language learning was not related to the nature of the social engagement between them and their mothers. Chomsky (1968) argued that language is innate and is organised into surface structure and deep structure; however, Karmiloff and Karmiloff-Smith (2001) criticised his theory, stating that researchers need to take account of both innate and environmental factors to explain the full story. Computers do not contain an innate knowledge of language and know only what has been put into them. However, scientists at the University of Liverpool have developed a set of algorithms so that, if the computer does not understand a word or sequence of words it is given, it learns in a manner similar to humans, looking the word up and placing it in a context it can understand. Bollegala (2015) said that learning accurate word representations is the first step towards teaching language to computers. Cognitive processing theory has played a major part in understanding language; it has addressed how children learn to differentiate words within a stream of sounds and found that toddlers' brains are, in effect, crunching data. Saffran et al. (1999) embedded made-up words inside streams of random syllables to see whether children, adults and infants could differentiate the words from the surrounding syllables. They found that infants listen longer to non-words, which are new to them and therefore interesting (Aslin, Saffran and Newport, 1998). This led them to theorise that our brains are similar to a computer in that they automatically use probability to pick words out of a sound stream, and in this respect at least the computer analogy appears to hold.
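The statistical-learning idea behind these findings can be sketched as follows (a hedged illustration: the stream length and the 0.5 threshold are arbitrary choices, and the made-up words merely resemble the kind of items Saffran and colleagues used). Word boundaries are posited wherever the transitional probability from one syllable to the next dips.

```python
# A hedged sketch of statistical word segmentation: compute the transitional
# probability P(next syllable | current syllable) over an unbroken stream and
# posit a word boundary wherever it dips. Stream length and the 0.5 threshold
# are arbitrary choices; the made-up words merely resemble Saffran-style items.
from collections import Counter
import random

random.seed(1)
words = ["bidaku", "padoti", "golabu"]
stream = []
for _ in range(300):                                      # continuous stream, no pauses
    w = random.choice(words)
    stream += [w[i:i + 2] for i in range(0, len(w), 2)]   # split into syllables

pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def transitional_prob(a: str, b: str) -> float:
    """Estimated probability that syllable b follows syllable a."""
    return pair_counts[(a, b)] / syll_counts[a]

segmented, word = [], stream[0]
for a, b in zip(stream, stream[1:]):
    if transitional_prob(a, b) < 0.5:   # low predictability: likely a word boundary
        segmented.append(word)
        word = b
    else:
        word += b
segmented.append(word)
print(segmented[:6])   # largely recovers the original made-up words
```

Within a word the next syllable is almost fully predictable, whereas across a word boundary it is not; that dip in predictability is the statistical cue the infants appear to be exploiting.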
Cognitive neuropsychologists find that studying cognitive functions, and how they break down, provides valuable information for cognitive science. Markoff (2014) reported that a new IBM chip, which functions like a brain with the equivalent of one million neurons and roughly the cognitive capacity of a bee, may in the near future be implanted in injured patients to take over cognitive functions that have been compromised. However, this has since been criticised on the grounds that neurons are not digital devices and so should not be assumed to be compatible with a digital computer. Computers have also influenced research on human memory, and extensive evidence within this research suggests that computers and human brains are similar in their memory functions. A computer's RAM (random access memory) and a human's STM (short-term memory) are both capable of recalling immediate events; one slight difference is that when a computer is turned off its RAM is lost, which could be likened to the way human STM is consolidated into LTM while we sleep. Another striking resemblance is between a computer's hard drive and human LTM (long-term memory): this is where all data is located and from which information can be retrieved. However, computers have a single fast memory store in the form of the cache in the CPU (central processing unit), whereas Pinel theorised that humans cannot have a single memory store, as information is processed and transferred slowly from one system to another. The closest resemblance to this is the neuronal network attractor (Mitchell, 1993): the cerebral cortex reactivates the attractor and, with it, the memory. It was using this knowledge that Ramirez et al. (2013) were able to implant a memory into a mouse's brain.
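The RAM/STM and hard-drive/LTM analogy drawn above can be caricatured in a few lines of code (a toy sketch only; the file name and the stored items are invented): contents held in working memory vanish when the program stops, and only what has been deliberately consolidated to disk survives a restart.

```python
# A toy sketch of the analogy above (illustrative only; file name and items are
# invented): a dict stands in for volatile RAM/STM, a file on disk for the hard
# drive/LTM. Only items deliberately 'consolidated' survive a restart.
import json, os

STORE = "ltm.json"                                    # hypothetical long-term store

short_term = {"just_heard": "a phone number", "just_saw": "a red car"}

def consolidate(key: str) -> None:
    """Copy one short-term item into long-term (persistent) storage."""
    ltm = {}
    if os.path.exists(STORE):
        with open(STORE) as f:
            ltm = json.load(f)
    ltm[key] = short_term[key]
    with open(STORE, "w") as f:
        json.dump(ltm, f)

consolidate("just_heard")
del short_term                                        # 'power off': RAM/STM contents vanish
with open(STORE) as f:
    print(json.load(f))                               # only the consolidated item remains
```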
To conclude, the main thesis of this essay was that humans have a more extensive background of knowledge and learning than computers would be able to match. Much research agrees with this thesis; only a few pieces of research disagree and hypothesise that computers have matched, if not outgrown, humans in cognitive processes. However, because much of that research is based on theoretical constructs and produced in laboratories, its conclusions carry limited weight. Cultural differences also come into play: as Western culture may be more advanced in intelligence and technology, it would be interesting to see cultural differences raised in some of this research. Kismet used only Western participants to portray emotions through facial expressions; if a person of Asian or African descent used Kismet, would the computer recognise their facial expressions effectively? Cognitive psychology compares the human brain to a computer or an artificial program, suggesting that human brains are also information processors, and it studies internal processes through the responses humans make to stimuli from the environment. Yet the analogy between computers and human cognitive processes is limited, as research and experiments are largely conducted in controlled conditions that lack ecological validity. For example, in the Kismet project over 400 participants went into a laboratory to recreate facial expressions so that Kismet could learn to associate those expressions with emotions. In a real-life setting, however, people's expressions could differ from forced expressions produced in a laboratory, and there are cultural differences in facial expressions. Recreating consciousness comes under the same criticism: Friedenberg (2010) asks how we can create something we do not fully know exists and cannot prove. The human brain is plastic and can expand with knowledge and raw data, whereas computers rely on serial processing and on the data researchers give them; they cannot gain knowledge of their own. Yet a computer can perform multiple complex tasks at the same time, which is difficult for the brain. Computers and brains are different in their own right, and each can do things the other cannot.