The idea that machines and technology can become, and are becoming, “intelligent” is a scary thought. Throughout the history of technology there has been a steady increase in the capabilities of software, and much research has gone into how those capabilities can be used to “better” our lives. Today, we all use some form of artificial intelligence daily: smartphone apps such as Google Maps, Siri, and Cortana, video games, and music streaming. Although the use of artificial intelligence has made human life more efficient and effective, it has also quietly bred reliance and an unsuspecting ignorance in our minds, and we do not even see it coming.
The history of artificial intelligence dates back to the 1950s. Approximately fourteen years after the development of the electronic computer in 1941, Norbert Wiener theorized that “all intelligent behavior was the result of feedback mechanisms”, mechanisms that work in the body much like cause and effect. The idea was that an intelligent system could be created that would self-learn and adapt: by learning and growing, the system would never make the same error twice, and experience would be its primary teacher. The term “artificial intelligence” was introduced a year after the first such program was created, and, to my surprise, not by that program’s actual creators. The man who coined the term, John McCarthy, is the one known as “the father of AI.” After theorizing the concept of AI, McCarthy organized a conference at Dartmouth College, drawing researchers from institutions such as Carnegie Mellon University and the Massachusetts Institute of Technology, to continue his research alongside them. At this conference, held in the summer of 1956 and known as “The Dartmouth Summer Research Project on Artificial Intelligence,” McCarthy identified two challenges with his original idea of AI: creating a system that could solve problems by limiting its search, and creating a system that could learn by itself. The mission statement developed for the conference declared that “every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it.”
As a result of the Dartmouth summer research project, the General Problem Solver was born. Created by Herbert Simon, J.C. Shaw, and Allen Newell, the General Problem Solver originated as a theory of human problem solving, specifically “a program that simulates human thought”. Its basis was to use general logic and algorithms to solve common-sense problems. Initially, it could only be applied to “well-defined” problems, essentially proving theorems that had already been proved. Yet with the introduction of the personal computer in the 1980s and the evolution of smart devices, artificial intelligence has become practically a daily necessity in our lives today.
By the current year, 2017, artificial intelligence has grown far beyond the General Problem Solver. Technologists now differentiate between AI and ML (machine learning) when discussing intelligence. While artificial intelligence is the broader term for a machine completing a task using human input, machine learning is a division of artificial intelligence in which computers learn on their own, without explicit human programming. In the simplest terms, the goal of machine learning is to provide the machine with data and have it learn how to handle that data by itself. By allowing the machine to learn, it becomes smarter, and its capabilities grow. Looking further ahead, quantum computing would let scientists complete, within seconds, calculations that take a regular computer years, thanks to a vast amount of processing power. The NASA Ames Research Center calls this innovation “quantum artificial intelligence” and considers it too complex for regular computers today.
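To make the distinction concrete, here is a minimal sketch of machine learning in Python. The library (scikit-learn), the data, and the labels are my own illustrative assumptions, not something drawn from the essay’s sources; the point is only that the rule is never programmed in, it is inferred from examples.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: hours of daily app usage -> user type
# (0 = "light user", 1 = "heavy user"); invented purely for illustration.
X = [[0.5], [1.0], [1.5], [4.0], [5.5], [6.0]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier()
model.fit(X, y)  # the "learning" step: the rule is inferred from the data

# The model now classifies usage levels no one explicitly programmed.
print(model.predict([[0.8], [5.0]]))  # -> [0 1]
```

No line of this program states where “light” ends and “heavy” begins; the model works that boundary out from the examples, which is exactly the self-learning the paragraph above describes.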
In breaking down artificial intelligence, scientists describe its progression in three main levels: assisted intelligence, augmented intelligence, and autonomous intelligence. The first level, assisted intelligence, refers to the basic tasks of a computer. In manufacturing, this describes machines used in assembly-line systems, where one machine performs the same function repeatedly. The second level, augmented intelligence, refers to the integration of automated technology into human scenarios; it signifies the process by which the machine learns, from human input, how to complete tasks. Another way of looking at this level is as the building of a relationship between machine and human. The final level, autonomous intelligence, is achieved when the machine can complete the entire task without any human intervention.
The presence of artificial intelligence in our lives has proved its convenience. Many people would agree that everyday tasks such as driving around town, finding nearby businesses, remembering to-dos, and searching for information would be a hassle without applications such as Google Maps, Siri, and Cortana. The artificial intelligence within these apps collects data from our requests and continually learns our preferences. Applying machine learning, today’s smart devices can anticipate our preferences from repetition and behavioral patterns and help us reach the services we need. Another example highlighting the prevalence of artificial intelligence in society is video games. A multitude of games analyze players’ behavior and adjust the difficulty accordingly. For example, in some scenario-based games, the predicaments the character ends up in are influenced by the type of character the player picks. Game developers use algorithms along with AI to make the system adapt to the player. This makes the playing experience more personal and enjoyable, which in turn encourages more people to buy the game.
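A toy sketch of that difficulty-adjustment idea follows. Every name, threshold, and step size here is an assumption made up for illustration, not taken from any real game engine; real systems are far more elaborate, but the feedback loop is the same.

```python
def adjust_difficulty(difficulty: float, player_won: bool) -> float:
    """Nudge difficulty up after a win, down after a loss."""
    step = 0.1  # illustrative step size, not from any real engine
    difficulty += step if player_won else -step
    return min(max(difficulty, 0.0), 1.0)  # keep within [0, 1]

# Simulated session: the game ramps up through a win streak, then eases off.
difficulty = 0.5
for won in [True, True, True, False]:
    difficulty = adjust_difficulty(difficulty, won)
    print(f"difficulty -> {difficulty:.1f}")
```

Run on the simulated session above, the difficulty climbs to 0.8 across three wins and drops back to 0.7 after the loss, which is the “analyzed and adjusted accordingly” behavior in miniature.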
In terms of business decisions, artificial intelligence has made understanding the consumer a much easier task. Using analytics coupled with consumer-based research, artificial intelligence has helped ease the transition from a physical to an online marketplace. Many shoppers nowadays prefer to make their purchases online and have the items shipped to them rather than picking them up at the store. There is a multitude of reasons for this: proximity to the store, lack of time or of the means to get there, product availability, and, probably the biggest, problems with retail staff or store layout. As companies learn of these issues through AI-driven consumer research, many have mobilized their websites, making it no longer necessary to visit the physical store at all. Understanding this consumer perspective allows companies to better serve their current customers, keep potentially unsatisfied customers from leaving, and appeal to new customers who simply prefer to shop online.
According to Gustav Lundberg (1987), author of “The AI Business: Commercial Uses of Artificial Intelligence,” there is a relationship between research and artificial intelligence. Lundberg argues that consumer research opens up additional ways in which artificial intelligence can be used, and he draws parallels between how artificial intelligence is perceived and how consumers perceive the market. Lundberg predicted in 1987 that artificial intelligence would expand in the future, and I agree that it has expanded and will continue to grow in the years to come.
Along with its many pros, the unsuspecting ignorance associated with artificial intelligence is one of its major problems. This is the idea that if machines can complete all of the simple tasks that humans can, the people who used to hold the jobs performing those tasks are no longer necessary. In the current state of America, with a steadily growing population, job creation is essential; having jobs put in jeopardy by machines is a movie plot turned reality. Coupled with ignorance, the amount of trust we willingly place in smart systems makes us vulnerable to a loss of control. For example, if artificial intelligence can complete tasks that we humans are still unable to figure out, we will no longer understand how the system operates. While this may not seem an immediate threat, the threat would come in the form of hackers and cyber criminals who might understand it. Lastly, the root problem with artificial intelligence is that we, the people who created it, are not perfect beings. It would be impossible for flawed beings like us to create a system that learns from us without it ending up flawed as well.
Reliance is one of the major potential downfalls in the progression of artificial intelligence. As machines get smarter, humans tend to rely on them more, and the problem becomes that later generations, in turn, forget how to be self-reliant. Another potential downfall is how artificial intelligence will make decisions in circumstances that require morality or careful judgment. Artificial intelligence lacks the conscience that humans have, and as a result it also lacks sympathy. All of its decisions are based in fact, which could be a problem when it is integrated into the emotionally driven lives of humans. If we become too attached to having technology simplify our lives, we could be in bad shape when that technology is no longer accessible.
In conclusion, the growing field of AI has its pros and cons. Looking back from the 1950s, its evolution has been remarkable as of the current year, 2017. In my opinion, the key to healthy and safe technological progression, and to avoiding the cons mentioned above, will be setting limits. The potential dangers of AI, the “rise of the self-aware machines,” and all the other problems can be avoided by listening to and accounting for the concerns of the population. Much like conducting consumer-based research, understanding how society feels about AI and reacting accordingly will be the answer to unlocking further advancements through and beyond AI. Some of the potential I see in AI technology includes helping to end armed conflicts and eradicating disease and poverty. These social implications should be the main focus when considering what AI is capable of now and in the future.