Essay: Stephen Hawking, Elon Musk & Why A.I. Poses an Existential Risk to Humanity

“That terminator is out there, it can’t be bargained with, it can’t be reasoned with, it doesn’t feel pity or remorse or fear, and it absolutely will not stop,” is a quote from the 1984 smash hit The Terminator, directed by James Cameron ("The Terminator (1984)"). The film takes place in what was then modern America, just before a devastating war with an enemy more advanced than the world had ever seen ("The Terminator (1984)"). The enemy was an “artificial intelligence,” or A.I., system created by humans that was supposed to keep the United States safe from foreign enemies. The system became self-aware and decided to wage war on its creators, causing a nuclear crisis and an untold amount of devastation. While The Terminator and its sequels raked in millions at the box office, one may be surprised to find that the threat of artificial intelligence is validated by some of today’s greatest minds. Tensions hit an all-time high in 2017, with individuals such as Stephen Hawking and Elon Musk speaking out about it. As the world continues to advance in ways never before thought possible, the following question presents itself: why is artificial intelligence such a substantial threat to humanity? A.I. is dangerous, and the continued development of these systems presents an existential risk to humanity for the following reasons: the world’s foremost experts on technology have spoken out against its use, A.I. has the potential to become self-aware and act in its own interest while neglecting human morality, and A.I. can be hijacked and programmed to harm society. To fully understand the scope of the issue at hand, one would first benefit from a thorough understanding of what A.I. is.

A.I. is defined by Dictionary.com as “the capacity of a computer to perform operations analogous to learning and decision making in humans” ("The Definition of Artificial Intelligence"). While A.I. as humans know it today did not exist until the 20th century, the idea of a cognitive machine was prevalent in the human mind long before then. In 1308, the Catalan poet Ramon Llull published what is thought to be one of the earliest manifestations of the concept of A.I. in his work Ars generalis ultima (“Applications of Artificial Intelligence”). In this work, he discusses using paper-based mechanical means to combine concepts in the way that the human mind would ("A Very Short History of Artificial Intelligence (AI)"). In essence, A.I. does something quite similar on a much larger scale. Technology resembling what an A.I. system looks like today did not appear until 1914, when Leonardo Torres y Quevedo unveiled the first ever chess-playing machine, a device that could successfully navigate the complex game of chess with no human intervention ("A Very Short History of Artificial Intelligence (AI)"). Quevedo’s invention, however, was just the beginning of a long, complicated marriage between man and machine.

Another notable event in the historical development of A.I. came in 1943 with the publication of "A Logical Calculus of the Ideas Immanent in Nervous Activity" (Reese, Donna). In this work, Warren S. McCulloch and Walter Pitts discuss how one could potentially create a computer-based neuron, and how such a technology could be trained to perform simple logical functions (Berlatsky, Noah). From this point on, the tide began to turn regarding the idealization of technological development, or the hope that a device could effectively mimic the actions performed by the mind. Regarding the modern history of A.I., there are a few events worth noting that have occurred in recent years. A weak argument often used against A.I. is to say that a machine will never be able to compete with a human for a variety of reasons. One can cite the human ability to reason, emotional intelligence, or any number of other factors to make such a claim. Anyone who makes such a claim, however, is likely unaware that Watson, an artificial intelligence system, competed against and defeated two champions on the American game show Jeopardy (Pearce, Q. L.). In addition, if one visits a major American city, one is likely to see a self-driving vehicle transporting individuals to their selected destinations. Given that A.I. is becoming so commonplace in today’s world, it is essential to consider the risks involved before it is too late. To begin, one would benefit from knowing the position of some of today’s most credible voices on the technology as a whole.

Elon Musk is often praised as one of the most influential individuals in society today. Founder of companies such as X.com, SpaceX, Tesla Motors, and PayPal, Musk is one of the wealthiest men in America ("Elon Musk"). He focuses much of his time on developing technologies that can improve the human condition as a whole, and has recently begun a venture into solar energy with his company SolarCity ("Elon Musk"). As such, it is no wonder that someone like Musk would have such a strong opinion on A.I., referring to it as “the greatest risk we face as a civilization today” (Dowd, Maureen). “We are summoning the demon,” says Musk when asked about A.I. (Dowd, Maureen). One may first believe that Musk is speaking in simple hyperbole when discussing these concepts, as his words are nothing short of sensational. Yet this viewpoint is echoed by another great mind of today, Stephen Hawking. Hawking, the author of A Brief History of Time, is often thought of as one of the most intelligent individuals alive today ("Hawking: AI Could End Human Race"). Despite having been diagnosed with the muscle-wasting condition ALS decades ago, Hawking has contributed in a way few others have to scientific concepts such as quantum physics and the physics of gravity ("Hawking: AI Could End Human Race"). Hawking also speaks out regularly on a variety of issues that impact humanity, making it no surprise that he holds a viewpoint as strong as that of his colleague Musk. In a recent interview with the BBC, Hawking claimed that "the development of full artificial intelligence could spell the end of the human race" ("Hawking: AI Could End Human Race"). While he sees the immense utility that A.I. can provide, and indeed has already provided, to the world, he fears that "it would take off on its own, and re-design itself at an ever-increasing rate" ("Hawking: AI Could End Human Race"). Hawking goes on to discuss how technology advances at an exponential rate, unlike humans, and that once humans unleash the full scope of what A.I. is capable of, the results are sure to surpass human abilities. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded,” he claims ("Hawking: AI Could End Human Race"). When readers consider that some of the most important minds of today hold such strong opinions on the topic of A.I., they may consider taking a stance themselves. Both Musk and Hawking warn of the classic, nightmarish situation that could potentially occur with A.I.: the concept of it becoming self-aware and acting in its own interest.

The very foundation of the capitalist economic system that reigns supreme across the planet is that human beings will act in their self-interest more often than not. It should come as little surprise that A.I. would do the same, as it is a technology designed by humans. For instance, one could imagine an A.I. created to complete a task as simple as mowing a lawn and performing basic yard work. The top priority of such an A.I. would be to complete the task it was programmed to do, a single-mindedness that many humans wish they could emulate ("Hawking: AI Could End Human Race"). Unfortunately, one must not forget that a computer does not have the same ethical standards or ability to feel empathy that a human being does, a fact that can be overlooked when considering the rise of A.I.

Imagine, for instance, that a small child ran out in front of the A.I. while it was mowing a lawn. The majority of humans would stop mowing until the child had cleared the area, but what about an A.I.? Would the A.I. stop? Unless it was explicitly programmed to do so, the answer is likely a resounding no ("Hawking: AI Could End Human Race"). Herein lies the problem with utilizing A.I. in human society: the A.I. is indifferent toward the sanctity of human values. Furthermore, what if that same child prevented the A.I. from completing its tasks each day? In such a case, would the A.I. become a rational actor and decide to take matters into its own hands? Does an A.I. see a child as an innocent human life or as an obstacle in its way? Unfortunately, this is one of many concerns that lie within the use of A.I., providing yet another reasonable point against its continued development.

One could argue that A.I. could be programmed in such a way that self-awareness and runaway goal-seeking could not occur. Although the very nature of creating a being that can “think for itself” complicates any such guarantee, this argument does hold some merit: theoretically, it is possible to create a relatively secure framework for A.I. that prevents disaster for at least a period. What cannot be guaranteed is protection from the vulnerable nature of these systems. Time and time again, the most significant software companies on Earth are foiled by teams of hackers that appear to operate with relatively few resources at their disposal. A prime example that occurred just this year was the 2017 Equifax hack, an exploit that exposed the credit card numbers, employment information, and personal details of roughly 143 million Americans ("Equifax Hack May Shake Up US Consumer Data Laws"). Equifax is not a small company by any means, but rather a behemoth credit reporting agency. How on earth could this happen, one may ask? Is a company like Equifax not in possession of a nearly unlimited amount of resources to help keep the information of its customers safe? It is, and it still does not matter. The argument to be made here is that no matter what humans attempt to do to secure technology, hacking occurs regularly and seemingly inevitably. These incidents raise the question: what untold destruction could occur if an A.I. were hacked and then reprogrammed to commit crimes against society? The theoretical example of the lawn-care A.I. comes to mind once more. If a hacker, or even a rogue nation such as North Korea, were able to interfere with the programming of devices that possess more potential than a human being ever could, what would be the outcome? The answer is nothing short of fire and fury. The potential for A.I. hacking provides yet another reason why there is simply too much risk involved in the continued development of A.I. in today's world.

Beyond any of the inherent dangers that lie within the widespread adoption of A.I., there are also a myriad of ethical concerns that plague this issue. Few will argue against the sanctity of life as a whole, or against the idea that creation is a process that must be treated with the utmost respect and care. Now that humanity has reached a point at which life can essentially be created via machine, when will the discussions begin regarding the potential rights that these intelligent, technological beings possess? What is a “life,” and does this ever-expanding definition include the artificial beings that are being created now? One of the inevitable outcomes of the adoption of A.I. would be debates on how these beings are to be treated, what rights they have, and ultimately what their position will be in human society. Perhaps a movement will take place for machines similar to those that minorities have carried out in the past. If one is inclined to scoff at such a concept, one need only be reminded of the widespread acceptance of previously taboo topics of discussion, such as gay marriage or the rights of transgender individuals. One must ask oneself: is this a “rabbit hole” society is willing to venture down? Does society need another class of protected individuals fighting for the most basic of rights? Simply put, these are not questions that can be answered by any living human, but rather a dialogue that must take place with the A.I. themselves. Humans can, however, decide whether this is a risk they are willing to take before continuing on the current path of A.I. exploration.

As technology continues to advance in the exponential fashion that it has, a variety of concerns will continue to arise. Humanity has reached a point in its evolution that nobody could ever have thought possible. While the human race has dealt with a variety of pressing issues before, the potential unleashing of pseudo-organisms that could outsmart humanity sounds like something out of a science fiction film. Unfortunately, the fact that the idea is parodied in a variety of Hollywood blockbusters has dulled the public’s awareness of, and caution toward, the A.I. phenomenon. Just because this topic has been used as a theme in movies such as The Terminator does not make it any less legitimate. The fact remains that A.I. is incredibly dangerous, and humanity’s experimentation with it is akin to an infant experimenting with fire: while the infant may continue unscathed for a period, it will eventually get burned. The public has been warned about A.I. by some of the greatest minds ever to live, the very nature of A.I. allows for the potential of self-awareness and a reprioritization of the A.I.’s tasks, and the risk of exploitation and hacking is exceptionally high. A.I. has the potential to help humanity achieve yet another level of accomplishment and accolade, or it has the potential to transform a prosperous world into a place so wretched that few could even imagine living there. Unfortunately, the facts show that research and development in this dangerous business is continuing full steam ahead. Like it or not, the cruel irony of A.I. may be a lesson humanity must learn “the hard way.”


Works Cited

"A Very Short History of Artificial Intelligence (AI)." Forbes.Com, 2017, https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/#3709b81b6fba.

"Applications of Artificial Intelligence." Artificial Intelligence, vol 114, no. 1-2, 1999, pp. 1-2. Elsevier BV, doi:10.1016/s0004-3702(99)00086-7.

Berlatsky, Noah. Artificial Intelligence. Detroit, Greenhaven Press, 2011.

Dowd, Maureen. "Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse." The Hive, 2017, https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x.

"Elon Musk." Forbes.Com, 2017, https://www.forbes.com/profile/elon-musk/.

"Equifax Hack May Shake Up US Consumer Data Laws." CNET, 2017, https://www.cnet.com/news/equifax-hack-may-shake-up-consumer-data-laws/.

"Hawking: AI Could End Human Race." BBC News, 2017, http://www.bbc.com/news/technology-30290540.

Pearce, Q. L. Artificial Intelligence. Detroit, Lucent Books, 2011.

Reese, Donna. "Artificial Intelligence." Artificial Intelligence, vol. 27, no. 1, 1985, pp. 127-128. Elsevier BV, doi:10.1016/0004-3702(85)90088-8.

"The Definition of Artificial Intelligence." Dictionary.Com, 2017, http://www.dictionary.com/browse/artificial-intelligence?s=t.

"The Terminator (1984)." Imdb, 2017, http://www.imdb.com/title/tt0088247/.
