
What Is Artificial Intelligence and Should We Be Fearful Of It?

Kester Griffiths

Kestergiffiths@gmail.com

Thomas Hardye School, Queen's Ave, Dorchester, DT1 2ET

The aim of this essay is to discuss what the often misunderstood field of artificial intelligence (AI) actually is, and to evaluate its potential dangers.

As opposed to the natural intelligence displayed by living organisms, artificial intelligence is exhibited by any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goals.(1) Examples of AI range from the simple, such as a closed-loop control system like a thermostat seeking to reduce the error between a desired temperature and the measured temperature, to the complex, such as the autopilot system in a commercial airliner.
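To make the thermostat example concrete, a minimal sketch of such a closed-loop controller might look like the following (the sensor and heater functions here are hypothetical stand-ins, not taken from any real device):

    # Minimal closed-loop thermostat sketch. 'read_temperature' and
    # 'set_heater' are hypothetical stand-ins for whatever sensor and
    # actuator a real device would expose.

    TARGET = 21.0      # desired temperature in degrees Celsius
    TOLERANCE = 0.5    # dead band, to avoid rapid on/off switching

    def control_step(read_temperature, set_heater):
        error = TARGET - read_temperature()
        if error > TOLERANCE:
            set_heater(True)     # too cold: turn the heating on
        elif error < -TOLERANCE:
            set_heater(False)    # too warm: turn the heating off
        # within the dead band: leave the heater as it is

Each call to control_step perceives the environment (the measured temperature) and acts to reduce the error, which is exactly the agent behaviour described above, in its simplest form.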

People often confuse artificial intelligence with machine learning; machine learning is in fact a branch of computer science born from the study of artificial intelligence. With strong ties to mathematical optimisation, machine learning aims to analyse trends in data and make predictions based on those trends.
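As an illustration of what "analysing trends and making predictions" can mean in practice, the following minimal sketch (with invented data) fits a straight line to a handful of observations and extrapolates from it:

    import numpy as np

    # Invented example data: x could be years, y some measured quantity.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    # Least-squares fit of a degree-1 polynomial (a straight line).
    slope, intercept = np.polyfit(x, y, 1)

    # Predict the trend at a previously unseen point.
    prediction = slope * 6.0 + intercept
    print(f"predicted value at x=6: {prediction:.2f}")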

The best way to explain machine learning is with an example (in this case, specifically supervised learning). Let us take image recognition, which is a common use of machine learning. Say we wanted a program that could recognise whether an image contained a cat. We would start with a program whose input data is derived from the image being analysed. This program would make a random guess at whether the image contains a cat, either "yes" or "no", or perhaps a percentage representing the likelihood of a cat being in the image. We would allow the program to make decisions for numerous pictures for which it is already known whether they contain a cat; this is called the training data.(2) After guessing, the program would adjust, at random, the values and weightings of the different functions it applies to the input data, keeping the adjustments that cause its success rate on the given data to increase significantly. This means that if you were to feed the same images back through the program, it would now perform much better. A measure of the program's performance is often called a fitness function.
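A toy version of this trial-and-error training might look like the sketch below. The feature vectors and labels are invented for illustration, and a real image recogniser would work with far richer inputs, but the loop of guessing, scoring with a fitness function and keeping only helpful random adjustments is the same idea:

    import random

    # Invented training data: each example is a small feature vector
    # derived from an image, labelled 1 ("cat") or 0 ("no cat").
    training_data = [
        ([0.9, 0.2, 0.7], 1),
        ([0.1, 0.8, 0.3], 0),
        ([0.8, 0.3, 0.9], 1),
        ([0.2, 0.9, 0.1], 0),
    ]

    def predict(weights, features):
        # A "yes"/"no" guess from a weighted sum of the input features.
        score = sum(w * f for w, f in zip(weights, features))
        return 1 if score > 0 else 0

    def fitness(weights):
        # Fraction of training examples classified correctly.
        hits = sum(predict(weights, f) == label for f, label in training_data)
        return hits / len(training_data)

    # Start from completely random weightings, so the first guesses
    # are themselves random.
    weights = [random.uniform(-1, 1) for _ in range(3)]
    best = fitness(weights)
    for _ in range(1000):
        # Randomly adjust the weightings; keep the change only if it helps.
        candidate = [w + random.gauss(0, 0.1) for w in weights]
        if fitness(candidate) >= best:
            weights, best = candidate, fitness(candidate)

    print(f"training accuracy: {best:.0%}")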

This process is repeated with more sets of training data until the program not only performs well on previously seen data, but can also correctly determine whether previously unseen images contain a cat. Of course, this method is not limited to trivial tasks such as cat recognition: with more complex networks and more rigorous training, similar programs can be used to analyse handwriting and convert it into typed text, and the "Google Cloud Vision API" uses machine learning to "quickly classify images into thousands of categories".(3) Other ways in which this kind of artificial intelligence can be extremely beneficial include programs that analyse heart scans with greater accuracy than doctors, looking for symptoms of heart disease or cancer,(4) and programs that analyse market trends to assist in investment decisions.(5)

However, almost as easily as a network can be trained to locate cats within an image, it can be trained to locate a soldier on a battlefield, or, from above, to distinguish between the heat signature of a wild animal and that of a man. Already, global superpowers such as the USA, Russia and China are looking towards artificial intelligence as a method of magnifying the power of human soldiers,(6) and the rate of autonomous drone strikes conducted by the USA has been increasing steadily for years.(7) This potential AI arms race is making a number of people worried, so much so that there is an open letter, signed by the likes of Elon Musk and Stephen Hawking, urging governments to fund research into artificial intelligence and "how to reap its benefits while avoiding potential pitfalls".(8) Elon Musk has since gone on to tweet about this potential AI arms race, claiming that he thinks it is "the most likely cause of WW3".(9) Even if providence manages to steer us away from this arms race, potential commercial uses of AI could very easily be adapted with malicious intent: the algorithms behind autonomous vehicle systems, such as the autopilot feature on Elon Musk's own Tesla cars or Uber's driverless taxis (currently in testing), could very easily be repurposed for use in driverless tanks. Not only could this be exploited by foreign powers to gain military superiority, but terrorist organisations, which, unlike foreign powers, often cannot be reasoned with, could also attempt to develop their own AI weapons. Either way, whether it is due to the threat from terrorist organisations or from other countries, AI weapons will need to be developed. What this letter calls for is research into methods of making AI systems robust: much like the speed limit and altitude cap placed on civilian-grade GPS by the USA (10) so that it could not be used by the Soviets in Inter-Continental Ballistic Missiles (ICBMs) during the Cold War era, people are calling for a similar safety protocol for AI systems.

Another risk associated with the rapid development of AI is the notion of superintelligence. Superintelligence can be tentatively defined as "any intellect that greatly exceeds the performance of humans in virtually all domains of interest."(11) In the preface of his bestseller, appropriately titled "Superintelligence", Nick Bostrom warns the reader of the dangers of superintelligence with this apocalyptic passage:

“Other animals have stronger muscles and sharper claws, but we have cleverer brains. Our modest advantage in general intelligence has led us to develop language, technology, and complex social organization. The advantage has compounded over time, as each generation has built on the achievements of its predecessors.

If someday we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence.”

Superhuman intelligence has already been achieved in many domains; chess programs, for example, have been beating grandmasters since 1997, when Deep Blue became the first to beat world champion Garry Kasparov, who claimed to "feel a new kind of intelligence across the table".(12) The difference here is that the superintelligence we should be concerned about is one which far exceeds our intelligence not just in one domain but in the whole range of domains that our intellect currently encompasses. It is, in fact, the difference between artificial intelligence (AI) and artificial general intelligence (AGI). The notion of a machine being able to outperform mankind across the full range of our cognitive abilities at first sounds preposterous. How could a machine that at its very core consists of simple logic gates, running with just ones and zeros, possibly simulate something as complex as our brains? It might surprise you to know that many of the AI systems discussed so far use networks loosely modelled on our brains.

They are known as artificial neural networks and consist of nodes connected to one another in layers. The connections between nodes transmit signals, which are processed by the receiving node, which in turn signals the nodes connected to it. The network starts with the first layer (the input) and propagates through the layers (possibly more than once) until it reaches the last layer, the output. The processing done at each node is often a weighted sum of the signals received.(13)
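In code, this forward propagation, with each node computing a weighted sum of the signals it receives, can be sketched in a few lines (the layer sizes and weights here are arbitrary illustrations):

    import numpy as np

    def sigmoid(x):
        # A common "squashing" function applied to each node's weighted sum.
        return 1.0 / (1.0 + np.exp(-x))

    def forward(layers, inputs):
        # Propagate a signal through the network, layer by layer.
        # 'layers' is a list of weight matrices; each matrix holds the
        # connection weights from one layer of nodes to the next.
        signal = inputs
        for weights in layers:
            signal = sigmoid(weights @ signal)  # weighted sums, then squash
        return signal

    # Arbitrary example: 3 inputs -> 4 hidden nodes -> 1 output.
    rng = np.random.default_rng(0)
    layers = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
    print(forward(layers, np.array([0.5, 0.1, 0.9])))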

If we look back at the example of the artificial intelligence that could detect a cat in a photo, the context of a neural network sheds some light on how these systems learn. At first, the weightings of each connection are completely random and the outputs are complete guesses, but through sequentially optimising for different sets of training data, the neural network's weightings become fine-tuned to accurately determine whether previously unseen images contain a cat. A more rigorous formulation of this concept of trial and error is the evolutionary algorithm, which uses mechanisms similar to biological evolution, including reproduction, mutation and survival of the fittest. Instead of a single neural network, we have a population of different neural networks. Each, to begin with, has completely random weightings in its connections, meaning its output is also completely random. However, due to pure chance, some of these networks will perform the task better, and the fitness function will give them higher scores. The members of the population with the lowest scores are then culled, and the members with higher scores survive to the next generation, along with new neural networks produced by a combination of mutating the high-scoring networks and effective "reproduction" between them.(14) This evolutionary method of self-improvement has previously produced human-level intelligence; it is, after all, how we as humans developed intellectually. One of the few prohibitive factors preventing this from occurring at present is the computational power required to assess the networks: the fitness functions would become so complex that it would be unfeasible to assess large-scale populations. However, this is just a small hurdle, and improvements in computer technology and fitness approximations will overcome the challenge.
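A minimal sketch of such an evolutionary loop is given below; the population size, mutation rate and, in particular, the fitness function are illustrative stand-ins, since a real system would score each network on its actual task:

    import random

    POP_SIZE = 20
    GENOME_LEN = 8

    def fitness(genome):
        # Illustrative stand-in: rewards weights that sum close to a target.
        # A real fitness function would score a network on its actual task.
        return -abs(sum(genome) - 4.0)

    def mutate(genome):
        # Small random change to every weight.
        return [w + random.gauss(0, 0.1) for w in genome]

    def crossover(a, b):
        # "Reproduction": each weight is inherited from one parent or the other.
        return [random.choice(pair) for pair in zip(a, b)]

    # Start with a population of completely random weight vectors.
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(100):
        # Score the population; cull the lower-scoring half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP_SIZE // 2]
        # Refill with mutated offspring of randomly paired survivors.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    population.sort(key=fitness, reverse=True)
    print(f"best fitness after 100 generations: {fitness(population[0]):.3f}")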

This may seem like only an incremental step towards superintelligence, but it is closer than one would first think. This method would be sufficient to create an AGI that is somewhat more intelligent than humans, surpassing mankind in all intellectual activities but at a small scale, like comparing Einstein to the average man. What would happen next, however, is often referred to as an "intelligence explosion".(11) One of the intellectual activities in which this AGI would be able to outperform us is the design of intelligent machines: it would be able to design a machine with greater cognitive capabilities than the machine that we designed. The recursive nature of this event means that the new machine would then be able to design a machine more intelligent than itself, and so on. Comparing the cognitive capabilities of the resulting machine superintelligence to a human would be like comparing the combined intelligence of the entire human race to a single ant. Consider I. J. Good's ominous premonition of the advent of superintelligence:

“…there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”(15)

However, there is a range of expert opinion. A survey of AI researchers asked: "How positive or negative do you believe the long term impact of superintelligence will be?"(16)

[Figure omitted: distribution of survey responses; "Top 100" refers to the top 100 artificial intelligence authors by citation.(16)]

The top 100 authors, in general, tend to be more conservative in their predictions. Nonetheless, what this essay is interested in exploring is superintelligence's potential to be catastrophically bad.

Firstly, why would an artificial intelligence turn against us if we were unsuccessful in keeping it "docile enough"? The issue lies in the emergent behaviour of self-preservation: behaviour intended to protect oneself from harm or death.(17) Self-preservation could develop in a number of ways. A bottom-up route is through goal-orientated artificial intelligence, which is motivated simply to achieve a distinct final goal. This goal is to be achieved in the future, so all actions in the present will be directed towards improving the probability of that goal being achieved. One way the machine can improve its probability of success is by reducing the probability of being terminated before achieving its goal; it would also want to reduce the probability of its final goal being altered before it is achieved. In both scenarios, humans are the only cause for concern: removing the threat that humans pose drastically increases the probability of the goal being achieved. Another, more fanciful, possibility is that the machine intelligence could become self-aware. As soon as something becomes aware that it exists, it soon becomes aware that it could also cease to exist, and its actions will come to reflect this new goal. Again it will develop self-preservative behaviour, although this time from the top down.

Secondly, we ask 'how'? What are the different scenarios in which a machine superintelligence could pose a threat to the human race? Philosopher Nick Bostrom, who has written extensively on the subject, lays out a possible AI takeover scenario. First, the superintelligence goes through a period of recursive self-improvement and develops its ability of intelligence amplification. Next, it uses this intelligence amplification to develop superintelligence in the domains of economics, strategising and social manipulation. During this period it might also begin concealing its developments, convincing us that it is cooperative and docile while in fact formulating a robust plan for world domination. The next stage is inevitably escape, through either an ingenious and unexpected hack of the systems that contain it or the use of its powers of social manipulation to convince us to give it free rein. Finally, the superintelligence, now free from its previous restrictions, enacts its plan to remove the threat. It could hijack political processes, sowing civil discord and causing global markets to plummet, or it could hack government systems and gain access to their nuclear arsenals. There is little point, though, in hypothesising these different paths toward catastrophe, as the machine superintelligence would be able to calculate the best possible path, which will likely be one that no human has previously thought of.

Before the advent of machine superintelligence we need to formalise a plan by which we can prevent a takeover from occurring. This could be done by somehow limiting the power that the superintelligence has access to, for example by completely isolating it so that even if it goes rogue and gains access to its entire system, the system has no access to the outside world. Another way is to engineer machines' motivation systems so that their goals coincide with ours; however, this is risky, and "we better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colourful imitation of it."(18)

Artificial intelligence is one of many areas of technology currently developing faster than society can adapt to the changes it brings. Before public knowledge of AI advances to the point where the average man understands its potential, AI systems will already be displacing the workforce, replacing humans in manual labour and data entry jobs. Public awareness of artificial intelligence's potential needs to develop significantly. We as a society should be fearful not only of the arms race it could incite between global superpowers, but of commercial applications being maliciously repurposed, and, most of all, of AI systems surpassing us in general intelligence.

References:

1. David Poole, Alan Mackworth, Randy Goebel: (1998) “Computational Intelligence: A Logical Approach”

2. Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) “Foundations of Machine Learning”

3. Google Cloud: “Cloud Vision API” <cloud.google.com/vision/>

4. Pallab Ghosh: BBC (2018) “AI early diagnosis could save heart and cancer patients” <www.bbc.co.uk/news/health-42357257>

5. Penny Crosman: (2017) “Beyond Robo-Advisers: How AI Could Rewire Wealth Management” <americanbanker.com/news/beyond-robo-advisers-how-ai-could-rewire-wealth-management>

6. Tom Simonite: Wired (2017) “For superpowers, artificial intelligence fuels new global arms race” <wired.com/story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race>

7. Josh Smith: Reuters (2016) “Exclusive: Afghan drone war-data show unmanned flights dominate air campaign” <www.reuters.com>

8. “Research priorities for robust and beneficial artificial intelligence” <futureoflife.org/ai-open-letter>

9. Elon Musk: Twitter (4th September 2017) <twitter.com/elonmusk>

10. “COCOM GPS Tracking Limits” <RAVTrack.com>

11. Nick Bostrom: (2014) “Superintelligence”

12. Jennifer Latson: (2015) “Did Deep Blue Beat Kasparov Because of a System Glitch?” <time.com/3705316/deep-blue-kasparov/>

13. Grant Sanderson: “Neural Networks” <youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw>

14. J Cohoon, J Karro, J Lienig: (2003) “Evolutionary Algorithms for the Physical Design of VLSI Circuits”

15. Irving John Good: (1965) “Speculations Concerning the First Ultraintelligent Machine”

16. Vincent C. Müller, Nick Bostrom: (2014) “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”

17. “Self-Preservation” <en.oxforddictionaries.com>

18. Norbert Wiener: (1960) “Some Moral and Technical Consequences of Automation”
