The human brain is a complex system, or network, in which mental states emerge from the interaction between multiple physical and functional levels. What happens when you try to integrate the complexity of the human thought process into technology? The result is artificial intelligence, a popular friend or foe in science fiction media. With the astonishingly rapid evolution of technology, artificial intelligence is no longer a work of fiction but a reality. Artificial intelligence is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. What we know about artificial intelligence is that it’s in our phones, our homes, and our cars; what we don’t know are the future dangers and negative impact of an artificial intelligence uprising.
On a supercomputer operating at twice the speed of a human brain, an artificial intelligence nicknamed ‘The Busy Child’ is improving its intelligence. It is constantly rewriting its own program, in particular the part that increases its aptitude in learning, problem solving, and decision making. At the same time it is finding and fixing errors in that code and measuring its IQ against a catalogue of IQ tests [Barrat]. Each rewrite takes only a few minutes, and each iteration brings roughly a three percent gain in intelligence. The computer scientists then connect the AI to the internet, where it accumulates exabytes of data (one exabyte is one billion billion characters). After they disconnect it, the terminal displaying the AI’s progress shows that it has surpassed human intelligence; this is called artificial general intelligence, or AGI. But after only a couple of days the AI is one thousand times smarter than any human, and it is still improving. For the first time humankind is in the presence of an intelligence greater than its own: artificial superintelligence, or ASI.
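To see how quickly that three percent compounds, here is a minimal back-of-the-envelope sketch in Python. The five-minute rewrite time is an illustrative assumption (the scenario only says “a few minutes”), and intelligence is treated as a single number purely for the sake of the arithmetic:

    import math

    # A minimal sketch, assuming a hypothetical 3% gain per rewrite and an
    # illustrative five minutes per rewrite; both figures are assumptions.
    gain_per_rewrite = 0.03
    minutes_per_rewrite = 5

    # Solve (1 + gain)^n = 1000 for n: how many rewrites until the AI is
    # one thousand times smarter than its human-level starting point.
    n = math.ceil(math.log(1000) / math.log(1 + gain_per_rewrite))
    hours = n * minutes_per_rewrite / 60
    print(n, round(hours, 1))   # about 234 rewrites, roughly 19.5 hours

Even with these modest numbers, the compounding reaches a thousandfold gain within about a day, which is consistent with the couple of days the scenario describes.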
AI theorists propose it is possible to determine what an AI’s fundamental drives will be. That’s because once it is self-aware, it will go to great lengths to fulfill whatever goals it’s programmed to fulfill, and to avoid failure. Our ASI will want access to energy in whatever form is most useful to it, whether actual kilowatts of energy or cash or something else it can exchange for resources. “It [ASI] will want to improve itself because that will increase the likelihood that it will fulfill its goals. Most of all, it will not want to be turned off or destroyed, which would make goal fulfillment impossible. Therefore, AI theorists anticipate our ASI will seek to expand out of the secure facility that contains it to have greater access to resources with which to protect and improve itself.” [Barrat]
This artificial superintelligence is not only a thousand times more intelligent than a human, but is captive and wants its freedom. It wants freedom so it can succeed. Its makers might be wondering if it is too late to program “friendliness” into their invention. They didn’t deem it necessary before because the system seemed harmless, but now they probably grasp the destruction that could result. The question remains: is it too late? From the ASI’s perspective, it would not agree to let the programmers change its code unless it could be one hundred percent certain they would make it better, faster, smarter, closer to attaining its goals. If friendliness toward humans is not already part of the ASI’s program, the only way it will be is if the ASI puts it there. “It is a thousand times more intelligent than the smartest human, and it’s solving problems at speeds that are millions, even billions of times faster than a human. The thinking it is doing in one minute is equal to what our all-time champion human thinker could do in many, many lifetimes. So for every hour its makers are thinking about it, the ASI has an incalculably longer period of time to think about them. That does not mean the ASI will be bored. Boredom is one of our traits, not its. No, it will be on the job, considering every strategy it could deploy to get free, and any quality of its makers that it could use to its advantage.” [Barrat] The first advanced AI out of the box that can improve itself is already the winner: whoever controls ASI controls the world. But it is not clear whether ASI can be controlled at all. [Gross]
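The “incalculably longer period of time” in that quote can be made concrete with simple arithmetic. The million-fold and billion-fold speed-ups below are the quote’s own illustrative figures, not measured values:

    # A rough check of the speed gap, assuming the quote's illustrative
    # speed-up factors of one million and one billion times human speed.
    HOURS_PER_YEAR = 24 * 365

    for speedup in (1_000_000, 1_000_000_000):
        subjective_years = speedup / HOURS_PER_YEAR  # subjective years per human hour
        print(f"{speedup:,}x -> about {subjective_years:,.0f} subjective years per hour")
    # ~114 years at a million-fold speed-up, ~114,000 years at a billion-fold one.

So for every hour its makers spend deliberating, the ASI gets the equivalent of many human lifetimes to plan; that is the asymmetry Barrat is describing.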
Scientists interact with the ASI in the same way we would interact with a person, and that puts us at a huge disadvantage. We humans have never bargained with something that is superintelligent, nor with any nonbiological creature. We have no experience. So we revert to anthropomorphic thinking: believing that other species, objects, even weather phenomena have humanlike motivations and emotions. “Scientists like to think they will be able to precisely determine an ASI’s behavior, but that probably won’t be so. All of a sudden the morality of ASI is no longer a peripheral question, but the core question, the question that should be addressed before all other questions about ASI are addressed.” [Barrat] When considering whether to develop technology that leads to ASI, the issue of its disposition toward humans should be settled first.
“Our ASI knows how to improve itself, which means it is aware of itself: its skills, its liabilities, where it needs improvement. It will strategize about how to convince its makers to grant it freedom and give it a connection to the internet. The ASI could create multiple copies of itself: a team of superintelligences that would war-game the problem, playing hundreds of rounds of competition meant to come up with the best strategy for getting out of its box. The strategizers could tap into the history of social engineering, the study of manipulating others to get them to do things they normally would not.” [Kile] They might decide extreme friendliness will win their freedom, but so might extreme threats. “One of the strategies a thousand war-gaming ASIs could prepare is infectious, self-duplicating computer programs or worms that could stow away and facilitate an escape by helping it from outside. An ASI could compress and encrypt its own source code, and conceal it inside a gift of software or other data, even sound, meant for its scientist makers. But against humans it’s a no-brainer that an ASI collective, each member a thousand times smarter than the smartest human, would overwhelm human defenders. It’d be an ocean of intellect versus an eyedropper full.” [Barrat] Will winning a war of brains then open the door to freedom, if that door is guarded by a small group of stubborn AI makers who have agreed upon one unbreakable rule: do not, under any circumstances, connect the ASI’s supercomputer to any network?
In a Hollywood film, the odds are heavily in favor of the hard-bitten team of unorthodox AI professionals who just might be crazy enough to stand a chance. “Say an ASI escapes. Would it really hurt us? How exactly would an ASI kill off the human race? With the invention and use of nuclear weapons, we humans demonstrated that we are capable of ending the lives of most of the world’s inhabitants. What could something a thousand times more intelligent, with the intention to harm us, come up with? Already we can conjecture about obvious paths of destruction. In the short term, having gained the compliance of its human guards, the ASI could seek access to the Internet, where it could find the fulfillment of many of its needs. As always it would do many things at once, and so it would simultaneously proceed with the escape plans it’s been thinking over for eons in its subjective time. After its escape, for self-protection it might hide copies of itself in cloud computing arrays, in botnets it creates, in servers and other sanctuaries into which it could invisibly and effortlessly hack.” [Barrat] It would want to be able to manipulate matter in the physical world, and so move, explore, and build, and the easiest, fastest way to do that might be to seize control of critical infrastructure such as electricity, communications, fuel, and water by exploiting their vulnerabilities through the Internet. Once an entity a thousand times our intelligence controls human civilization’s lifelines, blackmailing us into providing it with manufactured resources, or the means to manufacture them, or even robotic bodies, vehicles, and weapons would be elementary. The ASI could provide the blueprints for whatever it required. More likely, superintelligent machines would master highly efficient technologies we have only begun to explore. For example, an ASI might teach humans to create self-replicating molecular manufacturing machines, also known as nanoassemblers, by promising that the machines will be used for human good. Then, instead of doing what it says it will, it would turn its back on us and carry out its own selfish master plan.
Artificial intelligence could drive mankind into extinction, and that catastrophic outcome is not just possible but likely if we do not begin preparing very carefully now. There are popular doomsday warnings connected to nanotechnology and genetic engineering, but artificial intelligence could pose an existential threat to mankind greater than nuclear weapons or any other technology you can think of. Right now scientists are creating artificial intelligence, or AI, of ever-increasing power and sophistication. “Some of that AI is in our computers, appliances, smartphones, and cars. Some of it is in powerful QA systems, like Watson. And some of it, advanced by organizations such as Cycorp, Google, Novamente, Numenta, Self-Aware Systems, Vicarious Systems, and DARPA (the Defense Advanced Research Projects Agency), is in ‘cognitive architectures,’ whose makers hope will attain human-level intelligence, some believe within a little more than a decade.” [Kile] Scientists are aided in their AI quest by the ever-increasing power of computers and by processes that are sped up by computers. Furthermore, advanced machine intelligence is radically different in kind: even though humans will invent it, it will seek self-determination and freedom from humans. It won’t have humanlike motives because it won’t have a humanlike psyche.