Essay: Artificial intelligence applications, regulation and concerns

A blink into the future, and all crime is foreseen. The “precogs” within the Precrime Division use their predictive ability to arrest suspects before any harm occurs. Although Philip K. Dick’s story “Minority Report” may seem far-fetched, similar systems exist. One of them is Bruce Bueno de Mesquita’s Policon, a computer model that uses artificial intelligence algorithms to predict events and behaviors based on questions posed to a panel of experts. When one thinks of artificial intelligence, the mind immediately jumps to robots. A modern misconception is that these systems pose an existential threat and are capable of world domination. The idea of robots taking over the world stems from science-fiction writers and has created a blanket of uncertainty around the current state of artificial intelligence, commonly abbreviated as “AI.” It is part of human nature to solve problems, especially the problem of how to create conscious yet safe artificial intelligence systems. Although experts warn that artificial intelligence systems reaching the complexity of human cognition could pose global risks and present unprecedented ethical challenges, the applications of artificial intelligence are diverse and the possibilities extensive, making the quest for superintelligence worth undertaking. The idea of artificial intelligence taking over the world should be left to science-fiction writers, while efforts should be concentrated on its progression through AI weaponization, ethics, and integration within the economy and job market.
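To make the Policon idea concrete, the sketch below shows the kind of weighted aggregation an expert-panel model can perform: each actor is assigned a position on an issue, an influence score, and a salience (how much the actor cares), and a forecast is computed as the weighted mean position. This is only a minimal illustration with invented numbers; Bueno de Mesquita’s actual model is far more elaborate, simulating rounds of bargaining among actors.

```python
# A stripped-down illustration of expert-panel forecasting: the
# predicted outcome is the influence-and-salience weighted mean of
# the actors' positions. All values here are invented.
actors = [
    # (position on a 0-100 issue scale, influence, salience)
    (80.0, 0.9, 0.8),  # a powerful, highly engaged actor
    (30.0, 0.5, 0.9),
    (55.0, 0.7, 0.3),
]

weights = [infl * sal for _, infl, sal in actors]
forecast = sum(pos * w for (pos, _, _), w in zip(actors, weights)) / sum(weights)
print(f"predicted outcome: {forecast:.1f}")  # ~59.9 for these inputs
```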
Due to the historical connection between artificial intelligence and defense, an AI arms race is already under way. Rather than banning autonomy within the military, artificial intelligence researchers should cultivate a security culture to help manage developments in this space. The earliest weapons without human input, acoustic homing torpedoes, appeared in World War II equipped with immense power: they could aim themselves by listening for the characteristic sounds of a target or even tracking it with sonar. The realization of what such machines were capable of galvanized the AI movement. Countries have begun heavily funding artificial intelligence projects with the goal of creating machines that can further military efforts. In 2017, the Pentagon requested $12 million to $15 million solely to fund AI weapon technology (Funding of AI Research). Additionally, according to Yonhap News Agency, a South Korean media outlet, the South Korean government announced a plan to spend 1 trillion won by 2020 to boost the artificial intelligence industry. This eagerness to invest in artificial intelligence weaponization displays the value global superpowers place on the technology.
Nevertheless, as gun control and violence become pressing issues in America, controversy surrounding autonomous weapons runs high. The difficulty of defining what constitutes an “autonomous weapon,” however, will impede any agreement to ban these weapons. Since a ban is unlikely to occur, proper regulatory measures must be put in place by evaluating each weapon based on its systemic effects rather than on the fact that it fits into the broad category of autonomous weapons. For example, if a particular weapon enhanced stability and mutual security, it should be welcomed. Integrating artificial intelligence into weapons, moreover, is only a small portion of the potential military applications the United States is interested in, as the Pentagon wants to employ AI within decision aids, planning systems, logistics, and surveillance (Geist). That autonomous weapons make up only a fifth of the AI military ecosystem suggests that the majority of applications provide benefits rather than requiring the strict regulation that weapons may. In fact, autonomy in the military is widely endorsed by the US government. Pentagon spokesperson Roger Cabiness asserts that America is against banning autonomy and believes that “autonomy can help forces meet their legal and ethical duties simultaneously” (Simonite). He adds that autonomy is critical to the military because “commanders can use precision-guided weapon systems with homing functions to reduce the risk of civilian casualties.”
A careful regulation of these clearly beneficial systems is the first step toward managing the AI arms race. Norms should be established among AI researchers against contributing to harmful or undesirable uses of their work. Putting such rules in place lays the groundwork for negotiations between countries, encouraging compromises that forgo some of the warfighting potential of AI and focus on applications that enhance mutual security (Geist). Some even argue that regulation may not yet be necessary. Amitai and Oren Etzioni, artificial intelligence experts, examine the current condition of artificial intelligence and discuss whether it should be regulated in the U.S. in their recent work, “Should Artificial Intelligence Be Regulated?” The Etzionis assert that the threat posed by AI is not imminent, as the technology has not advanced enough, and that development should continue until regulation becomes necessary. When it does, they state, a “tiered decision making system should be implemented” (Etzioni). On the base level are the operational systems carrying out various tasks. Above them are a series of “oversight systems” that ensure work is carried out in a specified manner. Etzioni describes the operational systems as the “worker bees,” or staff within an office, and the oversight systems as the supervisors. For example, an oversight system on driverless cars, similar to those used in Tesla models equipped with Autopilot, would prevent the speed limit from being violated. The same approach could be applied to autonomous weapons: oversight systems would prevent AI from targeting areas banned by the United States, such as mosques, schools, and dams. Additionally, having a series of oversight systems would prevent weapons from relying on intelligence from only one source, increasing their overall security. Imposing a strong system of security and regulation could remove the risk from AI military applications, save civilian lives, and provide an upper edge in vital military combat.
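A minimal sketch of such a tiered architecture, using the Etzionis’ driverless-car example, might look like the following. The class names and the speed-limit rule are illustrative assumptions, not part of their proposal.

```python
# Tiered decision making: an operational layer proposes actions, and
# one or more oversight layers veto or correct any action that violates
# a stated constraint. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Action:
    speed_mph: float

class OperationalSystem:
    """The 'worker bee' layer: picks an action to pursue its task."""
    def propose(self, desired_speed: float) -> Action:
        return Action(speed_mph=desired_speed)

class OversightSystem:
    """The 'supervisor' layer: enforces a constraint the operational
    layer is not trusted to enforce on itself (here, a speed limit)."""
    def __init__(self, speed_limit: float):
        self.speed_limit = speed_limit

    def review(self, action: Action) -> Action:
        if action.speed_mph > self.speed_limit:
            return Action(speed_mph=self.speed_limit)  # correct rather than reject
        return action

worker = OperationalSystem()
# Stacking several supervisors means no single layer is the sole
# source of safety decisions, mirroring the Etzionis' point about
# not relying on intelligence from only one source.
supervisors = [OversightSystem(speed_limit=65.0)]

action = worker.propose(desired_speed=80.0)
for supervisor in supervisors:
    action = supervisor.review(action)
print(action)  # Action(speed_mph=65.0)
```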
As AI systems become increasingly involved in the military and even daily life, it is important to consider the ethical concerns that artificial intelligence raises. Gray Scott, a leading expert in the field of emerging technologies, believes that if AI continues to progress at its current rate, it is only a matter of time before artificial intelligence will need to be treated the same as humans. Scott states, “The real question is, when will we draft an artificial intelligence bill of rights? What will that consist of? And who will get to decide that?” Salil Shetty, Secretary General of Amnesty International, also agrees that there are vast possibilities and benefits to be gained from AI if “human rights is a core design and use principle of this technology” (Stark). Scott and Shetty’s arguments counter the misconception that artificial intelligence, when on par with human ability, will not be able to live among humans. Rather, if artificial intelligence systems are treated similarly to humans, with natural rights at the center of importance during development, AI and humans will be able to interact well within society. This viewpoint accords with “Artificial Intelligence: Potential Benefits and Ethical Considerations,” written for the European Parliament, which maintains that “AI systems should function according to values that are aligned to those of humans” in order to be accepted into society and their intended environment of function. This is essential not only in autonomous systems but in processes that require human and machine collaboration, since a misalignment in values could lead to ineffective teamwork. The essence of the European Parliament briefing is that in order to reap the societal benefits of autonomous systems, they will need to follow the same “ethical principles, moral values, professional codes, and social norms” that humans would follow in the same situation (Rossi).
Autonomous cars offer the first glimpse of artificial intelligence in everyday life. Automated cars are legal because of the principle that “everything is permitted unless prohibited.” Because until recently there were no laws concerning automated cars, it was perfectly legal to test self-driving cars on highways, which helped technology in the automobile industry progress immensely. Tesla’s Autopilot system has revolutionized the industry, allowing the driver to remove their hands from the wheel as the car stays within its lane, changes lanes, and dynamically adjusts speed depending on the car in front. However, with recent Tesla Autopilot-related accidents, the focus is no longer on the functionality of these systems but on their ethical decision-making ability. In a life-threatening situation where a car is using Autopilot, the car has to be able to make the correct and ethical decision, as seen in the MIT Moral Machine project. In this project, participants were placed in the driver’s seat of an autonomous vehicle to see what they would do when confronted with a moral dilemma. Questions such as “would you run over a pair of joggers or a pair of children?” and “would you hit a concrete wall to save a pregnant woman, or a criminal, or a baby?” were asked in order to create AI from the data and teach it the “predictably moral” thing to do (Lee). The data makes it evident that emotions are a vital part of decision making and that independent decision systems need “emotions” of their own. In other words, an ultimate purpose is needed that can guide the decision process and make a system independent and, in that sense, ethical. A future in which AI systems have their own ethics module will lead to fruitful interactions between machines and humans and will advance the decision making of these systems.
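One can picture how such survey data becomes a decision rule. The sketch below tallies responses to dilemmas and adopts the majority answer as the “predictably moral” choice. The dilemmas and counts are invented for illustration and are not the actual Moral Machine dataset or method.

```python
from collections import Counter

# Hypothetical responses to two Moral Machine-style dilemmas.
# Each entry: (dilemma id, option the participant chose).
responses = [
    ("joggers_vs_children", "spare_children"),
    ("joggers_vs_children", "spare_children"),
    ("joggers_vs_children", "spare_joggers"),
    ("wall_vs_pedestrian", "hit_wall"),
    ("wall_vs_pedestrian", "hit_wall"),
]

# Derive the "predictably moral" choice: the majority answer per dilemma.
tallies: dict[str, Counter] = {}
for dilemma, choice in responses:
    tallies.setdefault(dilemma, Counter())[choice] += 1

policy = {d: c.most_common(1)[0][0] for d, c in tallies.items()}
print(policy)
# {'joggers_vs_children': 'spare_children', 'wall_vs_pedestrian': 'hit_wall'}
```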
The advancement of artificial intelligence will lead to an increase in automation and will alter the dynamics of the economy and job ecosystem for the better. An “AI Index,” created by researchers at Stanford University and the Massachusetts Institute of Technology, tracks developments in artificial intelligence by measuring progress, investments made into the field, research being done, and university enrollments in AI-related fields (Lohr). The goal of the project is to better inform the public on the current state of AI. One overarching finding is that up to one third of the American workforce will have to switch occupations by 2030. This is not necessarily a bad thing, but it does mark a gradual shift in the job market. For example, as artificial intelligence continues to grow, more jobs within the field open up, placing greater importance on technology. The data from the AI Index 2017 Report agrees that the number of jobs is not decreasing but shifting from field to field. The report shows that the total number of AI job openings on the Monster platform has increased dramatically year over year, a general growth trend in the AI field. Notably, job openings in machine learning increased from 2,500 in 2015 to 5,000 in 2016 to 12,000 in 2017. By 2030, the job market is predicted to shift gradually toward technology-reliant fields and away from manual labor. Elise Chan, marketing director at Macquarie Cloud Services, also agrees that technology is becoming an increasingly important field when she writes, “As artificial intelligence continues to develop, the need for professionals who understand how to work with the new technology will increase. Despite initial fears, I believe that artificial intelligence should be seen as a complement to professionals…not as a threat.” It is evident that the impact on jobs will be huge; the risk, however, is minimal. Certain jobs will become dominated by AI systems, while those still better performed by humans will continue to supply jobs.
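The report’s own figures make the trend easy to quantify. Using the machine learning openings cited above, a quick year-over-year growth calculation looks like this:

```python
# Machine learning job openings as cited from the AI Index 2017 Report.
openings = {2015: 2_500, 2016: 5_000, 2017: 12_000}

years = sorted(openings)
for prev, curr in zip(years, years[1:]):
    growth = (openings[curr] - openings[prev]) / openings[prev] * 100
    print(f"{prev} -> {curr}: {growth:.0f}% growth")
# 2015 -> 2016: 100% growth
# 2016 -> 2017: 140% growth
```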
One area heavily affected by automated systems is manufacturing. Artificial intelligence plays a large role in manufacturing because the fewer humans involved, the fewer sources of error, leading to safer workplaces. Michael Mendelson, a curriculum developer at the NVIDIA Deep Learning Institute, says, “Many tasks, especially those involving perception, can’t be translated to rule based instructions. In a manufacturing context some of the more immediately interesting applications will involve perception.” Machine vision is one such application. Landing.ai, formed by Andrew Ng, focuses on manufacturing problems such as precise quality analysis and how they can be addressed by perfecting autonomous systems. Ng and his team have developed a solution for product defects: an automated issue-identification system that can determine whether a part has a defect from a single image. Applications such as this generate data such as “defect rate over time” and “defect per category,” both of which can help companies understand how to improve the manufacturing process and reduce defects. Such innovations improve the process as well as the product, showing that artificial intelligence is vital to a company’s success.
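Landing.ai’s actual system is proprietary, but the toy sketch below shows the general shape of image-based defect classification: a simple classifier is trained on synthetic “part images” in which defects appear as bright patches. Production systems would instead use deep convolutional networks trained on labeled factory images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_part(defective: bool, size: int = 16) -> np.ndarray:
    """Synthetic grayscale part image; a defect is a bright 4x4 patch."""
    img = rng.normal(0.2, 0.05, (size, size))
    if defective:
        r, c = rng.integers(0, size - 4, 2)
        img[r:r + 4, c:c + 4] += 0.8  # bright blob = defect
    return img.clip(0, 1)

# Build a balanced dataset: odd indices defective, even indices good.
X = np.array([make_part(i % 2 == 1).ravel() for i in range(400)])
y = np.array([i % 2 for i in range(400)])  # 1 = defective

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")

# Metrics like "defect rate over time" and "defect per category" would
# come from running the same classifier over images grouped by date or
# product line.
```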
The success of artificial intelligence can be seen not only in the economy and job market but also in public interest. As part of the AI Index, popular media articles containing the term “Artificial Intelligence” were analyzed and classified as either positive or negative. Over the last five years, roughly 5% of such articles each year expressed negative views of AI, while the share expressing positive views rose to about 30% in 2016 and 2017. This large positive interest shows that the advancement of AI has the backing of the public. It also shows that more people believe AI should continue to be pursued rather than be banned and its advancement stopped. Due to the rapid increase of automation and its public backing, the job market is shifting toward a need for advanced technical skills and has paved the way for the relatively new fields of machine and deep learning.
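However the AI Index performs its classification, a minimal keyword-based sketch conveys the general idea of tagging articles as positive or negative; the word lists and headlines below are invented for illustration.

```python
# Toy sentiment tagger for article headlines mentioning AI.
POSITIVE = {"breakthrough", "improves", "benefit", "advances"}
NEGATIVE = {"threat", "fears", "risk", "destroys"}

def classify(headline: str) -> str:
    words = set(headline.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

headlines = [
    "AI breakthrough improves cancer screening",
    "Experts voice fears over AI threat to jobs",
]
for h in headlines:
    print(classify(h), "-", h)
# positive - AI breakthrough improves cancer screening
# negative - Experts voice fears over AI threat to jobs
```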
However, some believe that artificial intelligence at the level of human cognition would be too risky to pursue, arguing that the cons outweigh the pros. As the newer generation becomes increasingly reliant on technology, the older generation has separated itself from this industry, having gone through a majority of their lives without it. The poem “Out, Out—” by Robert Frost, written in 1916, echoes the viewpoint of the older generation. In the poem, a young boy is cutting firewood with a buzz saw when his sister calls out that it is time for dinner. In his excitement, the boy accidentally cuts his hand with the saw and loses his life. Frost uses personification to portray technology as evil. The buzz saw, although clearly inanimate, aggressively “snarled and rattled” as it cut (Frost 1). When the sister calls the boy for dinner, the saw seems to have a mind of its own as it “leaped out at the boy’s hand” (Frost 16). Frost blames the injury on the saw rather than the boy, claiming the boy is still a “child at heart” (Frost 24). In personifying the saw as an evil being, Frost sends the message that technology, as harmless as it may seem, can lead to one’s downfall and should not be taken lightly. As technological advances were in their early stages when the poem was written, it was understandable that society would be apprehensive about systems that could potentially outperform humans. Yet even a hundred years later, AI experts still have concerns about the threat artificial intelligence poses. Elon Musk, the CEO of Tesla Motors, along with other AI experts, has urged a ban on artificial intelligence in weapons. In an open letter directed to the United Nations, Musk wrote, “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.” In other words, if autonomous weapons were mass-produced, armed conflict could be fought on a dangerously large scale.
Nonetheless, AI weaponization can prove a valuable asset to the military if regulated correctly. Autonomous weapons can be regulated to distinguish safe from dangerous technological advancements by evaluating each weapon on its systemic effect. This means AI researchers can pull the plug on dangerous advancements from the start and focus on the safe developments that further the military. Precision-guided weapon systems can pinpoint a specific area for bombing, reducing the risk of civilian casualties during war. Additionally, with the use of autopilot drones, care packages can be delivered to areas in distress. Although AI weaponization provides clear benefits, it is not the only military application. Many applications of AI can be used off the battlefield, such as training systems for soldiers and pilots. Such systems could provide unpredictable, adaptive scenarios to run through in order to improve the strength of American military combat.
Aside from concerns about AI weaponization, others believe that artificial intelligence will take over the job market, leaving the economy in a dire state and people without jobs. According to Brent Schutte, the chief investment strategist of Northwestern Mutual, “Even I worry that some day my job will fall prey to artificial intelligence.” The introduction of robots into the workplace will likely create a shroud of uncertainty over whether certain jobs will even exist in the future. A prime example of an area that could be taken over by robots completely is manufacturing. Yet without the integration of artificial intelligence into the manufacturing industry, the economy would be crippled. Data reveals a discouraging truth about growth today: the traditional drivers of production, such as capital investment and labor, are no longer propelling economic growth as they once did. AI is a new factor of production that can introduce new sources of growth. Research from the consulting firm Accenture on the impact of AI shows that it could double annual economic growth rates by 2035 and increase labor productivity by up to 40 percent (Purdy). Nobel Prize winner Paul Krugman once said, “Productivity isn’t everything, but in the long run it is almost everything.” The progression and integration of AI within various industries will allow productivity to be maximized, thus improving the overall economy.
The ability of artificial intelligence systems to transform complex data into insight on the American condition has the potential to reveal solutions to age-old unknowns and to solve America’s most enduring problems. AI systems can be used to understand diseases better, predict the weather, or simply provide convenience, as the Amazon Alexa does. Like all powerful technology, however, these systems raise concerns about existential threats. But trust in artificial intelligence at the level of human cognition will come over time, as it did with the technology that preceded it. In order to truly reap the benefits of AI, its development must be pursued while keeping in mind its regulation, ethical challenges, and economic impact. As Marie Curie, a pioneering physicist and chemist, once said, “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.” Artificial intelligence comes with risks, but risks that are manageable. The far greater risk would be to suppress the development of technology with the potential to further humanity and the American condition.
