I. (Gain Attention and Interest) Here are some scenarios for you to think about. Imagine you are traveling to the EU. Before you enter, a detector system automatically assesses your official documents, social media activity, and biometric data, and analyzes your face to see if you are lying. (a brief pause) How about this: your personal information, behaviors, and thoughts are digitally surveilled in secret by public authorities through your web browsing and your use of smart assistants on your phone and virtual "home assistants." How do you feel about these technologies stealing your privacy and undermining your data protection? And possibly your human rights? These two scenarios are happening right now, and they have already gradually limited and eroded our personal space.
II. (Reveal Topic) These problems mark only the beginning of the uses of AI technology we are currently experiencing. So now the question is: How can AI be used fairly and without bias? We need a legal system to regulate Artificial Intelligence, just as we have laws on gun use, laws on freedom of speech and the press, and a criminal justice system, to prevent unnecessary conflicts between technology and humans and to secure human rights and safety in the future.
III. (Establish Credibility) After researching my topic through authoritative websites and scholarly sources, I have come to the realization that a higher authority needs to regulate the further development and use of AI.
IV. (Preview the Body of the Speech) Now, I would like to present the risks posed by certain applications of AI in our society and potential solutions involving government action.
(Signpost: We’ll start by looking at the need for AI.)
Body
I. (Need) Let’s take a look at how Artificial Intelligence has negatively influenced the human race in different aspects of life.
But first, we need to understand what exactly Artificial Intelligence is.
Professor Nils J. Nilsson of Stanford University states: “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” In other words, AI enables a computer to “think” and “learn” like a person. Its algorithms process large amounts of information, draw conclusions, and learn from experience, as in Netflix’s recommendation system. AI and robotics are separate fields: robotics deals with robots, whereas AI deals with programming intelligence.
Now let us dive into the current issues that are caused by this advanced technology.
We already knew that companies and other organizations had been applying A.I. technology to social media and networks, but we never knew how detrimental the consequences would be. People’s online information and behaviors are secretly harvested from social networks like Twitter and Facebook. That information can be used to automatically generate custom emails. These emails, malicious websites, or links can be sent from fake accounts that mimic the writing style of people’s friends, making them look real. This fake information can lead to misunderstandings and conflicts between friends, families, and even countries.
For example, in a broader sense of the use of A.I., Russian hackers and others are using A.I. to manipulate the views of Americans by mining data and disseminating fake news on social media sites. According to the Brookings Institution, Moscow’s savviness has shown that “ready-made commercial tools and digital platforms can be easily weaponized, digital information warfare is cost-effective and high-impact, making it the perfect weapon of a technologically and economically weak power.” U.S. government and independent investigations found that the cost of Russia’s influence campaign against the United States during the 2016 elections was quite low: about $100,000 for ads on Facebook and about $4,700 for ads on Google. It also created approximately 36,000 automated bot accounts on Twitter to produce misleading or divisive content, such as pictures and memes. These costs, plus additional costs related to the cyber attacks on the Democratic National Committee and the Clinton campaign, totaled only around one million dollars, showing that AI-driven asymmetric warfare (ADAW) capabilities are within reach of any country that chooses to use them. So if AI-driven influence campaigns are this cheap and there is no regulation or agreement on the issue, how can we live in this world safely and in peace?
Here is another example that you might know better. The new A.I. software “Animoji” on iPhones allows you to turn your face into an animated emoji, projecting your facial expressions onto the emoji along with your voice. Face2Face is an advanced version of this idea. According to researchers from multiple universities, Face2Face can capture “real-time markerless” facial performance using commodity sensors: it animates the facial expressions in a target video with those of a source actor and transfers them to a manipulated output video. A.I. tools like this can be used to promote misinformation, slander, fraud, and misrepresentation, as in the recent fake Barack Obama video. (Visual Aid-video) According to The Washington Times in 2018, Obama was not actually speaking in that video to convey a “message” to the public. You might see his face, mouth, and lips shaping syllables in sync with the exact words heard on screen, but everything you see and hear is a fraud. It was all a product of Jordan Peele’s production company, created with Adobe After Effects and the AI face-swapping tool FakeApp.
Furthermore, recently disclosed information about the development of autonomous weapons has raised my concern. Based on a survey I conducted in this class, 80% of you share that concern.
Autonomous weapons do not mean smarter “smart bombs.” A smart bomb can precisely strike a particular location in time and space, but the key is that the location is set by a human. According to the journal Issues in Science & Technology, these are weapons with the first level of autonomy, or “human-in-the-loop systems.” They are in use today, and Israel’s Iron Dome system is a great example of this level of autonomy. Fully autonomous weapons, however, choose their own targets and decide when to fire and how much force to apply. The major issue is the very choice of target. Even with advances in facial and gesture recognition and biometrics, there is still no guarantee that civilians would not be targeted, and other unknown or potential threats might emerge. Civilian safety is significant: it means the right to life. The United Nations Office of the High Commissioner for Human Rights describes international human rights law as “a broader set of treaties, principles, laws, and national obligations regarding civil, political, economic, social, and cultural rights that all human beings should enjoy.” If autonomous weapon systems with learning abilities were able to improve themselves quickly and exceed their creators’ control, they would be a danger to anyone within their immediate reach. The unintended effects of creating and fielding fully autonomous systems could lead to severe, unwanted consequences.
(Transition: Now you know the need for regulations over the development of AI. So the question is: How can we and the higher authority help?)
II. (Satisfaction+Visualization) In order to discipline the creation of different artificial intelligence technologies, governments, relevant engineers, scientists, academics, and researchers need to reach agreements globally. And each country should enact laws governing who may develop and deploy AI, and where and when the technology may be used.
In terms of our social lives, we work with social media and networks nonstop, day and night. We should not be slaves to AI software or services. You can help by not oversharing your personal information on platforms such as Facebook, Twitter, and Instagram; otherwise, you are voluntarily giving up your privacy to strangers or bots. While surfing the internet, you can enable privacy settings so it is harder for others to track you. A senior European Commission official, Paul Nemitz, recently said, “We need a new culture of technology and business development for the age of AI which we call rule of law, democracy and human rights by design. These core ideas should be baked into AI, because we are entering a world in which technologies like AI become all pervasive and are actually incorporating and executing the rules according to which we live in large part”. Governments should therefore set up built-in systems on these networks to filter out bots, fake accounts, and malicious websites or links, in order to minimize crime in the invisible online world. But not only should governments set up regulatory systems; it is also crucial for each of us to contribute to the society we live in, as I stated before.
Next, I am going to focus on how governments should regulate the use of autonomous weapons. Autonomous weapons are dangerous to the human race, and the potential threats are unforeseen. Governments should absolutely come to an agreement restricting the use of fully autonomous weapons. Legislators need to pass legal acts requiring AI weapons to be taught the laws of war, peace, and justice through pre-set algorithms. It is also essential for autonomous weapons to uphold human rights globally. The best way to prevent future disruptions of human activity is to set up policies, laws, and strategies that we all agree on. For example, according to the University of Texas, most nations of the world have already signed the Treaty on the Non-Proliferation of Nuclear Weapons, which was one major reason that several nations, including South Africa, Brazil, and Argentina, dropped their programs to develop nuclear weapons, and that those who already had them reduced their nuclear arsenals. Other relevant treaties include the bans on biological and chemical weapons and the ban on landmines. Similarly, governments can do even more to prevent a tragedy between technology and humans, such as promoting oversight AI systems known as “AI Guardians.” Researchers at the University of Texas state that such a system can ensure the decisions made by autonomous weapons stay within a predetermined set of parameters. For instance, the weapons would not be permitted to strike targets banned by the US military, including mosques, schools, and dams.
Conclusion (Action)
I. (Signal Ending) So now you are equipped with the knowledge of why and how we should regulate artificial intelligence technologies. Think about how these solutions might make a difference in your life and in the lives of your family and friends.
II. (Call to Action) I understand that you, as an individual, may not be able to make a big difference in this technology right now. But as long as you use your smart devices smartly and ethically, you can still preserve your data protection, privacy, human rights, and safety. And if we all work together and understand the significance of our collective human voice, we will be contributing to a world at peace.
III. (End on a Strong Note) As such, we humans are not ready to give away our power yet!