Inevitably, we will build a computer that outsmarts us. As intelligent beings, this thought scares us, especially those who take pride in their intelligence. Believing that their minds alone make them unique, they dismiss the idea of a computer smarter than they are. That is wishful thinking, because reality says otherwise. Within the next decade or so, researchers will likely develop machines that analyze data at extraordinary speeds, offering answers to century-old questions. This is, for the most part, a dangerous thing if we are not careful to build AI safely and ethically. Superintelligent robots in the hands of bad actors will do bad things, and for humanity's sake those robots should be kept out of such hands.
Despite these concerns, we stand to benefit from the technological advances that supercomputers will bring. With them, we augment our cognitive capacities, bringing us closer to solutions for problems that have boggled society for decades. Questions like the cure for cancer, the prevention of war, and the existence of extraterrestrial life are answered far more quickly by algorithms running on supercomputers than they could ever be worked out by hand. Supercomputers are high-performing machines that not only outperform desktop computers but can also be programmed to behave as intelligently as human beings. When a computer reaches that point, we give it the name artificial intelligence (AI).
Many people are convinced AI will destroy the world as we know it. Much of science fiction, like Westworld and Resident Evil, depicts violent red-eyed robots with the secret objective of overthrowing our governments and dominating our minds. This theory places too much weight on an AI's ability to act freely on its own will. Only if robots had free will could they choose to act contrary to their code, and that is not the case. Robots don't have free will; their actions are predetermined by the code a software engineer wrote. It is a person who determines a robot's objective. If anything is going to kill us, it is us humans. Software development is a lengthy process filled with trial runs and debugging, the practice of checking lines of code for errors. Through that trial and error, engineers know the project's risks well beforehand, so when they are called to court they cannot plead innocence on the grounds that they didn't know. Every party involved in the manufacture of a faulty AI should be held accountable under civil law.
A libertarian, a believer in the free market, might look at me as if I had a wart on my nose. Why on earth would I bring the government into the private sector? Companies, the libertarian argues, should be free to produce AI and give it whatever objective they wish, without Uncle Sam's watchful eye; genuine scientific discovery comes spontaneously and without direction, and all the government will do is impose standards and codes that inhibit research. My response is that those who feel oppressed by the law are the ones who need it most. Aquinas says that men and women are naturally inclined toward virtue, and so for them the law is merely a reinforcement of their moral behavior. He also says that "since some are found to be depraved, and prone to vice, and not easily amenable to words, it was necessary for such to be restrained from evil by force and fear, in order that, at least, they might desist from evil-doing, and leave others in peace, and that they themselves, by being habituated in this way, might be brought to do willingly what hitherto they did from fear, and thus become virtuous. Now this kind of training, which compels through fear of punishment, is the discipline of laws" (Aquinas). In other words, those who are not naturally inclined toward virtue have the law as a standard that guides their actions, since the law holds their misbehavior accountable through methods of psychological and financial pain. Aquinas would agree with me that companies must be held accountable by the law, especially when they have crossed a line and refuse to accept their responsibilities. In the quotation above, Aquinas demonstrates an acute awareness of human behavior. He is not naïve enough to think everyone is virtuous; instead, he believes the public, apart from outliers, is inclined toward virtue. He is right to expect the best from people while guarding against the worst in them. Within that paradox, I argue for the development of technology under the relentless observation of the law. Before I go any further, I must raise a question: do robots have free will?
Imagine this: a piece of metal connected to a power outlet can speak like a human, think like a human, hear and process sounds like a human, do everything sensory that a human does. Even so, robots are programmed by humans to perform actions and gather data; that is how they work. Humans are different. For starters, no one instructed us to develop robots; we came up with the idea through imagination and curiosity. We are the makers, and robots are our creation. That should be distinction enough. Consider, for example, that a robot's code can be changed so easily as to produce one result rather than another. Did the robot will that change? No, the programmer willed it. Therefore, robots do not have free will. They have not self-generated their own system, as humans have. Only living creatures can reproduce without anything prior.
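To make the point concrete, here is a minimal sketch in Python; the Robot class, its objective, and every name in it are hypothetical, invented purely for illustration. The machine's behavior is fixed entirely by the objective its programmer writes in; change one line and the behavior changes, but the change always originates with the programmer, never with the machine.

```python
# Hypothetical illustration: a robot's "will" is just code its maker wrote.

class Robot:
    def __init__(self, objective):
        # The objective is supplied from outside; the robot never chooses it.
        self.objective = objective

    def act(self):
        # Behavior is fully determined by the programmer-supplied objective.
        return f"Executing objective: {self.objective}"

# The programmer decides what the robot is for...
robot = Robot("assist hospital patients")
print(robot.act())  # -> Executing objective: assist hospital patients

# ...and only the programmer can "change its mind".
robot.objective = "collect user data"
print(robot.act())  # -> Executing objective: collect user data
```

Nothing in this program lets the robot revise its own objective; every change traces back to a human decision, which is exactly where I locate moral responsibility.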
My own objection raises another objection. Don't humans also undergo transformative change? Aren't there psychological cases in which a person's personality changes after a traumatic experience? In fact, frontal lobe injury typically changes a person's temperament. Personality is much like a system through which thoughts are processed and actions produced; when the personality changes, the thought process and the actions change with it. Such injuries are analogous to changes made to a robot's code. In that case, one could argue that humans don't have free will either. After all, we did not cause ourselves. By definition, free will is to be morally responsible for all of one's actions. But one's actions are produced by one's personality, for which one must also be responsible, and beyond personality there are hereditary traits and previous experiences for which one must be responsible too. Since one cannot be responsible for one's personality, hereditary traits, or previous experiences, one cannot have free will. This is Strawson's Basic Argument (Strawson). Consequently, without moral responsibility, arguments from ethics or punishment cannot stand, and the companies that produce evil robots will not be held accountable.
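The regress driving this conclusion can be laid out step by step. What follows is my own schematic paraphrase of the Basic Argument, not Strawson's exact wording:

```latex
% A schematic paraphrase of Strawson's Basic Argument.
\begin{enumerate}
  \item You act as you do because of the way you are: your
        personality, hereditary traits, and previous experiences.
  \item To be ultimately responsible for your actions, you must be
        ultimately responsible for the way you are.
  \item To be responsible for the way you are, you would have to have
        made yourself that way, which requires an earlier self that
        you also made, and so on without end.
  \item No finite being can complete that regress; nothing is
        \emph{causa sui}, the cause of itself.
  \item Therefore, no one is ultimately morally responsible for
        their actions.
\end{enumerate}
```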
Nevertheless, there is a way out of Strawson's Basic Argument, and it is this: free will rests not on being causa sui, but on whether the will is generated by an agent whose structures are self-constructed. Farnsworth says, "to make the closure condition concrete and include an answer to Strawson's 'Basic Argument' it will now be narrowed to a requirement for self-construction, since this implies the embodiment of self with the pattern-information that will subsequently produce the agent's behavior. Put differently, we are to consider a cybernetic system that, by constructing itself materially, determines its transition rules, by and for itself" (Farnsworth). He means that a robot must be able to produce all of its internal and external components for it to have free will. So far, robots have only been able to reproduce their external components, and only after a human has given them the first copy of the code. Because the robot is limited in this way, it does not have free will. Beyond showing that robots lack free will, Farnsworth raises an important distinction between humans and robots, one that reopens the possibility that humans have free will. Humans reproduce themselves without needing a first copy: it is in our DNA to reproduce, and our internal structure involuntarily carries all the instructions it needs to generate itself. In that sense we have caused ourselves, answering Strawson's Basic Argument. Once again, I can argue that humans have free will and moral responsibility.
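One way to schematize Farnsworth's escape route, in notation of my own rather than his, is as a condition on the agent itself rather than on an infinite chain of causes:

```latex
% A hypothetical formalization of the self-construction condition.
% Free(A):          agent A has free will.
% SelfConstruct(A): A materially built the structures that encode its
%                   own transition rules, i.e. the pattern-information
%                   that produces its behavior.
\[
  \mathrm{Free}(A) \iff \mathrm{SelfConstruct}(A)
\]
% A present-day robot receives the first copy of its code from a human,
% so SelfConstruct(robot) fails, and Free(robot) fails with it. A human
% organism builds its own structure from instructions carried in its
% DNA, so SelfConstruct(human) holds, and the Basic Argument's regress
% is cut off at self-construction rather than at being causa sui.
\[
  \neg\mathrm{SelfConstruct}(\mathrm{robot}) \implies \neg\mathrm{Free}(\mathrm{robot}),
  \qquad
  \mathrm{SelfConstruct}(\mathrm{human}) \implies \mathrm{Free}(\mathrm{human})
\]
```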
Because humans have free will and moral responsibility, and because humans run the companies that create AI, the humans who are at fault must be held accountable by the law, which will then decide the fitting punishment. Only when misbehavior is exposed can we make a moral warning of it. It is the people who must be sent to prison, and it is the ideas those people believe in that must be morally evaluated. During the trial, those ideas are examined, which supplies the reason for putting those people in prison.