
Exploring Ethics in Self-Driving Car Control: Is Relying on Technology Too Much?

Imagine sitting in a car at highway speeds without touching the wheel or pedals. This used to sound like an idea from the future, but it has now become a reality. Engineers from Google showed off this technology to the writers of The Second Machine Age a few years ago, and artificial intelligence like this has only improved since. The car performed flawlessly, and it “provided a boring ride” (Brynjolfsson, McAfee 14). Yet despite how well this experiment performed, along with several other applications of artificial intelligence, are humans relying on technology too much?

Erik Brynjolfsson and Andrew McAfee could argue either side of this question. After all, riding in a car controlled by a computer at sixty miles per hour is not something to be taken lightly: “at highway speeds the consequences of driving mistakes can be serious ones…these consequences were suddenly more than just intellectual interest to us” (Brynjolfsson, McAfee 13). Until recently, computer error rarely had a significant effect on our physical well-being. Perhaps we were working on an important document in Word and it failed to save, costing us all of our progress. But consequences like these hardly compare to the physical stakes of the newest applications of computers. As the authors detail, handing the wheel over to a computer is essentially putting one’s life in its hands. While it did not faze the engineers, this is a frightening thought for many others.

As the underlying technology improves in applications such as autonomous driving, computing power will eventually cease to be the issue (Brynjolfsson, McAfee 43). I am going to examine the real emerging issue: the question of ethics. Cars controlled by thousands of lines of code need to make decisions, but in a situation of distress there is not always a safe course of action. Consequently, the accountability for the decisions these machines make needs to be placed somewhere. Having written code myself, I know that a software engineer also faces the dilemma described above, along with several other moral questions. This issue is important to me because I know how easy it is to develop something without considering all of the possible consequences, even on the small scale I have experienced. Leaving ethics behind in this era of extremely personal technology will have deadly consequences.

In the last ten years, self-driving cars have become a significant interest in the technology industry. According to the Franklin Institute, three main technologies make self-driving cars possible: sensors, connectivity, and intricate algorithms. The sensors are the eyes of the computer: hardware such as radar, ultrasonics, and cameras provides an omnidirectional cloud of data which the computer can piece together to decipher its surroundings. The precision of these sensors is better than ever, and each year they achieve higher clarity and range. The second important development is connectivity and data. Just as the car must be aware of its immediate surroundings through its sensors, it must know how to handle the road, weather, and traffic conditions. Companies like Waymo and Tesla already have thousands of vehicles on the road collecting millions of miles of driving data, and each time a self-driving car drives a road again, it both consults and refines this data. This is a crucial part of the development of self-driving cars: as more data is collected about the roads around us, self-driving cars will have a better picture of how they should behave in a wide variety of conditions. The final component powering autonomous driving is the software and control algorithms that interpret the sensor and condition data they are fed. This part is the most complex of all three: it is how the cars actually know what to do in each situation they encounter (“The Science of Self-Driving Cars”).
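
To make the sensor component concrete, here is a minimal sketch of one textbook way such readings could be combined: an inverse-variance weighted average of distance estimates. The sensor names, noise figures, and function are illustrative assumptions, not any manufacturer's actual pipeline.

```python
# Minimal sketch of inverse-variance sensor fusion: each sensor reports a
# distance to the same object plus a noise variance; the fused estimate
# weights the more trustworthy (lower-variance) sensors more heavily.
def fuse(readings):
    """readings: list of (distance_m, variance) pairs, one per sensor."""
    weights = [1.0 / var for _, var in readings]
    return sum(w * d for (d, _), w in zip(readings, weights)) / sum(weights)

# Hypothetical radar, camera, and ultrasonic readings of one obstacle:
print(round(fuse([(24.8, 0.2), (25.5, 1.0), (24.0, 2.0)]), 2))  # ~24.85 m
```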

However, unlike the first two components of self-driving cars, sensors and connectivity, the final one has a unique catch. Sensors are hardware, so they benefit from Moore’s Law, the observation that computing hardware improves over time at a predictable, exponential rate: as commonly measured, computational power doubles roughly every eighteen months. This means that the sensors detecting the objects around a car and the computer aggregating and analyzing this data will keep getting better, to the point where sensor error will be almost nonexistent (Brynjolfsson, McAfee 48). The connectivity aspect is similar: as more and more data is collected, cars will have a better idea of what to expect as they approach intersections, curvy roads, construction zones, and other potential hazards on the roadway. The software running in each of these cars is different because it will not simply improve on its own over time. Software engineers continually face tough development decisions and are beginning to hit a roadblock (Brandom).
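
A quick back-of-the-envelope calculation shows how powerful that doubling rate is; the eighteen-month figure is the one the authors cite, and the script below is just illustrative arithmetic:

```python
# If computational power doubles every 18 months, the growth factor after
# a given number of years is 2 raised to the number of elapsed doublings.
def relative_power(years, doubling_months=18.0):
    doublings = years * 12 / doubling_months
    return 2 ** doublings

print(round(relative_power(1.5)))  # 2x after one doubling period
print(round(relative_power(10)))   # ~102x after a decade
```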

First, teaching computers how to think is a nearly impossible task. Many experts in the field are coming to the realization that programming self-driving cars is more difficult than they thought it would be years ago. The prospect seemed promising when early prototypes behaved flawlessly in isolated environments, but engineers soon realized that simply throwing more computing or sensor power at the problem would not work. The real challenge was figuring out how to program computers to identify situations and act on the data they were fed, explains Gary Marcus, an NYU researcher and expert on the matter (Brandom).

Today, two common approaches are taken in an attempt to solve the self-driving car algorithm problem, which at heart is a challenge of machine learning. First, deep learning involves tediously teaching a computer to recognize certain objects by studying data. But this is difficult when engineers are relying on a finite set of data collected in the field. The second method is rule-based artificial intelligence, in which engineers hand-write the car’s behavior; here the possibility of error is greater because the computer’s ability to create its own behavior is limited. This is especially true when an autonomous vehicle encounters a situation that it has never seen before. The computer will inevitably pick the wrong course of action at some point, and this is the tricky part of programming a computer to “know” the unknown. Marcus says that many companies are turning only to what they know, analyzing big data, but that doing this might not even be the right technique to reach the level of autonomy desired (Brandom).
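
To illustrate the data-driven approach's weakness, here is a minimal sketch of a learned classifier: a 1-nearest-neighbor model over a tiny, hypothetical training set. Like any system trained on finite data, it forces every input, however unfamiliar, into the closest label it already knows:

```python
import math

# A 1-nearest-neighbor "classifier" trained on a tiny, hypothetical dataset.
# It can only echo labels it has already seen, so an input far from all of
# its training data still gets a confident (and possibly wrong) answer.
training_data = [
    ((0.9, 0.1), "pedestrian"),  # (toy feature vector, label)
    ((0.1, 0.9), "vehicle"),
    ((0.5, 0.5), "cyclist"),
]

def classify(features):
    nearest = min(training_data, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]

print(classify((0.85, 0.15)))  # "pedestrian": close to known data
print(classify((9.0, 9.0)))    # still answers, even for a far-off outlier
```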

Fundamentally, programming autonomous vehicles is an artificial intelligence challenge that has reached a standstill. There are only so many concrete rules that can be programmed into the software. A car can be programmed to stop at a red light, but the engineers behind the keyboard cannot predict every possible situation that the car may encounter. This is why artificial intelligence is so important: it allows the car to analyze the data and decide on the best course of action dynamically (Metz).
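
The limit of hand-written rules can be seen in a toy sketch like the following; the situation names and actions are hypothetical, and a real control stack is vastly more involved:

```python
# A toy rule-based controller. Hand-written rules cover only the situations
# the engineers anticipated; everything else falls through to a default.
RULES = {
    "red_light": "stop",
    "green_light": "proceed",
    "stop_sign": "stop_then_proceed",
    "pedestrian_in_crosswalk": "yield",
}

def rule_based_action(situation):
    return RULES.get(situation, "no rule written for this situation")

print(rule_based_action("red_light"))            # stop
print(rule_based_action("mattress_on_highway"))  # no rule written ...
```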

In addition to the challenge of artificial intelligence, I believe there is a bigger roadblock ahead for the development of self-driving cars. When software engineers develop these artificial intelligence algorithms, they are essentially deciding what kind of drivers these vehicles should be. Consequently, there is a huge ethical dilemma regarding autonomous vehicles and how they should behave on the road, and it is already stirring public fear (“Why Self-Driving Cars Must Be Programmed to Kill”).

Figuring out how self-driving cars handle unavoidable situations is an important aspect of how the public views the machines. For example, people would most likely not buy a car if they knew there was any risk that it could sacrifice the owner instead of choosing some alternate course of action. As painful as the issue is, it must be faced if self-driving car manufacturers want to make any headway (“Why Self-Driving Cars Must Be Programmed to Kill”).

The ethical dilemma is largely centered on human psychology. Humans value a situation differently when they are the ones in potential danger. A common example is the age-old dilemma called the Trolley Problem. In its simplest form, a trolley is heading towards a junction: continuing straight will kill five people and heading right will kill one person. A person at the switch must decide whether to pull the switch and save five people, killing one, or let fate play out, killing five but “saving” one. The decision, however, is more complicated than it may seem. In one case, the witness is deliberately causing a person’s death, while in the other, he or she is simply allowing it (“Ethics and Self-Driving Cars”). When asked about a similar hypothetical situation involving self-driving cars, a large majority of survey respondents were comfortable with a utilitarian approach, in which the computer would choose the course of action resulting in the fewest deaths. However, when the wording of the question was changed slightly so that they were the ones behind the wheel, or the ones being sacrificed, support for the method was much lower (“Why Self-Driving Cars Must Be Programmed to Kill”).
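
The utilitarian rule the respondents endorsed is easy to state in code. A minimal sketch follows, with hypothetical maneuvers and outcome counts; a real system would reason over uncertain estimates, not known death tolls:

```python
# Utilitarian choice: among the available maneuvers, pick the one expected
# to cause the fewest deaths. Maneuvers and counts below are hypothetical.
def utilitarian_choice(options):
    """options maps each maneuver to its expected number of deaths."""
    return min(options, key=options.get)

trolley = {"continue_straight": 5, "divert_right": 1}
print(utilitarian_choice(trolley))  # divert_right

# The survey's twist: the rule stays the same even when one of the counted
# deaths is the car's own occupant, which is where public support drops.
```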

As difficult as this ethical question will be to answer, I believe the development of self-driving cars should certainly continue. As more vehicles hit the road, even just with partial self-driving features, lives will be saved. Self-driving cars will eliminate some of the most common factors that lead to human error behind the wheel: drunk driving, anger, fatigue, and distractions such as cell phones, to name a few (“Ethics and Self-Driving Cars”). The authors of The Second Machine Age also explain how the cars will compensate for some of the most difficult (and annoying) human faults while driving. Riding on California Highway 101, they remarked, “It was a car without blind spots. But the software was aware that cars and trucks driven by humans do have blind spots” (Brynjolfsson, McAfee 14). Self-driving cars will save lives by reducing human error and quite literally keeping eyes on the road at all times.

At the same time, the potential dangers of self-driving cars should not be ignored. Security is one of the biggest fears. “So far, just about every computing device we’ve created has been hacked” (Lin). Patrick Lin points to the recent history of computing, in which nearly every device has eventually been hacked, and argues that self-driving cars are not magically exempt from this trend. One can only imagine the potential dangers when a moving machine with human passengers is hacked, tampered with, or hijacked, and how the computer-controlled car would handle such an event (Lin).

With privacy already a huge concern in modern times, self-driving cars put far more sensitive information at stake. Already today, online ads are targeted using our personal information, personal data is constantly leaked and sold by tech giants, and companies piece together “profiles” of users based on their activity online. Consider the level of sensitivity people feel about their credit card information and Social Security number. Now imagine companies having access to where and when you drive, what the car sees while you or it is driving, and much more. Insurance companies could observe consumers’ driving habits to adjust their premiums, which is only one of many alarming potential uses of this sensitive data (Lin).

More worries come with driverless technology, and questions must be asked. Once self-driving cars reach levels 4 and 5, meaning they can drive passengers anywhere from start to finish automatically, will alcohol consumption increase if people know they will not need to drive home? If the behavior of autonomous cars is predictable and follows an obvious pattern, will human drivers take advantage of this by trying to trigger the cars’ crash prevention systems, or by making other dangerous maneuvers in close proximity to them? Furthermore, humans encounter situations every day that require them to bend the law to stay safe, but does this mean that autonomous car makers should deliberately program cars to choose an illegal course of action to preserve safety (Lin)?

Some of the questions presented, both ethical and practical, will be difficult to address during the development of these machines. For example, it would likely be unwise and dangerous to program self-driving cars to strictly follow the law in every situation. At the same time, it is very difficult to determine where the line should be drawn for how closely these vehicles should adhere to the letter of the law. I think that while the questions of security and privacy are important, they will answer themselves as long as the entities developing these vehicles act responsibly and in good faith. The tricky questions are the ethical ones: they are subjective in nature, and different people hold different opinions on them. Ethics and the law frequently conflict with each other, and for humans it is easy to commit an illegal act in order to do the ethically correct thing. For a computer, making this judgment is not so simple (Lin).

In one sense, we might be asking the wrong questions. Debating situations like the Trolley Problem, trying to make self-driving cars imitate human drivers, and looking to human drivers for answers to the pressing questions will not get us there. Humans are actually fairly poor drivers, given the number of distractions and other factors that influence how they perform on the road. It therefore is not logical to use humans as the framework for shaping how self-driving cars should behave (Himmelreich).

We should program self-driving cars to be fairer, safer, and more efficient than humans every day. In discussions of autonomous vehicles, many engineers jump straight to extreme situations that are very unlikely to occur. Instead, they should focus on everyday driving, making it a safer and fairer experience for everyone in the transit system, whether other self-driving cars, human drivers, or pedestrians. Many rules of the road were put in place to compensate for human error and fallibility. Rather than applying the current rules of the traffic system to machines with far greater precision and visibility, we must develop new rules (Himmelreich).

Mobileye, a company within Intel, has recognized the importance of this point and has already begun exploring how this perspective could work from a legal standpoint. With its Responsibility-Sensitive Safety (RSS) concept, it seeks to answer questions of liability and fault on the road that do not fall within the existing rules. The concept could keep driverless cars from being at fault in a crash by using a formula to guide the decision-making process in these vehicles, ensuring that they make 100 percent safe decisions at all times. Imagine an extremely safe driver who will most likely never be at fault in an accident. Now give this driver omnidirectional vision and instant reaction time in all situations. Given the precision of an autonomous car’s sensors, if the vehicle never makes “risky” decisions, as defined by the RSS formula, then the car can never be at fault in an accident (“Intel and Mobileye”).
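
The published RSS model makes “risky” precise with explicit formulas. Below is a sketch of its minimum safe longitudinal following distance, following the formula in the public RSS paper; the parameter values are illustrative assumptions, and this is not Mobileye's implementation:

```python
# Sketch of RSS's minimum safe longitudinal distance between a rear car and
# the car ahead of it. Worst case assumed by the rule: the rear car
# accelerates for its full response time rho, then brakes only gently,
# while the front car brakes as hard as physically possible.
def rss_safe_distance(v_rear, v_front, rho=0.5,
                      a_max_accel=2.0, a_min_brake=4.0, a_max_brake=8.0):
    """Speeds in m/s, accelerations in m/s^2, response time rho in seconds."""
    v_rear_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after_rho ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Two cars at about sixty miles per hour (~27 m/s):
print(round(rss_safe_distance(27.0, 27.0), 1))  # ~66.2 m minimum safe gap
```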

Mobileye’s pursuit is certainly an important one. Determining fault in an accident is a crucial part of assigning blame, charges, and even punishment. Even when a self-driving car is not suspected to be at fault in an accident, its mere involvement stirs public fear. A survey commissioned by Intel concluded that 43 percent of people do not feel safe around self-driving cars. The study also details why people may be frightened by the idea of self-driving cars on the road: accidents, even those that were not the fault of the self-driving car, seriously damage the technology’s public reputation. Based on the wording of the study, this means that nearly half of the drivers on the road would disapprove of even driving alongside these vehicles. In other words, the prospects for public support of self-driving cars are not looking good (Wiggers).

As mentioned before, it is crucial to design driverless cars with both their passengers and their surroundings in mind. When programming the vehicles, neither of these can be put before the other; they must be treated as equally important factors. Two approaches to this design question have been formally distinguished. The first, called the top-down approach, always prioritizes human life. For example, if a car were going to hit either a dog or a pedestrian, this approach would tell the car to take the dog’s life. As one can imagine, however, this does not cover every situation. The second is the bottom-up approach, which focuses more on things inside or part of the car, such as the owner, the car itself, or the manufacturer. Software engineers are looking for some combination of these two ideas that would minimize loss of life and give those inside and outside the car a fair chance (“Ethics and Self-Driving Cars”).
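
One way to picture such a combination is a weighted blend of the two priorities. The sketch below is purely hypothetical: the harm scores, weights, and maneuvers are invented for illustration, and a real system would need far richer models:

```python
# A toy blend of the two approaches: score each candidate maneuver by a
# weighted mix of expected harm to people outside the car (the top-down
# priority) and harm to the occupants or vehicle (the bottom-up priority),
# then pick the lowest score. Every number and name here is hypothetical.
def blended_score(action, w_outside=0.5, w_inside=0.5):
    return w_outside * action["harm_outside"] + w_inside * action["harm_inside"]

options = {
    "swerve": {"harm_outside": 0.1, "harm_inside": 0.6},
    "brake":  {"harm_outside": 0.4, "harm_inside": 0.2},
}
print(min(options, key=lambda name: blended_score(options[name])))  # brake
```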

Thus far, engineers have continually made breakthroughs in the hardware and connectivity of self-driving cars. They have used an intelligent blend of sensors such as radar, ultrasonics, and cameras to piece together a picture of a car’s immediate surroundings. Engineers have also programmed these cars to share this vast amount of collected data with their databases, assembling a larger-scale picture of the highway system across the world. However, these are only two of the three parts of the self-driving car equation. The final and arguably most important component is the software algorithms that interpret situations and instruct the car (“The Science of Self-Driving Cars”). Unlike the first two components, these software algorithms will not magically improve. Moore’s Law, the continual improvement of computer hardware over the last few decades, does not apply to these human-produced lines of code. In attempting to program autonomous vehicles to perform like human drivers, software engineers have hit a roadblock: the human thinking and decision-making process is very complicated and difficult to transcribe into lines of code for a machine to follow (Brynjolfsson, McAfee 14-43). Recall that the decision-making process in self-driving cars is fueled by artificial intelligence, which allows a computer to analyze data it may never have seen before and attempt to pick the best course of action (Metz). In developing these artificial intelligence algorithms, engineers have failed to account for the many flaws of human driving and are missing a prime opportunity to make self-driving cars fairer, safer, and more competent than human drivers on the road. To achieve this, engineers must instead write new rules, such as the Responsibility-Sensitive Safety formula, which aims to ensure 100 percent safe decisions in all situations (“Intel and Mobileye”). It is also important to recognize the many benefits that self-driving cars will bring to the transit system. They will remove human error due to distraction and personal health, and they will bring great convenience to our daily lives by freeing up the large amount of time many people spend commuting every day (“Ethics and Self-Driving Cars”). Therefore, I believe that the development of self-driving cars should not be abandoned, but should continue in a manner that is fair for everyone in the transit system, more efficient than human driving, safe, legally sound, and, most importantly, ethically correct.
