(Baby You Can’t) Drive My Car: The Ethical Implications of Driverless Cars

Driverless cars are one of the hottest topics in media coverage of science today. The idea that a person could simply tell a car where to go, without having to operate it, is alluring, and these vehicles have the potential to make travel both safer and more efficient. Researchers have many promising ideas for how driverless cars will operate: the vehicles are envisioned to combine GPS, radar maps, laser ranging systems, and vehicle-to-vehicle communication to make transport as safe and efficient as possible (1).
These exciting prospects, however, come with understandable doubts about whether these cars will be safe and what it means for transport machines to have an unprecedented level of autonomy. One major issue is how to handle the many ethical predicaments these machines will face. To tackle this problem, researchers have begun to craft algorithms that might “tell” a car how to react in a specific scenario; to some, these formulas, though imperfect, seem to be the most pragmatic answers to these ethical difficulties (2).
An example: a driverless car is about to crash. The crash is inevitable, but the car can veer one of two ways: it can hit an eight-year-old girl or an eighty-year-old woman. Which way should it go?
This example, posed by Patrick Lin in his review of ethics in autonomous cars, has no categorically correct solution. The young girl has her whole life ahead of her and is considered a “moral innocent,” but the older woman has an equal right to life and respect (2). Even professional codes of ethics offer no real guidance here, and it hardly seems morally sound to take no stance at all and let the situation play out arbitrarily and unpredictably.
Scenarios like this have led to disagreement among scholars, who often take different approaches to ethical dilemmas. Should a car be utilitarian, or is the raw number of lives saved not a sufficient metric? Should a car allow a human driver to override the autonomous system? Should a car swerve to avoid one crash even if swerving would cause a different collision? These are all questions that researchers must weigh as they design the algorithms that will govern the outcomes of these situations (2).
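One way to make the stakes of these questions concrete is to notice that any such algorithm ultimately boils down to a scoring rule. The short Python sketch below is a purely hypothetical illustration, not taken from any cited work or any real vehicle; the names, harm estimates, and weights are all invented. It shows a strictly “utilitarian” car that picks whichever maneuver minimizes a weighted sum of expected harm:

# Hypothetical illustration only: a toy "utilitarian" crash-response rule.
# No real autonomous-vehicle system is claimed to work this way; the
# class, fields, and weights below are invented for this example.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_fatalities: float  # estimated lives lost if this option is chosen
    expected_injuries: float    # estimated non-fatal injuries if chosen

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the lowest expected harm.

    A strictly utilitarian rule: weigh fatalities far above injuries
    (the factor of 10 here is arbitrary) and minimize the total.
    """
    def harm(m: Maneuver) -> float:
        return 10.0 * m.expected_fatalities + m.expected_injuries

    return min(options, key=harm)

if __name__ == "__main__":
    options = [
        Maneuver("stay on course", expected_fatalities=0.9, expected_injuries=1.0),
        Maneuver("swerve left", expected_fatalities=0.4, expected_injuries=2.0),
    ]
    print(choose_maneuver(options).name)  # prints "swerve left"

The toy makes the moral murkiness visible: change the fatality weight, or add a term distinguishing passengers from pedestrians, or helmeted from unhelmeted riders, and the “best” maneuver changes with it. The contested ethics live entirely in those numbers.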
The implications of these decisions extend beyond the puzzle of deciding who should fall victim to a crash. Some would argue, for example, that the ethically sound choice is for a driverless car to sacrifice its own passengers when doing so would save a larger number of people than staying on its original trajectory would. Yet surveys suggest that the average buyer would be highly unlikely to purchase and ride in a vehicle programmed to take this course of action (3).
Another question concerns the consequences of individual human choices: consider a car that is certain to hit one of two cyclists, one wearing a helmet and one without. Many would advise programming the car to hit the helmeted rider, who has the greater chance of survival; but that choice has troubling repercussions, since it effectively penalizes riders for wearing helmets and could deter people from wearing them at all (2).
Finally, there is the question of who is responsible should an accident occur. One reasonable suggestion is to hold manufacturers liable for fatalities caused by driverless-car crashes, but the fear of liability might discourage them from developing and improving new products. An alternative is to hold the “driver” responsible no matter what, yet this too seems unfair, given that the rider may have had no way to intervene (4).
A major difficulty of ethical questions like these is that they have no single “right” answer; as scientists write the algorithms that seem most reasonable, it will be important to remember that no solution will please everyone. Through this moral murkiness, however, one thing is certain: driverless cars will transform our world, and programming them to reflect generally acknowledged standards of public safety will be revolutionary.

Julia Canick is a senior in Adams House studying molecular and cellular biology.
Works Cited

[1] Waldrop, M. Mitchell. “No Drivers Required.” Nature 518.7537 (2015): 20.

[2] Lin, Patrick. “Why Ethics Matters for Autonomous Cars.” Autonomes Fahren. Springer Vieweg, Berlin, Heidelberg, 2015. 69-85.

[3] Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “The Social Dilemma of Autonomous Vehicles.” Science 352.6293 (2016): 1573-1576.

[4] Hevelke, Alexander, and Julian Nida-Rümelin. “Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis.” Science and Engineering Ethics 21.3 (2015): 619-630.

Image credit: Wikimedia Commons
