Trolley Problems and Self-Driving Cars

Introduction

The trolley problem is an old moral quandary with no clearly right or wrong answer. It is a kind of worst-case scenario in which one must choose the lesser of two evils. For example, a runaway trolley is set to crash into and kill five people, but by throwing a lever one might spare those five at the cost of the life of one innocent man crossing a connecting set of tracks. Is there a morally right or wrong answer to the question? And how does it apply in the case of self-driving cars? How should an engineer program an autonomous vehicle to respond to such a worst-case scenario? Should the machine be programmed to swerve and kill an innocent man on the sidewalk so as to avoid killing five people stopped in the road ahead? Or is such a scenario even worth contemplating?

The reality is that the trolley problem is most useful as a philosophical tool for identifying the differences between ethical perspectives such as utilitarianism and deontology (Carter). Outside of that exercise, it has little practical merit. At the end of the day, the engineer of the self-driving car must decide which ethical perspective will guide the design and then program the machine accordingly. As Nyholm and Smids point out, apart from the legal ramifications of how an engineer programs a self-driving car, the moral calculus of the trolley problem is too elusive to be settled in code: it is obviously important to take ethical problems seriously, but "reasoning about probabilities, uncertainties and risk-management vs. intuitive judgments about what are stipulated to be known and fully certain facts" is not something that can be effectively left to a machine guided by pre-programmed data (1).

People make the mistake of thinking that logic and reason can simply be applied to machine learning, forgetting that long before the deontological and utilitarian frameworks there existed the classical theory of virtue ethics, that is, character ethics. The argument of this paper is that character ethics is the best approach to solving moral dilemmas for human beings, and that leaving morality up to machines is a bad way for anyone to have to live.

The Self-Driving Car Is Worse than a Trolley Problem

The trolley problem is an ethical puzzle; the self-driving car is a dangerous reality already on the road. As Himmelreich points out, the trolley problem pales in comparison to the everyday ethics of autonomous driving. The self-driving car is, in essence, a deadly object hurtling through space and time, and its safe operation depends upon the programmer's skill and the technology's efficacy. Accidents happen all the time. Teslas are notorious for crashing while in Autopilot mode. They are marketed as fully self-driving, yet a German court recently found that Teslas are not self-driving and that marketing them as such is false advertising (Ewing). Does the world need more false assurances and a false sense of safety? Should people really put so much trust in machines? The common rejoinder is that "planes basically fly themselves these days." But that argument is disingenuous: planes may be flown largely on autopilot, but there are always real pilots in the cockpit who are trained to take over should the need arise. It is when the autopilot cannot be overridden by the actual human pilots that bad things happen. See Boeing's share price and reputation for evidence of that.

People look at the self-driving car and say that it makes life easier: they can sleep on the way to work or read a novel. The reality is that self-driving is a novel technological development that has not yet been perfected (and likely never will be, which is why pilots are still required for air travel). Trusting one's life and the lives of others to a machine programmed by someone on the other side of the world, a programmer who will never be held accountable should an accident occur, is the height of absurdity. Human beings are capable of reason and possess free will, yet they often act irrationally and seem desperate at times to surrender that free will and make themselves slaves, whether to passions, to other men, or to machines. From a virtue ethics standpoint, people should be reluctant to hand over their autonomy to a robot; with self-driving cars, they are asked to do exactly that. Thus, the self-driving car poses a greater moral problem than the theoretical trolley problem. But even the trolley problem itself is problematic.

The Problem with the Trolley Problem
