The Limits of Deontology and Utilitarianism in the Trolley Problem
Introduction
The trolley problem is an old moral quandary that has, essentially, no right or wrong answer. It is a kind of worst-case scenario in which one must choose the lesser of two evils. For example, a runaway trolley is set to crash into and kill five people, but by throwing a lever one might spare those five while taking the life of one innocent man crossing a connecting set of tracks. Is there a morally right or wrong answer to the question? And how does it apply in the case of self-driving cars? How should an engineer program an autonomous vehicle to respond to such a worst-case scenario? Should the machine be programmed to swerve and take the life of an innocent man on the sidewalk in order to avoid killing five people stopped in the road ahead? Or is such a scenario even worth thinking about? The reality is that the trolley problem is most useful as a philosophical tool for identifying the differences between ethical perspectives such as utilitarianism and deontology (Carter). Outside of that exercise, it has little merit. In the end, the engineer of the self-driving car must decide which ethical perspective guides him and program the machine accordingly. As Nyholm and Smids point out, setting aside the legal ramifications of how an engineer programs a self-driving car, the morality of the trolley problem is too elusive for a machine to resolve: it is obviously important to take ethical problems seriously, but “reasoning about probabilities, uncertainties and risk-management vs. intuitive judgments about what are stipulated to be known and fully certain facts” is not something that can effectively be left to a machine guided by pre-programmed data (1). People make the mistake of thinking that logic and reason can simply be applied to machine learning, forgetting that long before the deontological and utilitarian frameworks there existed the classical ethical theory of virtue ethics, that is, character ethics. It is the argument of this paper that character ethics is the best approach to resolving moral dilemmas for humans, and that leaving morality up to machines is a bad way for anyone to have to live.
The Self-Driving Car is Worse than a Trolley Problem
The trolley...