Whenever I tell someone that I studied philosophy in college, that person never seems to know what to say. Few mention a shared interest in philosophy. Some will comment that I was lucky to find a job with such a degree.
As annoying as this can be, I understand. Most people have no idea what philosophy classes are like. Only a handful have read any works of philosophy. Few can describe the usefulness of philosophy.
So it’s nice when there are some real-world applications of philosophy that I can point people to. Even though I think philosophy is valuable apart from these practical applications, it’s still nice to have these examples.
Trolleys and Self-Driving Cars
I was recently listening to some interviews about the current state of artificial intelligence. The A.I. boom is fascinating; it is already changing our world and, I think, will utterly reshape it. But it will also bring some ethical problems to the fore.
The clearest example is with self-driving cars. The A.I. has to decide what to do in situations that arise while driving. And given the millions of miles that automobiles travel every day, it is all but certain that a situation like the following will occur: the car will crash into a bus full of people if it continues on its current path, likely killing many of them. The car can swerve onto the sidewalk, but doing so will kill a single pedestrian.
What should the A.I. be programmed to do? Continue and so kill many people? Or swerve and kill one?
This is not a problem of computer science or something that can be solved with engineering ingenuity (though such ingenuity can greatly reduce the likelihood that such situations will occur). Instead, a decision has to be made about the best moral option.
You might think that the best option is obvious: program the car to swerve and kill the one pedestrian. Better one man die than a whole busload, right?
I don’t think the answer is so obvious. But notice that you have already begun to do philosophy. Deliberating about which choice is morally best is philosophical deliberation. And people actually have to program the A.I. to do this. This is no longer just the domain of philosophers in their ivory towers.
An article appeared in The Washington Post several months ago discussing this problem. The author wrote:
Computer programmers can’t just shrug their shoulders. They have to decide how to program the vehicle. And how do you write an algorithm for all these different kinds of situations?
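To make the programmer's dilemma concrete, here is a toy sketch (purely illustrative; the function names and the simple "expected deaths" numbers are my own, not anything from real autonomous-vehicle software) of how two rival ethical theories would resolve the very same scenario differently:

```python
# A toy illustration of how two ethical theories resolve the same crash
# scenario. The scenario and numbers are made up for illustration only.

def utilitarian_choice(options):
    """Outcome-based rule: pick the option with the fewest expected deaths."""
    return min(options, key=lambda o: o["expected_deaths"])

def deontological_choice(options):
    """Duty-based rule: never actively intervene in a way that kills someone.
    If every action kills, refuse to intervene and stay the course."""
    harmless = [o for o in options if o["expected_deaths"] == 0]
    if harmless:
        return harmless[0]
    return next(o for o in options if o["action"] == "continue")

options = [
    {"action": "continue", "expected_deaths": 5},  # crash into the bus
    {"action": "swerve", "expected_deaths": 1},    # hit the pedestrian
]

print(utilitarian_choice(options)["action"])    # swerve
print(deontological_choice(options)["action"])  # continue
```

The point of the sketch is not that either rule is correct; it is that the programmer cannot avoid choosing one. Whatever the algorithm does in this situation just is an answer to the moral question.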
Despite the dismissive rejections of philosophy that I’ve heard from some in the sciences, engineers are having to get their hands dirty with a genuine philosophical issue that has been discussed at great length for decades. In this case, they are having to work through the Trolley Problem and its variations.
The Trolley Problem
The earlier situation I gave is a basic version of the Trolley Problem. Imagine you are driving a trolley and nearing a switch in the track. You notice that if you remain on the current track, you will kill five workmen who are working on the track, unaware that you are speeding toward them. You can pull a lever to switch tracks, but there is one worker on that other track who will be killed if you do so.
What do you do? Nothing and kill the five? Act by pulling the lever and kill the one?
One disagreement in ethical reflection is how the outcomes of our actions should affect our ethical deliberations. Normally, we would consider it immoral to do an action that leads to one person being killed. But it also seems that we have an ethical obligation to save lives if possible. So would it be wrong to pull the lever and save the five workers at the expense of one worker’s life? Or is it simply wrong to take an action that kills someone?
The Problem Amplified
What makes this problem so interesting is that so many variations can be given that pressure you in different ways.
Let’s say that you think it’s obviously the right decision to switch tracks to save the five. Why? Because when given two options, the one with the lesser loss of life is morally best.
Here’s a tweaked version of the Trolley problem: imagine that you are on a bridge under which the trolley tracks go. You see the trolley speeding uncontrollably down the track and, just on the other side of the bridge, you see five workers unaware that they are about to be killed. There is no switch to send the trolley to a different track. Unless it is stopped, five workers will be killed.
Fortunately, standing next to you is a fat man. You know that if you push him off the bridge, he’ll land on the track and stop the trolley before it hits the workers.
(You are too skinny to stop the trolley, which is why you don’t jump. The man next to you, though, is big enough to block the track.)
Should you push the overweight man off the bridge to his certain death in order to stop the trolley from killing the five other men? If not, how does this differ from the earlier version of the trolley problem?
If you say “No!”, ask yourself why. If the outcome that saves more lives is morally best, then why not push the fat man in front of the trolley? Perhaps because you should not intend harm to someone else, even if that harm is necessary to save five other people.
Trolleys, A.I., and Ethics
Situations similar enough to the Trolley Problem will arise for A.I.-controlled vehicles, and those designing the A.I. have to determine how to handle them. Philosophers have reflected deeply on the ethics of these situations. I am aware of no consensus among philosophers on how to answer the Trolley Problem, but we need to think carefully and deeply about these problems — particularly when they arise in practice — and that is what philosophers tend to excel at.
So, go read philosophy. Think. Read. And think some more.
And watch out for A.I. controlled vehicles.