How could self-driving cars make ethical decisions about who to kill?

asked 3 weeks ago · 14 answers · 8.2K views

Obviously, self-driving cars aren't perfect, so imagine that a Google car (as an example) gets into a difficult situation.

Here are a few examples of unfortunate situations that could arise from a chain of events:

  • The car is heading toward a crowd of 10 people crossing the road; it cannot stop in time, but it can avoid killing the 10 people by hitting a wall (killing its passengers).
  • Avoiding killing the rider of a motorcycle, considering that the probability of survival is greater for the passengers of the car.
  • Killing an animal on the street in favour of a human being.
  • Purposely changing lanes to crash into another car to avoid killing a dog.

And here are a few dilemmas:

  • Does the algorithm recognize the difference between a human being and an animal?
  • Does the size of the human being or animal matter?
  • Does it count how many passengers it has vs. people in front of it?
  • Does it "know" when babies/children are on board?
  • Does it take age into account (e.g. killing the older first)?

From a technical perspective, how would an algorithm decide what it should do? Is it aware of the considerations above (estimating the probable number of deaths), or not (killing people just to avoid its own destruction)?

14 Answers

For a driverless car that is designed by a single entity, the best way for it to make decisions about whom to kill is by estimating and minimizing the probable liability.

It doesn't need to identify all the potential victims in the area with perfect accuracy to have a defense for its decision; it only needs to identify them as well as a human could be expected to.

It doesn't even need to know the age and physical condition of everyone in the car: it can ask for that information, and if the passengers refuse, it has the defense that they chose not to provide it and therefore took responsibility for depriving it of the ability to make a better decision.

It only has to have a viable model for minimizing exposure of the entity to lawsuits, which can then be improved over time to make it more profitable.
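To make the liability-minimization idea concrete, here is a minimal, purely illustrative sketch: candidate maneuvers are scored by probability-weighted liability cost and the cheapest one is chosen. The maneuver names, probabilities, and cost figures below are invented assumptions, not values from any real system.

```python
# Illustrative sketch of "minimize probable liability": score each candidate
# maneuver by its expected liability cost and pick the cheapest.
# All names, probabilities, and costs below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    outcome_probs: dict[str, float]  # estimated probability of each harmful outcome

# Assumed liability cost per outcome, in arbitrary monetary units.
LIABILITY_COST = {
    "pedestrian_injury": 10_000_000,
    "occupant_injury": 5_000_000,
    "property_damage": 100_000,
}

def expected_liability(m: Maneuver) -> float:
    """Probability-weighted sum of liability costs for one candidate maneuver."""
    return sum(p * LIABILITY_COST[outcome] for outcome, p in m.outcome_probs.items())

def choose_maneuver(candidates: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest expected liability."""
    return min(candidates, key=expected_liability)

if __name__ == "__main__":
    options = [
        Maneuver("brake_hard", {"pedestrian_injury": 0.3, "occupant_injury": 0.05}),
        Maneuver("swerve_into_wall", {"occupant_injury": 0.6, "property_damage": 1.0}),
    ]
    # With these made-up numbers, swerving (3.1M expected) beats braking (3.25M).
    print(choose_maneuver(options).name)
```

The point is only the shape of the computation; the hard part is choosing the probabilities and costs, which is exactly what this answer argues will be driven by exposure to lawsuits.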

Personally, I think this might be an overhyped issue. Trolley problems only occur when the situation is optimized to prevent "3rd options".

A car has brakes, does it not? "But what if the brakes don't work?" Well, then the car is not allowed to drive at all. Even in regular traffic, human drivers are taught to limit their speed so that they can stop within the distance they can see. Solutions like these reduce the possibility of a trolley problem ever arising.
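As a rough illustration of the "stop within the distance you can see" rule, the sketch below uses the standard kinematic approximation for stopping distance; the reaction time and friction coefficient are assumed textbook values, not measurements from any real vehicle.

```python
# Rough stopping-distance check: can the car stop within its visible range?
# Uses the standard approximation d = v * t_react + v^2 / (2 * mu * g).
# Reaction time and friction coefficient are assumed, illustrative values.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_mps: float, reaction_time_s: float = 1.0,
                      friction_mu: float = 0.7) -> float:
    """Reaction distance plus braking distance, in metres."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * friction_mu * G)

def max_safe_speed(visible_range_m: float, reaction_time_s: float = 1.0,
                   friction_mu: float = 0.7) -> float:
    """Largest speed (m/s) whose stopping distance still fits in the visible range."""
    # Positive root of v*t + v^2/(2*mu*g) - d = 0.
    a = 1.0 / (2 * friction_mu * G)
    b = reaction_time_s
    c = -visible_range_m
    return (-b + (b * b - 4 * a * c) ** 0.5) / (2 * a)

if __name__ == "__main__":
    print(f"{stopping_distance(20):.1f} m needed to stop from 20 m/s (72 km/h)")
    print(f"{max_safe_speed(50):.1f} m/s is the highest speed for 50 m of visibility")
```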

As for animals... if there is no explicit effort to deal with humans on the road, I think animals will be treated the same way. That sounds implausible at first: roadkill happens often and human "roadkill" is clearly unacceptable, but animals are a lot smaller and harder to see than humans, so detecting humans should be the easier problem, which will prevent many of those accidents.

In other cases (bugs, faults while driving, multiple failures stacked on top of each other), accidents may still occur; they'll be analysed, and vehicles will be updated to avoid causing similar situations again.

This is the well-known Trolley Problem. As Ben N said, people disagree on the right course of action for trolley-problem scenarios, but it should be noted that with self-driving cars these scenarios are expected to be extremely rare. So not much effort will be put into the problems you are describing, at least in the short term.

How could self-driving cars make ethical decisions about who to kill?

It shouldn't. Self-driving cars are not moral agents. Cars fail in predictable ways. Horses fail in predictable ways.

the car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers),

In this case, the car should slam on the brakes. If the 10 people die, that's just unfortunate. We simply cannot trust all of our beliefs about what is taking place outside the car. What if those 10 people are really robots made to look like people? What if they're trying to kill you?

avoiding killing the rider of the motorcycle considering that the probability of survival is greater for the passenger of the car,

Again, hard-coding these kinds of sentiments into a vehicle opens its occupants up to all kinds of attacks, including "fake" motorcyclists. Humans are barely equipped to make these decisions on their own, if at all. When in doubt, just slam on the brakes.

killing an animal on the street in favour of a human being,

Again, just hit the brakes. What if it was a baby? What if it was a bomb?

changing lanes to crash into another car to avoid killing a dog,

Nope. The dog was in the wrong place at the wrong time. The other car wasn't. Just slam on the brakes, as safely as possible.

Does the algorithm recognize the difference between a human being and an animal?

Does a human? Not always. What if the human has a gun? What if the animal has large teeth? Is there no context?

  • Does the size of the human being or animal matter?
  • Does it count how many passengers it has vs. people in front of it?
  • Does it "know" when babies/children are on board?
  • Does it take age into account (e.g. killing the older first)?

Humans can't agree on these things. If you ask a cop what to do in any of these situations, the answer won't be, "You should have swerved left, weighed all the relevant parties in your head, assessed the relevant ages between all parties, then veered slightly right, and you would have saved 8% more lives." No, the cop will just say, "You should have brought the vehicle to a stop, as quickly and safely as possible." Why? Because cops know people normally aren't equipped to deal with high-speed crash scenarios.

Our target for a "self-driving car" should not be a moral agent on par with a human. It should be an agent with the reactive complexity of a cockroach, which fails predictably.
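To make the "predictable, brake-first agent" position concrete, here is a hypothetical sketch of a deliberately simple fallback policy: it never weighs lives or identities, it just reacts the same way to any imminent obstacle. The class names and the time-to-collision threshold are invented for illustration.

```python
# Hypothetical "predictable fallback" policy: when a collision is imminent,
# do the same thing every time, regardless of what the obstacle is.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float         # distance to the obstacle along the current path
    closing_speed_mps: float  # how fast we are approaching it

def time_to_collision(obs: Obstacle) -> float:
    if obs.closing_speed_mps <= 0:
        return float("inf")
    return obs.distance_m / obs.closing_speed_mps

def fallback_command(obstacles: list[Obstacle], ttc_threshold_s: float = 2.0) -> str:
    """Deliberately ignores what the obstacle *is*: the response is identical
    for a person, an animal, or a cardboard box."""
    if any(time_to_collision(o) < ttc_threshold_s for o in obstacles):
        return "MAX_BRAKE_HOLD_LANE"
    return "CONTINUE"

if __name__ == "__main__":
    print(fallback_command([Obstacle(distance_m=15, closing_speed_mps=12)]))
    # -> MAX_BRAKE_HOLD_LANE (time to collision 1.25 s, below the 2 s threshold)
```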

Frankly, I think this issue (the Trolley Problem) is inherently overcomplicated, since the real-world solution is likely to be pretty straightforward. Like a human driver, an AI driver will be programmed to act at all times in a generically ethical way, always choosing the course of action that does no harm, or the least harm possible.

If an AI driver encounters danger such as imminent damage to property, obviously the AI will brake hard and aim the car away from breakable objects to avoid or minimize impact. If the danger is hitting a pedestrian, a car, or a building, it will choose to collide with the least precious or expensive object it can, to do the least harm, placing a higher value on a human than on a building or a dog.

Finally, if the choice of your car's AI driver is to run over a child or hit a wall... it will steer the car, and you, into the wall. That's what any good human would do. Why would a good AI act any differently?
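A minimal sketch of the "least precious object first" ordering described in this answer might look like the following; the categories and weights are invented placeholders, not a ranking any manufacturer has published.

```python
# Illustrative harm ranking: if some collision is truly unavoidable, prefer the
# target with the lowest assumed harm cost. All weights are arbitrary placeholders.

HARM_COST = {
    "human": 1_000_000,
    "large_animal": 10_000,
    "vehicle": 5_000,
    "building": 2_000,
    "small_animal": 1_000,
    "barrier": 500,
}

def least_harmful_target(unavoidable_targets: list[str]) -> str:
    """Among collision targets that cannot all be avoided, pick the cheapest."""
    return min(unavoidable_targets, key=HARM_COST.__getitem__)

if __name__ == "__main__":
    # If the only options are hitting a child or a wall, the wall "wins",
    # which is the behaviour this answer argues for.
    print(least_harmful_target(["human", "barrier"]))  # -> barrier
```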

The answer to a lot of those questions depends on how the device is programmed. A computer capable of driving around and recognizing where the road goes is likely to have the ability to visually distinguish a human from an animal, whether that be based on outline, image, or size. With sufficiently sharp image recognition, it might be able to count the number and kind of people in another vehicle. It could even use existing data on the likelihood of injury to people in different kinds of vehicles.

Ultimately, people disagree on the ethical choices involved. Perhaps there could be "ethics settings" for the user/owner to configure, like "consider life count only" vs. "younger lives are more valuable." I personally would think it's not terribly controversial that a machine should damage itself before harming a human, but people disagree on how important pet lives are. If explicit kill-this-first settings make people uneasy, the answers could be determined from a questionnaire given to the user.
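Purely to illustrate what owner-configurable "ethics settings", or a questionnaire-derived profile, could look like, here is a hypothetical configuration sketch; every field name and default is an invention, not a feature of any actual vehicle.

```python
# Hypothetical "ethics settings" profile, as floated in the answer above.
# Field names and defaults are invented for illustration only.

from dataclasses import dataclass

@dataclass
class EthicsSettings:
    count_lives_only: bool = True         # ignore age/size, just minimise deaths
    weight_younger_lives_more: bool = False
    protect_pets: bool = False            # treat pets closer to humans than to objects
    self_sacrifice_allowed: bool = True   # may harm the vehicle/occupants to spare others

def from_questionnaire(answers: dict[str, bool]) -> EthicsSettings:
    """Derive settings from a questionnaire instead of explicit kill-this-first toggles."""
    return EthicsSettings(
        count_lives_only=answers.get("only_totals_matter", True),
        weight_younger_lives_more=answers.get("children_count_extra", False),
        protect_pets=answers.get("pets_are_family", False),
        self_sacrifice_allowed=answers.get("would_hit_wall_to_save_child", True),
    )
```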

“This moral question of whom to save: 99 percent of our engineering work is to prevent these situations from happening at all.” —Christoph von Hugo, Mercedes-Benz

This quote is from an article titled "Self-Driving Mercedes-Benzes Will Prioritize Occupant Safety over Pedestrians", published October 7, 2016, by Michael Taylor; retrieved 8 November 2016.

Here's an excerpt that outlines the technological, practical solution to the problem.

The world’s oldest carmaker no longer sees the problem, similar to the question from 1967 known as the Trolley Problem, as unanswerable. Rather than tying itself into moral and ethical knots in a crisis, Mercedes-Benz simply intends to program its self-driving cars to save the people inside the car. Every time.

All of Mercedes-Benz’s future Level 4 and Level 5 autonomous cars will prioritize saving the people they carry, according to Christoph von Hugo, the automaker’s manager of driver assistance systems and active safety.

The article also contains the following fascinating paragraph.

A study released at midyear by Science magazine didn’t clear the air, either. The majority of the 1928 people surveyed thought it would be ethically better for autonomous cars to sacrifice their occupants rather than crash into pedestrians. Yet the majority also said they wouldn’t buy autonomous cars if the car prioritized pedestrian safety over their own.

I think that in most cases the car would default to reducing speed as its main option, rather than steering toward or away from a specific choice. As others have mentioned, having settings related to ethics is just a bad idea. What happens if two cars programmed with opposite ethical settings are about to collide? The cars could potentially have a system that overrides the user settings and picks the most mutually beneficial solution. It's indeed an interesting concept, and one that definitely has to be discussed and standardized before widespread implementation. Putting ethical decisions in a machine's hands makes the resulting liability hard to picture at times.
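As a sketch of the "override the settings and pick the most mutually beneficial solution" idea, the following hypothetical negotiation step scores joint maneuvers for both cars and ignores their individual ethics settings; the maneuver names and harm scores are invented for illustration.

```python
# Hypothetical joint-maneuver selection for two vehicles about to collide.
# A shared protocol scores joint outcomes and both cars execute the best pair,
# overriding any conflicting per-car ethics settings.

from itertools import product

# Invented expected-harm scores (lower is better) for each pair of choices.
JOINT_HARM = {
    ("brake", "brake"): 2.0,
    ("brake", "swerve_right"): 0.5,
    ("swerve_left", "brake"): 0.5,
    ("swerve_left", "swerve_right"): 3.0,  # both end up in the same escape lane
}

def mutually_best(car_a_options: list[str], car_b_options: list[str]) -> tuple[str, str]:
    """Pick the joint maneuver pair with the lowest total expected harm."""
    joint = [pair for pair in product(car_a_options, car_b_options) if pair in JOINT_HARM]
    return min(joint, key=JOINT_HARM.__getitem__)

if __name__ == "__main__":
    print(mutually_best(["brake", "swerve_left"], ["brake", "swerve_right"]))
    # -> ('brake', 'swerve_right'), one of the two pairs scoring 0.5
```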

They shouldn't. People should.

People cannot put the responsibility for ethical decisions into the hands of computers. It is our responsibility as computer scientists/AI experts to program the decisions that computers make. Will human casualties still result from this? Of course they will; people are not perfect, and neither are programs.

There is an excellent in-depth debate on this topic here. I particularly like Yann LeCun's argument regarding the parallel ethical dilemma of testing potentially lethal drugs on patients. Similar to self-driving cars, both can be lethal while having good intentions of saving more people in the long run.
