Should your self-driving car kill you to save others?

In a split-second, the car has to make a choice with moral—and mortal—consequences. Three pedestrians have just blindly stumbled into the crosswalk ahead. With no time to slow down, your autonomous car will either hit the pedestrians or swerve off the road, probably crashing and endangering your life. Who should be saved?
A team of three psychologists and computer scientists at the University of Toulouse Capitole in France just completed an extensive study on this ethical quandary. They ran half a dozen online surveys posing various forms of this question to U.S. residents, and found an ever-present dilemma in people's responses.
Surprisingly or not, the results of the study show that most people want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs.

http://www.popularmechanics.com/cars/a21492/the-self-driving-dilemma/
 
It isn't a simple mathematical problem. What if the driverless car contains a surgeon on his way to save somebody's life - and what if the three pedestrians are gang-bangers on their way to sell drugs to children?

There are times when slow thinking is an advantage. It can be a comfort having dumb luck to fall back on.
 
It isn't a simple mathematical problem. What if the driverless car contains a surgeon on his way to save somebody's life - and what if the three pedestrians are gang-bangers on their way to sell drugs to children?
Saving a life versus many lives is not predicated on placing a differential value on those lives.

Imagine for a moment that social media gets ubiquitous enough that the car's software can actually identify the potential victims. Do you think it would be a relevant criterion in its decision-making to value one scientist's life over that of several thugs?

Now there's a slippery slope indeed.

Heck, why not rate them based on salary, societal contribution, tax bracket, heredity and even skin colour?

Imagine a world in which your Credit Rating is trivial compared to your Life Value Rating.

"Man, I'm not going out there! My LVR is only 450. I'm a walking target! Last week, a Google Car went out of control and mowed down six guys behind on their taxes and a homeless guy, all to save one lawyer with an LVR of 1500."

"Don't fret. I'll introduce you to a guy who sells LVR points - every G-note wil get you +100 points. You can buy your immunity. You can be untouchable!"


No, at the moment of truth, we are all created - and destroyed - equal.
 
Can this type of car distinguish three people from three ostriches?
Of course it can. But hitting three ostriches (how they got into the intersection, I'll never know!) - or, for that matter, three people - might endanger the passenger just as much as going through the plate-glass window of a restaurant, where it may hit five patrons; or into a bus shelter with a kindergarten class huddled inside; or into however many people are lined up outside a movie theater the car swerved toward to avoid the crossing pedestrians.

It's a silly question. These hypothetical situations are designed with only two choices. In real-life emergencies, there might be three or more choices, or none. You can't prepare for every scenario, and neither a human driver nor a robot could possibly make the best available choice every time, even with all the information, because the programmers could never agree on what the best choice was in every weird eventuality they could think of.
Math is all they can use.

One thing you can be sure of: the robot can collect and process more information, sooner and faster, than a human driver, and choose the option with the lowest probability of casualties more accurately (a rough sketch of that choice appears below).
People have trouble giving up control, but they always do, eventually, for security or convenience.
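
To make that concrete, here is a minimal sketch (in Python) of what "choose the lowest probability of casualties" might look like. Everything in it - the maneuver names, the probabilities, the tie-breaking rule - is invented for illustration; it is not how any real autonomous vehicle is programmed.

    # Hypothetical sketch: pick the maneuver with the fewest expected
    # casualties, breaking ties on the probability that anyone is hurt.
    # All names and numbers are made up for illustration.

    candidate_maneuvers = {
        "brake_straight": {"p_any_casualty": 0.30, "expected_casualties": 0.90},
        "swerve_left":    {"p_any_casualty": 0.10, "expected_casualties": 0.10},
        "swerve_right":   {"p_any_casualty": 0.60, "expected_casualties": 1.80},
    }

    def choose_maneuver(options):
        """Return the option minimizing expected casualties, then risk."""
        return min(
            options,
            key=lambda m: (options[m]["expected_casualties"],
                           options[m]["p_any_casualty"]),
        )

    print(choose_maneuver(candidate_maneuvers))  # -> swerve_left

Note that body count is the only criterion here: nothing in the sketch knows or cares who the potential victims are.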
 
In a split-second, the car has to make a choice with moral—and mortal—consequences. Three pedestrians have just blindly stumbled into the crosswalk ahead. With no time to slow down, your autonomous car will either hit the pedestrians or swerve off the road, probably crashing and endangering your life. Who should be saved?
This effectively never happens. It's like asking "what should the autopilot of a 747 do - turn too hard to avoid a collision with an MD-80 and shear its wings off in the process, or allow its wingtip to slice through the MD-80 and kill everyone on board?" The autopilot doesn't make decisions like that, and even if it did, neither it nor the autonomous vehicle in the example above has the control to be able to implement either scenario reliably.
 
Pretty simple, really:

The car should be set up to protect its occupants. With that as the known standard, everyone and everything around it can adjust, or be adjusted, to best protect itself - i.e., just as it is already understood to be a bad idea to step in front of a moving vehicle!
 
With no time to slow down, your autonomous car will either hit the pedestrians or swerve off the road, probably crashing and endangering your life. Who should be saved?
If action is not taken, the collision with the pedestrians is imminent and certain. Death is very likely, especially since they a] are completely unprotected, and b] will take the full brunt of a moving vehicle.

If action is taken, the injury to the occupant is neither imminent nor certain. Death is unlikely, especially since the occupant is a] well protected by the car, and b] at rest with respect to it. Any collision will be mitigated by the full array of protective features - airbags, crumple zones, etc.
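
Put as back-of-the-envelope arithmetic - with every probability invented purely for illustration - the asymmetry looks like this:

    # Hypothetical expected-fatality comparison for the two actions above.
    # Both probabilities are invented for illustration only.

    p_death_unprotected_pedestrian = 0.8   # full brunt, no protection
    p_death_belted_occupant = 0.05         # airbags, crumple zones, restraints

    straight = 3 * p_death_unprotected_pedestrian   # 2.40 expected deaths
    swerve = 1 * p_death_belted_occupant            # 0.05 expected deaths

    print(f"straight: {straight:.2f}  swerve: {swerve:.2f}")
    # Under these made-up numbers, swerving wins by body count ~50 to 1.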
 
Any human driver will instinctively try to save the pedestrians - that's the normal reaction - and may hurt himself in the process.

But..

If the car manufacturer tells me that this car has a feature whereby, under certain rarest-of-rare circumstances, the electronics will give more importance to the lives of the pedestrians than to those of the occupants... will I buy that car? Possibly not, and certainly not in an era when the system can be hacked. So no manufacturer will commit professional hara-kiri by designing cars that compromise occupant safety...


Results are not surprising... if my life is in someone else's hands, then my safety must be paramount for that someone else.
 
In a split-second, the car has to make a choice with moral—and mortal—consequences. Three pedestrians have just blindly stumbled into the crosswalk ahead. With no time to slow down, your autonomous car will either hit the pedestrians or swerve off the road, probably crashing and endangering your life. Who should be saved?
A team of three psychologists and computer scientists at the University of Toulouse Capitole in France just completed an extensive study on this ethical quandary. They ran half a dozen online surveys posing various forms of this question to U.S. residents, and found an ever-present dilemma in people's responses.
Surprisingly or not, the results of the study show that most people want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs.

http://www.popularmechanics.com/cars/a21492/the-self-driving-dilemma/

This is Utilitarianism - hahaha, that's funny!

Personally, I would prefer my car to save me, but if all cars are only trying to save their owners, I wonder how that would turn out. That's a bit of a gamble, isn't it?
 
Do you think it would be a relevant criterion in its decision-making to value one scientist's life over that of several thugs?
I think it's at least as relevant as raw numbers.
Heck, why not rate them based on salary, societal contribution, tax bracket, heredity and even skin colour?
That's the point. The car has to use some criterion and that criterion will not please everybody in every situation. With human drivers, at least we have the satisfaction of putting somebody in jail if his criteria don't match up to ours.
 
Anyway, a self-driving car wouldn't be speeding on an urban street with pedestrian crossings. And it would detect both the drunks and their trajectory in plenty of time to slow down and honk.
It's silly to second-guess something with more acute perception, better focused attention and a thousand times faster reflexes than we have.
 
"All men are created equal - but some are more equal than others"?
Sure, morality by body count is simpler for a machine to handle. So how would it distinguish between three on their way to sell drugs and three on their way to volunteer at the soup kitchen?
 
One thing we should consider - and it will sober up the discussion - is that this is an extreme edge case.

There is almost no circumstance in which it will occur.

1] As Jeeves points out, the way to avoid injuries is not by acting 1/10 of a second before the collision and deciding what's the least worst option. The way to avoid injuries is to not have accidents in the first place. Even human drivers are taught to always drive safely, always be aware, and always have an "exit plan".

2] There are precious few circumstances where simply braking to a stop is not - by far - the best plan, and where, failing that, taking your "exit" option is not the next best.

We are inventing a scenario in which a self-driving car has found itself in a situation with no acceptable exit plan - doomed not merely to a collision, but to a fatal one.
 
Sure, morality by body count is simpler for a machine to handle.
It's not "simpler for a machine to handle"; it's best full stop. For both machine and human.

So how would it distinguish between three on their way to sell drugs and three on their way to volunteer at the soup kitchen?
It wouldn't. It shouldn't. And neither would I. And hopefully, neither would you.

When it comes to life and death, people are not given differing values. (OK, well, except maybe babies.)
 
It's not "simpler for a machine to handle"; it's best full stop. For both machine and human.
I didn't say anything about what's "best". I said that it's easier for a computer to distinguish one from three than to distinguish the relative "value" of three versus three. It's easier for humans too but we have the advantage of being slow thinkers so we usually don't have to make the decision explicitly. We also have the advantage of being able to make excuses in hindsight.

When it comes to life and death, people are not given differing values.
Of course they are, all the time. Take transplant lists, for example. They're based on who's likely to last longer. Take insurance. The ones who are more likely to have claims pay more.

So, if we're going to program our cars to decide who to kill, why not bet on the most likely outcome there too?
 
How many human drivers, in the split second they become aware of a pedestrian, can tell what that pedestrian is on his way to doing? How many can then take a different course of action based on their assessment of the value of the pedestrian?
In real life, we don't make value judgments in an emergency. We suspend thought and react reflexively. The limbic system has all the moral sense of a computer.
 
I didn't say anything about what's "best". I said that it's easier for a computer to distinguish one from three than to distinguish the relative "value" of three versus three.
I know, but why introduce it?

I assumed you were saying "it's easy, and that's why we would program a computer to use it (it's expedient) - even if it's not the best criterion".

I'm saying it is the best criterion - because we don't decide that "person X" is hardly worth saving.

Of course they are, all the time. Take transplant lists, for example. They're based on who's likely to last longer.
That's not value. They don't choose recipients based on their value to society.

I could see a car being programmed to choose between hitting a pedestrian and hitting a car with an occupant. That would dramatically increase the likelihood of both potential victims surviving.

So, if we're going to program our cars to decide who to kill, why not bet on the most likely outcome there too?
What does "most likely outcome" mean? What does it have to do with soup kitchen volunteers versus gang bangers?

You were talking about value, not likelihood of outcome (whatever that means).
 