Should your self-driving car kill you to save others?

I think bill is probably right. There won't be algorithms to determine whom I will hit. There will be algorithms to determine how I can avoid an accident.

Human drivers don't think, "should I swerve into oncoming traffic, or should I run over the little old lady on the sidewalk?" We swerve in the direction our instincts take us and then say "Oh shit!" There will probably be standard protocols: IF possible, stop; ELSE swerve in the direction that avoids oncoming traffic. (Left in remnants of the British Empire, right everywhere else in the world.)
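In rough, pseudocode-ish Python, that kind of fixed-priority protocol might look something like the sketch below (the function name, the boolean inputs and the two-way split are purely illustrative, not anyone's actual controller):

Code:
# Purely illustrative sketch of a fixed-priority "standard protocol".
# Not anyone's real controller; the names and inputs are invented.
def avoidance_action(can_stop_in_time: bool, traffic_keeps_left: bool) -> str:
    """Brake first; otherwise swerve away from oncoming traffic."""
    if can_stop_in_time:
        return "BRAKE_TO_STOP"
    # Oncoming traffic is on the right where traffic keeps left,
    # so swerve left there and right everywhere else.
    return "SWERVE_LEFT" if traffic_keeps_left else "SWERVE_RIGHT"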
I think the premises of the thread are that, given time and progress, it is inevitable that

1] there must be - will be - edge case accidents wherein someone's death ends up being unavoidable,
2] AI will have enough processing power and speed that it has sufficient time and operability to, conceivably, be in a position where a choice could be made.

Sure it's an edge case, but this is a science forum, where edge cases (such as relativistic speeds and black holes) are routinely discussed.

But more than an edge case, it's got to happen (again, given enough time and progress).
 
No doubt we will, and that will result in cars that are safer than human-driven cars today.
But they will still not be deciding trolley problems.
Precisely... Why not?
I can even suggest the AI might be able not only to count how many people would die in the different alternative scenarios of the unfolding accident but how many years of potential life would be lost in each scenario depending on the age of each person involved. The car's AI might also be able to identify particular people as having greater value, say, a president/dictator/foreign diplomat, a renowned and beloved scientist/artist/activist etc. Maybe each car will have its own theory about how to go about it, perhaps with some inputs from the passengers, from the owner of the car, or from local legislation and case law.
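As a purely hypothetical sketch of the kind of comparison I mean (every field name, weight and figure below is invented, and I'm not claiming any real car would run this):

Code:
# Hypothetical sketch only. The fields, weights and life-expectancy
# figure are invented for illustration, not a proposal for real software.
ASSUMED_LIFE_EXPECTANCY = 80  # years

def expected_cost(scenario):
    """Score one candidate outcome by expected deaths and expected
    years of potential life lost."""
    deaths = 0.0
    life_years_lost = 0.0
    for person in scenario["people"]:
        p = person["probability_of_death"]  # assumed to be estimable
        deaths += p
        life_years_lost += p * max(0, ASSUMED_LIFE_EXPECTANCY - person["age"])
    # How the two criteria are weighted is itself a policy choice,
    # perhaps set by the owner, the passengers or legislation.
    return deaths + 0.01 * life_years_lost

def choose_scenario(scenarios):
    """Pick the candidate outcome with the lowest expected cost."""
    return min(scenarios, key=expected_cost)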
I don't see that there's any logical problem. We may well decide in the end not to do it ever but that's an entirely different question.
EB
 
But if I were to interpret your question literally, I would envision a scenario posted by you where an AI would happily drive directly into a post, surely killing its occupants, just to avoid a pedestrian.
Not my scenario, no. Not my post either.
My point was simply that "doing less" (for the passengers) does not equate with "coming last". If doing less for the passenger and more for the pedestrian manages to save both lives, then no one has "come last".
That's irrelevant.
The question is whether there's any logical reason that the car's AI shouldn't be authorised to choose the victims so as to minimise the number of casualties and the severity of the accident.
EB
 
Precisely... Why not?
I can even suggest the AI might be able not only to count how many people would die in the different alternative scenarios of the unfolding accident but how many years of potential life would be lost in each scenario depending on the age of each person involved. The car's AI might also be able to identify particular people as having greater value, say, a president/dictator/foreign diplomat, a renowned and beloved scientist/artist/activist etc. Maybe each car will have its own theory about how to go about it, perhaps with some inputs from the passengers, from the owner of the car, or from local legislation and case law.
I don't see that there's any logical problem. We may well decide in the end not to do it ever but that's an entirely different question.
EB
This goes beyond any plausible reach of what an AI driving a car could do.

It would understand the physics, but there is no conceivable reason, outside wild science fiction, that an AI driving a car would, could or should know personally identifiable details about a person.
 
I can even suggest the AI might be able not only to count how many people would die in the different alternative scenarios of the unfolding accident but how many years of potential life would be lost in each scenario depending on the age of each person involved.
That's ridiculous.

Get the top ten trauma surgeons and the top ten accident reconstruction experts in a room. Give them a person to examine, then tell them "OK, this guy is about to be struck by a 2016 Ford Focus. What will his injuries be at 20, 25 and 30mph?" They will not be able to tell you other than in _very_ broad generalities. In fact, their conclusions will be along the lines of "we don't really know, but slower is better, and the best is if you don't hit him at all."

A car is not going to do a better job.

There is a tendency to believe in the sci-fi image of future computers as omniscient, able to make moral decisions and decide with great certainty how an uncertain event will unfold. They won't be. They will be like computers we have now, just faster.
 
They will be like computers we have now, just faster.
We have a saying in the programming world: Bug free, on time and within budget - pick two.

In my experience, corporations usually opt for within budget (top priority) and on time.
 
This goes beyond any plausible reach of what an AI driving a car could do.
It would understand the physics, but there is no conceivable reason, outside wild science fiction, that an AI driving a car would, could or should know personally identifiable details about a person.
That's still irrelevant. It's not a question of why it should; it's a question of, assuming it could, why it should not.
EB
 
That's ridiculous.
Get the top ten trauma surgeons and the top ten accident reconstruction experts in a room. Give them a person to examine, then tell them "OK, this guy is about to be struck by a 2016 Ford Focus. What will his injuries be at 20, 25 and 30mph?" They will not be able to tell you other than in _very_ broad generalities. In fact, their conclusions will be along the lines of "we don't really know, but slower is better, and the best is if you don't hit him at all."
A car is not going to do a better job.
There is a tendency to believe in the sci-fi image of future computers as omniscient, able to make moral decisions and decide with great certainty how an uncertain event will unfold. They won't be. They will be like computers we have now, just faster.
That's still irrelevant. It's not a question of why it should; it's a question of, assuming it could, why it should not.
I didn't consider "omniscience" as you suggest. That's ridiculous of you. Don't make up things, please.
I didn't consider "moral decisions" as you suggest. That's ridiculous of you. Don't make up things, please.
I'm assuming AI will be able to assess the situation much more effectively than humans could. Assuming this, why not let them decide on the scenario least costly in lives?
EB
 
A self-driving car drives all the time. Night or day, it drives. Passengers go further, for longer. They are able to traverse the Earth, but the car is without a home. It is a gypsy, a caravan...it IS a home.
 
That's still irrelevant. It's not a question of why it should; it's a question of, assuming it could, why it should not.
There is no reason it should or should not.

Let's take another example - an airliner landing during a CAT IIIc approach (zero-zero.) It has been holding a long time and has minimum fuel. It flares and the wheels touch the ground. Suddenly a small business jet on the same runway turns on its transponder. The TCAS in the landing airliner receives the transponder echo and gives an "increase climb" RA that the autopilot is aware of.

What happens? Does the autopilot think "hmm, if I hit that small jet then the people in it will die, but if I take off again then everyone on board might die when we flame out?" Does it look up whether there is a Nobel Prize laureate on board the business jet, and compare that to the value of the people on board the aircraft?

Nope. It does its best to stop the aircraft. Why? Because
1) that is an incredibly contrived example that will never happen.
2) a system that does a good job with a straightforward task is, in general, safer and more reliable than a system that tries to do a much more complex task with the same resources.
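A toy version of that priority, only to make the point that the simple, well-understood task wins (this is nothing like real autoland or TCAS logic, and the names are made up):

Code:
# Toy illustration only -- not real avionics logic.
# Once committed to the landing, the job reduces to one
# straightforward task: stop the aircraft.
def autoland_action(weight_on_wheels: bool, ra_climb_requested: bool) -> str:
    if weight_on_wheels:
        # Committed to the landing: do the simple thing.
        return "CONTINUE_ROLLOUT_AND_BRAKE"
    if ra_climb_requested:
        return "FOLLOW_CLIMB_RA"
    return "CONTINUE_APPROACH"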
I didn't consider "omniscience" as you suggest. That's ridiculous of you. Don't make up things, please.
I didn't consider "moral decisions" as you suggest. That's ridiculous of you. Don't make up things, please.
Good! So we agree that any such vehicle will be operating with a limited set of data, and will make its decisions based on that limited set of data, rather than any moral, social or medical parameters.
I'm assuming AI will be able to assess the situation much more effectively than humans could. Assuming this, why not let them decide on the scenario least costly in lives?
Because the trolley problem is a mental exercise that never happens in real life. It would be more useful to program the car to react well to a meteor strike (and that wouldn't be all that useful either.)
 
That's still irrelevant. It's not a question of why it should; it's a question of, assuming it could, why it should not.
I included "could" in my list of "no reason why".

An AI can only work with the data it's given, and can only solve problems that it has algorithms for.

There is no conceivable reason why - or how - a self-driving car's AI would have access to data about the people within its sensor range other than their physical properties such as mass, speed and direction.
Nor is there any conceivable reason why - or how - a self-driving car's AI would have the capability to make decisions about it, even if it did.
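To make that concrete, the record a perception stack hands to the planner for each tracked object looks roughly like the sketch below (the field names are invented, but the flavour is right): physical state plus a coarse class, and nothing that could identify anyone.

Code:
# Illustrative only; the field names are invented.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    track_id: int        # internal track number, not an identity
    object_class: str    # e.g. "pedestrian", "cyclist", "vehicle"
    position_m: tuple    # (x, y) relative to the car, in metres
    velocity_mps: tuple  # (vx, vy), in metres per second
    size_m: tuple        # rough bounding-box length/width estimate
    confidence: float    # classifier confidence, 0..1
    # No name, no age, no occupation, no "social value" --
    # the sensors simply don't provide any of that.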

Unless you're talking sci-fi. If you are, just say so.
 
I included "could" in my list of "no reason why".
An AI can only work with the data it's given, and can only solve problems that it has algorithms for.
There is no conceivable reason why - or how - a self-driving car's AI would have access to data about the people within its sensor range other than their physical properties such as mass, speed and direction.
Nor is there any conceivable reason why - or how - a self-driving car's AI would have the capability to make decisions about it, even if it did.
Unless you're talking sci-fi. If you are, just say so.
I'm not talking Sci-Fi. I'm talking of what seems rationally conceivable given what we know today.
EB
 
There is no reason it should or should not.
Good, so you agree there's no reason that it should not.
I'm not sure why it was so difficult to get a straight response to that.
Let's take another example
No.
Good! So we agree that any such vehicle will be operating with a limited set of data, and will make its decisions based on that limited set of data, rather than any moral, social or medical parameters.
More to the point, you had absolutely zero reason to assume as you did that I somehow didn't agree with that. Feel free to apologise in your own time.
Because the trolley problem is a mental exercise that never happens in real life. It would be more useful to program the car to react well to a meteor strike (and that wouldn't be all that useful either.)
Good. You do as you please, I won't stop you.
I asked a sensible question in the context of the OP and I suggested the AI might be able not only to count how many people would die in the different alternative scenarios of the unfolding accident but how many years of potential life would be lost in each scenario depending on the age of each person involved. The car's AI might also be able to identify particular people as having greater value, say, a president/dictator/foreign diplomat, a renowned and beloved scientist/artist/activist etc. Maybe each car will have its own theory about how to go about it, perhaps with some inputs from the passengers, from the owner of the car, or from local legislation and case law.
I don't see that there's any logical problem. We may well decide in the end not to do it ever but that's an entirely different question.
EB
 
I asked a sensible question in the context of the OP and I suggested the AI might be able not only to count how many people would die in the different alternative scenarios of the unfolding accident but how many years of potential life would be lost in each scenario depending on the age of each person involved.
OK. The answer to that question is no, as explained above.
The car's AI might also be able to identify particular people as having greater value, say, a president/dictator/foreign diplomat, a renowned and beloved scientist/artist/activist etc. Maybe each car will have its own theory about how to go about it, perhaps with some inputs from the passengers, from the owner of the car, or from local legislation and case law.
No, it will not.
I don't see that there's any logical problem. We may well decide in the end not to do it ever but that's an entirely different question.
Agreed, there is no logical problem here. Autonomous cars will never do that.
 
I showed you why. You said "particular people" may be more valuable than other "particular people". Sounds very Nazi-like to me.

Personally, I would refuse to write such code. However, I would write code to try and prevent the accident from happening.
 
Personally, I would refuse to write such code. However, I would write code to try and prevent the accident from happening.
Which is what every developer will do. Not just from a moral and legal requirement standpoint, but for simple economic reasons - software that prevents accidents will be vastly more profitable than software that does other things (like targeting certain people).
 
Which is what every developer will do. Not just from a moral and legal requirement standpoint, but for simple economic reasons - software that prevents accidents will be vastly more profitable than software that does other things (like targeting certain people).
No one will be surprised if it becomes illegal to run AI that does not protect certain people above others - it's easy to anticipate it being justified as protecting police, fire, and ambulance vehicles, for starters; then people in work zones, children in school zones, etc.
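If that happened, it would probably amount to nothing more exotic than a table of mandated weights over object classes the car already recognises, something like the sketch below (all the classes and numbers are invented):

Code:
# Hypothetical illustration of legislated per-class protection weights.
# The classes and numbers are invented; real rules would come from
# legislation, not from this sketch.
PROTECTION_WEIGHT = {
    "emergency_vehicle": 3.0,   # police, fire, ambulance
    "worker_in_work_zone": 2.0,
    "child_in_school_zone": 2.0,
    "pedestrian": 1.0,
    "vehicle": 1.0,
}

def weighted_risk(object_class: str, collision_risk: float) -> float:
    """Scale an estimated collision risk by the mandated weight."""
    return PROTECTION_WEIGHT.get(object_class, 1.0) * collision_risk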
 