Should your self-driving car kill you to save others?

Like I said, developers will put all their efforts into avoiding accidents to begin with - not coming up with creative new ways of crashing.
This is entirely true, and cannot be overstated. It covers virtually 100% of real-world accidents.

We are talking about edge cases, where there happens to be no safe route that doesn't endanger someone.
 
That being said...

Stopping is always the best thing to do. Even if you can't avoid a collision, the kinetic energy involved drops with the square of the speed, so no matter how little time there is to slow down, there will be a dramatic drop in damage.
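To put numbers on that, this is just the standard kinetic-energy relation (the example speeds below are my own, purely for illustration):

```latex
E_k = \tfrac{1}{2}mv^2
\qquad\Longrightarrow\qquad
\frac{E_\text{impact}}{E_\text{initial}}
  = \left(\frac{v_\text{impact}}{v_\text{initial}}\right)^2
```

Brake from 50 mph down to 25 mph before contact, for instance, and only (25/50)² = 25% of the original crash energy is left to dissipate.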

And stopping is a really simple operation. None of this swerving off the road stuff.

The thing about AI is that it is designed to anticipate dangers while still having enough time to react. AIs won't be blasting through yellow lights at full speed, and they won't be barreling down country roads mindless of cross-traffic. They don't have a problem with attention span, tunnel vision, or lack of peripheral attention. They can watch for all threats simultaneously, well in advance of danger.

Something humans are really bad at.
 
Stopping is always the best thing to do. And stopping is a really simple operation. None of this swerving off the road stuff.
Incorrect. Imagine the NASCAR scenario.

Is stopping always a good thing?

Absolutely not.

Braking shifts weight to the front and can induce a fishtail effect. Braking, even ABS braking, also drastically reduces the steerability of the car. But the main component that matters is the collision normal: you want the impact angle to be as parallel as possible to the opposing body.

Second, relative velocity. If two cars are travelling in the same direction at 100 mph, it is best to match relative velocities; in this case, braking simply increases the difference in velocity between the two bodies, causing more damage.
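For what it's worth, the physics invoked in the last two points can be sketched roughly as follows. This is a toy severity proxy of my own devising (the function, the 1500 kg mass, and the example numbers are all assumptions for illustration), not a real crash model:

```python
import math

def crash_energy_proxy(speed_a_mph, speed_b_mph, impact_angle_deg, mass_kg=1500.0):
    """Toy severity proxy: kinetic energy of the relative-velocity
    component directed into the struck body. Illustrative only."""
    MPH_TO_MS = 0.44704
    relative_speed = abs(speed_a_mph - speed_b_mph) * MPH_TO_MS
    # A near-parallel "grind" has a small impact angle, so only a small
    # fraction of the relative velocity goes into the collision.
    normal_speed = relative_speed * math.sin(math.radians(impact_angle_deg))
    return 0.5 * mass_kg * normal_speed ** 2  # joules

# Two cars travelling the same direction at highway speed:
print(crash_energy_proxy(100, 95, 10))  # small delta-v, glancing contact
print(crash_energy_proxy(100, 60, 10))  # same angle after hard braking
print(crash_energy_proxy(100, 95, 90))  # small delta-v, square-on impact
```

On this crude measure, braking that widens the speed difference does raise the numbers; whether that framing applies to ordinary road driving is exactly what is disputed below.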

Thirdly, steering into a collision can often save your life, because trying to avoid the collision completely may result in understeer and further collisions, or the back end coming loose and hitting it anyway. Race drivers will often "grind" a wall on purpose rather than becoming destabilized and getting into a severe wreck.
 
Incorrect. Imagine the NASCAR scenario. Is stopping always a good thing? Absolutely not.
Nor is stopping a good thing if you are riding a unicycle, because you will fall over.

But since we're not talking about unicycles or NASCAR races, it is a much better option than hitting someone.

Second, relative velocity. If two cars are travelling in the same direction at 100 mph, it is best to match relative velocities; in this case, braking simply increases the difference in velocity between the two bodies, causing more damage.
No one suggested braking to cause a collision - only braking (and stopping) to avoid a collision. Note the important difference.

Thirdly, steering into a collision can often save your life, because trying to avoid the collision completely may result in understeer and further collisions, or the back end coming loose and hitting it anyway.
Only for incompetent drivers.
 
And hit the ditch if necessary to protect them
And kill the kid in the ditch?
Or roll the car, etc. Priorities will be written into the code.
Stopping is always the best thing to do.
And stopping is a really simple operation. None of this swerving off the road stuff.
Where I live, there is winter. Also, freeways, school zones, ambulances on roads with no shoulders, and animals of significant size and remarkable agility occasionally in evidence on the road.

Attempting to stop is not always the best thing to do. Even remaining stopped is not always the best option.
This is entirely true, and cannot be overstated. It covers virtually 100% of real-world accidents.
My personal rule of thumb is that it takes three mistakes or the equivalent in bad luck to crash. So by avoiding the common mistakes you will seldom even have a near miss, and your bad judgment under pressure will probably never matter.
So AI can drive safely in its designed circumstances.
What worries me is not the AI itself (it will not be used where it does not work well), but the adjusting of circumstances it will instigate.
 
Nor is stopping a good thing if you are riding a unicycle, because you will fall over.

But since we're not talking about unicycles or NASCAR races, it is a much better option than hitting someone.


No one suggested braking to cause a collision - only braking (and stopping) to avoid a collision. Note the important difference.


Only for incompetent drivers.
Again, some of what I said went over your head.

Your first fallacy is the bit about "hitting someone".

But what about the bit you failed to mention about "hitting something?" You know there are more dangers than just other humans on the road.

Second, I doubt you know anything about the Pajecka formula, weight shifting, or why grinding a wall (or another car, for that matter) is often more beneficial than trying to avoid the collision completely.

Your ideas will work in 25 mile an hour land, but will not always guarantee safety in the realm of 60-80 mph highways; most certainly they are irrelevant in the autobahn realms.
 
And before you know it, people will be paying to have themselves put on the "protected" list. Then wealthy people's lives will be worth more than poor people's.
I vote IQ. Easy way to quantify prioritized lists, plus good for the gene pool. Second, people with high IQ theoretically tend to be better drivers, so they are most likely owed their dues for their good, safe driving records.


Now my main complaint about AI cars is lag. I know these run on Nvidia GPUs. But I had some Nvidia GPUs and they had occasional stutter. What if the GPU freezes for 2 seconds? And it freezes because it overheats (sounds like an oxymoron to the layman, I know) - overheats because of the hot summer, of course. It's like game devs can't even make decent AI enemies in games, but we are expected to trust our lives to AI... that is the essence of humor...
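For context, safety-critical systems do not normally leave a frozen processor undetected; the usual engineering answer is an independent watchdog. A minimal sketch, entirely my own invention rather than any vendor's actual design:

```python
import time

class Watchdog:
    """Toy supervisor: if the main perception loop stops checking in,
    a separate process can trigger a fail-safe such as controlled
    braking. Purely illustrative, not any vendor's actual design."""

    def __init__(self, timeout_s=0.1):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self):
        # Called by the main loop once per frame while it is healthy.
        self.last_kick = time.monotonic()

    def expired(self):
        # Polled by a supervisor that does not depend on the GPU.
        return time.monotonic() - self.last_kick > self.timeout_s
```

Under a scheme like this, a two-second GPU freeze shows up as a missed heartbeat within about 100 ms, and the fallback path (slow down, pull over) need not touch the GPU at all.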
 
But what about the bit you failed to mention about "hitting something?" You know there are more dangers than just other humans on the road.
Yes, there are. And again, the AI will do its best to avoid them. It will sometimes fail, just as human drivers do.
Second, I doubt you know anything about the Pajecka formula, weight shifting, or why grinding a wall (or another car, for that matter) is often more beneficial than trying to avoid the collision completely.
I have a feeling you are just looking up stuff in Wikipedia to have something to argue about.

First off, Pacejka's formulas describe tire behavior, and are often used in driving simulators to model the effects of friction. They have zero to do with collision avoidance. Once you know the G-ratings of a car/tire combination, that gives you the basic information about cornering ability and the maximum braking effort possible. From there you can work out whether a car will experience oversteer or understeer and how it reacts on different surfaces. All of that is then made available to the AI. No need for "Pacejka's formula" - for either drivers or AIs.
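For reference, the formula in question is an empirical curve fit of tire force against slip. A rough sketch (the coefficient values below are illustrative placeholders, not fitted tire data):

```python
import math

def pacejka_lateral_force(slip_angle, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'magic formula': an empirical fit of tire force vs. slip.
    B, C, D, E are shape coefficients normally fitted to measured tire
    data; the defaults here are placeholders for illustration only."""
    Bx = B * slip_angle
    return D * math.sin(C * math.atan(Bx - E * (Bx - math.atan(Bx))))
```

Which underlines the point: it is a tire model used in simulation and vehicle dynamics, not a rule for deciding what to hit.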

Second, "grinding a wall" is something that is independent of "deciding who to kill." They have nothing to do with each other.

Third, if you think that "weight shifting" is often more beneficial than trying to avoid the collision, I very much hope you don't drive.

Finally, if you are looking up stuff from Wikipedia to try to sound intelligent, it helps if you spell the guy's name correctly.
Your ideas will work in 25 mile an hour land, but will not always guarantee safety in the realm of 60-80 mph highways; most certainly they are irrelevant in the autobahn realms.
I am not presenting ideas for "guaranteed safety." That's impossible. You can't even keep up with the topics that you are discussing.
 
I am not presenting ideas for "guaranteed safety." That's impossible.
The question becomes: At what comparative level of calculated probability, if any, will an AI choose to risk the driver instead of the public?
And since that kind of decision (however it is settled) will be rare - will be made rare by design - my concern is: how much alteration of the landscape and circumstances of human life will be necessary to accomplish that?

We are already at the point where I am reading, in the newspaper, quotes from AI promoters about the changes in roadways and traffic-handling setups this great advance - the driverless car - will require. The planned funding seems to be - as near as I can tell - a diversion from "mass transit".
 
The question becomes: At what comparative level of calculated probability, if any, will an AI choose to risk the driver instead of the public?
Never. Like I said, normal human drivers* don't decide "hmm, I would rather hit that pedestrian than risk injury to myself!" They try to avoid the accident. So will AIs. At no point will the AI decide "hey, the driver is more valuable than the pedestrian, so I won't stop short and risk being rear-ended" or something similar.

Yes, you can contrive all sorts of situations where you might have to make a moral choice. "Do I make a hard right and hit the one child on the sidewalk, or continue straight and hit the five fleeing criminals?" Such 'trolley problems' do not come up in real life.

(*- criminals, psychopaths and the inept excepted, of course)
 
Never. Like I said, normal human drivers* don't decide "hmm, I would rather hit that pedestrian than risk injury to myself!" They try to avoid the accident. So will AIs
Human drivers have built-in hierarchies, priorities, that they use to make split-second moral decisions - including the risks of self-sacrifice. The question asked is about the analogous factors that will be built into an AI.
At no point will the AI decide "hey, the driver is more valuable than the pedestrian, so I won't stop short and risk being rear-ended" or something similar.
Of course it will. It will have to have that capability, in order to choose between different probabilities of mishap.

For example, if the pedestrian is just stepping off of the shoulder of a freeway ramp, and a semi is a bit too close behind the AI on the curve, the AI will have to make (or have built into its priorities) a calculation involving the probabilities of bad consequences regardless of its behavior. And that calculation will have to include the severity of the different bad consequences - if what has stepped out is a deer, a dog, a human, a human who appears competent and alert, a human about to jump back, the calculation will change.

At some point, an AI will be either capable of swerving the car off the ramp entirely, with all of the risk to the driver that entails, or not. Either that, or freeway ramps will be redesigned to make the situation impossible.

As you pointed out, we are not dealing with certainties here. The AI will have to make choices via calculated odds, and that means calculated severity of consequences. The worry here is that at some point, in the ongoing expense and frustration of getting an AI to handle this stuff, amid the extended frustration of interests that want - very badly - for AI to work, the focus will change to altering the landscape instead: building a world AI can handle. Cheap AI. Minimal AI.
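The calculation being described here amounts to an expected-cost comparison over candidate maneuvers. A minimal sketch, with the maneuvers, probabilities, and severity weights all invented purely for illustration:

```python
# Each candidate maneuver maps to (probability, severity) outcome pairs.
# Every number below is invented purely for illustration.
maneuvers = {
    "brake_straight": [(0.30, 8.0),    # hit the pedestrian, severe
                       (0.10, 3.0)],   # rear-ended by the semi
    "swerve_off_ramp": [(0.05, 8.0),   # still clip the pedestrian
                        (0.40, 5.0)],  # injure the car's own occupant
}

def expected_cost(outcomes):
    return sum(p * severity for p, severity in outcomes)

best = min(maneuvers, key=lambda m: expected_cost(maneuvers[m]))
print(best)  # the maneuver with the lowest probability-weighted severity
```

The open question in this exchange is exactly where those severity weights come from - and who gets to set them.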
 
Human drivers have built-in hierarchies, priorities, that they use to make split-second moral decisions - including the risks of self-sacrifice. The question asked is about the analogous factors that will be built into an AI.
Those decisions are indeed based on hierarchies and priorities. People make driving decisions on a million variables - distance to the other car, perceived relative speed, lane width, expected best braking/cornering effort, etc. One of those variables is NOT the relative value of human life.
For example, if the pedestrian is just stepping off of the shoulder of a freeway ramp, and a semi is a bit too close behind the AI on the curve, the AI will have to make (or have built into its priorities) a calculation involving the probabilities of bad consequences regardless of its behavior.
No, it will not. It will not decide to kill the pedestrian to save the truck - any more than you would.
 
One of those variables is NOT the relative value of human life.
Sure it is. People will do things to avoid hitting people, especially children (or living animals, or even dead animals), that they do not do to avoid hitting inanimate objects. It's automatic.
No, it will not. It will not decide to kill the pedestrian to save the truck - any more than you would
You alter the situation to one of certainties.
The question is how much of a risk of killing the pedestrian it will take to avoid a given risk of killing the driver, or vice versa.
 