Should your self-driving car kill you to save others?

It'll hit the brakes, nothing more.

Agree

However, would it be possible to monitor a large variety of driver types, with multiple inputs, over a long period

Somehow convert all those inputs into a computer program

Take a few thousand of these programs and meld them into one

This is only a thought bubble, because I don't think it's practical, and surely 'hitting the brakes' would be a programmed response, not a learned response

:)
 
This is only a thought bubble, because I don't think it's practical, and surely 'hitting the brakes' would be a programmed response, not a learned response
It will almost surely be both. Autonomous driving systems already use neural networks to make decisions on driving, to recognize the shapes of cars in front of them, to figure out where the center of the lane is, etc. They do this by using petabytes of training data to train a network. Once the network is trained and validated through testing, the weights are read out and used to program all the individual cars. The cars then use the programmed network. Note that neural networks without the ability to learn (i.e. the ones in the final vehicles) are often called inference engines - they are programmed by copying a trained neural network over, but are not themselves trained.
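To make that split concrete, here is a minimal sketch of the train-offline / deploy-frozen-weights pattern, assuming PyTorch; the toy network, the random stand-in data and the file name are all hypothetical, nothing like what a real AV vendor actually ships.

# Rough sketch of "train in the data center, deploy frozen weights to the car".
import torch
import torch.nn as nn

class LaneNet(nn.Module):
    """Toy stand-in for a lane-centering network."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),              # predicted offset from lane center
        )

    def forward(self, x):
        return self.layers(x)

# --- Offline: train on the dataset (petabyte-scale in reality, random here) ---
training_batches = [(torch.randn(8, 64), torch.randn(8, 1)) for _ in range(10)]
model = LaneNet()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()
for features, target in training_batches:
    optimizer.zero_grad()
    loss_fn(model(features), target).backward()
    optimizer.step()
torch.save(model.state_dict(), "lanenet_weights.pt")   # the weights are "read out"

# --- In the vehicle: an inference engine that only runs the frozen network ---
deployed = LaneNet()
deployed.load_state_dict(torch.load("lanenet_weights.pt"))
deployed.eval()                            # no learning happens in the car
with torch.no_grad():                      # training machinery disabled
    offset = deployed(torch.randn(1, 64))  # stand-in for live sensor features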
 
It will almost surely be both. Autonomous driving systems already use neural networks to make decisions on driving, to recognize the shapes of cars in front of them,
- unless it's the broadside of an 18-wheeler. :rolleyes:

2 deaths so far. Same cause in both.
 
Not sure that a self-driving car will be tasked with solving moral dilemmas...it will just be programmed to react to situations as they arise. I think humans react in similar ways when we're faced with scenarios that require a split-second decision. I'm not sure reactions to situations are decisions.

If you swerve to miss a deer standing in the road for example, and end up causing an accident by hitting another car, you may not have seen the other car, as you were merely reacting to the deer. I wouldn't liken that to a moral decision. If you willfully cheat on your taxes, that would be a moral decision.
 
Not sure that a self-driving car will be tasked with moral dilemmas...it will just be programmed to react to situations as they arise.
Exactly.
If you swerve to miss a deer standing in the road for example, and end up causing an accident by hitting another car, you may not have seen the other car, as you were merely reacting to the deer.
Also exactly right. The driver didn't make a "moral" decision - they just reacted as best they could to avoid an accident, and 80% of the time, that likely WOULD have avoided the accident.

People have this idea that the computers that control autonomous vehicles are infinitely fast, capable and connected, and before they even twitch the wheel they have analyzed every possible outcome, looked up the resumes of the people in the car, analyzed how fast a Western Wisconsin buck can run and computed the exact angle to steer away from the deer so that when it bounds away the car will miss it by inches. Nope. The car will make its best effort to avoid the deer and any other cars it detects. If it can't, it will hit one of them. Very much like a person.
 
If you swerve to miss a deer standing in the road for example, and end up causing an accident by hitting another car, you may not have seen the other car, as you were merely reacting to the deer. I wouldn't liken that to a moral decision. If you willfully cheat on your taxes, that would be a moral decision.
This is a good point. We could even quantify it:

In physics, by definition, no two events can occur at the same time and the same place. If they did, it would be because they are the same event.

So two events - such as those that might conceivably (though improbably) occur within an autonomous vehicle's range - will be separated in time, or in space, or in both.

An AV makes only one decision at a time. If a deer jumps out in front of it, it brakes to avoid hitting the deer. If an unlikely and unfortunate second event subsequently occurs, it will deal with that when it presents itself.

In other words: an automated vehicle doesn't face the trolley problem, because it doesn't think ahead to abstract risks. It avoids an immediate threat. If another threat manifests, it avoids that one.
 
In other words: an automated vehicle doesn't face the trolley problem, because it doesn't think ahead to abstract risks
That is unknown.
It certainly thinks ahead, in some ways, and it seems clear that if trained on an appropriate data set it would face the Trolley Problem.
Whether an AV's training definitely lacked the Trolley Problem, or anything similar, could be a very difficult matter to determine. I doubt it's a completely solvable problem, in general.
 
It certainly thinks ahead, in some ways, and it seems clear that if trained on an appropriate data set it would face the Trolley Problem.
Given that human drivers are not so trained, I tend to doubt it. Do you know of any driver training program that includes deciding who to kill? I have never heard of any. Nor have I heard of any autonomous vehicle algorithm that implements such a system - and there are a lot of them out there.
 
Given that human drivers are not so trained, I tend to doubt it. Do you know of any driver training program that includes deciding who to kill? I have never heard of any. Nor have I heard of any autonomous vehicle algorithm that implements such a system - and there are a lot of them out there.
The problem would be that you can't tell. Nobody can. Just because people don't think they've included such decision making in the training set, does not mean they haven't. If we've learned anything at all from watching the struggles of AV development and similar AI applications, it's that the actual decision making process of a trained neural net is not necessarily what anyone intended it to be, or even generally similar, and that training sets are very, very difficult to "debug", as it were.

So humans have, so far, proved themselves incapable of reliably anticipating or controlling what an AV actually learns and why it does what it does. (Not just AVs: the AI that was trained to play Go, and recently proved capable of beating 9-dan professionals, plays games that must be analyzed and argued about. Nobody - not the pros, not the programmers, not the people who built it, not the people who oversaw its training - knows for sure why it makes the moves it does.)
 
The problem would be that you can't tell. Nobody can. Just because people don't think they've included such decision making in the training set, does not mean they haven't.

There is one main point to realize, in my opinion: safety devices are intended to prevent motion, not allow it. A rogue programmer may be able to insert code defying this practice, but it should be caught and removed by an approval process before launch.

If it is not caught by an approval process, then I am as concerned as anyone else would be. Machine code should never (99.99% of the time, imo) allow further motion during a hazard condition.

Safety devices (proximity sensors, light screens, lasers, optics, mats, etc.) are engineered to detect a hazard and immediately prevent further motion. Only under very rare conditions is a safety device allowed to detect a hazard and still allow motion... elevator doors are an example of this rare occurrence. A pinch hazard exists, so the doors are allowed to remain energized and reverse their motion to counter it.

In 99.99% of all other hazard conditions (in industrial equipment, for which I have a solid background), the response is to prevent motion (a controlled E-Stop is ok if warranted). Machine architecture (s/w & h/w) should not detect a hazard and then decide to implement further motion.
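As a loose illustration of that convention (in Python rather than ladder logic; the hazard names and responses below are hypothetical, chosen only to mirror the elevator-door exception described above):

from enum import Enum, auto

class Hazard(Enum):
    NONE = auto()
    PERIMETER_TRIPPED = auto()   # prox sensor, light screen, laser, mat, etc.
    DOOR_PINCH = auto()          # the rare elevator-door style exception

def safety_response(hazard: Hazard) -> str:
    """Default rule: a detected hazard prevents further motion.
    Only an explicitly engineered exception (the door-pinch case) is allowed
    to keep the actuator energized, and then only to reverse its motion."""
    if hazard is Hazard.NONE:
        return "RUN"
    if hazard is Hazard.DOOR_PINCH:
        return "REVERSE_MOTION"      # doors stay energized and open back up
    return "CONTROLLED_E_STOP"       # everything else: stop motion

assert safety_response(Hazard.PERIMETER_TRIPPED) == "CONTROLLED_E_STOP"
assert safety_response(Hazard.DOOR_PINCH) == "REVERSE_MOTION"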
 
The problem would be that you can't tell. Nobody can. Just because people don't think they've included such decision making in the training set, does not mean they haven't. If we've learned anything at all from watching the struggles of AV development and similar AI applications, it's that the actual decision making process of a trained neural net is not necessarily what anyone intended it to be, or even generally similar, and that training sets are very, very difficult to "debug", as it were.

So humans have, so far, proved themselves incapable of reliably anticipating or controlling what an AV actually learns and why it does what it does. (Not just AVs: the AI that was trained to play Go, and recently proved capable of beating 9-dan professionals, plays games that must be analyzed and argued about. Nobody - not the pros, not the programmers, not the people who built it, not the people who oversaw its training - knows for sure why it makes the moves it does.)
You are arguing "but it might be buggy." Definitely. Everything has had bugs, from seatbelts to airbags to emergency braking systems to OnStar systems. The solution is to fix them, not fret over whether airbag sensors don't like pedestrians, and are killing them to save oil company executives.
 
Machine code should never (99.99% of the time, imo) allow further motion during a hazard condition.

Only under very rare conditions is a safety device allowed to detect a hazard and still allow motion... elevator doors are an example of this rare occurrence.
This is a very good observation. 99.99% of the time, no matter what is happening, reducing momentum is going to reduce damage and injury (whether it's anticipated or not).

However:

It doesn't matter how rare or common a case is; what matters is the most appropriate action for the given set of circumstances. Sometimes that's to allow motion.

I would hate to have the logic of some system be "99.99% of the time we should stop motion, so that's pretty much the same as 'all the time'." No. It's a case-by-case basis, so you don't overlook that 0.01% circumstance.
 
I would hate to have the logic of some system be "99.99% of the time we should stop motion, so that's pretty much the same as 'all the time'." No. It's a case-by-case basis, so you don't overlook that 0.01% circumstance.

An application that within itself needs no exceptions (ref. my elevator example earlier) should detect hazards and prevent motion 100% of the time, imo. My 99.99% comment relates to those rare systems that let a safety device allow motion. Very rare.
Even my new SUV forces me to place my foot on the brake before I can start the engine.

If an autonomous vehicle is traveling at 55 mph...a fork in the road appears...a hazard is detected at the entrance of both forks simultaneously...the vehicle should drop power to all drive motors, allow power to the brake systems for controlled deceleration, maintain a straight path, and let it all play out as it may. This is my opinion.

Forget the fork. Straight road, 55 mph, hazard detected straight ahead. Same safety shutdown (imo): no swerving to miss the hazard (possibly causing a vehicle rollover or a collision with an undetected person or object). Disable power to the main drives, allow brakes, and of course keep low-voltage power to monitor and deploy air bags if needed.
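A rough Python sketch of that shutdown sequence; every name on the vehicle object is hypothetical, and a real implementation would live in safety-rated hardware and firmware, not application code like this.

def hazard_shutdown(vehicle) -> None:
    """Hazard detected ahead: shed speed, hold the lane, and keep only the
    low-voltage systems needed for monitoring and airbag deployment.
    The vehicle interface here is entirely made up for illustration."""
    vehicle.drive_motors.disable()        # drop power to all drive motors
    vehicle.brakes.apply_controlled()     # controlled deceleration, not a lockup
    vehicle.steering.hold_straight()      # no swerve: avoid rollover / secondary impact
    vehicle.low_voltage_bus.keep_alive()  # airbag sensors and deployment stay powered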

You know what concerns me the most? That this technology is advancing fast. Imo, faster than standards can be agreed upon and implemented...including government regulation (state & federal for USA). This is worrisome because, as hinted at earlier in the thread, some programmer or hardware designer can do things counterproductive to ensuring safety. For USA work environments there are decades and decades of safety standards (OSHA, etc.). Not so for autonomous vehicles...it will take time. So it is important that these developers borrow safety standards from other areas.

One more lack-of-standards point to make, please. It would be unfortunate if Brian, at ABC company in New York, used (2) low-resolution optical cameras for front detection, while Julie, at XYZ company in Seattle, used (4) high-resolution cameras with advanced infrared night vision. The silly point I am making is that both may help engineer a system, but without strict industry standards and regulations, one system can be inferior to another :(

Anyway, all my opinions are based on automotive plant experience. Moving vehicles inside a manufacturing plant are not new to me; I have been engineering them for over 20 years. They are usually called AGVs (automatic guided vehicles). They zip along car plant pathways all the time...with people constantly walking near them (often visitors taking a factory tour). No, of course they do not do 55 mph inside a factory. However, one thing is always certain...whether in a USA, Canada, Mexico, U.K. or Japan plant for my projects...safety devices shut down motion devices. No decision making (yes, we have full-blown PLCs onboard the AGV, fully capable of running decision-making routines in code, but no way). Our safety devices are redundant and always stop motion: a software input on the PLC, and also a mandatory hardwired circuit (bypassing the PLC I/O) directly to the relays.
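That redundancy boils down to an AND of two independent channels, each able to stop the AGV on its own. A tiny sketch (only an analogy, since the second channel is a physical circuit rather than software, and the names here are made up):

def motion_permitted(plc_safety_input_ok: bool, hardwired_relay_closed: bool) -> bool:
    """Motion is allowed only if BOTH the software (PLC) channel and the
    independent hardwired relay channel agree there is no hazard.
    In the real system the relay channel bypasses the PLC I/O entirely."""
    return plc_safety_input_ok and hardwired_relay_closed

assert motion_permitted(True, True) is True
assert motion_permitted(False, True) is False   # software channel alone stops motion
assert motion_permitted(True, False) is False   # hardwired channel alone stops motion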

This post is getting long. Wrapping up, I remain hopeful these vehicles will be safe for all. As far as legal liability? I am of the opinion it is less risky to follow the old, tried-and-accepted practice of "safety devices prevent motion, not allow it". This has held up well in USA and Japan courtrooms for decades. The day a company lets a safety device allow motion (like swerving a vehicle suddenly) and a pedestrian dies, it will be untested territory in courtrooms, from what I understand. That may not bode well for the manufacturer of that vehicle.

What's your thought, Dave? On the courtroom aspect?
 
If your car kills someone,
who goes to jail?
The car or your insurance company?
They played that one out with Will Smith in I, Robot.

Nobody was the answer.

They could have brought in the guy who programmed the robot, but he was killed by a robot he had programmed to kill himself...
 
Even autonomous cars will have horns, which will sound while the car brakes, to warn a pedestrian that the car is approaching.
 