Should your self-driving car kill you to save others?

Isn't the idea of self-driving cars that neither scenario would actually happen?

Otherwise, why have a self-driving car in the first place?
 
Isn't the idea of self-driving cars that neither scenario would actually happen?
Close to correct. Autonomous vehicles will greatly reduce the odds of both, such that there won't be any "who should I kill?" decisions to make.

Again, you could come up with some bizarre scenario where aircraft autopilots could make such moral decisions. Do they? No, because the call to make such decisions never happens.
 
Close to correct. Autonomous vehicles will greatly reduce the odds of both, such that there won't be any "who should I kill?" decisions to make.

Again, you could come up with some bizarre scenario where aircraft autopilots could make such moral decisions. Do they? No, because the call to make such decisions never happens.

Man, that is a lot of decisions that this car cannot make.

To me this is analogous to autopilot on an aircraft. But it is not perfect.

Autopilot has failed, through both tech and human error.
 
Man, that is a lot of decisions that this car cannot make.
The list of decisions that a car cannot make is almost infinite.
To me this is analogous to autopilot on an aircraft. But it is not perfect.
Autopilot has failed, through both tech and human error.
Absolutely. And AI drivers will fail as well. The goal is to get that failure rate to way below the human failure rate.
 
The list of decisions that a car cannot make is almost infinite.

Absolutely. And AI drivers will fail as well. The goal is to get that failure rate to way below the human failure rate.

Agreed

But where does this put HUMANITY in the ability to think and be aware of one's environment?
 
Isn't the idea of self-driving cars that neither scenario would actually happen?
Nope. Those situations are beyond the control of the AI, and they will happen frequently for the entire foreseeable future in my geographical region.

Autonomous vehicles will greatly reduce the odds of both, such that there won't be any "who should I kill?" decisions to make.
Again: the decision involves probabilities, not certainties. Phrase it as "which risks should I take?".

And the changes involved in reducing their frequency will - prediction - involve significant changes in the environment in which AI vehicles operate. The low-hanging fruit of AI improvement or adaptation is apparently nearly all picked, and the AI folks are already - in my newspaper, etc. - talking about adapting the highways, vehicles, rules of the road, and human behavior to the needs of the AI.
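The "which risks should I take?" framing can be made concrete as expected-harm minimization over candidate maneuvers. A toy Python sketch - every probability and harm number below is invented purely for illustration:

```python
# Toy sketch: pick the maneuver with the lowest expected harm.
# Outcomes are (probability, harm) pairs; all numbers are invented.
maneuvers = {
    "brake_straight": [(0.7, 0.0), (0.3, 5.0)],   # 30% chance of a moderate collision
    "swerve_left":    [(0.9, 0.0), (0.1, 20.0)],  # 10% chance of a severe crash
}

def expected_harm(outcomes):
    return sum(p * harm for p, harm in outcomes)

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
# Here braking straight wins: 0.3 * 5.0 = 1.5 expected harm,
# versus 0.1 * 20.0 = 2.0 for swerving.
```

The point is that the controller never faces a clean "who dies?" choice, only a ranking of risky options.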
 
Yes, there are. And again, the AI will do its best to avoid them. It will sometimes fail, just as human drivers do.

I have a feeling you are just looking up stuff in Wikipedia to have something to argue about.

First off, Pacejka's formulas apply to the design of tires, and are often used in driving simulators to model the effects of friction. They have zero to do with collision avoidance. Once you know the G-ratings of a car/tire combination, that gives you the basic information about cornering ability and the maximum braking effort possible. From there you can determine whether a car will experience oversteer or understeer and how it reacts on different surfaces. All of that is then made available to the AI. No need for "Pacejka's formula" - for either drivers or AIs.
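For reference, the Pacejka "magic formula" mentioned here is an empirical curve fit for tire force versus slip, used in simulators rather than in collision-avoidance logic. A minimal Python sketch, with illustrative coefficients that don't correspond to any real tire:

```python
import math

def pacejka(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka magic formula: F = D*sin(C*atan(B*s - E*(B*s - atan(B*s)))).
    B, C, D, E are empirically fitted stiffness/shape/peak/curvature
    coefficients; the defaults here are illustrative only."""
    bs = B * slip
    return D * math.sin(C * math.atan(bs - E * (bs - math.atan(bs))))

# Force rises with slip, peaks, then falls off - the classic tire curve.
forces = [pacejka(s / 100) for s in range(31)]
```

Note what the curve gives you: grip as a function of slip, i.e. exactly the simulator-friction modeling described above, not a decision procedure.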

Second, "grinding a wall" is something that is independent of "deciding who to kill." They have nothing to do with each other.

Third, if you think that "weight shifting" is often more beneficial than trying to avoid the collision, I very much hope you don't drive.

Finally, if you are looking up stuff from Wikipedia to try to sound intelligent, it helps if you spell the guy's name correctly.

I am not presenting ideas for "guaranteed safety." That's impossible. You can't even keep up with the topics that you are discussing.
If I had looked up the guy's name on Wikipedia, I would not have spelled it incorrectly. Time to do some deductive reasoning.

Second, let me ask if you even know what Pacejka's formula is or why it is needed.

"First off, Pacejka's formulas apply to design of tires, and are often used in driving simulators to model the effects of friction. They have zero to to with collision avoidance. "

So models of friction have "zero to do with collision avoidance". Good high quality stuff right there.

You should become CEO of ACME, and design cars that send people careening off the edges of hills in a hilarious and cartoon manner.

?? The same place they always were. Having good AI drivers does not make humans less able to think.
Depends. Are we talking about autopilot cars, which allow you to steer in an emergency, or full-out, full-crazy, steering-wheel-less, brake-pedal-less padded rooms?
 
Shortly we will be without keys to our houses. I doubt it will be fingerprints: they could be lifted from a glass. Perhaps retinal scanning, or D.N.A.?
 
Shortly we will be without keys to our houses. I doubt it will be fingerprints: they could be lifted from a glass. Perhaps retinal scanning, or D.N.A.?
I have no keys to my house. Haven't for about 10 years, since I got fed up with the kids losing their keys.

It's a combo lock.

I have a key for my car, and a key for my boat.
 
... talking about adapting the highways, vehicles, rules of the road, and human behavior, to the needs of the AI.
Well, let's be clear: adapting them for the safety of humans.

A change that makes it easier for AI but does not, ultimately, save property and lives, is not useful.
 
?? The same place they always were. Having good AI drivers does not make humans less able to think.
It almost certainly would make them less able to drive well.
A change that makes it easier for AI but does not, ultimately, save property and lives, is not useful.
The logic will be: AI saves property and lives. But AI does not work well in the current circumstances. So we change them - get the AI - and enjoy the benefits of the saved property and lives.
 
It almost certainly would make them less able to drive well.
The logic will be: AI saves property and lives. But AI does not work well in the current circumstances. So we change them - get the AI - and enjoy the benefits of the saved property and lives.
Which is a good thing. Right?
 
Yes, let's all put our faith in the hands of AI, and all the good people of Google and Facebook who know what's best for us!
 
I come from an automotive machinery background (automation, robotics, presses, etc.). Many of these systems had workers present. Safety and OSHA requirements were always #1. These systems likely differ from the autonomous car systems being developed. However, I would hope some practices will follow or flow over.

In an automotive tooling (or machinery) system, decisions are not made like some have posited for autonomous cars. Systems are designed to operate only with design conditions met (sensors, etc.). A deviation from that (when humans are present) is considered a hazard. A hazard condition is never given a choice (in the PLC software and hardware architecture). A hazard is a hazard and is considered an "E-Stop" condition. No ifs, ands, or buts! You move to a zero-energy state and prevent further motion.

Due to equipment size and speeds, sometimes an E-stop and zero-energy condition cannot be achieved instantaneously. In that case, you are allowed to "control stop" equipment by supplying energy only to the devices needed for the controlled (decelerating) stop. Emergency brakes would be an example, while servo drive motors would not. The point is that, when a hazard is detected, you get to zero energy as quickly as possible.

How does the above automotive equipment operation relate to current autonomous cars? I can only speculate that the same ideas apply. If a car is in motion and detects a hazard, or hazards, no decisions will be made by a computer (or PLC). Instead, power to the servo drives will be cut immediately, along with power to all other energy devices except controlled-stoppage devices like brakes. I can speculate that, possibly, low-voltage DC power will remain for passenger escape paths (power windows, for example). Other than that, the vehicle should get to a zero-energy condition! This would not allow a computer to make a decision and then power a device, like steering, to swerve after hazard detection. That is risky engineering: allowing further or additional powered motion while a hazard is detected. Again, move to a zero-energy state; do not detect a hazard and then allow motion or energy usage.
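The "hazard means remove energy, never enable new motion" rule described above can be sketched as a tiny state machine. This is my own illustrative Python sketch of the idea, not any vendor's actual PLC logic:

```python
from enum import Enum, auto

class State(Enum):
    RUNNING = auto()
    CONTROLLED_STOP = auto()  # energy only to decelerating devices (brakes)
    ZERO_ENERGY = auto()      # E-stop complete: all motion power removed

def step(state, hazard_detected, speed):
    """One controller tick: a detected hazard never enables new motion,
    it only removes energy until the system reaches zero energy."""
    if state is State.RUNNING and hazard_detected:
        # A hazard is an E-stop condition. No choosing; cut drive power.
        return State.CONTROLLED_STOP if speed > 0 else State.ZERO_ENERGY
    if state is State.CONTROLLED_STOP and speed == 0:
        return State.ZERO_ENERGY
    return state
```

Note that there is no branch that powers the steering to swerve after a hazard is detected; every transition leads toward zero energy.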

Hopefully this is all true for these systems. Courts of law are very punishing when it comes to lack or disregard of human safety. If a car actually chose one life over another, I think society would be in an uproar. It is safer, in my opinion (with multiple hazards detected), to simply shut everything down and let things play out as they may. Implementing "zero energy" practices during emergencies has held up well in courtrooms for robotic equipment (Nachi, KUKA, Motoman, FANUC, etc.). I think it could for autonomous cars too!

Thank you all for letting me share my experience. I am retired and have no active role in autonomous vehicle development. My experience is strictly based upon automated machinery, with or without humans in the vicinity:)
 
Yes, let's all put our faith in the hands of AI, and all the good people of Google and Facebook who know what's best for us!
You already have. You take all the convenience of technology for granted, don't even think about it, just buy every new effort-saving thing that comes along, let computers regulate everything from hydro to traffic lights, your work life and social life, and then get all het up over your loss of autonomy three weeks after the last row-boat cast off. You wouldn't have enjoyed rowing - ask any galley-slave.
 
In a split second, the car has to make a choice with moral—and mortal—consequences. Three pedestrians have just blindly stumbled into the crosswalk ahead. With no time to slow down, your autonomous car will either hit the pedestrians or swerve off the road, probably crashing and endangering your life. Who should be saved?
A team of three psychologists and computer scientists at the University of Toulouse Capitole in France has just completed an extensive study on this ethical quandary. They ran half a dozen online surveys posing various forms of this question to U.S. residents, and found an ever-present dilemma in people's responses.
Surprisingly or not, the results of the study show that most people want to live in a world in which everybody owns driverless cars that minimize casualties, but they want their own car to protect them at all costs.

http://www.popularmechanics.com/cars/a21492/the-self-driving-dilemma/

It'll hit the brakes, nothing more.
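"Hit the brakes" also has simple, checkable physics behind it: with tire-road friction coefficient μ, the idealized minimum stopping distance from speed v is v²/(2μg). A quick Python check (the μ and speed values are illustrative):

```python
def stopping_distance(v_mps, mu=0.8, g=9.81):
    """Idealized minimum braking distance in meters:
    flat road, constant friction, zero reaction time."""
    return v_mps ** 2 / (2 * mu * g)

# From ~50 km/h (13.9 m/s) on dry pavement: roughly 12 meters.
d = stopping_distance(13.9)
# Distance scales with the square of speed: doubling v quadruples d.
```

Which is why "brake hard, straight ahead" is usually the dominant option: it sheds kinetic energy fastest, whatever happens afterward.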
 