Would you like a "driverless car"?

I think sharing the destination would be helpful. The other cars would know the route that your car is taking, and they could anticipate a merge, for example.
The destination would be useful to Big Brother too (not necessarily in a bad way). Traffic control systems could anticipate routes and group cars accordingly, so you could go blasting through an intersection at 80 mph with just enough space.
 
However, what is the fun of driving when a computer takes over the human's role in driving? :?

I guess I'm a bit old fashioned, but I don't put a whole lot of faith in machines. They often break down at the most inconvenient time. Judging from the glitches I've experienced with my PCs over the years, I'm not certain I'm willing to let one drive me to work.
 
Yes. That is what I'm getting at. I am skeptical that this technology will be any safer with human drivers sharing the roads.
I don't see any reason NOT to assume that they would be at least SOMEWHAT safer than human drivers.

Sure, you and I are perfect and have never had a collision. But the average driver is merely... uh... average. They get distracted and drowsy, they miscalculate stopping distances, they do all kinds of stuff that causes accidents. And how about the millions of drivers who are BELOW average, like the ones who think they're safe to drive after having "only" four drinks? The ones who think we're silly for insisting that when they're behind the wheel they shouldn't talk on a cellphone (much less text), eat a Big Mac, put on makeup, or turn around to break up the kids' fight in the back seat?
I guess I'm a bit old fashioned, but I don't put a whole lot of faith in machines. They often break down at the most inconvenient time. Judging from the glitches I've experienced with my PCs over the years, I'm not certain I'm willing to let one drive me to work.
But the question is not whether machines are PERFECT. The question is whether they would be BETTER than the average human driver.

As for letting one drive you to work, how about if they're driving EVERYBODY to work? In this scenario, the highway system has been tweaked to take advantage of the many things that machines do much better than humans. Linking them together physically, like a railroad train with cars that can separate from the train when needed, will surely not only make the roads safer, but vastly increase the average speed of the average passenger, giving him back a half hour of his life EVERY DAY, in addition to giving back the millions of acres of high-priced real estate which is now being wasted for parking cars that are only driven a couple of hours a day.
 
There are some interesting questions being raised about how driverless cars will deal with the Runaway Trolley Dilemma and its very real implications.
I don't think they are all that interesting. I don't know of a single person who has ever had to make the trolley decision in their driving careers; no reason to expect that a computer will be presented with the situation any more than a human will. Nor do I know many drivers who would do anything other than stomp on the brake in such cases.

Keep in mind that the best way to solve such dilemmas is to drive in such a way that one is not placed INTO such a dilemma. Doing 70mph on an icy road and trying to decide whether to plow into the SUV full of kids or the lone pedestrian is not an argument that computers must make moral decisions if we are to accept them - it is an argument to not drive 70mph on icy roads near pedestrians.
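To put rough numbers on why that 70 mph is the real problem: braking distance grows with the square of speed. A back-of-envelope sketch (the friction coefficients below are my assumptions, typical textbook values, not anything from this thread):

```python
# Rough illustration: braking distance d = v^2 / (2 * mu * g),
# so the distance grows with the SQUARE of the speed.
MU_ICE = 0.15   # assumed tire-on-ice friction coefficient
MU_DRY = 0.7    # assumed tire-on-dry-asphalt friction coefficient
G = 9.81        # gravitational acceleration, m/s^2

def braking_distance_m(speed_mph: float, mu: float) -> float:
    v = speed_mph * 0.44704          # convert mph to m/s
    return v ** 2 / (2 * mu * G)

print(f"70 mph on ice: ~{braking_distance_m(70, MU_ICE):.0f} m to stop")
print(f"70 mph on dry road: ~{braking_distance_m(70, MU_DRY):.0f} m to stop")
```

With these assumed numbers it's roughly 330 m to stop on ice versus about 70 m on dry pavement; no last-second moral decision fixes that gap.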
 
I will never want a driverless car, since the SKILL of driving and the experience of driving will of course be lost.
 
I will never want a driverless car, since the SKILL of driving and the experience of driving will of course be lost.
I wonder where you live. Here in the Washington-Baltimore metropolitan region, the "experience" of driving is roughly equivalent to the "experience" of taking a dump. The difference is that you only have to put up with crapping a couple of times a day for a few minutes. But with driving, you have to put up with gridlock, construction, road rage, drunks, crazy pedestrians, bicyclists with a death wish, and people texting when they should be watching the road, a couple of HOURS every day.

I'd be delighted to have a self-driving car. I'll still have to get to work, but I can read or do something else enjoyable while the car deals with the road and traffic conditions.
 
I will never want a driverless car, since the SKILL of driving and the experience of driving will of course be lost.
Do you also regret that you can't (or don't) find the result of 12345/4321 by long division, as you probably learned as a child?

I have always seen wisdom in this brief admonishment my father taught me:

"Be not the first,
by which the new is tried,
Nor the last, the old to lay aside."

I must admit, every few months when by accident I come across my old slide rule, I use it a little, to keep it (and me) in shape.
I bought it just before going off to college (about $20 as I recall)* more than 6 decades ago. It was the top of the line - a log log deci trig unit of magnesium metal. I liked doing things with it, like finding the 6.3th root of 123, which lesser slide rules could not do, except by trial and error. At college there were contests in slide rule use, but I was never good enough to enter one.

* For half that now you can get a smaller calculator that is accurate to more than 6 places. That is progress, I guess, but few now understand why logarithms were invented and how they work - a slide rule is the embodiment of that. Skill and understanding lost.
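For anyone who hasn't seen why a slide rule embodies logarithms: it multiplies by adding lengths proportional to logs, and takes roots by dividing them. A minimal sketch in Python, just to show the principle (the function names are mine):

```python
import math

# A slide rule multiplies by adding lengths proportional to logarithms:
#   log(a * b) = log(a) + log(b)
# and takes roots by dividing a length:
#   log(x ** (1 / n)) = log(x) / n

def slide_rule_multiply(a: float, b: float) -> float:
    return 10 ** (math.log10(a) + math.log10(b))   # add the scale lengths

def slide_rule_root(x: float, n: float) -> float:
    return 10 ** (math.log10(x) / n)               # divide the scale length

print(slide_rule_multiply(3, 4))   # ~12
print(slide_rule_root(123, 6.3))   # the 6.3th root of 123, ~2.15
```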
 
"Be not the first,
by which the new is tried,
Nor the last, the old to lay aside."
My grandfather wasn't the first farmer in his district to buy a tractor but he was the first to get rid of his horses - he didn't want to feed both.

I expect that the transition period to driverless cars will be the most painful. If you have the only driverless car on the road, all the other idiots will still be out to get you and the poor thing will have its work cut out for it. Later on, when they're (practically) universal, they'll have it much easier.
 
I don't think they are all that interesting. I don't know of a single person who has ever had to make the trolley decision in their driving careers; no reason to expect that a computer will be presented with the situation any more than a human will.
You're taking it too literally. The general principle is that, at some point, an accident will occur, injuring or killing a person. Very hard, very costly questions will be raised as to whether the AI of the car did the best thing to attempt to prevent the accident.

What if braking to avoid a collision results in someone plowing into you from behind killing those occupants? Did the car stop too fast? Did it not react in time?
What if the best thing to do is swerve, but it results in a rollover, killing an occupant in the driverless car? Is the AI meant to protect the occupants? Or bystanders?

This is qualitatively different than a human making these decisions, because humans are not programmed; they do the best they can in the given circumstances, and they don't have the ability to apply driving skills with precision. But in an accident with a driverless car, the victims will be after the car companies over how their policies - which are manifest in the car's programming - resulted in the collateral damage.

It's very hard to prove in court that a person made a mistake in how they handled a collision - but it's quite easy to analyze software and find fault with it, especially since it's repeatable and demonstrable in independent tests.

"Your programming is designed to save the occupants at the expense of the bystanders!" they can cry.

This eventuality is inevitable.
 
You're taking it too literally. The general principle is that, at some point, an accident will occur, injuring or killing a person. Very hard, very costly questions will be raised as to whether the AI of the car did the best thing to attempt to prevent the accident.
Of course. But as with any dangerous endeavor, the best way to avoid questions/litigation is not to have excellent excuses/rationales for the accident - but rather to avoid the accident to begin with. Thus companies with an interest in not being held liable for damages will tend to choose AI drivers just because they will be sued less often, since statistically AI drivers are safer (at least with the small sample size we have so far).
What if braking to avoid a collision results in someone plowing into you from behind killing those occupants? Did the car stop too fast?
No. The car behind is responsible for maintaining a safe following distance. No court in the country would ever find that a driver "braked too hard" and thus was liable for injuries in the car behind them. Having an AI driving the car in front will not change the responsibility of the driver following that car.
What if the best thing to do is swerve, but it results in a rollover, killing an occupant in the driverless car?
If the AI functioned within its parameters (which include lateral G-limits) but road conditions allowed the rollover to happen, then it would be no different than a human driver suffering the same fate.

If the AI was broken and exceeded its programmed limits, then it would be hardware failure - and would likely result in stricter reliability requirements.
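To make the "functioned within its parameters" idea concrete, the check amounts to something like the sketch below. It is purely hypothetical; the names and the 0.8 g limit are invented for illustration, not taken from any real car's software:

```python
# Purely hypothetical sketch of an evasive-maneuver envelope check.
# The names and the 0.8 g limit are invented for illustration only.
MAX_LATERAL_G = 0.8  # assumed design limit for a swerve

def may_swerve(predicted_lateral_g: float) -> bool:
    """Allow a swerve only if it stays inside the programmed envelope."""
    return predicted_lateral_g <= MAX_LATERAL_G

print(may_swerve(0.5))  # True  -> swerve is within limits
print(may_swerve(1.1))  # False -> stay in lane and brake instead
```

The liability question then becomes whether that limit was set reasonably, not whether the car obeyed it.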
It's very hard to prove in court that a person made a mistake in how they handled a collision
I disagree. Tens of thousands of cases like this are heard every year in the US - and in many cases human drivers are found at fault. That's one of the factors that will hasten, rather than delay, the acceptance of AI drivers.
 
No. The car behind is responsible for maintaining a safe following distance.
You are nitpicking, and avoiding the issue.

Are you unwilling to accept that fatal accidents involving AI will definitely call the validity of the AI software into question?

If the AI functioned within its parameters (which include lateral G-limits) but road conditions allowed the rollover to happen, then it would be no different than a human driver suffering the same fate.
The difference is that litigants can examine the parameters, and fault the designers of the program for carelessness, negligence, or other problems in how they chose to make the AI behave. This is much easier to do when you can show that it was designed to do what it did, and then killed someone.

I disagree. Tens of thousands of cases like this are heard every year in the US - and in many cases human drivers are found at fault. That's one of the factors that will hasten, rather than delay, the acceptance of AI drivers.
Of course humans do this all the time; I never suggested they don't.

Simply that AIs open a whole new can of worms.
 
Are you unwilling to accept that fatal accidents involving AI will definitely call the validity of the AI software into question?
Of course it will. The effectiveness of airbags, ABS systems, vehicle stability systems and seatbelts was similarly questioned. But since they all provably improved safety, they were all adopted to a large degree.
The difference is that litigants can examine the parameters, and fault the designers of the program for carelessness, negligence, or other problems in how they chose to make the AI behave. This is much easier to do when you can show that it was designed to do what it did, and then killed someone.
Agreed. But again, airbags and seatbelts have the same problems; they can kill (and have killed) people, and the designers could in theory be held to be negligent. And of course people will sue for anything - and have. But in the long run people will go with the safer technology.
 