No I didn't miss the point. Aggressive driving habits do not result from lack of skill (could have sworn I mentioned this, but maybe the herring ate it); they result from a lack of consideration for others.
It's not a question of "poor" drivers, but of selfish drivers.
You suggested selfish drivers be given the option of installing their moral standard into an autonomous vehicle, and I didn't like the idea.
No, I didn't suggest that. I suggested that they set up testing to be able to calibrate - i.e. from within a defined set of parameters, none of which will be "selfish" - and more specifically to set up the car in line with the driver's own morality with regard to this single issue. This is, after all, an autonomous car, which will drive as safely as possible and abide by the Highway Code. Being selfish or not won't be an issue.
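To make the "defined set of parameters" point concrete, here's a minimal sketch of what I mean, assuming a hypothetical manufacturer API (all names and numbers are illustrative, not any real system): the owner can move one dial, but only within bounds the manufacturer fixes, so a "selfish" value simply isn't expressible.

```python
# Hypothetical sketch: one calibratable moral setting, clamped to a
# manufacturer-defined range. Class and constant names are made up.

class OccupantPriorityCalibration:
    """A single dial: how the car weighs occupant vs. pedestrian risk
    in a genuine no-win scenario. Values outside the defined range
    are clamped, so 'selfish' settings cannot be installed."""

    MIN_WEIGHT = 0.4  # car may never value occupants below this
    MAX_WEIGHT = 0.6  # ...nor above this (illustrative bounds)

    def __init__(self, occupant_weight: float = 0.5):
        # Clamp whatever the owner asks for into the allowed band.
        self.occupant_weight = min(self.MAX_WEIGHT,
                                   max(self.MIN_WEIGHT, occupant_weight))


# The owner requests an extreme setting; the car refuses to go there.
extreme = OccupantPriorityCalibration(occupant_weight=0.95)
moderate = OccupantPriorityCalibration(occupant_weight=0.5)
```

The design point is that the calibration step personalises the car without handing over the whole moral standard: the envelope of acceptable behaviour stays fixed.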
But at no point did I suggest, or in any way imply, that the entire set of driving habits or the reasons for them (i.e. this entire moral standard) be uploaded into the car. Consideration for others will be a given in any autonomous car. At least that's how I see it.
In my view, the question of whether you would sacrifice your own or the pedestrian's life has nowt to do with how selfish a driver you are. It is simply a matter of how much you value your life compared to the lives of others. You could be the most considerate and unselfish driver around but still value your own life above everyone else's in such a situation: better to live with the consequences than not live at all, etc.
And I keep saying the probability of an autonomous car ever getting itself into such a quandary is so low as to be unworthy of consideration. And even if the situation could somehow be contrived, the computer could not, would not and should not be expected to make a moral choice. Its job is to make the smartest decisions available, at all times, without unnecessary emotional constraints placed upon it.
This is simply begging the question: what does it mean to make "the smartest decision" in the case where it must put either the occupant or the pedestrian at equal risk of fatality? Answering that question with just "it will make the smartest decision" is no answer at all.
Do you guys want gridlock while all the poor robots burn out their microcircuits, trying to figure out what Papa would approve of?
No, which is why giving it an ultimate default for when all other decision-making produces no clear result sounds like quite a good thing to do.
And yes, it's a hypothetical question, sure, but still worthy of consideration, not least because someone somewhere will have to program relative values for such things into the AI.
Call it a matter of morals, or simple programming, or anything else. It won't be emotions, that's for sure (unless we're talking seriously advanced AI), and it is a question that will be asked again and again. Whenever an autonomous car crashes there will, or should, be a post-mortem of its decision-making (as far as possible), and at some point there will be a scenario where the car has prioritised one life over another - not necessarily through a direct decision about which to sacrifice, but through whatever priority it does have. Who gets to determine what that priority is? That's the question here.
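The "ultimate default" idea above can be sketched in a few lines, assuming a hypothetical decision loop (all names, scores, and the tie-breaking threshold are illustrative, not any real autonomous-driving stack): rank options by expected harm, and when the top candidates are indistinguishable, fall back to a pre-programmed default rather than deadlocking.

```python
# Hypothetical sketch: someone, somewhere, has encoded the priority,
# even if only as a tie-breaking default. Everything here is made up
# for illustration.

EPSILON = 0.01  # harm scores closer than this count as "no clear result"

def choose_action(options, default_rule):
    """options: list of (action_name, expected_harm) pairs.
    Picks the lowest-harm action; if the leaders are effectively
    tied, defers to the pre-set default rule instead of burning
    out its microcircuits looking for a winner."""
    ranked = sorted(options, key=lambda o: o[1])
    if len(ranked) > 1 and ranked[1][1] - ranked[0][1] < EPSILON:
        tied = [o for o in ranked if o[1] - ranked[0][1] < EPSILON]
        return default_rule(tied)  # <- the programmed-in priority
    return ranked[0][0]


# A clear winner needs no moral tie-break:
clear = choose_action([("brake", 0.1), ("swerve", 0.5)],
                      default_rule=lambda tied: tied[0][0])

# An effective dead heat falls through to whatever default was set:
tied = choose_action([("protect_pedestrian", 0.500),
                      ("protect_occupant", 0.505)],
                     default_rule=lambda tied: tied[-1][0])
```

Whoever writes `default_rule` - manufacturer, regulator, or owner within limits - is the "who gets to determine the priority" in question; the code makes plain that the choice exists whether or not anyone calls it a moral one.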