No. It's only to the benefit of the owner or manufacturer of the car. If they perceive a benefit in damaging the car (staging insurance claims or testing safety features, for example), the car obliges just as readily as when they perceive a benefit in not damaging it.

For all three. Surely avoiding a crash benefits both owner and car, and indirectly benefits the manufacturer (a better reputation for their cars).
You are assuming that the car wants to go anywhere in the first place. If maintenance, efficiency, and damage control are its priorities, it would adopt the behaviour of a vintage car owner (idle for about an hour and do a lap or two of the block once or twice a week).

For both. The car has a greater risk of damage and sees lower efficiency in stop-and-go traffic. The catalytic converter (if so equipped) runs cooler, allowing water to condense in the tailpipe and leading to faster corrosion.
Once again, if the car is but a transparent medium to your priorities, regardless of whether they are to go around the block or the commute trip from hell, this is exactly what you would expect.
The problem is that the protocols benefit first and foremost (and only) the owner (or manufacturer). Anything else is but a secondary consequence of serving this primary requirement.

Again, not so secret; the protocol is well known.
Where?

See above.
The car's experiences are dictated by your requirements (as collated by the manufacturer from examining your behaviour). The car displays more adaptive behaviour than your shoes only in that a second party is assisting you in moving about in it more efficiently.

... and by the car's experiences. So the programming they get initially determines their initial behaviour, and that behaviour evolves with time. Sort of like people.
Yes, if it keeps up: the CEO (although probably just getting a fine or being kicked out on grounds of professional negligence) or the tech advisor. It really depends on how savvy their legal team is and how high-profile their accidents become. Imagine if the victim were a woman with a pram walking slowly over a pedestrian crossing in broad daylight ... or if the culprit were found to be a misanthropic nutcase on the tech team, secretly incorporating homicidal software into the build.

So you claim that someone instructed the Uber vehicle to strike a pedestrian? I didn't know that! I guess someone is going to jail for first-degree murder.
Actually, there is a debate going on at the moment about the legal consequences of the AI in a car being forced to decide between jeopardising the safety of a passenger and that of a pedestrian (as in swerving into a brick wall to avoid hitting someone). Some technical professionals have expressed personal reluctance to work in this field until this legal conundrum is resolved (there is something about being potentially on the receiving end of a multi-million-dollar lawsuit several years down the track that dampens any employee incentives on offer).
One thing everyone can agree on (recalcitrant sciforums posters aside), however, is that the benefit to the car itself (whether it comes out with more damage, less damage, or total damage) carries zero weight in assessing the merits and pitfalls of that scenario.