I don't think they're all that interesting. I don't know a single person who has ever had to make the trolley decision in their driving career; there's no reason to expect a computer will be presented with the situation any more often than a human will.
You're taking it too literally. The general principle is that, at some point, an accident will occur that injures or kills a person. Very hard, very costly questions will then be raised about whether the car's AI did the best possible thing to prevent it.
What if braking to avoid a collision results in someone plowing into you from behind, killing those occupants? Did the car stop too fast? Did it not react in time?
What if the best thing to do is swerve, but it results in a rollover, killing an occupant in the driverless car? Is the AI meant to protect the occupants? Or bystanders?
This is qualitatively different from a human making these decisions, because humans are not programmed; they do the best they can in the circumstances, and they can't apply their driving skills with machine precision. But in an accident involving a driverless car, the victims will go after the car company over how its
policies - which are manifest in the car's programming - resulted in the collateral damage.
It's very hard to prove in court that a person made a mistake in how they handled a collision - but it's quite easy to analyze software and find fault with it, especially since its behavior is repeatable and demonstrable in independent tests.
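To make that concrete, here's a purely hypothetical sketch (every name, weight, and number below is invented for illustration) of how an explicit occupant-vs-bystander priority could end up written into avoidance logic - exactly the kind of deterministic, replayable artifact that plaintiffs' experts could pull apart line by line:

```python
# Hypothetical sketch only: all identifiers and weights are invented
# to illustrate how a priority policy might appear in avoidance code.

OCCUPANT_WEIGHT = 1.0   # priority given to people inside the vehicle
BYSTANDER_WEIGHT = 0.8  # priority given to people outside it

def maneuver_cost(occupant_risk: float, bystander_risk: float) -> float:
    """Weighted cost of a maneuver; risks are estimated probabilities
    of serious injury for each group, in [0, 1]. Lower is preferred."""
    return OCCUPANT_WEIGHT * occupant_risk + BYSTANDER_WEIGHT * bystander_risk

def choose_maneuver(candidates: dict[str, tuple[float, float]]) -> str:
    """Pick the candidate maneuver with the lowest weighted cost."""
    return min(candidates, key=lambda name: maneuver_cost(*candidates[name]))

# Unlike a split-second human judgment, this decision can be replayed
# and inspected after the fact, with the weights in plain view.
print(choose_maneuver({
    "brake_hard": (0.30, 0.05),   # (occupant_risk, bystander_risk)
    "swerve_left": (0.60, 0.01),
}))
```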
"Your programming is designed to save the occupants at the expense of the bystanders!" they can cry.
This eventuality is inevitable.