The self would refer to a thinking consciousness that has become aware of itself and of its existence among its surroundings, as Descartes suggested. As to whether that self is a self-deceiving illusion or a genuine reflection of its senses and experience: as long as the "self" acknowledges that it exists within itself, it fulfills that requirement.
In this context, the AI would have to be programmed to have experienced human growth and emotional development. It seems to me that morals and ethics are strongly influenced by human emotion and feeling. Replicating these human attributes would be fairly difficult to program, yet would be a necessary prerequisite to forming "ethical" decisions. I am not doubting the possibility of an AI being able to make seemingly ethical decisions; in fact, I am quite fascinated by the idea. I am just pointing out that while getting to point C may not seem so complicated, point B may be the roadblock: the factors that enable the AI to make ethical decisions would be strongly shaped by the programmer of the AI.
That is, in the end the AI would only "appear" to make its own ethical decisions, or have the illusion of doing so, when in fact it has taken on the persona, principles, and ethics of its human programmer.