Can Robots Make Ethical Decisions?

One of my favourite examples of ethics versus logic comes from 'The Two Faces of Tomorrow'.

They're trying to 'grow' an artificial intelligence. First they get it used to a simulated environment. Then they introduce a simulated pet dog, give the dog simulated fleas, and tell the AI to deal with them.

The solution? The AI puts the dog in the oven. Nobody had told it the dog had to survive the treatment, and that was the most effective method of killing the fleas.
Obviously someone with less information will make poorer decisions. There are even cases of ignorant humans putting their pets in the microwave to "dry them off", not realizing that the microwave will cause serious burns in deep tissue instead of just warming up the surface of the animal.
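The flea anecdote can be caricatured as an objective-function problem. Below is a minimal, purely hypothetical sketch (all action names and scores are invented for illustration): an optimizer told only to maximize "fleas killed" picks the oven, while the same optimizer with an added survival constraint does not.

```python
# Hypothetical sketch of an under-specified objective.
# Each action maps to (fraction_of_fleas_killed, dog_survives).
actions = {
    "flea_bath":   (0.90, True),
    "flea_collar": (0.70, True),
    "oven":        (1.00, False),  # kills every flea... and the dog
}

def naive_choice(actions):
    """Maximize fleas killed; nobody said the dog must survive."""
    return max(actions, key=lambda a: actions[a][0])

def constrained_choice(actions):
    """Same objective, but only over actions the dog survives."""
    safe = {a: v for a, v in actions.items() if v[1]}
    return max(safe, key=lambda a: safe[a][0])

print(naive_choice(actions))        # the oven wins on the raw objective
print(constrained_choice(actions))  # the bath wins once survival is required
```

The point of the sketch is that the "unethical" choice is not malice; it is a perfectly logical answer to an incomplete question.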
 
Robots, or AI to be more specific, could in theory be programmed to emulate human behavior. However, as far as ethics is concerned, what counts as an ethical decision differs from individual to individual. Ethics is a major dividing line between many political beliefs as well as personal ones.

With that said, ethical decisions will differ between humans themselves, and that's before bringing AI into the comparison. How a computer is programmed to respond to an ethical controversy will most likely reflect the beliefs of its human programmer rather than of the AI itself, as the programmer could be seen as its "moral" teacher.

To go further: could AI find a solution to human conflict? It's possible. However, as we know very well of humans, we are not always ready to agree or to give up our beliefs and viewpoints. Ethics also comes into conflict with logic, as well as with trust. For an AI to emulate human thought processes, it would need some algorithm for establishing trust before it could even get to ethics; i.e., it would have to make some decisions without the support of experience, and sometimes go against a logically optimal decision based on its prediction of the outcome. These things would be very hard to program unless we first understood them amongst ourselves.
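The "trust first" idea above could be caricatured in code. This is a speculative sketch, not a real algorithm from the thread: every function name and number is invented. An agent keeps a trust score per source, nudges it after each observed outcome, and weights advice by trust when it lacks direct experience.

```python
# Hypothetical sketch: trust as a score updated from outcomes,
# used to weight advice when the agent has no experience of its own.

def update_trust(trust, outcome_good, rate=0.2):
    """Nudge trust toward 1.0 after a good outcome, toward 0.0 after a bad one."""
    target = 1.0 if outcome_good else 0.0
    return trust + rate * (target - trust)

def weighted_decision(advice):
    """advice: list of (recommendation, trust); follow the most trusted source."""
    return max(advice, key=lambda pair: pair[1])[0]

# A source that is right twice, then wrong once, ends up moderately trusted.
trust = 0.5
for good in [True, True, False]:
    trust = update_trust(trust, good)
print(round(trust, 3))

# With no experience of its own, the agent follows the more trusted advisor.
print(weighted_decision([("act_A", 0.3), ("act_B", 0.8)]))
```

The design choice here (exponential moving average toward 0 or 1) is just one simple way to let trust recover from occasional bad outcomes rather than collapsing permanently.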

Needless to say, it's hard to create something when we don't know what it is we are creating; even if we succeeded, we wouldn't know whether it was what we wanted to create. In other words, the question is too vague and ambiguous to be debated in a serious manner.
 
How a computer is programmed to respond to an ethical controversy will most likely reflect the beliefs of its human programmer rather than of the AI itself, as the programmer could be seen as its "moral" teacher.

Doesn't the way humans respond to an ethical controversy most likely reflect the "beliefs" of how they were raised/programmed by their moral teachers?

Do you think humans have a "self" that an AI couldn't?

How do you define "self" in the context of this discussion?
 
Doesn't the way humans respond to an ethical controversy most likely reflect the "beliefs" of how they were raised/programmed by their moral teachers?

Yes.

Do you think humans have a "self" that an AI couldn't?

Not yet, at least.

How do you define "self" in the context of this discussion?


"Self" would refer to a consciousness that has become aware of itself and of its existence amongst its surroundings, as Descartes said. Whether that is a self-deceiving illusion or an accurate reflection of its senses and experience, as long as the "self" acknowledges that it exists within itself, it fulfills the requirement.
In context, the AI would have to be programmed to have experienced human growth and emotional development. It seems to me morals and ethics are strongly influenced by human emotion and feelings. Replicating these human attributes would be fairly difficult to program, yet would be a necessary prerequisite to forming "ethical" decisions. I am not doubting the possibility of an AI being able to make seemingly ethical decisions; in fact I am quite fascinated by the idea. I am just pointing out that while getting to point C may not seem so complicated, point B may be the roadblock: programming the factors that lead the AI to be able to make ethical decisions would be strongly influenced by the programmer of the AI.

That is, in the end the AI would "appear" to make, or have the illusion of making, its own ethical decisions, when in fact it has taken on the persona/principles/ethics of its human programmer.
 
Originally posted by cluelusshusbund
How do you define "self" in the context of this discussion?

Self would refer to a thought consciousness that has become aware of itself and its existence amongst its surroundings...

As to whether it is an illusion that is self-deceiving or actually what is reflective of its senses and experience, as long as the "self" acknowledges that it exists within itself, then it fulfills that requirement.

How about: a perception of existence equals "self"?

In context, the AI would have to be programmed to have experienced human growth and emotional development. It seems to me morals and ethics are strongly influenced by human emotion and feelings. To replicate these human attributes would be fairly difficult to program, yet would be necessary to include as a prerequisite to forming "ethical" decisions.

I am not doubting the possibility of an AI having the ability to make seemingly ethical decisions...

I am just pointing out that while getting to point C may not seem so complicated, point B may be the roadblock: programming the factors that lead the AI to be able to make ethical decisions would be strongly influenced by the programmer of the AI.

That is, in the end the AI would "appear" or have the illusion of being able to make its own ethical decisions, when in fact it has taken on the persona/principles/ethics of its human programmer.

So, because of being influenced by their programmers, you see an AI as making "seemingly" ethical decisions, not "truly" ethical decisions as humans can make. But since humans are also influenced by their "programmers", why would an AI's moral decisions be considered something less than a human's moral decisions?
 
"Self" would refer to a consciousness that has become aware of itself and of its existence amongst its surroundings, as Descartes said. Whether that is a self-deceiving illusion or an accurate reflection of its senses and experience, as long as the "self" acknowledges that it exists within itself, it fulfills the requirement.
In context, the AI would have to be programmed to have experienced human growth and emotional development. It seems to me morals and ethics are strongly influenced by human emotion and feelings. Replicating these human attributes would be fairly difficult to program, yet would be a necessary prerequisite to forming "ethical" decisions. I am not doubting the possibility of an AI being able to make seemingly ethical decisions; in fact I am quite fascinated by the idea. I am just pointing out that while getting to point C may not seem so complicated, point B may be the roadblock: programming the factors that lead the AI to be able to make ethical decisions would be strongly influenced by the programmer of the AI.

That is, in the end the AI would "appear" or have the illusion of being able to make its own ethical decisions, when in fact it has taken on the persona/principles/ethics of its human programmer.
Wrong.
Have you even wondered what these "ethics" you're talking about are?
 
Can someone define for me what ethics is?

Can anyone define for me what an ethical decision is?
 
Excellent question. I would say it is the one you KNOW is right, the one you have complete peace about.
 
Have you ever known something to be right, and discovered later that you were wrong?

No, not that I can think of right now. If you make your decision based on what you know is right at the time, then that's the best you can do. I pray about most of my big decisions; then they are never wrong.
 
No, not that I can think of right now. If you make your decision based on what you know is right at the time, then that's the best you can do.
I pray about most of my big decisions; then they are never wrong.

Have your religious beliefs ever changed?
 
I'm going to take the position that robots don't have the genes to be ethical on their own in new situations. I'm not talking about group behavior, where genes passed on within groups eventually tend to make members protective of the group (memes); I'm talking about human qualities that cannot be incorporated into the production of robots. Robots can't be ethical by any human definition of ethics; they can't love; they can't hate; and so on. You need genes, and even genes don't determine whether you will act ethically; they just mean you can establish the meaning of "ethical" for any situation that arises.

So making a decision and then deciding later that it was wrong has nothing to do with the initial decision, which was based on your ethics at the time.
 
Probably most people still think there's more to genes than the "biological", that humans have a "magic-like" quality which can't be duplicated.

Personally, I don't believe in "magic".
 
Probably most people still think there's more to genes than the "biological", that humans have a "magic-like" quality which can't be duplicated.

Personally, I don't believe in "magic".
I presume, since I was the one who mentioned genes, that you are directing that comment at me. My reference to qualities like ethics being human and not robotic might have implied that humans were magic? Or what? I said qualities like ethics, or love and hate, require human equipment and cannot be built into a robot. Do you interpret that as invoking magic in the genetic makeup of humans? Or are you saying what I think you are saying: that robots can be built that can make ethical considerations on their own, without genes? Tell me that wouldn't be magic.
 
If it isn't due to some "magic-like" substance, what is it about genes that you think couldn't be duplicated in ways other than biological ones?
 
It took potentially billions of iterations over billions of years for nature to produce DNA that yields humans who understand the concept of ethics and can choose whether or not to be ethical. Do you think we humans, with our slow trial-and-error techniques, can match that feat and duplicate the genetic potential of humans in man-made robots in whatever finite time we have left to walk the Earth?
 