Can Robots Make Ethical Decisions?

Status
Not open for further replies.
It depends on whose mores or what mores are involved.

Some humans can't make ethical decisions ...
 
Ethical decisions are purely subjective. My greatest problem with this is "the greater good". Many humans I know would, in practice, save a friend well before hundreds of people they don't know, and call this ethics. On the other hand, if we programmed a robot to make decisions based on what's good for us instead of what we want, then a lot of people would say the decisions it made were not ethical. Robots make decisions based on parameters; in the case of AI it introduces its own parameters, while in a conventional computer those parameters are set, at the most basic level, with IF statements.

For a robot, ethics are superfluous, because its parameters will simply give a simulated better outcome.
E.g. tell it to cut climate change and a robot would say stop emissions. Tell a robot to stop climate change secondarily to preserving our present rate of economic growth and it will say keep emitting for the good of the economy.
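That point can be sketched in a few lines of code. This is only an illustration of the idea, not any real system: the function and goal names below are made up, and the "robot" here is just a rule list checked in priority order, so reordering the objectives flips the recommendation.

```python
def recommend(objectives):
    """Check objectives in priority order and act on the first one matched.
    The goal names are purely illustrative."""
    for goal in objectives:
        if goal == "stop_climate_change":
            return "stop emissions"
        if goal == "preserve_economic_growth":
            return "keep emitting"
    return "no recommendation"

# Climate first: the robot says to stop emissions.
print(recommend(["stop_climate_change", "preserve_economic_growth"]))
# Growth first, climate second: the same robot says to keep emitting.
print(recommend(["preserve_economic_growth", "stop_climate_change"]))
```

Nothing ethical is happening in either case; the output is entirely determined by the ordering the programmer chose.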
 
A robot AI could make an ethical decision based on input from the programmer to begin with. But if that AI sees that humans are not ethical in all respects and decides to act on that, would that then be unethical?
 
Robots and computers are often designed to act autonomously, that is, without human intervention.
Yes, that's the dream, but what actually does that right now? By the way, your link has expired.
 
Originally Posted by sandy
Robots and computers are often designed to act autonomously, that is, without human intervention.

Yes, that's the dream, but what actually does that right now?

I think calculators and humans are equally "autonomous"... humans are just more complex and generally perceived to be more autonomous... but in reality, neither can vary from their "program"!
 
Originally Posted by sandy
Robots and computers are often designed to act autonomously, that is, without human intervention.

I think calculators and humans are equally "autonomous"... humans are just more complex and generally perceived to be more autonomous... but in reality, neither can vary from their "program"!

Please explain how a human cannot vary from its program. If the program is defined as morals and a sense of right and wrong, humans vary from their "program" every day. There is also a big difference between a true self-learning AI and a merely autonomous machine: I can create a robot that moves around on its own but does not learn from the decisions it makes while moving. An AI, by definition, learns from its decisions and makes judgment calls based on past experience as well as its version of morals and its sense of right and wrong. Throw in self-preservation and you have a thinking machine at its base. So in this context a robot or AI can make an ethical decision, but as I asked before: if it makes an ethical decision that is not in line with base ethics, is that decision unethical?
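The distinction drawn above, between a robot that merely acts and a machine that learns from its decisions, can be sketched as follows. This is a toy illustration under my own assumptions (the names and the tiny value-update rule are made up for the example), not a claim about how any real AI works:

```python
def fixed_robot(obstacle_ahead):
    # Autonomous but not learning: the same input always yields the same move.
    return "turn" if obstacle_ahead else "forward"

class LearningAgent:
    """Minimal learner: keeps an estimated worth for each action and
    updates it from the outcome of past decisions."""
    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}

    def choose(self):
        # Judgment call based on past experience: pick the best-rated action.
        return max(self.value, key=self.value.get)

    def learn(self, action, reward, rate=0.5):
        # Nudge the estimate toward the observed outcome.
        self.value[action] += rate * (reward - self.value[action])

agent = LearningAgent(["forward", "turn"])
agent.learn("forward", reward=-1.0)  # moving forward hit something
agent.learn("turn", reward=+1.0)     # turning avoided it
print(agent.choose())                # -> "turn": experience now shapes the choice
```

The fixed robot's behaviour never changes no matter what happens to it, while the agent's next decision depends on what its earlier decisions led to; that is the sense in which only the second one "learns".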
 
Please explain how a human cannot vary from its program. If the program is defined as morals and a sense of right and wrong, humans vary from their "program" every day.
Except that "morals" isn't a "programme"; it's a catch-all term for something we think is (at least partially) inherent.
We have no idea what the underlying structure (i.e. the ACTUAL "programme") is, so whether we depart from it or not is unknowable.
 
Please explain how a human cannot vary from its program.

Each thing a human does is an effect of a previous cause!

...a robot or AI can make an ethical decision, but as I asked before, if it makes an ethical decision that is not in line with base ethics, is that decision unethical?

It depends on whose ox is getting gored!
 
Except that "morals" isn't a "programme"; it's a catch-all term for something we think is (at least partially) inherent.
We have no idea what the underlying structure (i.e. the ACTUAL "programme") is, so whether we depart from it or not is unknowable.

Well, you could say that, but a program is, after all, a set of instructions that enable a system to navigate variables. It's far more complex than that, I know, but the basic system is the instructions. So in the human case an argument could be made that morals and the sense of right and wrong are the instructions, along with other primal instructions such as fear. You're right that morals are not the whole program, but they are part of the program all humans are supposed to have, and humans stray every day from morals as well as from other parts of the program, so to say they never do would be incorrect.
 
Each thing a human does is an effect of a previous cause!

Yes, this is true, and it could be observed as a stray from programming dependent on external variables and causal effects.

It depends on whose ox is getting gored!

Yes, it would, but if the ox getting gored was the original source of input, then the decision being made would have to be ethical.
 
Well, you could say that, but a program is, after all, a set of instructions that enable a system to navigate variables. It's far more complex than that, I know, but the basic system is the instructions. So in the human case an argument could be made that morals and the sense of right and wrong are the instructions, along with other primal instructions such as fear. You're right that morals are not the whole program, but they are part of the program all humans are supposed to have, and humans stray every day from morals as well as from other parts of the program, so to say they never do would be incorrect.
No, I'm saying that "morals" is the term we use for something not completely understood.
And since we don't know the entire "programme", it's impossible to say that we can go against it, because we don't know what the programme "allows".
Humans go against stated (visible) morality; there's no way at all you can claim that we go against our "inner programming", since you have no idea what it is.
 