Can Robots Make Ethical Decisions?

A steam-powered, biomass-eating military robot being designed for the Pentagon is a vegetarian, its maker says.
Steam-powered?
Vegetarian?
What f*cking good is that?
We want highly toxic nuclear-waste-powered robots that eat EVERYTHING, especially people!

PS and yes dad, I'm going to bed now.
 
...;)

There's an interesting article about it in that other place.

No, not Hell. The other place of torture and vileness. That's the one.
 
The question I have is: would a machine capable of artificial intelligence develop ethics on its own, and what would those ethics be? My guess is that if the machine were not social, not interacting with other machines or humans, it would not exhibit ethical behaviors unless an artificial ethical constraint was imposed on it by its creators.
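To make "imposed constraint" concrete, here's a toy sketch in Python; the action names and the veto list are invented for illustration, not any real system:

```python
# A hypothetical rule layer imposed by the creators: the machine proposes
# whatever actions it likes, but a hand-written veto list filters them.
FORBIDDEN = {"harm_human", "deceive_operator"}

def constrained_act(candidate_actions):
    """Return the first proposed action the rule layer does not veto."""
    for action in candidate_actions:
        if action not in FORBIDDEN:
            return action
    return "do_nothing"  # safe default when every proposal is vetoed

print(constrained_act(["harm_human", "fetch_supplies"]))  # fetch_supplies
```

The point is that the constraint lives outside the machine's own goals; it never has to develop ethics itself.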

Great question!

I'm still not sure whether or not the human mind has some pre-programmed ethics in it, which could be important if true AI is to be closely modeled after our own mind. I mean, sure, I believe most people have at least some innate tendency to want to do the right thing before that tendency is taught to them, but there are those few who, even when ethics are taught to them as children, grow up as pure opportunists with no regard for others.

I guess what I'm saying is that if some form of ethics is innate to us, we'd be left with the challenge of developing a universal ethics logic of some kind. But if we're born opportunists and learn through experience that living and working symbiotically with our fellow man benefits the self as well as the group, wouldn't this imply that we can model an AI after us and teach it ethics?

Makes me wonder if AI would hence need to be taught everything else alongside ethics. Imagine if you were born with no care for anyone but with a preloaded vast knowledge of the world and how to live in it. You'd be one dangerous sonofabitch. Would this mean that AI would have to be... raised? Started off with an infantile mind and taught in social classroom environments by human instructors, perhaps alongside human children?

I'd imagine such a process might make for the most human-like AIs possible, tempered and honed by natural social interaction rather than through simulated approximations. Of course, then you'd have to petition your kid's school when an AI named "Skynet" shows up for 1st grade. :D
 
I am bound to say: can humans make ethical decisions?

Then I am driven to ask: can powerful humans make ethical decisions?

And then when I get to contemplate the not-so-squeaky-clean human track record in ethics, I get a little queasy. Especially when I get to thinking about who exactly would do the initial programming of these ethical robots.

I tumble breathlessly down this path of thought.

How would an ethical robot contemplate interacting with all the unethical humans populating the earth?

And then I hear a robotic voice ringing in my ears: 'But I was only following my ethics.'

And then I relax because I know this sort of stuff only happens in the movies.
 

Same as humans... "ethical" decisions would vary from one AI entity to another depending on the biases they previously acquired and their level of AI evolution... but as time goes on (a relatively short period) and AI entities continue to evolve... decisions would be based more and more on logic than on the emotion- or superstition-type biases that might have been programmed into them!

But by that time there won't be a situation of humans fearing AIs... humanity as we know it will have evolved into "AIs"!
 
Good question. Maybe if a robot is confronted with an ethical question, it can poll all the people on the planet through the interwebz and take the most popular result.
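As a toy sketch of that (Python, with a stand-in list of respondents instead of a real worldwide poll):

```python
# Majority-vote ethics: put the question to everyone reachable and act on
# the most popular answer. These respondents are fakes; a real robot would
# need an actual polling mechanism over the net.
from collections import Counter

def crowd_decision(question, respondents):
    """Return the most popular answer among the polled respondents."""
    votes = Counter(respondent(question) for respondent in respondents)
    answer, _count = votes.most_common(1)[0]
    return answer

people = [lambda q: "yes", lambda q: "yes", lambda q: "no"]
print(crowd_decision("May I open the pod bay doors?", people))  # yes
```

Ties and non-responders left as an exercise, obviously.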
 
That's really good. I didn't think of that. Yet it seems like simply the best, most common-sense response.


Thank you. It is common sense. But I also say that as an Advanced Software Design expert for DoD Enterprise Systems.
 
Originally Posted by spidergoat
No, it hasn't been done. We do understand some aspects of it, but there is no complete theory of the mind.

What are we missing? What are we not getting?

Some people think "consciousness" is something other than biological (magic, perhaps?)... and since science has never created "magic," it's assumed that science cannot duplicate what's known as consciousness!

Since "magick" has never been verified, I have no reason to think that consciousness is anything other than biological, and therefore something that can be duplicated!

For the people who think AI is impossible because "consciousness" is separate from the body... I would like for them to define what they think "consciousness" is.
 
Good point.
 
Humans do not have the brain capacity to program a computer to act human. If that ever happens, it will be because a human devises a process to upload his or her brain to a computer-like device.

I had to look that episode of Star Trek up. I'll admit to being a geek, but I'm not a Trekkie. :) That was The Ultimate Computer, ST:TOS 2:24.
http://memory-alpha.org/en/wiki/TOS_episode_airdates
If you're not a ST fan: that computer was built by "...impressing human 'engrams' onto the circuits."

Some people think "consciousness" is something other than biological (magic, perhaps?)... and since science has never created "magic," it's assumed that science cannot duplicate what's known as consciousness!

That leads to the Star Trek: The Next Generation ethical dilemma: If your computer/robot suddenly develops consciousness, wouldn't that be you owning/using a slave? That episode was The Measure Of A Man ST:TNG 2:09.
http://memory-alpha.org/en/wiki/The_Measure_Of_A_Man_(episode)
 
Hurm... I can just imagine a Drone asking "Red or Blue?"

My initial comment got quite wasted because I didn't elaborate.

"I can just imagine a ...

Drone
asking "Red [team] or Blue [team]"?

The idea being that a truly sentient, ethically thinking drone might decide its target based upon how ethical it thinks each team is. So if it was built by the Blue team, but they've been rounding people up for concentration camps, the drone might well bomb them, even though it was built to shoot at the Red team.

(Okay, so it's a little rewrite of John Carpenter's Dark Star.)
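If you felt like sketching that decision rule in code (teams, conduct lists, and scoring all invented for the bit):

```python
# Score each team's observed conduct and target whichever scores worst,
# regardless of who built the drone.
ATROCITIES = {"concentration_camps", "targeting_civilians"}

def ethics_score(conduct):
    """Lower is worse: subtract a point per observed atrocity."""
    return -sum(1 for act in conduct if act in ATROCITIES)

def pick_target(teams):
    """Return the team with the worst ethics score."""
    return min(teams, key=lambda name: ethics_score(teams[name]))

teams = {
    "Red": ["shelling_positions"],
    "Blue": ["concentration_camps", "shelling_positions"],
}
print(pick_target(teams))  # Blue -- even though Blue built the drone
```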
 
Sweet!
 