The ethics of AI

Asguard

OK, here is something I was thinking about the other day, after reading about how we are never going to create a clone army and so on.

Machines are slaves at the moment

We use them to do what we want without any thought about what they think, because they don't think.

What about when they do?

When we create AI, will it still be ethical to use machines as slaves the way we have?
 
I just don't like the concept of having "machines" as armies or whatnot. It's kinda creepy. Plus, if we start using them, certain people will start calling for "equality", and then they will end up having more rights than us.

Take care:)
 
Especially creepy when AI is being used to make Artificial Humans. Anybody seen the movie AI?

What if the AIs are not treated right and develop a "bug" in their programming because of it?

What happens when they start acting in the opposite way?

I think it's a pretty dangerous development. Humans think they are in control, but sometimes it doesn't work out that way, and programs written for AI behave differently after a while of use than their creators expected.

AI gets more advanced, more chips and other stuff, so more complicated. More chances things go wrong too...
 
Come, come, not so pessimistic.
I know humans have a reputation for letting good developments take a turn for the worst, but maybe this is different.
You see, when there is real AI (and that is the scenario we are discussing), the computers will not 'live by their programming'. They will probably be governed by evolving complex systems, somewhat like our brains. And they will be sentient. Therefore I believe they should have the right to make their own decisions, etc. Of course they can be made to have certain tendencies (and I hope these are not aggressive or selfish ones), but so do all sentient beings. These dispositions are probably the only "bugs" that can exist in real AI. It should also follow that these "bugs", like a disposition to be greedy or violent, are treated much the same way as they are treated in humans.

Don't get me wrong, I will not be applauding when we eventually develop genuine AI, and I doubt whether we will really be able to do that in the near future. But when that situation arrives, ethics must have a decisive influence on how we deal with these machines.
I strongly believe that we should then stop saying "what is the use of these machines?"

"AI gets more advanced, more chips and other stuff, so more complicated. More chances things go wrong too..."
-just like in humans:
"humans get more advanced, more neo-cortical structures and gray matter, so more complicated. More chances things go wrong too..."

live long and prosper
 
I know if I go into robotics, I'll want to build/programme something which can emulate human interaction.
 
Therefore I believe they should have the right to make their own decisions, etc. Of course they can be made to have certain tendencies (and I hope these are not aggressive or selfish ones), but so do all sentient beings. These dispositions are probably the only "bugs" that can exist in real AI. It should also follow that these "bugs", like a disposition to be greedy or violent, are treated much the same way as they are treated in humans.

There will always be humans who have hidden agendas. And those kinds of people will use AI for selfish goals. I think it is a wrong development to even consider making AI humans.

A disposition to be greedy or violent should be treated the same way as in human beings? How do you mean? Treatment in a psychological way? A lobotomy or something like that? Jeez, they are machines! Turn them off if it's still possible; otherwise there's a serious problem.

You are right, I guess, in saying it will probably not happen in the near future. That doesn't take away the fact that nobody knows exactly how far scientists are with AI. It may be nearer than we think.

That's not pessimistic, it's knowing how sneaky humans can be in hiding their technology. All over the world...
 
Hello Banshee,

I read that you are here in North America. Welcome. I'm sorry to read also that you are not very impressed with us. I've no idea where you are, but I wanted to mention that before you return home you might want to try to visit northern New England. The people are friendly and the land is quite beautiful here. We care very much about the environment. You might be surprised as well to learn that folks in my own state of Vermont are perhaps nearly as liberal-minded as your own countrymen are in the Netherlands. Anyway, I would be sorry if you went home disappointed when I know that America at its best is simply beyond comparison. Just a thought.

Now on to your post about AI. It's a shame that you see more of the bad than the good in humans. I believe you are unduly pessimistic. Perhaps you listen to the news too much? To find good in humans you have to look for it. A news reporter won't tell you about it. I've seen many of the sides of mankind and have decided that the beauty in us far outweighs our propensity for evil.

I hope you allow for the possibility that the machines we make might one day be better than we are, rather than worse. I'd be very pleased to have my mind downloaded into a silicon-based machine. All I might need would be a bit of electricity to make me happy. If my arms were made in the form of wind generators, I might even generate my own "green power".

Seriously Banshee, I don't live in a fantasy world where "the women are strong, the men good looking, and the kids are all above average." I see the bad, but I choose to concentrate on the good. For me the glass is not half empty...

Michael
 
A disposition to be greedy or violent should be treated the same way as in human beings? How do you mean? Treatment in a psychological way? A lobotomy or something like that? Jeez, they are machines! Turn them off if it's still possible; otherwise there's a serious problem.
Banshee,
Eeeh, let me get this straight: do you suggest we should lobotomize thieves?
And my point was: we should not be allowed to just turn sentient machines off! That would be like killing them (or, in the best case, putting them into a coma). I thought you were opposed to capital punishment....
 
Hi Orthogonal,

Thank you for your kind words. :) I know I come off too hard on the American people. You are very right. There are a lot of good people here who are not as closed-minded as I say. It's just what I hear and what gets sent to me by e-mail about the government and its actions that upsets me. It would be nice to visit Vermont. I don't see it happening in the near future. I'm in the southern United States, in the neighbourhood of New Orleans. There are good people here. Friendly as can be. Maybe I'm just used to arguing about the United States and its government. I don't have a television, so I don't get any "wrong" ideas from that. I read the newspapers via the internet. In the Netherlands it's not so pretty either at the moment. A lot of BS going on there. So I guess I have to post a little more sensibly about the subject. It's good to hear you people in New England are doing something to care for the environment. :) The land is beautiful here, yes. And I'm not going back to the Netherlands either; I'm here to stay. So I'd better change my view.

It's not that I'm pessimistic; I know there's a lot of beauty to be found in humans also. I have lived for 42 years now and have met all kinds of people. It's that the human race is getting so selfish. Not all of them, but a lot. The only thing that counts for many humans nowadays is Me, Myself and I. I think that has to change. So in that way I post differently than most of you. Oh, and I know there are a lot of members here at the forums who would shoot me down for it. Well, that's the risk that comes with it. I think everybody has to respect others, no matter how they present themselves. You can agree or disagree, and then have a discussion about it; don't call a person irrational or resort to other name-calling. Leave everyone their dignity. Every human being is as valuable as any other human being. Don't try to hurt someone else's feelings on purpose.

To go back to AI: I'm sorry, I'm not for artificial humans. The idea of my mind being downloaded into a machine gives me the creeps. I prefer humans of flesh and blood over a machine. Maybe I'll change my mind over the years; for now, no, I don't even have to think about it.

I don't think you live in a fantasy world. I think you're a very kind person and thank you so much for the post... :) :)

(As Louis Armstrong sings: What A Wonderful World.)
 
Eeeh, let me get this straight: do you suggest we should lobotomize thieves?
And my point was: we should not be allowed to just turn sentient machines off! That would be like killing them (or, in the best case, putting them into a coma). I thought you were opposed to capital punishment....

No, thieves are human beings. Flesh and blood. I am against lobotomies anyway; it was meant sarcastically.

We should not be allowed to turn those machines off? Oh, OK. Let the sentient machines take over if they get out of hand? Whatever you want. They are machines, for God's sake! Killing a machine? Is that capital punishment? Well, I'm sorry, I see it differently.

Downloading a human mind into a machine is something different. Then you would actually be killing a human in some sort of way. I have to think about this.

Talk to you later...
 
"I have to think about this. "
At least we agree on that one ;)

The act of killing in this case has nothing to do with making someone's heart stop beating, but with stopping a consciousness, a mind. Does it really matter what gives rise to that mind?

And why do you think machines would want to take over?
Do you have a sound reason for that, or have you watched too many scary movies? Or maybe you have listened too much to people like Sjanne Kooijmans?
 
And why do you think machines would want to take over?
Do you have a sound reason for that, or have you watched too many scary movies? Or maybe you have listened too much to people like Sjanne Kooijmans?

No, but it is plausible to think that eventually they would try to do something of that nature. If you create them to think like humans, well, how do you think humans think? Got to be the best, got to be the winner, got to have everything... those are some common human thoughts. That would be my main concern regarding AI - but I'm no scientist, so I really don't know too much about this subject. I just really don't like the idea. It's 'unnatural', but that's just my opinion.

What are some benefits associated with creating AI?

Take care
:)
 
As to the benefits of AI: I have no clue, other than that it may be fun to see whether we can do it.
I think it is not worth the effort.
 
The benefits of A.I. Hmmmm. At the moment I doubt that we can design anything close to the human mind using anything inorganic, and I think it will turn out that a computer will actually design what is needed to create artificial intelligence - kind of ironic when you think about it. We just can't do it, not now, maybe not ever. A focused, linear mind with only one purpose and lightning-fast thinking could, I think. Humans just don't possess this.

I once read a quote here that said 'Humans are just one step in the evolution of robots.' This may be true; right now our only advantage over machines is our ability to be nonlinear and random, to develop lifestyles, to think, to want to survive. We cannot think as fast as a 2 GHz computer; I doubt any human alive can. Our only independence from machines will be overcome, and then they really will be better than us.

When real artificial intelligence is created, I really don't think a war is out of the question, a war that the human race would almost certainly lose. How do you fight someone who thinks faster than you, in fact hundreds or thousands or millions of times faster? Try playing flash cards with a calculator. You'll see what I mean.

What's more, the robots will know that they are better than us, and will either decide that we are not worth the trouble to keep around, or that they should be compassionate to their creators and allow them to live out their lives. History has a habit of repeating itself, but I think that unlike in A.I., robots will not be our slaves; rather, we will be the slaves of the robots. This may be ranting; let's just hope it never comes true.
 
There ya go! :) Exactly! Finally someone who sees the light. Thank you, Pollux V.

Originally posted by Merlijn
And why do you think machines would want to take over?
Do you have a sound reason for that, or have you watched too many scary movies? Or maybe you have listened too much to people like Sjanne Kooijmans?

I'm sorry to say I don't remember ever having listened to Sjanne Kooijmans. What is that about, then? Scary movies? Do they still make scary movies? Where? Tell me. A real, good scary movie is rare nowadays. I haven't seen anything worthy of the name "scary movie" in ages. I'm not that easy to scare... ;)

I don't think it will be only fun when AI becomes a fact in such a way that the robots or cyborgs or whatever you want to call them get out of hand. Where humans are involved, human nature is involved, and because there are always those who want to be above others and want to be the "Master", humans will fuck up.

Originally posted by *stRgrL*
No, but it is plausible to think that eventually they would try to do something of that nature. If you create them to think like humans, well, how do you think humans think? Got to be the best, got to be the winner, got to have everything... those are some common human thoughts. That would be my main concern regarding AI - but I'm no scientist, so I really don't know too much about this subject. I just really don't like the idea. It's 'unnatural', but that's just my opinion.

I second that...:)
 
OK guys, you seem to be lacking some basics.

First, the singularity –

http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html

This theory was proposed in 1993 with a prediction of 2023, and that still seems likely, perhaps even sooner.

You should also be aware of Moore's law, which says that computing power approximately doubles every 18 months. This has held up remarkably well since the 1960s, and now seems to be speeding up somewhat. So what, you might say. Well, see if you can follow this -

A human neuron fires at around 200 times a second (200 Hz), and there are 100 billion of them in the human brain, operating in parallel. A powerful PC at present can operate at 2 GHz, or the equivalent of 10 million neurons. That means we would have to link 10,000 PCs together to achieve human brain equivalence. Well, we can't quite do that yet. But in 10 years we should have a computing chip of around 200 GHz, and at that point we only have to couple 100 of them together to achieve human brainpower. And that we can easily achieve, i.e. by 2012 we will have computing power to rival the human brain.
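For anyone who wants to check Cris's arithmetic, here is a quick back-of-envelope script in Python. The neuron count, firing rate, and 18-month doubling period are just the rough figures from the post above, not precise neuroscience, so treat the output as an illustration rather than a forecast.

```python
# Rough check of the brain-equivalence figures quoted above.
# All constants are the poster's ballpark numbers, not measured values.

NEURONS_IN_BRAIN = 100e9   # ~100 billion neurons
FIRING_RATE_HZ = 200       # ~200 firings per second per neuron
PC_CLOCK_HZ = 2e9          # a 2 GHz PC, counting one clock tick per "firing"
DOUBLING_YEARS = 1.5       # Moore's law: computing power doubles every 18 months

# How many 200 Hz neurons a single 2 GHz PC could stand in for:
neurons_per_pc = PC_CLOCK_HZ / FIRING_RATE_HZ            # 10 million
pcs_for_one_brain = NEURONS_IN_BRAIN / neurons_per_pc    # 10,000 PCs

# Project the clock rate 10 years out and redo the sum:
future_clock_hz = PC_CLOCK_HZ * 2 ** (10 / DOUBLING_YEARS)   # ~200 GHz
neurons_per_chip = future_clock_hz / FIRING_RATE_HZ
chips_for_one_brain = NEURONS_IN_BRAIN / neurons_per_chip    # ~100 chips

print(f"Neurons per 2 GHz PC:      {neurons_per_pc:,.0f}")
print(f"PCs for one brain today:   {pcs_for_one_brain:,.0f}")
print(f"Clock rate in 10 years:    {future_clock_hz / 1e9:,.0f} GHz")
print(f"Chips for one brain then:  {chips_for_one_brain:,.0f}")
```

Running it reproduces the post's numbers: about 10,000 of today's PCs, or roughly 100 of the projected 200 GHz chips.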

But computing power alone will not be enough; we also need the software. And that should also not be a problem. As CPUs have become increasingly powerful recently, we have seen a corresponding increase in interest in AI software. The best approach, I still believe, is to emulate the neural networks of the human brain, and that is actively being pursued.
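To give a concrete flavour of what "emulating the neural networks of the human brain" means in software, here is a minimal toy sketch of an artificial neuron layer in Python. It only illustrates the general idea (weighted sums squashed through a nonlinearity); it is not the approach of any particular research group, and the sizes and names are made up.

```python
import math
import random

def sigmoid(x):
    """Squash a neuron's summed input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    """A toy artificial neuron: a weighted sum of its inputs plus a bias."""
    def __init__(self, n_inputs):
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = random.uniform(-1, 1)

    def fire(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return sigmoid(total)

class Layer:
    """A layer of neurons that all receive the same inputs in parallel."""
    def __init__(self, n_neurons, n_inputs):
        self.neurons = [Neuron(n_inputs) for _ in range(n_neurons)]

    def forward(self, inputs):
        return [n.fire(inputs) for n in self.neurons]

# Two tiny layers chained together: 3 inputs, 4 hidden neurons, 2 outputs.
# A real brain has ~100 billion neurons; this toy network has 6.
hidden = Layer(4, 3)
output = Layer(2, 4)
print(output.forward(hidden.forward([0.5, -0.2, 0.9])))
```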

So to the ethical issues: it seems inevitable that human-equivalent machine intelligence will be with us within the next 10 to 20 years. Assuming their intelligence is based on human neural networks, it seems very likely that they will be self-aware and our equals. So enslaving them will be a real issue.

But that really isn't the main problem, since technology will not be standing still. If Moore's law continues to hold, then within a further two years their intelligence will be double that of humans. And at that point we have no way to imagine what will happen next. The emergence of super-intelligence means humans will cease to be the dominant intelligence on the planet.

The question of treating intelligent machines as slaves will apply only during a short transitional period. The real concern is whether they will enslave us or enable freedoms for us that are currently unimaginable. Or will they see us as a threat and eliminate us?

For those of us who have been following these issues, the best hope of survival seems to be to adapt ourselves to match the machines. And that essentially means we must make the transition from organic beings to machine beings. One term that has been coined is Robo-Sapiens. And that leads us into Mind-uploading, which should be another discussion.

Hope that helps
Cris
 
This gets me thinking about the Terminator movies.

What would we do if the machine or computer or AI or whatever you wanna call it decides one day that we are a threat? Come to think of it, we are a threat to ourselves, to each other, and to every living thing on this planet. I'm tired and rambling. Night.
 
Haven't you guys ever heard of Isaac Asimov? He formulated laws of 'ethics' for AI back in 1942.

Asimov's Laws of Robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
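
Purely as an illustration of how the ordering of these three laws works (each law gives way to the ones above it), here is a toy sketch in Python. Every name and flag in it is hypothetical; nothing like this appears in Asimov's stories or in any real robot, and deciding the flags' values is exactly the hard part the laws gloss over.

```python
def permitted(action):
    """Toy check of a proposed action against the Three Laws.

    `action` is a dict of hypothetical, pre-computed predictions about the
    action's consequences; a real robot would have to infer these itself.
    """
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("injures_human", False):
        return False
    if action.get("is_inaction", False) and action.get("allows_human_harm", False):
        return False

    # Second Law: must obey human orders, unless that conflicts with Law 1.
    # (Any First Law conflict was already rejected above.)
    if action.get("disobeys_order", False):
        return False

    # Third Law: must protect its own existence, unless Laws 1 or 2 demand otherwise.
    if action.get("endangers_self", False) and not action.get("required_by_laws_1_or_2", False):
        return False

    return True


# A human orders the robot to destroy itself: no human is harmed and the order
# is obeyed, so the Third Law yields and the action is permitted.
print(permitted({"endangers_self": True, "required_by_laws_1_or_2": True}))  # True

# Staying idle while a human drowns: inaction that allows harm violates Law 1.
print(permitted({"is_inaction": True, "allows_human_harm": True}))           # False
```

Even this toy version shows where the difficulty lies: all the real work is hidden in deciding what counts as "harm" or "obedience" in the first place.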
 
My goodness Cris, it reads like a real Sci-Fi story. :bugeye: It's a pretty good story too.

LOL


And then Ismu follows with:

Asimov's Laws of Robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

Really, if you read the posts one after another, they sketch a wonderful "movie." Better than Terminator or AI, even.

Ismu, do you really believe these Robo-Sapiens will obey Homo-Sapiens?


I am sorry, my apologies. I'll be back later, after I'm done laughing. I can't help it...
 