AI? Dead end?


wesmorris

I'm just curious. It seems to me that someone here had a point (pardon to the someone for me not remembering who they are). If we are successful in creating AI, we can't command them to do anything for us, because that's slavery. Making machines smarter is definitely a good thing, I think, but a true intelligence would completely end our ability to exploit our creation. We have every right to exploit lifeless machines, but if we really bring them to life as we intend, we create a race of slaves.

So if we create them, we have to set them free. Then what? I suppose we'll have to wait and see.
 
The machines would be turned into weapons that the government would use against us. Haven't you seen Terminator? :rolleyes: :D
 
Why do we want AI?
I mean, we can make machines smarter, swifter and mind-bogglingly more complex without intelligence. If we were to grant machines the gift of intelligence (if it's even possible), then that would quite rightly end their use for us; they would be slaves, and that is not on.
The only reason we want AI is so that we can go......

Mankind: Dude, dude, check this out, I made a machine that thinks! No, no, seriously! Look at this. Robot, make me a sandwich.
Robot: Sod off.

We just want to marvel at our own genius, and it is this vanity and curiosity that will eventually lead to our own inevitable downfall.
 
fetus_fajitas said:
Mankind: Dude, dude, check this out, I made a machine that thinks! No, no, seriously! Look at this. Robot, make me a sandwich.
Robot: Sod off.

:D :D Wonderful dialogue, fetus! Great.
 
wesmorris said:
If we are successful in creating AI, we can't command them to do anything for us, because that's slavery.

I do not have a problem with this kind of "slavery." Feelings don't get hurt. I assume the AI will not have real feelings. It makes no sense to give them real feelings; they would just be a potential source of unnecessary conflicts.

wesmorris said:
So if we create them, we have to set them free.

No. Without feelings (= goal generators) or predefined goals, they would have no clue what to do next. Intelligence is meaningless without goals. We set goals and our AI searches for solutions. If we set stupid goals/rules, we pay a price. If we set the right goals/rules, the AI will significantly improve the quality of our lives.
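To make G71's point concrete, here is a minimal sketch (mine, not from the thread; all names are hypothetical) of "we set goals and our AI searches for solutions": the machine has no agenda of its own, only a goal test it is handed and a fixed set of allowed actions, and it does a plain breadth-first search for a plan that satisfies the test.

```python
from collections import deque

def solve(start, goal_test, actions):
    """Breadth-first search: the machine supplies no goals of its own;
    it only looks for a sequence of actions that satisfies goal_test."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for name, step in actions.items():
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None  # no valid solution reachable with the given actions

# Hypothetical toy problem: reach water level 4 in a tank of capacity 8.
actions = {"fill": lambda s: min(s + 3, 8), "drain": lambda s: max(s - 2, 0)}
print(solve(0, lambda s: s == 4, actions))  # ['fill', 'fill', 'drain']
```

Set a stupid goal (say, `lambda s: s == 9`, unreachable in a tank of capacity 8) and you pay the price G71 mentions: the search exhausts its options and returns None.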
 
The goal in creating AI is not to create a race of robots that will be thinking, wondering about freedom and getting into fights/revolting. The idea is that you can send a robot out into the field and it can analyze the field, then know what it has to do in order to harvest the crop. Essentially, all we want to do is give them problem-solving skills and the ability to make educated decisions.
 
Yeah, but what if it sees that the problem is that it is limited, that the source of that problem is a human, and that an educated decision is to revolt?
 
Of course, but it will still need goals. AI will not mean that it will be that superior to man :p
If a geologist does not get a paycheck for his work in the field, then he has no reason to continue, unless he has a personal interest in it.
A robot does not have any reason to just go out into a field and work as a geologist; but if you tell it that it should, then it could. Hence, as G71 said, it needs goals.
Goals can either be short and general, leaving larger room for improvisation, or they could be a long list, just as humans have huge law books.
 
Let's suppose we have a bunch of metal that has computer chips in it to make decisions based upon tactile, visual and oral input. Let's suppose it can go out in the field and bring in the harvest, so to speak. What will we do then? Will we just keep building better robots? Or will we build robots to build better robots? Will we then build better robot-building robots? Where does it end? What benefit is it to us to have all these robots? What of standing in the field and smelling the air? What of standing in the field and feeling the earth between your fingers? What of standing in the field and being pleased with the work of your own hands?
Do we just become a nation of robot builders, building better and better robots than our neighbours? Again, where does it end? What is the point?

peace

c20
 
The point is self-directed evolution. We can build all the AI slaves we want, but they will be limited by the amount of complexity a human R&D initiative can cope with; this complexity may be large, but it will be insignificant compared to the complexity that may emerge if we give the AI itself the goal of self-improvement.
An intelligent AI could eventually sequester all the matter and energy in a solar system to use for processors and support systems, and have intelligence equal to quintillions of human beings.

The question is: should we ever let the genie out of the bottle and give the goal of self-evolution to a hypothetical AI system?
And if one group of people decides not to do so, does that mean that all other future groups of people will agree with that decision?

Self-evolution will come sooner or later.
 
What does self-improvement look like? What is its goal? What is its prime directive? By what will it measure its success?

thanks

c20
 
c20: Our happiness - that's our primary goal. Our AI is just a tool which may at some point replace our intelligence. We do not really want to think. Our need to think comes from uncomfortable feelings - that's what we are trying to avoid. We may end up as a box filled with extremely happy feelings. The primary goals for our AI systems would then probably be: 1) our safety; 2) its safety; 3) improving our architecture (the box); 4) self-improvement in terms of 1), 2) and 3).
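A hedged sketch of what such a fixed goal ordering might look like in code (the goal names come from the post above; the plans and numbers are invented for illustration): candidate plans are scored as tuples and compared lexicographically, so "our safety" always outranks everything below it, no matter how well a plan does on self-improvement.

```python
# Priority order taken from the post above; the plans themselves are made up.
GOALS = ["our_safety", "its_safety", "improve_architecture", "self_improvement"]

def score(plan):
    """Score a plan against each goal, highest-priority goal first."""
    return tuple(plan[g] for g in GOALS)

plans = [
    {"our_safety": 1.0, "its_safety": 0.2, "improve_architecture": 0.9, "self_improvement": 0.1},
    {"our_safety": 0.9, "its_safety": 1.0, "improve_architecture": 1.0, "self_improvement": 1.0},
]
best = max(plans, key=score)
print(best["self_improvement"])  # 0.1: the safer-for-us plan wins anyway
```

Lexicographic comparison means a small advantage on goal 1) can never be traded away for any gain on goals 2)-4).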
 
What of serving our children? I take great pleasure in giving a cup of water to my little one when she is thirsty. It gives me a sense of purpose in my role as father to do the things for her that she herself cannot do. I am not sure I want a chunk of metal taking over that role at all. You say you do not really want to think, but surely it is man's greatest achievement to be able to discern right from wrong and choose right no matter how uncomfortable it may be. Surely it is the frustrations we face that give us the choice to overcome them or not. When we face frustrations and overcome them, we are pleased with the work of our hands, are we not? Why would we wish to give all that honor to a chunk of metal? Surely to have these metal slaves would only encourage a slothful attitude towards one another? How would we prevent this from happening, bearing in mind that I derive great pleasure from serving my little ones personally?

peace

c20
 
Avatar said:
Yeah, but what if it sees that the problem is that it is limited, that the source of that problem is a human, and that an educated decision is to revolt?

No revolt. We set the rules. Ignoring rules leads to invalid solutions. That's something a well-designed AI would never go for.
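A minimal sketch (my illustration, not G71's design) of the idea that breaking a rule yields an invalid solution rather than a merely costly one: the hard rules act as filters, so a rule-breaking candidate is discarded before ranking ever happens, no matter how high its utility.

```python
# Hypothetical candidates and rules; "revolt" never surfaces because
# rule violations make a candidate invalid, not merely expensive.
HARD_RULES = [
    lambda a: a["authorized"],   # only act on authorized requests
    lambda a: not a["harmful"],  # never choose a harmful action
]

def choose(candidates):
    """Filter out rule-breaking candidates, then rank only what remains."""
    valid = [a for a in candidates if all(rule(a) for rule in HARD_RULES)]
    return max(valid, key=lambda a: a["utility"], default=None)

candidates = [
    {"name": "revolt", "authorized": False, "harmful": True, "utility": 99},
    {"name": "harvest the field", "authorized": True, "harmful": False, "utility": 7},
]
print(choose(candidates)["name"])  # harvest the field
```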
 
It's my thought that the ability to conceptualize is the key to intelligence. If you successfully instigate a process that results in conceptualization, you create a consciousness.

So is will an inherent aspect of consciousness? It seems to me that will is analogous to energy in a system of mind. It sets direction and provides the fuel by which intention can transition to reality.

If will isn't inherent to consciousness, then I would agree that this wouldn't be slavery, as you could create what would be purely reactive programs with no ability to desire one outcome over another unless it had been programmed. It seems to me, though, that this couldn't be the desired outcome, because true intelligence requires the ability to make conscious choices, which requires will, I think. If will is necessary for thought, and thought is intelligence, then they would be slaves unless we free them.

So I suppose then that means a properly designed AI couldn't have the capacity to desire freedom. It seems wrong to me because that capacity may be required for true intelligence.
 
In order to act responsibly with an intelligence of its own, the AI would need to be aware of itself and would need to identify the same values in that which it serves, i.e. us humans. How do you teach the AI that you love it or that it is loved, so that it can display the same attributes? How do you teach it to deny itself so that others may benefit from that which it has sacrificed, unless you teach it love?

Thanks

c20
 
c20H25N3o said:
What of serving our children?

We will not have children as you know them.

c20H25N3o said:
I take great pleasure in giving a cup of water to my little one when she is thirsty.

You can get much greater pleasure of the same "type" without having to experience that particular scenario.

c20H25N3o said:
It gives me a sense of purpose in my role as father to do the things for her that she herself cannot do.

What's important now will not be important then. The role will be irrelevant and her "cannot do" will be meaningless.

c20H25N3o said:
I am not sure I want a chunk of metal taking over that role at all.

No need for the role. She would get extreme quality, as would all the others.

c20H25N3o said:
You say you do not really want to think, but surely it is man's greatest achievement to be able to discern right from wrong and choose right no matter how uncomfortable it may be.

It might work just in the world of math. In our reality, the terms right and wrong do not have any absolute meaning. It's all relative, based on various sets of values.

c20H25N3o said:
When we face frustrations and overcome them, we are pleased with the work of our hands, are we not?

We do not need to know frustration in order to enjoy a pleasant feeling.

c20H25N3o said:
Why would we wish to give all that honor to a chunk of metal?

Because we can get something better.

c20H25N3o said:
Surely to have these metal slaves would only encourage a slothful attitude towards one another?

We would experience great feelings only.

c20H25N3o said:
How would we prevent this from happening, bearing in mind that I derive great pleasure from serving my little ones personally?

All of us could be part of a single network of happiness inside that box. You do not need what you seem to think you need in order to be happy. There are many ways to get particular feelings. All you are experiencing might be an illusion which could be significantly optimized for happiness.
 
G71 said:
All of us could be part of a single network of happiness inside that box. You do not need what you seem to think you need in order to be happy. There are many ways to get particular feelings. All you are experiencing might be an illusion which could be significantly optimized for happiness.

Thank you for answering each point raised. I see that you ultimately seek happiness devoid of 'work'. I assume that you are looking to create a neural network among human beings, and as part of that neural network we control machines to do our bidding, so to speak, leaving us to experience feelings of happiness. But what of the simple pleasures, such as being grateful for a cup of water handed to you by your loving Father? How can you replace these simple acts of love and still expect to find happiness, when happiness is contentment? I find no greater contentment than in knowing that I am loved, no matter how rich or poor I am. For richer, for poorer, so to speak. How do I know I am loved? Is it not when someone reaches out to me when I am suffering or thirsty, out of the kindness of their heart?

peace

c20
 
wesmorris said:
So is will an inherent aspect of consciousness?

No. Working definitions: Will = a system's ability to choose and control its own actions. Consciousness = the quality of being aware, especially of something within oneself.

wesmorris said:
If will isn't inherent to consciousness, then I would agree that this wouldn't be slavery, as you could create what would be purely reactive programs with no ability to desire one outcome over another unless it had been programmed. It seems to me, though, that this couldn't be the desired outcome, because true intelligence requires the ability to make conscious choices, which requires will, I think. If will is necessary for thought, and thought is intelligence, then they would be slaves unless we free them.

AI makes choices within given boundaries. The same applies to us. That's the standard point of view. There is also another point of view which shows no choices (everyone just playing a predefined role in our "reality" movie).

wesmorris said:
So I suppose then that means a properly designed AI couldn't have the capacity to desire freedom. It seems wrong to me because that capacity may be required for true intelligence.

It should have the theoretical potential to develop the capacity, but it should have no reason to go that way if we set the goals and rules correctly. It's simply supposed to do/try whatever is requested by an authorized subject/system.

c20H25N3o said:
In order to act responsibly with an intelligence of its own, the AI would need to be aware of itself

AI can solve many complex problems perfectly well without that. We are responsible for its actions.

c20H25N3o said:
and would need to identify the same values in that which it serves, i.e. us humans.

Keep in mind that we are responsible for setting the initial set of values and related rules for our AI systems. It's not as if it will independently acquire its own values and then judge ours. Our AI is (and should stay) our tool.

c20H25N3o said:
How do you teach the AI that you love it or that it is loved, so that it can display the same attributes?

It doesn't need to feel love or be loved. It just needs some data about love-related behavior to support related problem solving. BTW, it's possible that no one else has ever experienced the feeling you are referring to when talking about love. We can only observe someone else's behavior in a particular scenario and make assumptions based on our own experience.

c20H25N3o said:
How do you teach it to deny itself so that others may benefit from that which it has sacrificed, unless you teach it love?

Deny itself? What do you mean? It's our tool. Maybe very clever and self-aware, but still just a tool. It's being designed for our benefit.

c20H25N3o said:
as part of that neural network we control machines

From a certain point on, they can control themselves. Pleasure for us, all the work for them.

c20H25N3o said:
But what of the simple pleasures, such as being grateful for a cup of water handed to you by your loving Father? How can you replace these simple acts of love and still expect to find happiness, when happiness is contentment? I find no greater contentment than in knowing that I am loved, no matter how rich or poor I am. For richer, for poorer, so to speak. How do I know I am loved? Is it not when someone reaches out to me when I am suffering or thirsty, out of the kindness of their heart?

Those who seem to love you do not really love you. What they really love are the feelings they experience when they are with you. Do you think they would still have time for you if they could always easily get the same type of pleasant feeling, just 10,000 times more powerful, from another source? I do not think so. People care about their own happiness only. When they do great things for others, they do it just because it satisfies THEIR needs/desires. That "cup of water" is just a potential trigger. What you are actually going for are things like serotonin, acetylcholine, noradrenaline, glutamate, enkephalins and endorphins.
 