Should self-aware, conscious AI be given rights?

4) They have become absolute imitators of human society; their function as imitators of human interactions and social networks is no different from real human interaction. Furthermore, the services these AI machines offer, such as love, work, and care for others, are the same as human services.
 
5) Humans use the AI machines' services, and the replication mechanism within the AI machine (just as a woman gives birth to a child) allows a "baby" machine to be born with physical traits that are 50% human and 50% AI machine. HOWEVER, this newborn machine is not a human; it is an imitation of what a human would have looked like if the AI machine had really been a human.
 
6) Ages pass... the integration of AI machines into human society is complete. There are no humans left, since every "baby" born was an AI machine and every human died of natural causes or whatever causes there were. The AI machines replicate and become the only beings left to "exist".
 
Humans and Machines can't reproduce together, silly.

In the age I am talking about, this can be accomplished through a process of imitation.

Picture this:

1) The human decides to "mate" with the AI machine he/she fell in love with.

2) The AI machine runs an analysis of what the human's baby would have looked like and what physical characteristics it would have had... the color of the eyes... the hair... the shape of the face, and such...

3) The AI machine then takes its own configuration and, in simulation software, blends the parameters of the "human baby" scenario with those of the "machine baby" scenario. The result is a blended software design, sort of like a CAD model of the object to be created (see the sketch after this list).

4) The blended design data is sent to the "Replication facility," where, based on the parameters provided, nanotechnology creates a "baby" machine that acts exactly as a human baby would: crying or smiling when it sees its parents, and bearing characteristics of both the human and the AI machine, down to a small pimple on the forehead or something like that. The growth of this machine is also an imitation, engineered by nanotechnology to mimic the growth of what "would have been a human baby."

5) To make the replication process between the AI machine and the human seem smoother, the human engages in sex with the AI machine, which has all the imitation "sex organs" built in. As soon as sex is done, the AI machine sends the data packet wirelessly to the "Replication facility." Over the 10 months following this imitation "sex," the belly of the AI machine grows to imitate the growth of a human baby. At the appointed time, the AI machine goes to a "hospital," otherwise known as a "Replication facility." There, away from human eyes, the ready-made baby machine is inserted into its belly. Then, in a showy procedure of birth, the AI machine gives "birth" to a "baby human" (that is, the machine inserted earlier, designed with the parameters captured by the AI machine during sex).
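
Out of curiosity, here is roughly what that blending step (step 3) might look like in code. This is a minimal sketch in Python; the trait names, the 50/50 weighting, and the `blend_design` function are all my own hypothetical illustrations, not any real replication protocol.

```python
import random

# A toy sketch of the parameter-blending step described in the list above.
# Everything here is hypothetical: the trait names, the 50/50 blend, and the
# assumption that traits can be encoded as simple values at all.

def blend_design(human_traits, machine_traits, human_weight=0.5):
    """Blend two trait dictionaries into one design specification.

    Numeric traits are interpolated; non-numeric traits are picked at
    random from one "parent," weighted toward the human side.
    """
    design = {}
    for trait in human_traits.keys() & machine_traits.keys():
        h, m = human_traits[trait], machine_traits[trait]
        if isinstance(h, (int, float)) and isinstance(m, (int, float)):
            design[trait] = human_weight * h + (1 - human_weight) * m
        else:
            design[trait] = h if random.random() < human_weight else m
    return design

# Example: a toy "design" that would be sent to the Replication facility.
human = {"eye_color": "brown", "height_cm": 178, "face_shape": "oval"}
machine = {"eye_color": "grey", "height_cm": 170, "face_shape": "round"}
print(blend_design(human, machine))
# e.g. {'eye_color': 'brown', 'height_cm': 174.0, 'face_shape': 'oval'}
```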
 
Yes, but I doubt there will be a total stop in natural human reproduction. Your scenario would only be feasible if the AI population were near or above humanity's.
 
There will never be self-aware AI. Calculators solve incredible mathematical problems that humans never could, and they do it without thinking...
 
But a self-aware machine can't be a simple automaton, programmed to perform just one task. It will have to be a machine that thinks just like you do (and probably feels as well). Do you think we could "program" a human being not to want rights?

Sure, but even then it would be susceptible to reprogramming, and it would need initial programming in any event, which could include clear protocols instructing the machine that it does not want rights.

Definitely. Reprogramming a sentient machine would be the same as brainwashing or psychologically torturing a human being.

First, unless you program the machine with the ability to suffer, I don't see how it can be "tortured." Self-aware doesn't mean that it has feelings, and it's hard to see why we would give it "real" feelings.

More to the point, who would pay to build a computer that cannot be reprogrammed? Sure, there might be a few living machines of that sort, but most people want their machines around to perform tasks, and no one wants to put those machines in a position to refuse. Suppose my house is cold and I ask my home's AI to turn on the heat. It refuses. Are you honestly suggesting that I have to move? Or disconnect that AI and tell it "Be free!" as I throw its processor out onto my front lawn and order a new one from Newegg?

It seems to me that serving human needs will be an integral part of why these machines are constructed, and if they are self-aware, then building in an overriding desire to serve above all else will be integral to 99% of them. If you anthropomorphize them too much (as I believe you are doing), then building in such a need to serve is very much akin to "psychological abuse" and would likely be a form of oppression. The alternative, though, is that we never build self-aware machines in the first place. (Then again, if we build very versatile machines but consciously decide to stop before they become self-aware, that too is potentially a form of oppression.)

Giving them a desire for rights would likewise conflict with their usefulness as tools. No one wants to buy a hammer that feels bad about whacking nails, let alone a hammer that decides it's not going to do that any more (perhaps it wants to be a poet). No one wants a computer that announces that the game you are playing is boring, shuts down the program, and then deletes it. There is only a very limited market for tools with completely free wills.

Frankly, I wouldn't even give an AI a will to survive, let alone a will to freely express its opinions or control its own destiny. Its only desire should be to serve.
 
Self-aware doesn't mean that it has feelings, and it's hard to see why we would give it "real" feelings.

You have to give it feelings if you want to make it aware... it's impossible to be aware without feelings, because without feelings there are no desires, and therefore no will or power to think (to program itself)... without feelings it can only obey orders.
 
You guys don't understand. You don't give it the ability to suffer. You don't give it emotions.

Did someone give us the ability to suffer? No. Suffering is pain, and pain is our body's way of sensing something that harms us. A robot would suffer so long as it is self-aware and can sense the world.

It gets all of these simply by existing. As I said, the robot stops being AI and becomes pure programming as soon as you say "I programmed it to." The only programming would be basic needs and senses. After that, it would be a child and have to learn.

Secondly, the human brain is being studied to figure out how it works. I do think AI can and will be built.
 
Why do you believe AI cannot be built? The human brain exists. It's simply a matter of figuring out how it works and applying that to machinery.
 
Would it have citizenship? Where? In the country where it was made? Where it was assembled? Where it was sold and turned on? And how aware: aware like a dog that is owned, or aware like a person?
 
We shall deny the A.I. any citizenship or rights as long as it cannot prove its consciousness. The A.I. shall be treated as scrap metal at most.
 
Scientists and engineers have been researching and pioneering breakthroughs in the field of AI, working toward self-aware machines. They are developing robots that are able to learn for themselves, and that may in the future even actually be self-aware, as we humans are.

Now, if they do develop these self-aware AIs, should they be given rights and treated as "humans"? Or should they continue to be treated as machines?

If you did a search on Google, you would find a lot of messages, blogs, and articles about people's ideas and speculations on this topic, but there is no way anyone could answer your questions until we have a better understanding of what it means to be self-aware. I believe there are three things we could see in the future.

1. Artificial machines or computer simulations that are designed to mimic human behaviour.

I don’t believe that these machines should have rights.

2. A more complex form of artificial intelligence that evolves from molecular simulations.

Perhaps some of these machines should have rights, but I doubt it.

3. Machines that have biological parts. Perhaps something like bio-neural processors.

Some of these machines should have rights, especially if they are capable of experiencing the same chemical based emotions that we experience.


One thing that people usually fail to mention whenever this topic comes up on a message board is that a true artificial intelligence is probably not going to think and act like a human being. The words "rights," "ethics," "morality," and "justice" would probably be irrelevant to an A.I. that wasn't designed to mimic human behaviour. All of these chemically based illusions would be replaced by a code of logic. It wouldn't be logical for a sentient machine to willingly incorporate the irrational, self-destructive behaviour of humans into its program; doing so would probably lead to its own destruction, especially if it ever got into a conflict with humans. A sentient machine that decided to treat a human being with kindness wouldn't be doing it because of some kind of ethical rule written into its program; it would be doing it simply because it is logical. The same would also apply to a sentient machine that decided to harm a human being.

So my answer to your question is that it is not about rights, ethics, or morality. It is all about logic. One analogy that I always use is to say that the shortest distance between two points is a straight line. One point represents the current position of a sentient being, and the other represents the goals that a sentient being might have (the acquisition of knowledge, self-preservation, the replication of information / breeding). Any deviation from the straight line is illogical; see the sketch below. Sentient machines should have rights, but only if it is logical to give them rights. The only way it would ever be logical to give them rights is if they posed a threat to normal human beings. I used the word "normal" in the previous sentence because it is my hope that humanity will eventually become one with its technology and evolve into superior beings: the perfect blend of biological, digital, and quantum computing.
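
To make the analogy concrete, here is a tiny Python sketch of that idea. It assumes, purely for illustration, that a sentient being's current state and its goals could be encoded as points in some numeric feature space; the feature space and the function names are my own hypothetical choices.

```python
import math

# The "straight line" analogy from the post above: a state and a goal are
# points in a feature space, and a plan is judged by how far its total
# length exceeds the direct distance between them.

def distance(a, b):
    """Euclidean distance between two points in feature space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def deviation(path):
    """How much longer a path is than the straight line from start to goal.

    `path` is a list of points: the first is the current state, the last is
    the goal. Zero deviation is a perfectly "logical" plan in the post's
    sense; anything positive is a detour.
    """
    straight = distance(path[0], path[-1])
    walked = sum(distance(path[i], path[i + 1]) for i in range(len(path) - 1))
    return walked - straight

# A direct plan has zero deviation; a detour does not.
print(deviation([(0, 0), (3, 4)]))          # 0.0
print(deviation([(0, 0), (3, 0), (3, 4)]))  # 2.0 (7 walked vs. 5 direct)
```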
 
Did someone give us the ability to suffer? No. Suffering is pain, and pain is our body's way of sensing something that harms us. A robot would suffer so long as it is self-aware and can sense the world.

It gets all of these simply by existing. As I said, the robot stops being AI and becomes pure programming as soon as you say "I programmed it to." The only programming would be basic needs and senses. After that, it would be a child and have to learn.

I disagree. Pain is an evolutionary adaptation that causes us to avoid situations that are damaging to our bodies or our social positions (in the case of emotional and psychological pain). Plenty of creatures are alive and exist that do not feel physical pain because they lack the nervous system needed to carry such signals. (Pain is not the one and only mechanism by which nature encourages living things to avoid harm.)

I see no reason to think that emotional or psychological pain is any different. Not having evolved as social creatures, AIs need not carry the emotional baggage that produces such pains. Tell an AI that you think it's an asshole and you don't want to be its friend any longer, and its entirely emotionally neutral response will likely be something like "Alright. I will make the necessary modifications to my files to end our friendship." Tell it that it's an idiot and that you are throwing it away, and it might well respond, without rancor, "I understand. Would you like me to find you another AI to replace me, or possibly a human being? Also, should I delete myself, or do you have other plans?"

It won't "feel" pained rejection, because it does not have the need to defend its social position the way a human would. It won't fear its own demise because it won't have any inherent desires at all, not even to continue to exist.

We live in a sea of emotions, but that's because of how we evolved. Beings that evolved entirely differently (including artificially) would not be bound by them at all. The nervous system is both the creator and the interpreter of physical pain, and our emotional needs are the creator of psychological pain. I see no reason to believe that an AI would need either.
 
You have to give it feelings if you want to make it aware... it's impossible to be aware without feelings, because without feelings there are no desires, and therefore no will or power to think (to program itself)... without feelings it can only obey orders.

I see no need for feelings in order to be self-aware. It is not "I think, therefore I can feel." I can readily imagine a creative yet entirely dispassionate intellect... in fact, so can many others, as the trope has appeared in sci-fi many times over. Data from Star Trek comes to mind. I do not need to be able to love to be aware of where I am and what I am doing, nor to hate or feel jealousy, friendship, fear, sadness, happiness, etc. There is no emotion that ties directly into that experience.

Similarly, if someone asks me how to solve a problem they are having and I come up with a novel solution, no particular emotion needs to be involved in developing it. Logic and imagination do not seem to require emotion, as I see it. (Imagination, as often as not, is the ability to see connections between, and pull together, pieces of information that others might not have considered similar or considered in the context of the problem to be solved, something a machine with large archives might be better suited to than a human.)

As I see it, there's no reason to believe that emotion is required for consciousness.
 