Should self-aware, conscious AI be given rights?

Norsefire

Salam Shalom Salom
Registered Senior Member
Scientists and engineers have been researching and pioneering new breakthroughs in the field of AI, or self-aware machines. They are developing robots which are able to learn for themselves, and which perhaps in the future may even be truly self-aware, like we humans are.

Now, if they do develop these self-aware AIs, should they be given rights and treated as "humans"? Or should they continue to be treated as machines?
 
Their true self-awareness has to be proven mathematically, or at least shown to approximate the self-awareness of humans... before we speak of rights.
 
Yes, if a machine thinks exactly like a human, and can reason like one, why should it not have rights?
 
Yes, if a machine thinks exactly like a human, and can reason like one, why should it not have rights?

Who's to say the machine does not imitate self-awareness? ...for the sake of these rights... a programmed imitation by its own creators (humans).
 
Who's to say the machine does not imitate self-awareness? ...for the sake of these rights... a programmed imitation by its own creators (humans).

If you cannot prove it's imitated, then you must assume that it really is self-aware.
 
If you cannot prove it's imitated, then you must assume that it really is self-aware.

If you cannot prove Q, then P must be right? :shrug:

Then you would have to prove that P is right.

You cannot assume that P is right with no proof.


----


You cannot assume an AI has true consciousness without proof that it is indeed the true consciousness that we as humans experience.
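Spelling out the logic being disputed here (my gloss, not the posters' words): take Q = "the self-awareness is imitated" and P = "the machine is genuinely self-aware", so that Q is roughly not-P. The move being rejected is the classic argument from ignorance: the invalid inference $\nvdash Q \implies P$. From "Q is unprovable" one may conclude neither $\lnot Q$ nor $P$; absence of proof is not proof of absence.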
 
I would say that you should program the thing to not want special rights. We will develop these hypothetical AIs to solve problems for humans, so to the extent that the rights of an AI conflict with those of a human, the desires of the AI must necessarily conflict in some way with what the human wants to do.

Simple solution: reprogram the AI so that it wants what the human wants, and then there is no conflict.

Would anyone suggest that a computer, sentient or not, should have the right not to be reprogrammed?
 
I would argue that AIs must prove for themselves that they are conscious, because as far as I see it they are metal junk and should be scrapped for better models to serve humanity's needs.
 
If you cannot prove Q, then P must be right? :shrug:

Then you would have to prove that P is right.

You cannot assume that P is right with no proof.


----


You cannot assume an AI has true consciousness without proof that it is indeed the true consciousness that we as humans experience.

How do we prove that human beings have "true" consciousness? There is no mathematical equation to prove it, so how can we truly know?
 
How do we prove that human beings have "true" consciousness? There is no mathematical equation to prove it, so how can we truly know?

Easy: we assume humans' consciousness to be true consciousness from here on.

In fact... from here on, I, draqon of white stars encapsulating this very essence... well, yeah, I the draqon proclaim humans to have the true "true consciousness" by which all other claims of consciousness shall be judged.
 
Now, if they do develop these self-aware AIs, should they be given rights and treated as "humans"? Or should they continue to be treated as machines?

This discussion assumes that the AI is actually self-aware, so let's just stick to that realm of debate for now. ;)
 
Scientists and engineers have been researching and pioneering new breakthroughs in the field of AI, or self-aware machines. They are developing robots which are able to learn for themselves, and which perhaps in the future may even be truly self-aware, like we humans are.

Now, if they do develop these self-aware AIs, should they be given rights and treated as "humans"? Or should they continue to be treated as machines?

As far as I'm concerned, the question is completely moot - I do not believe such machines can ever be built. They will be able to approximate self-awareness but never fully achieve it.
 
Sorry, but I don't deal in impossible assumptions. That's for daydreamers and fiction writers.

So, with that I will exit this discussion.

Impossible assumptions?

Why do we exist and perceive ourselves? Or are you not assuming anything there?

And self-awareness already exists in machines... they have complex algorithms on the Spirit and Opportunity rovers that let them deduce where to go based on sensory data... what to do, which energy blocks to turn on or turn off... and even how to adapt to changes in the environment! And that's now. Refer to this document: http://marstech.jpl.nasa.gov/publications/EstlinICRA2007Final.pdf
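For illustration, here is a toy sketch of the kind of rule-based decision loop such a rover might run. The sensor names and thresholds are made up by me; the real flight software in the linked paper is far more elaborate.

Code:
# Toy sketch of an autonomous rover's decision loop.
# Sensor names and thresholds are invented for illustration;
# the real JPL software is far more sophisticated.

def choose_action(sensors):
    """Pick the rover's next action from current sensor readings."""
    if sensors["battery_level"] < 0.2:
        return "power down non-essential instruments"
    if sensors["dust_on_panels"] > 0.5:
        return "reorient solar panels"
    if sensors["obstacle_ahead"]:
        return "plan detour around obstacle"
    return "continue toward science target"

readings = {"battery_level": 0.15, "dust_on_panels": 0.1, "obstacle_ahead": False}
print(choose_action(readings))  # -> "power down non-essential instruments"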
 
Sorry, but I don't deal in impossible assumptions. That's for daydreamers and fiction writers.

So, with that I will exit this discussion.

I'm sorry that you lack the imagination to continue in a friendly "what if" discussion. Oh well :shrug:
 
I'm sorry that you lack the imagination to continue in a friendly "what if" discussion. Oh well :shrug:

It's not a matter of imagination at all. Indulging in some "what ifs" is simply a waste of time, effort and mental energy. Some are worth it - this one is not.
 
Impossible assumptions?

Why do we exist and perceive ourselves? Or are you not assuming anything there?

And self-awareness already exists in machines... they have complex algorithms on the Spirit and Opportunity rovers that let them deduce where to go based on sensory data... what to do, which energy blocks to turn on or turn off... and even how to adapt to changes in the environment! And that's now. Refer to this document: http://marstech.jpl.nasa.gov/publications/EstlinICRA2007Final.pdf

Silly boy! That's NOT intelligence - simply good programming. There isn't even the slightest degree of intelligence involved there.

(I gather that the likes of you would be impressed by a thermostat that turns on the heat when it gets too cold, or by sensors that turn on security lights when it becomes dark. What you are talking about is just an extension of those simple mechanical devices - not intelligence at all.)
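The point can be made concrete: a thermostat's entire "decision making" fits in a couple of lines (a hypothetical sketch, not any real device's firmware), and structurally it is the same if/else logic as the rover sketch above, only shorter.

Code:
# A thermostat's whole "mind", hypothetically: the same
# if/else structure as the rover sketch above, just shorter.

def thermostat(temperature_c, setpoint_c=20.0):
    """Return True to turn the heat on, False to turn it off."""
    return temperature_c < setpoint_c

print(thermostat(18.5))  # True: too cold, heat on
print(thermostat(22.0))  # False: warm enough, heat off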
 
Read-Only, engineers in Japan are working on "robots" that are able to work on their own. The only ability they are given by their creators is the ability to learn; they must make their own judgements and learn as a child does.

Now, if a brain can be self-aware, there could be a way to basically make a metallic brain.

Under the assumption that it does happen, however, should that AI be given rights as an individual?
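To make "the ability to learn" concrete, here is a generic trial-and-error learning sketch (my illustration, not the Japanese team's actual method). It starts knowing nothing and improves its choices purely from feedback, which is the core of what "learning for itself" means.

Code:
import random

# Toy trial-and-error learner: it starts with no preferences and
# improves purely from reward feedback. A generic sketch, not any
# particular lab's method.

actions = ["left", "right", "forward"]
value = {a: 0.0 for a in actions}   # learned estimate of each action's worth

def reward(action):
    # The "world": only 'forward' pays off. The learner never sees this rule.
    return 1.0 if action == "forward" else 0.0

for step in range(1000):
    # Mostly exploit what's been learned, sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(value, key=value.get)
    # Nudge the estimate toward the observed reward.
    value[action] += 0.1 * (reward(action) - value[action])

print(value)  # 'forward' ends up valued highest, learned from feedback alone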
 
Scientists and engineers have been researching and pioneering new breakthroughs in the field of AI, or self-aware machines. They are developing robots which are able to learn for themselves, and which perhaps in the future may even be truly self-aware, like we humans are.

Now, if they do develop these self-aware AIs, should they be given rights and treated as "humans"? Or should they continue to be treated as machines?

Following your assumption (above), what possible reason could you give for not granting these machines basic human rights?

Their true self-awareness has to be proven mathematically, or at least shown to approximate the self-awareness of humans... before we speak of rights.

But there's no mathematical proof that you are self-aware, so why do you have rights? Obviously, mathematical proof has never been required for rights before, so why the change?

Who's to say the machine does not imitate self-awareness? ...for the sake of these rights... a programmed imitation by its own creators (humans).

If the machine passes the Turing test, on what basis would you deprive it of rights? You can prove nothing about the machine's self-awareness that does not also hold for human beings.
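For readers unfamiliar with it, the Turing test is just a blind conversation: a judge chats with a hidden human and a hidden machine and must guess which is which. A minimal sketch of that protocol follows; the participants here are trivial placeholders, not a real chatbot.

Code:
import random

# Minimal sketch of the Turing test protocol: a judge converses
# blindly with a hidden human and a hidden machine and must guess
# which is which. The participants are placeholders.

def human(question):
    return "I'd have to think about that."

def machine(question):
    return "I'd have to think about that."  # a perfect imitator, by construction

def coin_flip_judge(transcript):
    # Faced with identical replies, a judge can do no better than chance.
    return random.choice([0, 1])

def turing_test(judge, rounds=5):
    # Hide who is who behind anonymous positions 0 and 1.
    participants = [("human", human), ("machine", machine)]
    random.shuffle(participants)
    transcript = []
    for i in range(rounds):
        question = f"Question {i}: what are you feeling right now?"
        replies = [fn(question) for _, fn in participants]
        transcript.append((question, replies))
    guess = judge(transcript)                       # judge picks the machine's position
    actual = [name for name, _ in participants].index("machine")
    return guess == actual                          # True = machine caught out

print(turing_test(coin_flip_judge))  # 50/50: the judge is reduced to guessing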

I would say that you should program the thing to not want special rights.

But a self-aware machine can't be a simple automaton, programmed to perform just one task. It will have to be a machine that thinks just like you do (and probably feels as well). Do you think we could "program" a human being not to want rights?

Would anyone suggest that a computer, sentient or not, should have the right not to be reprogrammed?

Definitely. Reprogramming a sentient machine would be the same as brainwashing or psychologically torturing a human being.
 
I did not state that a sentient machine shouldn't have rights; I simply did not give my opinion, because I usually don't when I'm the creator of a topic involving opinion-based discussion.

Anyway, this here is fascinating:

Self-awareness in robots is being investigated by Junichi Takeno at Meiji University in Japan, who claims to have developed a robot that can discriminate between its own image in a mirror and another robot; this claim has already been reviewed (Takeno, Inaba & Suzuki 2005).

"Strong AI" is AI which can be self-aware and concious. Studies in how a Human brain works can lead to advances in AI development.
We would also need to give such a machine senses, obviously, and therefore a machine COULD feel pain. If it is sentient, then it can if it has senses. Because after all what is pain? When something harms you, right? So couldn't a machine technically feel pain if it is sentient and it knows that something harmed it?
 