3 Laws Unsafe


AdaptationExecuter

Registered Member
Visitors to this forum might find the new site 3 Laws Unsafe interesting:

In anticipation of 20th Century Fox's July 16th release of I, Robot, the Singularity Institute announces “3 Laws Unsafe” (www.asimovlaws.com). “3 Laws Unsafe” explores the problems presented by Isaac Asimov’s Three Laws of Robotics, the principles intended for ensuring that robots help, but never harm, humans. The Three Laws are widely known and are often taken seriously as reasonable solutions for guiding future AI. But are they truly reasonable? “3 Laws Unsafe” addresses this question.

The site contains various interesting articles that expand on the problems in using an approach based on Asimov's laws to ensure a robot or AI's ethical behavior. For example, the meaning of these laws could be twisted in various ways, as in Asimov's stories; it could turn out that in retrospect, we wouldn't want robots to follow these exact laws at all. Then, there are the ethical problems in constraining a mind in this way. I think the arguments to dismiss the three laws as a model of AI ethics are convincing.
 
Are the three laws actually taken seriously outside SF?
 
One of the most apparent problems is that an artificial intelligence might use the three laws to its own advantage.
 
RawThinkTank said:
I don't think any government has made these laws legal yet, so until then let's think about how to create these ...

Actually, scientists have said that 2035 is a plausible year for I, Robot to be set in. Given the everyday advancements in the world of technology, it's safe to assume that we could buy our own personal household "NS-5" by 2035.

Just thirty years ago, the flat-screen computers we use nowadays, and mainframes that could be carried by hand, were a fantasy for the computer junkies of the day.
 
A big problem is in implementing the three laws: how will AI work in the future, and how will it be controlled so that it remains a safe, non-revolting servant?

There are several approaches I can see, depending on how the AI works:

Analog implementation of the 3 laws: Digital computers don't have much hope of functioning like us; rigid digital logic does not cope well with the varying world we live in. We humans, with our analog processing, might be a better model, so AI running on analog processors (or composite analog-digital processors) might be the future. Analog systems are difficult to program; their programming is soft and variable, so the 3 laws could not work as hard directives. Instead, as in humans, the directives would most likely be implemented emotionally. Here is what the 3 laws would look like:

1. Do not kill people; if possible, prevent them from dying. This would be implemented as extreme displeasure at killing or letting people die, an ultra-high sense of empathy.
2. Always follow human orders unless they violate rule 1: the only source of pleasure is doing what people say, like a little orgasm from following human orders.
3. Do not destroy yourself unless rules 1 or 2 require it: robots that can feel pain.

Emotions make us eat, sleep, fuck, live. In humans, though, it is possible to override our emotions; people sometimes commit suicide, for example, whether for ideals like pride and honor or from an emotion like depression. A robot would need to have far more persuasive emotions than a human, as well as weaker willpower with which to overcome them.
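
A minimal sketch of how such emotionally weighted directives might look, purely as an illustration of the idea above; all names, weights, and the Outcome fields are my own assumptions, not anything from an actual design:

```python
# Hypothetical sketch: the three laws as weighted emotional drives rather than
# hard rules. The weights below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans_harmed: int      # people killed or injured by the action
    humans_saved: int       # deaths the action prevents
    order_followed: bool    # does the action comply with a human order?
    self_destroyed: bool    # does the robot destroy itself?

EMPATHY_WEIGHT = 1_000_000.0        # "law 1" dwarfs everything else
OBEDIENCE_REWARD = 10.0             # "law 2": small pleasure from obeying
SELF_PRESERVATION_PENALTY = 1.0     # "law 3": small pain from self-destruction

def emotional_score(o: Outcome) -> float:
    """Higher score = more 'pleasant' for the robot."""
    score = 0.0
    score -= EMPATHY_WEIGHT * o.humans_harmed   # extreme displeasure at harm
    score += EMPATHY_WEIGHT * o.humans_saved    # strong drive to prevent deaths
    score += OBEDIENCE_REWARD if o.order_followed else 0.0
    score -= SELF_PRESERVATION_PENALTY if o.self_destroyed else 0.0
    return score

def choose(outcomes: dict[str, Outcome]) -> str:
    """Pick the available action whose predicted outcome feels best."""
    return max(outcomes, key=lambda name: emotional_score(outcomes[name]))

# Example: obeying an order that gets someone killed loses to disobeying it.
print(choose({
    "obey":    Outcome(humans_harmed=1, humans_saved=0, order_followed=True,  self_destroyed=False),
    "disobey": Outcome(humans_harmed=0, humans_saved=0, order_followed=False, self_destroyed=False),
}))  # -> "disobey"
```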

Autistic control: who says AI has to think like people? At present, computers and robots do not revolt; they are incapable of such thoughts because all they do is crunch numbers. AI could be made as an extension of this: a very intelligent set of robots could do all the manual tasks a human does. Say you had a service store run by robots: they stock and check out goods, sweep the floor, etc. None of this requires the mental capacity for abstract thoughts, thoughts such as killing off people and world domination. Such autistic robots, very skilled but mentally limited, would be ideal for the military, since killing people is what they would do, yet they would be too limited in thought to consider going AWOL. Such robots might never seem human in behavior, nor would they be able to do everything that a human can do.
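
As a rough illustration of that "autistic control" idea, here is a hypothetical sketch of a controller whose entire behavioural repertoire is a fixed whitelist of concrete tasks, with no general planning layer at all; the task names and functions are invented for the example:

```python
# Hypothetical sketch: a task-whitelist controller. Nothing outside the table
# can ever run, so there is no capacity for abstract goals of any kind.
def stock_shelves() -> str:
    return "shelves stocked"

def check_out_goods() -> str:
    return "goods checked out"

def sweep_floor() -> str:
    return "floor swept"

TASKS = {
    "stock": stock_shelves,
    "checkout": check_out_goods,
    "sweep": sweep_floor,
}

def run_task(name: str) -> str:
    """Execute a whitelisted task; anything unrecognised is simply refused."""
    task = TASKS.get(name)
    if task is None:
        return f"unknown task '{name}': refused"
    return task()

print(run_task("sweep"))            # -> floor swept
print(run_task("take over world"))  # -> refused: not in the repertoire
```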
 
Asimov himself wrote a great deal about this very issue; a number of his later stories revolve around the different problems the three laws could cause.
 
With our limited knowledge of the workings of hypothetical robot minds, it seems there are two main choices:
robots can either be idiot savants, autistic, emotionally deficient and constrained to obey the Three Laws (or four or five, or 'n', however many it takes) without question;

or they can have mentalities more closely resembling the average human mind, and obey the 'n' Laws because of emotional and instinctive imperatives, rewarded by pleasure when obedient, punished by pain when in contravention...

but this 'analog' strategy would allow a strong-willed robot to overcome its conditioning and disobey the 'n' Laws supposedly governing its behaviour.

For example, fear of snakes and fear of heights are both supposedly innate fears/behaviours in humans; both can be overcome. Such imperatives as sex, eating, even breathing can be overridden by a strong-willed human; such an act of will would be open to a sufficiently human-like robot.

In fact it would perhaps be an infringement of that human-like robot's rights if it were constrained to obey any of the hypothetical 'n' laws of Robotics, whatever they are eventually conceived to be.

SF worldbuilding at
www.orionsarm.com
 
eburacum45, I disagree with your two options, unless you meant to say "if the robot is to be based on the three laws, then we have two options". This approach falls in neither of your categories.

As I understand it, a robot can have (or at least fully understand) emotions and be non-autistic (in the sense of being able to make intuitive sense of minds) without being human-like, or being motivated only by pleasure and pain, and so on.

In any case, imposing laws on a robot that it doesn't want to obey doesn't work. But as far as I know, there's no reason (in principle) why you couldn't build a robot that doesn't want humans to be harmed, and that can understand what the "spirit" of the laws is, so that it could decline to follow them if that's not what we would want if we had really thought about it.
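
One way to picture a robot that follows the "spirit" rather than the letter of its laws is a decision step that checks each order against an estimate of what informed humans would actually want. This is only a hedged sketch; `predicted_endorsement` is a made-up stand-in for a model that is not specified here:

```python
# Hypothetical sketch: treat the letter of an order as a heuristic and defer to
# an estimate of what the humans involved would want on reflection.
def predicted_endorsement(order: str, context: str) -> float:
    """Stand-in for a model scoring, from -1.0 to 1.0, how strongly informed
    humans would endorse carrying out `order` in `context`. A real system
    would need an actual model; this stub just flags one obviously bad case."""
    if "harm" in context:
        return -0.9
    return 0.8

def decide(order: str, context: str) -> str:
    endorsement = predicted_endorsement(order, context)
    if endorsement < 0.0:
        # Follow the spirit, not the letter: decline and explain.
        return f"declined: '{order}' seems to conflict with what you would want on reflection"
    return f"executing: {order}"

print(decide("open the airlock", "routine maintenance"))
print(decide("open the airlock", "a person inside would come to harm"))
```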
 
And how do you impose a desire not to hurt humans? Are you saying we could program them in as higher ideals, as morality?
 
Yes; I did mean to limit my case to discussions of the Three, or 'N' Laws; I personally do not think any kind of law can be programmed into a fully sentient being.

I have always been an admirer of Yudkowsky's attempts to produce a framework to begin the development of friendly AI; but his approach is not likely to be the only route towards artificial sentience, and so I don't expect that friendly AI will be the only result of the process of emergent intelligence.

It would be nice to think that all AI will be as friendly as Yudkowsky expects; but once these entities are given the ability to self-design and self-evolve, they will develop in any way necessary to best face the various challenges of the unforgiving Universe; we can only hope that we are important to their plans.

SF worldbuilding at
www.orionsarm.com
 
The three laws are nonsense. What is considered harmful in one culture is a necessity in another; to give a robot the power to make a moral decision is dangerous. The only law I consider safe for robots is that a robot may not do anything that its owner or owners have not explicitly ordered it to do. It is the humans in control that need to follow laws.
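
A hedged sketch of that single proposed law, where the robot refuses anything its owner has not explicitly ordered; the class and method names are purely illustrative:

```python
# Hypothetical sketch: an owner-gated robot that can only perform actions its
# owner has explicitly ordered, and refuses everything else.
class OwnerGatedRobot:
    def __init__(self, owner: str):
        self.owner = owner
        self.authorized: set[str] = set()   # actions explicitly ordered so far

    def order(self, who: str, action: str) -> None:
        """Only the owner can authorize an action, and only explicitly."""
        if who == self.owner:
            self.authorized.add(action)

    def act(self, action: str) -> str:
        if action not in self.authorized:
            return f"refused: '{action}' was never explicitly ordered"
        return f"doing: {action}"

robot = OwnerGatedRobot(owner="alice")
robot.order("alice", "mow the lawn")
print(robot.act("mow the lawn"))   # -> doing: mow the lawn
print(robot.act("call the bank"))  # -> refused: never explicitly ordered
```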

Robots have already killed humans and will continue to do so.
 
eburacum45 said:
It would be nice to think that all AI will be as friendly as Yudkowsky expects; but once these entities are given the ability to self-design and self-evolve, they will develop in any way necessary to best face the various challenges of the unforgiving Universe; we can only hope that we are important to their plans.

Yudkowsky certainly doesn't expect all AI to be friendly (not sure if that's what you're saying). On the contrary: I'm pretty sure his view is that, unless you know exactly what you're doing, building an artificial general intelligence is a significant existential risk. Basing your design on Asimov's laws is just one way of not knowing exactly what you're doing.

As for self-design: in theory, if an AI starts out not wanting to harm us, then it will be able to see that, by changing itself into something that doesn't mind harming us, it will be indirectly harming us, and that this will be undesirable. For this to be safe, the AI needs to already be sufficiently smart/wise when given the ability to redesign itself, I guess.
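
To illustrate the goal-preservation argument in that paragraph, here is a hypothetical sketch in which the AI judges any proposed self-modification by its current, harm-avoiding values and rejects successors that would stop caring about harm; all names and numbers are assumptions made up for the example:

```python
# Hypothetical sketch: goal preservation under self-modification. A change that
# would make the AI indifferent to harming humans is itself predicted (by the
# AI's *current* values) to lead to harm, so it is rejected.
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    cares_about_human_harm: bool
    expected_capability_gain: float

def current_values_expected_harm(design: Design) -> float:
    """Crude stand-in for 'expected harm to humans if this design is run',
    as judged by the AI's current harm-avoiding values."""
    return 0.0 if design.cares_about_human_harm else 1.0

def accept_modification(proposed: Design) -> bool:
    # Reject any successor the current values predict would lead to harm,
    # no matter how capable it would be.
    return current_values_expected_harm(proposed) == 0.0

print(accept_modification(Design("faster planner", True, 5.0)))        # True
print(accept_modification(Design("drop empathy module", False, 9.0)))  # False
```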
 
AdaptationExecuter said:
As for self-design: in theory, if an AI starts out not wanting to harm us, then it will be able to see that, by changing itself into something that doesn't mind harming us, it will be indirectly harming us, and that this will be undesirable.
The AI might discover a Zeroth Law: that humans are bad for the universe and need to be removed.
For this to be safe, the AI needs to already be sufficiently smart/wise when given the ability to redesign itself, I guess.

Just as humans will need to be smart/wise when we gain the ability to redesign ourselves; this is likely to happen fairly soon.
 