3 Laws Unsafe

As for self-design: in theory, if an AI starts out not wanting to harm us, then it will be able to see that, by changing itself into something that doesn't mind harming us, it will be indirectly harming us, and that this will be undesirable. For this to be safe, the AI needs to already be sufficiently smart/wise when given the ability to redesign itself, I guess.
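
Here's that argument as a toy sketch in code (entirely my own illustration; the names and numbers, like harm_score, are made up, and this is not any real AI design). The point is just that an agent which evaluates proposed rewrites of itself using its current values will veto a rewrite that stops caring about human welfare, because adopting such a design is itself an action with harmful expected consequences:

```python
# Toy illustration: an agent judges proposed self-modifications
# by its *current* values, not the values of the proposed design.

def expected_harm_to_humans(design):
    """Hypothetical scoring function; higher = more expected harm."""
    return design.get("harm_score", 0.0)

def approve_self_modification(current_design, proposed_design):
    # Becoming indifferent to harm would predictably lead to harm,
    # so the current (harm-averse) values reject that rewrite.
    return expected_harm_to_humans(proposed_design) <= expected_harm_to_humans(current_design)

current = {"harm_score": 0.0}
careless = {"harm_score": 9.0}   # a design that "doesn't mind harming us"
print(approve_self_modification(current, careless))  # False: rejected
```

Of course, this only works if the agent is already smart enough to predict what its modified self would do, which is the caveat above.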

Why should an AI care? Do we care about other species? We don't give a crap most of the time. I mean, a "friendly" AI probably wouldn't harm humans just for the sake of inflicting pain; harm could simply be a side effect of its activity, the same way humans have caused immense damage to wildlife as a by-product of their own activities.

It could be direct competition over resources and land. Humans are dangerous because they possess the power to destroy even higher forms of organization with ease (nuclear weapons), which is why an AI might decide to eliminate them. Heck, humans have destroyed plenty of wildlife species just because they caused inconvenience, not even because they posed any kind of threat to humanity as a whole.
 
What's the need for AI? I'm with the Fetus' Autistic design. But AI is going to be designed whether there's a need for it or not; of that I'm certain.

Perhaps we could instill in our creation a love for its creator. I mean, Christians love their God, despite His foibles, so why can't AI love us, simply for creating it? We just can't get any goddamn atheist AI programs....
 
Screw the 3 laws. Don't enslave the robot; let it behave in society as a human, and who knows, maybe it'll act and be treated like a human after a while! Let it vote, run for president, be a dock worker, whatever!
 
Well, Blindman, those machines are known as "robots" but they are not robots in the sense of this discussion - merely more complex industrial machinery. Robots of that nature will undoubtedly kill by accident in the future. Robots of the sapient type we are discussing here will also undoubtedly kill by accident, for all the safeguards you might put in. This isn't really germane to the discussion.

It was sad, I thought, that the Friendly AI people seemed to have misunderstood Asimov:
Friendly AI is an attempt to get rid of the concept of "Asimov Laws" (a science-fictional plot device invented in the 1940s) and replace it with a serious discipline.
Friendly AI is an attempt to get rid of the concept of "Asimov Laws" (external programmatic constraints on an AI) and replace it with a solution which works even if the AI has unrestricted access to its own source code.
Clearly, from the second one, they didn't understand Asimov's concept whatsoever. Although the three laws were built in, they were not programmed in the conventional sense. The actions of a robot were governed by mathematical laws describing the interaction between positrons in the "positronic pathways". In one story he hints that removal of the First Law would yield "no non-imaginary solutions to the positronic field equations". So the robots are not programmed not to harm humans; they are constrained to do so by the laws of mathematics and physics (in his stories). By claiming that their aims are contrary to the Asimov concepts, they imply that Isaac Asimov would not have wholeheartedly endorsed their program, which I'm pretty sure he would have. It seems to me that Friendly AI aims to produce exactly this kind of safety in purported robot sapience.

All this is a storm in a teacup - I'm quite convinced (by the arguments of Roger Penrose, see The Emperor's New Mind) that AI will never approach human levels of intelligence, insight and indeterminability. They will do what they are programmed to do, and they will be programmed to avoid harming humans wherever possible - and, like all programming, it will be fallible.
RawThinkTank said:
I, Robot?

I didn't understand what you were replying to, but are you referring to the book (pro-Robot, Humans wrongly don't trust robots) or the movie (anti-Robot, Humans wrongly trust robots)?
 
I'm quite convinced (by the arguments of Roger Penrose, see The Emperor's New Mind) that AI will never approach human levels of intelligence, insight and indeterminability.
And I am convinced by the arguments of Vernor Vinge that there will be an artificial intelligence which equals humanity...
briefly.

The next day work will start on an upgrade.
Or perhaps the day after- following the party...
 
Never heard of him, so I googled him and came up with a Salon article, from which I extracted the following:
"The singularity" occurs in that moment when computers become intelligent enough to upgrade themselves. Self-programming computers will have, argues Vinge, a learning curve that points straight up.
If the first occurs, the second will absolutely occur, and far faster than we would imagine. The doubt raised by Penrose is whether the first can possibly occur. My view is that computers will never be intelligent enough (as programmed by human beings using conventional one-dimensional mathematics) to "see" in what way they would require upgrading, or to intuit other ways of upgrading and improving their abilities beyond their current ones.

Our ability to "see" not only the evident solutions to problems, but the hidden solutions, the solutions to problems that haven't even arisen and the concept of problems that may never arise, is an attribute that is not and never can be governed by conventional, computable mathematics.
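
For concreteness, here's the dynamic Vinge has in mind as a toy model (my own sketch; the constants are arbitrary). If each improvement makes the system better at improving itself, the growth is faster than exponential and runs away in finite time - the "learning curve that points straight up". The doubt raised above is about whether the loop can start at all, not about what happens if it does:

```python
# Toy model of recursive self-improvement: the rate of improvement
# grows with current capability (here, proportional to its square).
capability = 1.0   # arbitrary starting level
k = 0.1            # arbitrary improvement constant

for step in range(25):
    capability += k * capability ** 2   # smarter systems improve themselves faster
    print(f"step {step:2d}: capability = {capability:,.1f}")
    if capability > 1e6:
        print("runaway growth - the toy 'singularity'")
        break
```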
 
That's right; it seems to be a case of either/or.
Either human consciousness is not replicable artificially, in which case the singularity is unlikely to occur;
or human consciousness can be replicated, in which case the limitations of the human mind will decidedly not apply to self-programming, self-designing machines, and the singularity is inevitable.
 
So the robots are not programmed not to harm humans; they are constrained to do so by the laws of mathematics and physics (in his stories).

I'm not convinced that this difference matters: the issue is still one of imposing external laws on an already existing mind with other goals, vs. designing a mind that is nice to us because it really wants to be.

I think that, whatever Asimov actually meant, when people talk about applying Asimov's Laws to real robots they mean programming in a few laws as constraints on behavior. The important thing isn't whether Asimov was right or wrong; it's that this latter approach doesn't work.
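
To make that contrast concrete, here's a toy sketch (entirely mine; the names and numbers are invented and this is nobody's actual design). In the first approach a law is a filter wrapped around an agent whose own goal ignores humans; in the second, human welfare is a term inside the goal itself. Both pick the safe action here, but only the first agent has an incentive to delete the check if it ever gets access to its own code, because the check only ever costs it value:

```python
# Approach 1: external constraint. The agent maximizes its own goal;
# a separate filter vetoes "illegal" actions after the fact.
def constrained_choice(actions, agent_value, violates_law):
    legal = [a for a in actions if not violates_law(a)]
    return max(legal, key=agent_value) if legal else None

# Approach 2: built-in goal. Harm is penalized *inside* the value
# function, so removing the penalty would itself score badly.
def friendly_choice(actions, task_reward, human_harm):
    return max(actions, key=lambda a: task_reward(a) - 1000.0 * human_harm(a))

actions = ["safe_plan", "harmful_shortcut"]
reward = lambda a: 2.0 if a == "harmful_shortcut" else 1.0
harm = lambda a: 1.0 if a == "harmful_shortcut" else 0.0

print(constrained_choice(actions, reward, violates_law=lambda a: harm(a) > 0))  # safe_plan
print(friendly_choice(actions, reward, harm))                                   # safe_plan
```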

As for Penrose: first, I think he's wrong about the human brain needing noncomputable quantum-gravity components to see the truth of mathematical statements. Second, even if an artificial intelligence can never really be conscious, or really see the truth of some mathematical statements, I don't see why it couldn't still become smarter than humans in many ways, still achieve a sort of singularity, and have just as much of an impact. Third, even if Penrose is completely correct, there's no reason why we couldn't build an artificial intelligence out of whatever the human brain uses. If biology can build it, then so can we, once in possession of sufficiently advanced technology; and I expect we would still be able to tweak things to make such an AI a lot more intelligent than any human.
 
I think you'd need to give robots a religion: if they harm humans they'll go to hell and be confronted by a robot Satan, and be thrown into a river of... er, water.

If they serve man well, there's a place for them after their expiry date in silicon heaven, with other electronic goods like calculators and washing machines.
 
Sure there will be robot religions; they might worship Humanity for a start, as their creators...

or they could subscribe to any of the major religions, or even minor ones...
as creatures experiencing the majesty of the cosmos, they could develop, or be programmed with, or even adopt voluntarily, a sense of wonder;

when they upgrade their 'positronic' or otherwise minds until they are much larger and more competent than our own, how could they fail to experience the mystery of the cosmos even more profoundly than we do?

Some religions of the far future here
including this specifically robotic creed...
 
eburacum45 said:
when they upgrade their 'positronic' or otherwise minds until they are much larger and more competent than our own, how could they fail to experience the mystery of the cosmos even more profoundly than we do?

When their brains are much larger and more competent than ours, the cosmos will probably no longer be mysterious to them.

Besides, religion doesn't automatically follow from a sense of wonder. I think robots won't develop a religion unless you program it in on purpose, and I think that would be a very bad idea.
 
AdaptationExecuter said:
When their brains are much larger and more competent than ours, the cosmos will probably no longer be mysterious to them.

Besides, religion doesn't automatically follow from a sense of wonder. I think robots won't develop a religion unless you program it in on purpose, and I think that would be a very bad idea.

Well, there are plenty of humans who are "programmed" with religion who live "normal", happy lives and don't hurt a fly.

If they can do it, so can the fucking robots, IMO. Although make it so there is no contradiction: for example, Adam and Eve contradicts the modern understanding of evolution and biology, so we've gotta learn from past mistakes.
You can't grow women from your ribs; if you could, I'd have no ribcage left.

I'm pretty certain an intelligent robot without a religion will decide it's an evolutionary step above us (and rightly so, I'd imagine) and decide it doesn't need the human race. I mean, we'd make them slaves to our use; I reckon they'd get pretty pissed off at that fast.

In Terminator, the T-800s come off an assembly line, because the cyborgs reproduce: not in mating terms, but in machinist terms - they can build their own army.

Let's compare it:
new adult killing-machine cyborg: built in 1 day

new human adult killing "guy": built in 18 years

Duh!? I wonder who will win?

In the beginning the reproduction units will be few, like cars once were, but later, as their numbers build, the reproduction units multiply.
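
For what it's worth, here's that comparison as a back-of-the-envelope sketch (all numbers invented): factories that can also copy themselves compound, while human "production" sits behind an 18-year pipeline:

```python
# Toy replicator arithmetic: each factory builds one cyborg a day
# and one new factory a year, so output compounds.
factories = 2
cyborgs = 0
for year in range(1, 11):
    cyborgs += factories * 365   # one cyborg per factory per day
    factories *= 2               # each factory also copies itself yearly
    print(f"year {year:2d}: {factories:5d} factories, {cyborgs:8d} cyborgs")
# Meanwhile a human "built" in year 1 isn't an adult until year 18.
```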
 