Robots are going to take over

This is a very interesting thread.

Reading this thread actually changed my mind about robots.
 
In the future, you will all be taking orders from this guy:

[image: blastarr2.jpg]


He looks pretty mad; don't piss him off!
 
So far I don't think anyone has explained why they would ever want to take over (unless some nut programs them to want to take over, of course).
 
But John is still correct. ;)

An AI cannot have an original thought; all its responses have to be pre-programmed into it.
Responses don't have to be pre-programmed. Complex responses can arise from simple rules:
[images: julia.jpg, life_bw.png (a Julia set and a Game of Life pattern)]
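To make that concrete, here is a minimal, set-based sketch of Conway's Game of Life in Python; the glider seed and the four-generation run are arbitrary choices. Three fixed rules, and nothing about gliders appears anywhere in the code, yet a self-propelling pattern emerges:

Code:
from collections import Counter

def step(live):
    """Advance one generation; live is a set of (x, y) live cells."""
    # Count how many live neighbours each nearby cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider shape, shifted one cell diagonally

Nobody "pre-programmed" the glider's movement; it falls out of the neighbour-counting rules.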


And something many fail to see is that a great deal of human innovation was driven by emotion - like hunger, for example - and a machine can never be given true emotions.

No emotions, no original thought - yep, that's a machine, not intelligence.
People often say that machines and emotions are an impossible combination... but why?
How simple does an organism need to be before it can't have emotions?
How complex does a machine need to be before it can have emotions?
Why shouldn't those two questions have the same answer?
 
Responses don't have to be pre-programmed. Complex responses can arise from simple rules:



People often say that machines and emotions are an impossible combination... but why?
How simple does an organism need to be before it can't have emotions?
How complex does a machine need to be before it can have emotions?
Why shouldn't those two questions have the same answer?

I think the only difference between machine and man is the means by which the electrical impulses are transmitted.

I've recently changed my mind and now believe that machines are capable of feeling... the only problem is if they get a virus...
 
So far I don't think anyone has explained why they would ever want to take over (unless some nut programs them to want to take over, of course).
A seldom-asked question is why people want machines not to inherit the world.
Why would it be a problem if machines eventually "take over" in the same way that our children will one day take over?


Why is it so important to us that our intellectual descendants are biological humans?
Is it the same reason that some people want their descendants to be the same race or religion as them?
 
A seldom-asked question is why people want machines not to inherit the world.
Why would it be a problem if machines eventually "take over" in the same way that our children will one day take over?


Why is it so important to us that our intellectual descendants are biological humans?
Is it the same reason that some people want their descendants to be the same race or religion as them?
I hold your POV too. In fact it will be a good development for intelligent "life" forms when their "IQ" can double each generation. I just wonder if there is a limit?
 
Fully functional, self-sufficient, self-reliant, self-replicating sentient robots will never happen - a robot cannot replace a person.

While it is possible to make robots, it would take up to 100 years to have a decent working model and cost trillions of dollars. Who is going to finance that project, let alone see it through? Who's got a trillion or two to spare?

It's much cheaper to keep the human touch in the loop. Considering that in America 80% of what we produce is services, we won't get out of the service industry anytime soon.

I would love to see I, Robot-style robots in the world, but I only see clunky R2-D2s instead.
 
I hold your POV too. In fact it will be a good development for intelligent "life" forms when their "IQ" can double each generation. I just wonder if there is a limit?

Yes - there is a limit to intelligence. I'd say it tops out at around 500. Anything more than that and the language would make no sense to us humans - it would either be total gibberish or indecipherable math equations.
 
Fully functional, self-sufficient, self-reliant, self-replicating sentient robots will never happen - a robot cannot replace a person.
Two assumptions with no supporting evidence.

While it is possible to make robots, it would take up to 100 years to have a decent working model and cost trillions of dollars. Who is going to finance that project, let alone see it through? Who's got a trillion or two to spare?
Another assumption: what if someone integrates the results of several different programmes?
Way cheaper and way quicker.

It's much cheaper to keep the human touch in the loop.
Robots don't ask for pay rises...

Yes - there is a limit to intelligence.
It certainly looks that way.

I'd say it tops out at around 500.
Based on...?
Wild-ass guess?

Anything more than that and the language would make no sense to us humans - it would either be total gibberish or indecipherable math equations.
Showing you don't actually require an IQ of 500 to produce gibberish.
Intelligence is not a matter of being able to make sense to humans, and they could always "dumb down" for a while to conform to your requirements.
 
Thread Statement: Robots are going to take over


Response: Not if I get a computer implant which would allow me to think faster than them.

Downsides: Unless that implant got a virus. It would have to have its own firewall software/hardware and some well-written programming.
 
I think robots taking us over is nonsense.
But IF they did, I think it would be more like The Terminator :D

That film makes no sense. Surely if they destroy the chip thingy then the evil computer would never have created the Terminators in the first place. Moreover, John Connor's father would not have been able to get back to 1984 and impregnate John's mother. So there would have been no nuclear holocaust, and no resistance.
 
That film makes no sense. Surely if they destroy the chip thingy then the evil computer would never have created the Terminators in the first place. Moreover, John Connor's father would not have been able to get back to 1984 and impregnate John's mother. So there would have been no nuclear holocaust, and no resistance.
If I'm remembering it right, the lab they destroyed in T2 was studying the remains of the Terminator from the first movie. When the remains of the Terminator were left in the past for researchers to find and reverse-engineer, it spawned a new timeline where Skynet was created sooner (1997 instead of 2003). When they destroyed the lab, it merely pushed things back to the "original" timeline, where the scientists had to invent Skynet on their own and didn't complete it until 2003.

As for the time-traveling father paradox, my theory would be that the John Connor we meet in T2 etc. is a completely different John Connor from the "original," who was presumably fathered by some random guy. The new John Connor still ended up being an important leader in the war against the machines, because he had basically been trained for it from birth and was the only guy who had his act together when the bombs dropped.
 
Impet,

Fully functional, self-sufficient, self-reliant, self-replicating sentient robots will never happen - a robot cannot replace a person.
Why not? A person is only a fragile carbon-based machine.

While it is possible to make robots, it would take up to 100 years to have a decent working model and cost trillions of dollars. Who is going to finance that project, let alone see it through? Who's got a trillion or two to spare?
Honda has already developed the mechanics; all that remains is the software, and that largely depends on adequate processing power. Assuming Moore’s law continues for a few more years, we will have the compute power within the next decade at regular market prices.
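A rough back-of-envelope check on that claim, assuming the usual rule-of-thumb doubling period of about two years (an assumption, not a guarantee):

Code:
# Compound growth implied by Moore's law over a decade.
years = 10
doubling_period_years = 2.0  # conventional rule of thumb, not a physical law
multiplier = 2 ** (years / doubling_period_years)
print(f"Compute multiplier after {years} years: {multiplier:.0f}x")  # 32x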

AI software development is where we need to focus, and many of those processes are being seriously hindered by the current state of our relatively slow computers. This area should also see a geometric increase in progress once we have more suitable test machines. I suspect we will see self-aware machines within the next 30 years.

I would love to see I, Robot-style robots in the world, but I only see clunky R2-D2s instead.
I suspect you are not part of the industry, then.
 
Why would they kill us?

We brought them into existence.

Seriously, ask yourselves this: have you ever killed your mom or dad? No matter how mad you are, you won't do it.

That's the relationship robots would have; they would look up to us as parent figures.
 
Why would they kill us?

We brought them into existence.

Seriously, ask yourselves this: have you ever killed your mom or dad? No matter how mad you are, you won't do it.

That's the relationship robots would have; they would look up to us as parent figures.
I don't see any reason why an artificial intelligence would feel anything like what humans feel (unless someone goes through the trouble of programming them to be that way). Their motivations, desires, and emotions would probably be very alien to us. They might not have any innate desire to continue existing, and might kill themselves the first time they get bored or unhappy.
 
Who's to say human beings and animals aren't just extremely complex robots already?

Being made of flesh doesn't exempt us from such a possibility.

Just as you can program a machine to behave differently, drugs alter our behavior as well.

Think about it...

What if you made a robot that had the ability to:

- understand language
- understand facial expressions and emotions
- create its own responses
- recognize and understand the objects it sees
- act to ensure its own survival
- learn and take in new information

I don't see why this is such an impossibility; obviously creating something of such complexity is beyond our reach for the moment, but I refuse to rule it out for the future (a toy sketch follows below).

Now the question is, if a robot could do all those things, would it not be a sentient being? Just like us?
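Here is that capability list as a toy Python interface; the class and method names are made up for illustration, and every hard problem is deliberately left as a stub. The point is only that the list decomposes into separate components, not that any component is solved:

Code:
from dataclasses import dataclass, field

@dataclass
class SentientCandidate:
    """Illustrative stub for the capability list above; all names are hypothetical."""
    memory: list = field(default_factory=list)  # "learn and take in new information"

    def understand_language(self, utterance: str):
        raise NotImplementedError  # parsing and meaning extraction

    def read_expression(self, face_image):
        raise NotImplementedError  # facial-expression and emotion recognition

    def respond(self, situation):
        raise NotImplementedError  # generate a novel, non-canned response

    def recognize(self, scene):
        raise NotImplementedError  # object recognition and understanding

    def self_preserve(self, threat_level: float) -> str:
        # "act to ensure its own survival": a crude threshold policy
        return "flee" if threat_level > 0.5 else "carry on"

    def learn(self, experience) -> None:
        self.memory.append(experience)  # accumulate experience over time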
 
...What if you made a robot that had the ability to:
- understand language
- understand facial expressions and emotions
- create its own responses
- recognize and understand the objects it sees
- act to ensure its own survival
- learn and take in new information

I don't see why this is such an impossibility; ...Now the question is, if a robot could do all those things, would it not be a sentient being? ...
No one can say with certainty. This is a raging debate. Read about qualia, Philosophical_zombie, Mary's Room, inverted spectra, and several related topics (all with long entries at wiki).

I have never been much persuaded by the Mary's Room argument for qualia, but pain does seem to me to be a quale that I can conceive of as absent from the philosophical zombie. I think that physicalism (both the "type" and probably the "token" variants) and identity theory are wrong, as I tend toward the functionalist POV. For example, pain does not require "C-fibers", but whether it exists in others (including non-humans) exhibiting "pain behavior" is impossible to say. This is closely tied to the "other minds problem", which wiki briefly addresses, but see wiki's Philosophy_of_mind entry for a more complete discussion of "mind".

I certainly think it possible for a machine to pass the Turing test someday, but that will not answer your question, because of the other minds problem. I think your question will never be answered with certainty.
 