We're organic, so sex is unlikely too.
"Responses don't have to be pre-programmed. Complex responses can arise from simple rules:"

But John is still correct.
An AI cannot have an original thought; all its responses have to be pre-programmed into it.
"People often say that machines and emotions are an impossible combination... but why?"

Something many fail to see is that a great deal of human innovation was driven by emotion - hunger, for example - and a machine can never be given true emotions.
No emotions, no original thought - yep, that's a machine, not intelligence.
Responses don't have to be pre-programmed. Complex responses can arise from simple rules:
People often say that machines and emotions are an impossible combination... but why?
How simple does an organism need to be before it can't have emotions?
How complex does a machine need to be before it can have emotions?
Why shouldn't those two questions have the same answer?
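The "simple rules" point has a classic concrete illustration: Conway's Game of Life, where two rules about neighbour counts are enough to produce patterns that move, oscillate, and can even perform computation. A minimal sketch (the glider coordinates are just one conventional starting phase):

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbours every nearby cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The whole rule set: a dead cell with exactly 3 live neighbours is born;
    # a live cell with 2 or 3 live neighbours survives; everything else dies.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four generations the pattern reappears shifted one cell diagonally:
# it "travels" across the grid, although no rule says anything about movement.
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

Nothing in those two rules mentions gliders, yet gliders happen - which is exactly the sense in which complex responses need not be pre-programmed.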
"A seldom asked question is why people want machines not to inherit the world."

So far I don't think anyone has explained why they would ever want to take over (unless some nut programs them to want to take over, of course).
I hold your POV too. In fact, it will be a good development for intelligent "life" forms when their "IQ" can double each generation. I just wonder if there is a limit?

A seldom asked question is why people want machines not to inherit the world.
Why would it be a problem if machines eventually "take over" in the same way that our children will one day take over?
Why is it so important to us that our intellectual descendants are biological humans?
Is it the same reason that some people want their descendants to be the same race or religion as them?
I hold your POV too. In fact, it will be a good development for intelligent "life" forms when their "IQ" can double each generation. I just wonder if there is a limit?
"Fully functional, self-sufficient, self-reliant, self-replicating sentient robots will never happen - a robot cannot replace a person."

Two assumptions with no supporting evidence.
"While it is possible to make robots, it would take up to 100 years to have a decent working model and cost trillions of dollars. Who is going to finance that project, let alone see it through? Who's got a trillion or two to spare?"

Another assumption: how about if someone integrates the results of several different programmes?
"It's much cheaper to keep the human touch in the loop."

Robots don't ask for pay rises...
"Yes - there is a limit to intelligence."

It certainly looks that way.
"I'd say it rounds out at around 500."

Based on...?
"Anything more than that and the language would make no sense to us humans - it'd either be total gibberish or indecipherable math equations."

Showing you don't actually require an IQ of 500 to produce gibberish.
I think robots taking us over is nonsense.
But IF they did, I think it would be more like in The Terminator.
"That film makes no sense. Surely if they destroy the chip thingy then the evil computer would never have created the Terminators in the first place. Moreover, John Connor's father would not have been able to get back to 1985 and impregnate John's mother. So there would have been no nuclear holocaust, and no resistance."

If I'm remembering it right, the lab they destroyed in T2 was studying the remains of the Terminator from the first movie. When those remains were left in the past for researchers to find and reverse-engineer, it spawned a new timeline where Skynet was created sooner (1997 instead of 2003). When they destroyed the lab, it merely pushed things back to the "original" timeline, where the scientists had to invent Skynet on their own and didn't complete it until 2003.
"Fully functional, self-sufficient, self-reliant, self-replicating sentient robots will never happen - a robot cannot replace a person."

Why not? A person is only a fragile carbon-based machine.
"While it is possible to make robots, it would take up to 100 years to have a decent working model and cost trillions of dollars. Who is going to finance that project, let alone see it through? Who's got a trillion or two to spare?"

Honda has already developed the mechanics; all that remains is the software, and that largely depends on adequate processing power. Assuming Moore's law continues for a few more years, we will have the compute power within the next decade at regular market prices.
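The "compute within a decade" claim is just compounding arithmetic. A back-of-the-envelope sketch, assuming the classic Moore's-law cadence of a doubling every two years (the doubling period and horizon are illustrative assumptions, not figures from the post):

```python
# Back-of-the-envelope Moore's-law projection (illustrative numbers).
# Assumption: available processing power per dollar doubles every 2 years.
doubling_period_years = 2
horizon_years = 10

# Number of doublings in the horizon, compounded.
growth_factor = 2 ** (horizon_years / doubling_period_years)
print(f"{growth_factor:.0f}x the processing power in {horizon_years} years")
# -> 32x the processing power in 10 years
```

Whether a 32x increase is actually enough for human-level software is, of course, the part the arithmetic can't settle.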
"I would love to see I, Robot robots in the world, but I only see clunky R2-D2s instead."

I suspect you are not part of the industry then.
I don't see any reason why an artificial intelligence would feel anything like what humans feel (unless someone goes through the trouble of programming them to be that way). Their motivations, desires, and emotions would probably be very alien to us. They might not have any innate desire to continue to exist, and might kill themselves the first time they get bored or unhappy.

Why would they kill us?
We brought them into existence.
Seriously, ask yourselves this: have you ever killed your mom or dad? No matter how mad you are, you won't do it.
That's the relationship robots would have; they would look up to us as parent figures.
"...what if you made a robot that had the ability to:
- understand language
- understand facial expressions and emotions
- create responses
- understand and recognize the objects it sees
- is designed to survive
- has the ability to learn and input more information.
I don't see why this is such an impossibility, ... Now the question is, if a robot could do all those things, would it not be a sentient being? ..."

No one can say with certainty. This is a raging debate. Read about qualia, Philosophical_zombie, Mary's Room, inverted spectra, and several related items (all with long entries at wiki).
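For what it's worth, that capability list reads like an interface definition. A toy sketch of what such an interface might look like - every name here is hypothetical, and each method is a trivial stand-in for what is really an unsolved research problem:

```python
# Toy sketch of the capability list above. Names are made up for
# illustration; each method body is a placeholder, not a real solution.
class Robot:
    def __init__(self):
        # "has the ability to learn and input more information"
        self.memory = {}

    def understand_language(self, utterance):
        # Stand-in for real language understanding: just tokenize.
        return utterance.lower().split()

    def understand_expression(self, face):
        # Stand-in for emotion/expression recognition.
        return self.memory.get(face, "unknown")

    def recognize_object(self, image):
        # Stand-in for visual object recognition.
        return self.memory.get(image, "unknown")

    def learn(self, observation, label):
        # Learning reduced to memorization here.
        self.memory[observation] = label

    def respond(self, utterance):
        # "create responses": answer with whatever it has learned.
        words = self.understand_language(utterance)
        known = [self.memory[w] for w in words if w in self.memory]
        return known or ["?"]

bot = Robot()
bot.learn("smile", "happy")
print(bot.respond("Why the smile"))  # ['happy']
```

Of course, a program that satisfies the letter of the list this cheaply is exactly why the sentience question stays open: ticking the boxes and having qualia may be very different things.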