Well, Blindman, those machines are known as "robots" but they are not robots in the sense of this discussion - merely more complex industrial machinery. Robots of that nature will undoubtedly kill by accident in the future. Robots of the sapient type we are discussing here will also undoubtedly kill by accident, for all the safeguards you might put in. This isn't really germane to the discussion.
It is sad, I think, that the Friendly AI people seem to have misunderstood Asimov:
Friendly AI is an attempt to get rid of the concept of "Asimov Laws" (a science-fictional plot device invented in the 1940s) and replace it with a serious discipline.
Friendly AI is an attempt to get rid of the concept of "Asimov Laws" (external programmatic constraints on an AI) and replace it with a solution which works even if the AI has unrestricted access to its own source code.
Clearly from the second one they didn't understand Asimov's concept whatsoever. Although the three laws were built in, they were not programmed in the conventional sense. A robot's actions were determined by mathematical laws governing the interactions between positrons in the "positronic pathways". In one story he hints that removal of the First Law would yield "no non-imaginary solutions to the positronic field equations". So the robots are not programmed not to harm humans; they are constrained to do so by the laws of mathematics and physics (in his stories). By claiming that their aims are contrary to the Asimov concepts, they imply that Isaac Asimov would not have endorsed their program, when I'm pretty sure he would have done so wholeheartedly. It seems to me that Friendly AI aims to produce exactly this kind of safety in purported robot sapience.
All this is a storm in a teacup - I'm quite convinced (by the arguments of Roger Penrose, see
The Emperor's New Mind) that AI will never approach human levels of intelligence, insight and indeterminability. They will do what they are programmed to do, and they will be programmed to avoid harming humans wherever possible - and, like all programming, it will be fallible.
RawThinkTank said:
I didn't understand what you were replying to, but are you referring to the book (pro-Robot, Humans wrongly don't trust robots) or the movie (anti-Robot, Humans wrongly trust robots)?