AdaptationExecuter
Registered Member
Visitors of this forum might find the new site 3 Laws Unsafe interesting:
In anticipation of 20th Century Fox's July 16th release of I, Robot, the Singularity Institute announces “3 Laws Unsafe” (www.asimovlaws.com). “3 Laws Unsafe” explores the problems presented by Isaac Asimov’s Three Laws of Robotics, the principles intended for ensuring that robots help, but never harm, humans. The Three Laws are widely known and are often taken seriously as reasonable solutions for guiding future AI. But are they truly reasonable? “3 Laws Unsafe” addresses this question.
The site contains various interesting articles that expand on the problems with using an approach based on Asimov's laws to ensure a robot's or AI's ethical behavior. For example, the meaning of these laws could be twisted in various ways, as in Asimov's own stories; in retrospect, it could turn out that we wouldn't want robots to follow these exact laws at all. Then there are the ethical problems with constraining a mind in this way. I find the arguments for dismissing the Three Laws as a model of AI ethics convincing.