Asimov’s Ethics Rules

Isaac Asimov formulated his famous Three Laws of Robotics to give the robots in his science fiction stories a credible operating basis, and to supply material for plot twists built on apparent contradictions within the laws and how those contradictions might be resolved. The laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
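The precedence among the three laws can be read as a lexicographic ordering: a First Law violation outweighs any combination of lower-law violations, and a Second Law violation outweighs any Third Law violation. Here is a minimal sketch of that idea; the action encoding and all names are my own illustration, not anything from Asimov:

```python
def law_violations(action):
    """Score an action as a tuple ordered by law priority (lower sorts first).

    `action` is a dict of illustrative boolean flags, e.g.
    {"harms_human": True} -- this encoding is an assumption for the sketch.
    """
    return (
        # First Law: injuring a human, or allowing harm through inaction.
        int(action.get("harms_human", False) or action.get("allows_harm", False)),
        # Second Law: disobeying a human order.
        int(action.get("disobeys_order", False)),
        # Third Law: failing to protect its own existence.
        int(action.get("endangers_self", False)),
    )

def choose(actions):
    # Lexicographic tuple comparison makes any First Law violation worse
    # than any combination of lower-law violations, mirroring the stated
    # precedence of the three laws.
    return min(actions, key=law_violations)
```

For example, given one action that harms a human, one that disobeys an order, and one that merely endangers the robot, `choose` would select the last, since self-endangerment is the lowest-priority violation.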

It’s interesting to note that the Third Law protects the survival of the individual robot only after the survival and well-being of any human is secured (First Law) and after orders are followed (Second Law). In ordinary human ethics, the rough generalization runs the other way: personal survival is assumed to matter more than group survival, though a human being has the intelligence to recognize when this does not hold and may be willing at least to consider sacrificing personally for the greater good of the group or of other beings. Asimov probably composed his rules in part to defuse distrust of intelligent robots among his readers.
