Isaac Asimov, a prolific writer on subjects as diverse as science fiction, history, chemistry, and Shakespeare, would also have a major impact on robotics. In a short story published in 1942, "Runaround," he set forth his Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Note
Asimov would later add a fourth law, the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." He considered this law the most important of all.
Asimov went on to write more short stories exploring how the laws would play out in complex situations, and they were collected in the book I, Robot. All of these stories take place in the world of the 21st century.
The Three Laws were Asimov's reaction to science fiction's habit of portraying robots as malevolent, a trope he considered unrealistic. He had the foresight to anticipate that ethical rules would be needed to govern the power of robots.
Today, Asimov's vision is starting to become real, which makes it worthwhile to explore ethical principles for robots. Granted, his particular approach may not be the right one. But it is a good start, especially as robots become smarter and more personal because of the power of AI.