Machine Ethics: Similarities and Differences between Artificial Morality and Human Morality
Abstract
Autonomous artificial systems such as bots, chatbots, robots, virtual assistants and even artificial animals (animats), cyborgs or computer platforms are increasingly part of our environment. This machine kingdom, the taxonomy of digital entities, is growing very fast and can have unintended consequences that are not always good. Machine ethics (also known as computational ethics or artificial morality) is a new area of research that seeks to implement moral principles and preferences in the decision-making of machines and artificial systems. In this article, I compare machine ethics and human morality, seeking to understand their similarities and differences, as well as whether ethics is computable and whether it is plausible to encode morality in artificial systems.
License
All contents of this electronic edition, except where otherwise noted, are licensed under a "Creative Commons Attribution-NonCommercial 3.0 Spain" license (CC BY-NC).