Machine Ethics: Similarities and Differences between Artificial Morality and Human Morality

Authors

  • Aníbal Monasterio Astobiza, UPV/EHU

Abstract

Autonomous artificial systems such as bots, chatbots, robots, virtual assistants, and even artificial animals (animats), cyborgs, and computer platforms are increasingly part of our environment. This machine kingdom (the taxonomy of digital entities) is growing very fast, and its unintended consequences are not always good. Machine ethics (also known as computational ethics or artificial morality) is a new area of research that seeks to implement moral principles and preferences in the decision-making of machines and artificial systems. In this article, I compare machine ethics and human morality, seeking to understand their similarities and differences, and I ask whether ethics is computable, that is, whether it is plausible to code morality into artificial systems.

Published

2019-05-31

How to Cite

Monasterio Astobiza, A. (2019). Machine Ethics: Similarities and Differences between Artificial Morality and Human Morality. Dilemata, (30), 129–147. Retrieved from https://dilemata.net/revista/index.php/dilemata/article/view/412000295