Moral Uncanny Valley – A Robot’s Appearance Moderates How its Decisions are Judged

Research output: Contribution to journal › Article › peer-review

Abstract

Artificial intelligence and robotics are rapidly advancing. Humans are increasingly often affected by autonomous machines making choices with moral repercussions. At the same time, classical research in robotics shows that people are averse to robots that appear eerily human – a phenomenon commonly referred to as the uncanny valley effect. Yet, little is known about how machines’ appearances influence how humans evaluate their moral choices. Here we integrate the uncanny valley effect into moral psychology. In two experiments we test whether humans evaluate identical moral choices made by robots differently depending on the robots’ appearance. Participants evaluated either deontological (“rule based”) or utilitarian (“consequence based”) moral decisions made by different robots. The results provide a first indication that people evaluate moral choices by robots that resemble humans as less moral compared to the same moral choices made by humans or non-human robots: a moral uncanny valley effect. We discuss the implications of our findings for moral psychology, social robotics and AI-safety policy.
Original language: English
Journal: International Journal of Social Robotics
Volume: 13
Pages (from-to): 1679–1688
Number of pages: 10
ISSN: 1875-4791
DOIs
Publication status: Published - 2021
MoE publication type: A1 Journal article-refereed

Bibliographical note

A Correction to this article was published on 25 March 2021, DOI: 10.1007/s12369-020-00738-6.

Fields of Science

  • AI
  • Decision-making
  • Moral psychology
  • Uncanny valley
  • 113 Computer and information sciences
  • 515 Psychology
  • 611 Philosophy
