Artificial intelligence and robotics are rapidly advancing. Humans are increasingly often affected by autonomous machines making choices with moral repercussions. At the same time, classical research in robotics shows that people are averse to robots that appear eerily human – a phenomenon commonly referred to as the uncanny valley effect. Yet, little is known about how machines’ appearances influence how humans evaluate their moral choices. Here we integrate the uncanny valley effect into moral psychology. In two experiments we test whether humans evaluate identical moral choices made by robots differently depending on the robots’ appearance. Participants evaluated either deontological (“rule based”) or utilitarian (“consequence based”) moral decisions made by different robots. The results provide a first indication that people evaluate moral choices by robots that resemble humans as less moral compared to the same moral choices made by humans or non-human robots: a moral uncanny valley effect. We discuss the implications of our findings for moral psychology, social robotics and AI-safety policy.
Additional information: A Correction to this article was published on 25 March 2021, DOI: 10.1007/s12369-020-00738-6.