Innocence over utilitarianism: Heightened moral standards for robots in rescue dilemmas

Jukka Sundvall, Maria-Anna Drosinou, Ivar Hannikainen, Kaisa Maria Elovaara, Juho Halonen, Volo Herzon, Robin Kopecký, Michaela Jirout Košová, Mika Koverola, Anton Johannes Olavi Kunnari, Silva Liisa Loviisa Perander, Teemu Juhani Saikkonen, Jussi Petteri Palomäki, Michael Laakasuo

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation arises specifically when utilitarian action is contrasted with deontological inaction. In a series of eight experiments (total N = 3,752), we compared judgments about robots' and humans' decisions in a rescue dilemma with no possibility of deontological inaction. A robot's decision to rescue an innocent victim of an accident was judged more positively than the decision to rescue two people culpable for the accident (Studies 1-2b). This pattern was replicated in a large-scale web survey (Study 3, N ≈ 19,000) and reversed when all victims were equally culpable or equally innocent (Study 5). Differences in judgments about humans' and robots' decisions were largest for norm-violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans on the basis of other moral standards as well.
Original language: English
Journal: European Journal of Social Psychology
Volume: 53
Issue number: 4
Pages (from-to): 779-804
Number of pages: 26
ISSN: 0046-2772
DOIs
Publication status: Published - Jun 2023
MoE publication type: A1 Journal article-refereed

Fields of Science

  • 515 Psychology
  • Folk ethics
  • Folk justice
  • Moral dilemma
  • Rescue robotics
  • Utilitarianism
