TY - JOUR
T1 - Innocence over utilitarianism: Heightened moral standards for robots in rescue dilemmas
AU - Sundvall, Jukka
AU - Drosinou, Maria-Anna
AU - Hannikainen, Ivar
AU - Elovaara, Kaisa Maria
AU - Halonen, Juho
AU - Herzon, Volo
AU - Kopecký, Robin
AU - Jirout Košová, Michaela
AU - Koverola, Mika
AU - Kunnari, Anton Johannes Olavi
AU - Perander, Silva Liisa Loviisa
AU - Saikkonen, Teemu Juhani
AU - Palomäki, Jussi Petteri
AU - Laakasuo, Michael
PY - 2023/6
Y1 - 2023/6
N2 - Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation is found specifically when contrasting utilitarian action to deontological inaction. In a series of eight experiments (total N = 3752), we compared judgments about robots' and humans' decisions in a rescue dilemma with no possibility of deontological inaction. A robot's decision to rescue an innocent victim of an accident was judged more positively than the decision to rescue two people culpable for the accident (Studies 1-2b). This pattern repeated in a large-scale web survey (Study 3, N ≈ 19,000) and reversed when all victims were equally culpable/innocent (Study 5). Differences in judgments about humans' and robots' decisions were largest for norm-violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans based on other moral standards as well.
AB - Research in moral psychology has found that robots, more than humans, are expected to make utilitarian decisions. This expectation is found specifically when contrasting utilitarian action to deontological inaction. In a series of eight experiments (total N = 3752), we compared judgments about robots' and humans' decisions in a rescue dilemma with no possibility of deontological inaction. A robot's decision to rescue an innocent victim of an accident was judged more positively than the decision to rescue two people culpable for the accident (Studies 1-2b). This pattern repeated in a large-scale web survey (Study 3, N ≈ 19,000) and reversed when all victims were equally culpable/innocent (Study 5). Differences in judgments about humans' and robots' decisions were largest for norm-violating decisions. In sum, robots are not always expected to make utilitarian decisions, and their decisions are judged differently from those of humans based on other moral standards as well.
KW - 515 Psychology
KW - Folk ethics
KW - Folk justice
KW - Moral dilemma
KW - Rescue robotics
KW - Utilitarianism
U2 - 10.1002/ejsp.2936
DO - 10.1002/ejsp.2936
M3 - Article
SN - 0046-2772
VL - 53
SP - 779
EP - 804
JO - European Journal of Social Psychology
JF - European Journal of Social Psychology
IS - 4
ER -