TY - JOUR
T1 - Moral psychological exploration of the asymmetry effect in AI-assisted euthanasia decisions
AU - Laakasuo, Michael
AU - Kunnari, Anton
AU - Francis, Kathryn
AU - Košová, Michaela Jirout
AU - Kopecký, Robin
AU - Buttazzoni, Paolo
AU - Koverola, Mika
AU - Palomäki, Jussi
AU - Drosinou, Maria-Anna
AU - Hannikainen, Ivar
N1 - Publisher Copyright:
© 2025
PY - 2025
Y1 - 2025
N2 - A recurring discrepancy in attitudes toward decisions made by human versus artificial agents, termed the Human-Robot moral judgment asymmetry, has been documented in the moral psychology of AI. Across a wide range of contexts, AI agents are subject to greater moral scrutiny than humans for the same actions and decisions. In eight experiments (total N = 5837), we investigated whether the asymmetry effect arises in end-of-life care contexts and explored the mechanisms underlying it. Our studies documented reduced approval of an AI doctor's decision to withdraw life support relative to a human doctor's (Studies 1a and 1b). This effect persisted regardless of whether the AI assumed a recommender role or made the final medical decision (Studies 2a, 2b, and 3), but, importantly, disappeared under two conditions: when doctors maintained rather than withdrew life support (Studies 1a, 1b, and 3), and when they carried out active euthanasia (e.g., providing a lethal injection or removing a respirator at the patient's request) rather than passive euthanasia (Study 4). These findings highlight two contextual factors (the level of automation and the patient's autonomy) that influence the presence of the asymmetry effect, neither of which is predicted by existing theories. Finally, we found that the asymmetry effect was partly explained by perceptions of AI incompetence (Study 5) and limited explainability (Study 6). As the role of AI in medicine continues to expand, our findings help outline the conditions under which stakeholders disfavor AI relative to human doctors in clinical settings.
AB - A recurring discrepancy in attitudes toward decisions made by human versus artificial agents, termed the Human-Robot moral judgment asymmetry, has been documented in the moral psychology of AI. Across a wide range of contexts, AI agents are subject to greater moral scrutiny than humans for the same actions and decisions. In eight experiments (total N = 5837), we investigated whether the asymmetry effect arises in end-of-life care contexts and explored the mechanisms underlying it. Our studies documented reduced approval of an AI doctor's decision to withdraw life support relative to a human doctor's (Studies 1a and 1b). This effect persisted regardless of whether the AI assumed a recommender role or made the final medical decision (Studies 2a, 2b, and 3), but, importantly, disappeared under two conditions: when doctors maintained rather than withdrew life support (Studies 1a, 1b, and 3), and when they carried out active euthanasia (e.g., providing a lethal injection or removing a respirator at the patient's request) rather than passive euthanasia (Study 4). These findings highlight two contextual factors (the level of automation and the patient's autonomy) that influence the presence of the asymmetry effect, neither of which is predicted by existing theories. Finally, we found that the asymmetry effect was partly explained by perceptions of AI incompetence (Study 5) and limited explainability (Study 6). As the role of AI in medicine continues to expand, our findings help outline the conditions under which stakeholders disfavor AI relative to human doctors in clinical settings.
KW - 515 Psychology
KW - AI ethics
KW - Moral judgment
KW - Moral psychology of AI
KW - Moral psychology of robotics
KW - Passive euthanasia
U2 - 10.1016/j.cognition.2025.106177
DO - 10.1016/j.cognition.2025.106177
M3 - Article
AN - SCOPUS:105004941437
SN - 0010-0277
VL - 262
JO - Cognition
JF - Cognition
M1 - 106177
ER -