Abstract

In this paper, we ask how ‘cognitive extenders’ based on AI technology affect their users’ status as moral agents and the moral evaluation of their actions. We study how ‘AI-extenders’ can either enhance or diminish their users’ moral agency: on the one hand, they can broaden the scope of agential features; on the other hand, they can undermine the agent’s autonomy and lead to decreased responsibility. Our focus is on the moral agency and responsibility of the AI-extended human being as a hybrid, coupled system. Assuming standard conditions for responsible agency, we look at specific cases where the extender would make a difference to the agent’s moral status. The thought-experimental extenders we deal with are enabled by already existing technologies. The obvious motivation behind the exercise is that such devices might be useful for people suffering from psychiatric conditions that complicate the expression of their moral agency. We analyze the moral status of AI-extended agents as coupled systems and argue that the functioning of an AI-extender can make a difference to an agent’s fitness to be held morally responsible. This should be considered in the responsible design and development of AI-extenders.
Original language: English
Journal: Social Epistemology: A Journal of Knowledge, Culture and Policy
Number of pages: 13
ISSN: 0269-1728
DOI - permanent links
Status: Published - 18 Apr 2025
OKM publication type: A1 Original article in a scientific journal, peer-reviewed

Fields of Science

  • 611 Philosophy
