Trust and Coordinated Group Action

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › Peer-reviewed

Abstract

Trust between the participants of a group action is a presupposition of smooth coordination. We study whether robots could participate in joint action, and whether there is conceptual space for robots to figure as trustworthy cooperators in light of various philosophical accounts of trust. We discuss different notions of joint action and group action and locate robots on the scale of such notions of varying strength. We suggest that when we use trust notions of normative strength in the context of AI or robotics, the normative component of such talk boils down to talk of responsible robotics or AI on the human side, while the reliance, reliability, or predictability component of such trust notions applies to machines and algorithms as well.
Original language: English
Title of host publication: Social Robots with AI: Prospects, Risks, and Responsible Methods: Proceedings of Robophilosophy 2024, 19–23 August 2024, Aarhus University, Denmark, and online
Editors: Johanna Seibt, Peter Fazekas, Oliver Santiago Quick
Number of pages: 10
Publisher: IOS Press
Publication date: 2025
Pages: 633–642
ISBN (Print): 978-1-64368-567-0
ISBN (Electronic): 978-1-64368-568-7
Publication status: Published - 2025
MoE publication type: A4 Article in conference proceedings
Event: Robophilosophy Conference 2024: Social Robots With AI: Prospects, Risks, and Responsible Methods - Aarhus, Denmark
Duration: 20 Aug 2024 – 23 Aug 2024
Conference number: 6
https://cas.au.dk/en/robophilosophy/conferences/rpc2024

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Publisher: IOS Press
Volume: 397
ISSN (Print): 0922-6389
ISSN (Electronic): 1879-8314

Fields of Science

  • 611 Philosophy
  • 113 Computer and information sciences
