Abstract
The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But as discussions in the context of self-defense have highlighted, liability matters a great deal to the just distribution of risk of harm. While it is sometimes morally required simply to minimize the risk of relevant harms, this is not so when one party is responsible for creating the risky situation, which is common in real-world traffic scenarios. In particular, insofar as possible, those who have voluntarily engaged in activity that foreseeably poses a risk of harm should be the ones who bear it, other things being equal. This should not be controversial when someone intentionally or recklessly creates a risky situation. But I argue that, on plausible assumptions, merely choosing to use a self-driving vehicle typically gives rise to a degree of liability, so that such vehicles should be programmed by default to shift the larger share of the risk from innocent outsiders to users. Insofar as automated vehicles cannot be programmed to take all the factors affecting liability into account, there is a pro tanto moral reason to restrict their use.
Original language | English
---|---
Journal | Journal of Applied Philosophy
Volume | 38
Issue | 4
Pages (from-to) | 630-645
Number of pages | 16
ISSN | 0264-3758
DOI |
Status | Published - Aug 2021
MoE publication type | A1 Journal article (refereed)
Fields of Science
- 611 Philosophy
- 113 Computer and information sciences