Liability Rules for AI-related Harm: Law and Economics Lessons for a European Approach

Shu Li, Michael Faure, Katri Havu

Research output: Contribution to journal › Article › Scientific › peer-reviewed

Abstract

The potential of artificial intelligence (AI) has grown exponentially in recent years, which not only generates value but also creates risks. AI systems are characterised by their complexity, opacity and autonomy in operation. Now and in the foreseeable future, AI systems will be operating in a manner that is not fully autonomous. This means that providing appropriate incentives to the human parties involved remains of great importance in reducing AI-related harm. Therefore, liability rules should be adapted in such a way as to provide the relevant parties with incentives to efficiently reduce the social costs of potential accidents. Relying on a law and economics approach, we address the theoretical question of what kind of liability rules should be applied to the different parties along the AI value chain. In addition, we critically analyse the ongoing policy debates in the European Union, discussing the risk that European policymakers will fail to determine efficient liability rules with regard to different stakeholders.
Original language: English
Journal: European Journal of Risk Regulation
Volume: 13
Issue number: 4
Pages (from-to): 618-634
Number of pages: 17
ISSN: 1867-299X
DOIs
Publication status: Published - Dec 2022
MoE publication type: A1 Journal article-refereed

Fields of Science

  • 513 Law
  • Law and economics
  • AI-related harm
  • Artificial intelligence
  • Deterrence
  • Developers
  • Liability rules
  • Operators
  • Risk-bearing