Time for AI (Ethics) maturity model is now

Ville Vakkuri, Marianna Jantunen, Erika Halme, Kai Kristian Kemell, Anh Nguyen-Duc, Tommi Mikkonen, Pekka Abrahamsson

Research output: Journal contribution › Conference article › Scientific › Peer review


There appears to be broad agreement that ethical concerns are of high importance for systems equipped with some form of Artificial Intelligence (AI). Demands for ethical AI come from all directions. In response, public bodies, governments, and universities have in recent years rushed to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not easily turn into actionable advice for practitioners. Hence, companies, too, are publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from a software development perspective. The software engineering paradigm has introduced maturity model thinking, which provides companies with a roadmap for improving their performance along selected viewpoints known as key capabilities. We voice a call to action for the development of a maturity model for AI software. We wish to discuss whether the focus should be on AI ethics or, more broadly, on the quality of an AI system, that is, a maturity model for the development of AI systems.

Journal: CEUR Workshop Proceedings
Status: Published - 2021
MoE publication type: A4 Article in conference proceedings
Event: 2021 Workshop on Artificial Intelligence Safety, SafeAI 2021 - Virtual, Online
Duration: 8 Feb 2021 → …

Bibliographic note

Publisher Copyright:
Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).


  • 113 Computer and information sciences
