Time for AI (Ethics) maturity model is now

Ville Vakkuri, Marianna Jantunen, Erika Halme, Kai Kristian Kemell, Anh Nguyen-Duc, Tommi Mikkonen, Pekka Abrahamsson

Research output: Contribution to journal › Conference article › Scientific › peer-review


There appears to be common agreement that ethical concerns are of high importance for systems equipped with some form of Artificial Intelligence (AI). Demands for ethical AI are voiced from all directions. In response, public bodies, governments, and universities have in recent years rushed to provide sets of principles to be considered when AI-based systems are designed and used. We have learned, however, that high-level principles do not turn easily into actionable advice for practitioners. Hence, companies too are publishing their own ethical guidelines to guide their AI development. This paper argues that AI software is still software and needs to be approached from the software development perspective. The software engineering paradigm has introduced maturity model thinking, which provides a roadmap for companies to improve their performance on selected viewpoints known as key capabilities. We voice a call to action for the development of a maturity model for AI software. We wish to discuss whether the focus should be on AI ethics alone or, more broadly, on the quality of an AI system, that is, a maturity model for the development of AI systems.
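The maturity-model idea invoked in the abstract can be sketched in code: each key capability is rated on a level scale, and a CMM-style staged assessment rates the organisation by its least mature capability. The capability names, level labels, and scoring rule below are illustrative assumptions for the sketch, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical level scale and key capabilities; the paper does not
# define these, so the names here are assumptions for illustration.
LEVELS = ["initial", "managed", "defined", "measured", "optimizing"]

@dataclass
class Capability:
    name: str
    level: int  # index into LEVELS, 0..4

def overall_maturity(capabilities: list[Capability]) -> str:
    """CMM-style staged rating: the organisation is only as mature
    as its least mature key capability."""
    lowest = min(c.level for c in capabilities)
    return LEVELS[lowest]

caps = [
    Capability("transparency", 2),
    Capability("accountability", 1),
    Capability("data governance", 3),
]
print(overall_maturity(caps))  # prints "managed"
```

A continuous (per-capability) rating, as in CMMI's continuous representation, would instead report each capability's level separately rather than a single staged score.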

Original language: English
Journal: CEUR Workshop Proceedings
Publication status: Published - 2021
MoE publication type: A4 Article in conference proceedings
Event: 2021 Workshop on Artificial Intelligence Safety, SafeAI 2021 - Virtual, Online
Duration: 8 Feb 2021 → …

Bibliographical note

Publisher Copyright:
Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Fields of Science

  • 113 Computer and information sciences
