Abstract
In recent years, the use of complex machine learning models has increased drastically. These complex black box models trade interpretability for accuracy. The lack of interpretability is troubling for, e.g., socially sensitive, safety-critical, or knowledge extraction applications. In this paper, we propose a new explanation method, SLISE, for interpreting predictions from black box models. SLISE can be used with any black box model (model-agnostic), does not require any modifications to the black box model (post-hoc), and explains individual predictions (local). We evaluate our method using real-world datasets and compare it against other model-agnostic, local explanation methods. Our approach addresses shortcomings of related explanation methods by using only existing data instead of sampling new, artificial data. The method also generates more generalizable explanations and is usable without modification across various data domains.
Original language | English |
---|---|
Article number | 1143904 |
Journal | Frontiers in Computer Science |
Volume | 5 |
Number of pages | 17 |
ISSN | 2624-9898 |
DOI - permanent links | |
Status | Published - 8 Aug 2023 |
OKM publication type | A1 Original article in a scientific journal, peer-reviewed |
Fields of science
- 113 Computer and information sciences