Abstract
In recent years, the use of complex machine learning models has increased drastically. These complex black box models trade interpretability for accuracy. The lack of interpretability is troubling for, e.g., socially sensitive, safety-critical, or knowledge-extraction applications. In this paper, we propose a new explanation method, SLISE, for interpreting predictions from black box models. SLISE can be used with any black box model (model-agnostic), does not require any modifications to the black box model (post-hoc), and explains individual predictions (local). We evaluate our method using real-world datasets and compare it against other model-agnostic, local explanation methods. Our approach addresses shortcomings of related explanation methods by using only existing data instead of sampling new, artificial data. The method also generates more generalizable explanations and is usable without modification across various data domains.
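For intuition, the sketch below illustrates the general idea of a local, model-agnostic, post-hoc explanation: a sparse linear surrogate is fitted to the black box's predictions on existing data, centred on the instance being explained. This is a simplified stand-in (ordinary Lasso regression via scikit-learn), not the robust subset-based optimisation that SLISE itself solves; the names `black_box`, `X`, and `x_explained` are hypothetical placeholders.

```python
# Simplified illustration of a local, model-agnostic, post-hoc explanation.
# NOTE: this uses plain Lasso regression as a stand-in and is NOT the actual
# SLISE algorithm; it only shows the idea of a sparse local linear surrogate
# fitted on existing data (no artificially sampled neighbours).

import numpy as np
from sklearn.linear_model import Lasso

def local_linear_explanation(black_box, X, x_explained, alpha=0.01):
    """Fit a sparse linear surrogate to the black box's predictions on existing
    data, centred on the instance being explained."""
    y = np.asarray(black_box(X))              # black box predictions on existing data
    y0 = float(black_box(x_explained[None, :])[0])
    # Centre on the explained instance so the surrogate passes through it.
    Xc = X - x_explained
    yc = y - y0
    surrogate = Lasso(alpha=alpha, fit_intercept=False).fit(Xc, yc)
    return surrogate.coef_                    # sparse weights = local feature importances

# Toy usage with a hypothetical black box:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
black_box = lambda Z: np.tanh(Z @ np.array([1.5, -2.0, 0.0, 0.5, 0.0]))
weights = local_linear_explanation(black_box, X, X[0])
print(weights)
```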
Original language | English |
---|---|
Article number | 1143904 |
Journal | Frontiers in Computer Science |
Volume | 5 |
Number of pages | 17 |
ISSN | 2624-9898 |
DOIs | |
Publication status | Published - 8 Aug 2023 |
MoE publication type | A1 Journal article-refereed |
Fields of Science
- 113 Computer and information sciences
- HCI (human-computer interaction)
- XAI (explainable artificial intelligence)
- Explanations
- Interpretability
- Interpretable machine learning
- Local explanation
- Model-agnostic explanation