3202. Explaining hidden variable interactions inside a model: a comparison study between different methods.
Invited abstract in session WB-27: Unraveling the Black Box: Advances in Model Explainability, stream Mathematical Optimization for XAI.
Wednesday, 10:30-12:00, Room: 047 (building: 208)
Authors (first author is the speaker)
1. Pablo Morala, Department of Statistics, Universidad Carlos III de Madrid
2. Jenny Alexandra Cifuentes Quintero, Quantitative Methods, Universidad Pontificia Comillas
3. Rosa Elvira Lillo Rodríguez, Statistics, Universidad Carlos III de Madrid
4. Iñaki Ucar, UC3M-Santander Big Data Institute, Universidad Carlos III de Madrid
Abstract
Explaining feature importance in model predictions has been the main focus of Explainable Artificial Intelligence (XAI) methods. However, many of them assign importance values to single variables without taking interactions between variables into account, an effect that commonly appears in real-life problems. In this work we present a comparison study between extensions of SHAP values (one of the most widely used interpretability methods) that include interactions, and a novel interpretability approach for neural networks named NN2Poly, which in this study is also used in a surrogate manner to explain other kinds of models. Extensive simulations are carried out under different settings; both local and global explanations are compared, and ways of computing comparable importance-order metrics are presented.
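As a minimal illustrative sketch (not the authors' code) of one of the SHAP extensions the study compares, the snippet below computes pairwise SHAP interaction values, which the shap library provides for tree-based models. The synthetic data, the random-forest model, and the aggregation into a global importance order are all assumptions made for demonstration only.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Target with an explicit x0*x1 interaction, mimicking the hidden
# variable interactions that single-feature importances tend to miss.
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
# Local explanations: an (n_samples, n_features, n_features) array whose
# off-diagonal entries attribute each prediction to feature pairs.
interactions = np.asarray(explainer.shap_interaction_values(X))

# One possible comparable global importance order: average the absolute
# attribution of each (i, j) pair over all samples.
global_order = np.abs(interactions).mean(axis=0)
print(np.round(global_order, 3))
```

The NN2Poly approach mentioned in the abstract would instead recover interaction terms as coefficients of a polynomial representation of a neural network (or of a surrogate network fitted to another model); its actual interface is not shown here.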
Keywords
- Artificial Intelligence
- Expert Systems and Neural Networks
- Machine Learning
Status: accepted