191. Coherent Local Explanations for Mathematical Optimization
Invited abstract in session WC-12: Explainability and Interpretability in Optimization, stream Artificial Intelligence, Machine Learning and Optimization.
Wednesday, 13:30-15:00, Room: H10
Authors (first author is the speaker)
1. Daan Otto, Amsterdam Business School, University of Amsterdam
2. Jannis Kurtz, Amsterdam Business School, University of Amsterdam
3. Ilker Birbil, Business Analytics, University of Amsterdam
Abstract
The surge of explainable artificial intelligence (XAI) methods seeks to enhance transparency and explainability in machine learning models. At the same time, there is a growing demand for explaining decisions made by the complex algorithms used in mathematical optimization. However, current explanation methods do not take the structure of the underlying optimization problem into account, which can lead to unreliable outcomes. In response, we introduce Coherent Local Explanations for Mathematical Optimization (CLEMO). CLEMO provides explanations for multiple components of an optimization model, namely the objective value and the decision variables, that are coherent with the underlying model structure. Our sampling-based procedure can explain the behavior of both exact and heuristic solution algorithms. We illustrate the effectiveness of CLEMO with experiments on the shortest path problem, the knapsack problem, and the vehicle routing problem, using parametric regression models as explanations. Finally, we show how the concept of CLEMO can be extended to explanations based on decision tree models.
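To make the sampling-based idea concrete, the sketch below illustrates the general flavor of such a procedure on a tiny knapsack instance. This is a hedged, minimal illustration and not the authors' CLEMO implementation: the greedy heuristic, the perturbation scheme, and the instance data are all assumptions chosen for the example. We perturb the item values, re-solve each perturbed instance with the heuristic, and fit a linear surrogate that locally explains how the heuristic's objective value responds to each item's value.

```python
# Hedged sketch of a sampling-based local explanation for an optimization
# heuristic (in the spirit of, but NOT identical to, CLEMO). All instance
# data and design choices here are illustrative assumptions.
import random

def greedy_knapsack(values, weights, capacity):
    """Greedy heuristic: pack items in decreasing value/weight ratio."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining = 0.0, capacity
    for i in order:
        if weights[i] <= remaining:
            total_value += values[i]
            remaining -= weights[i]
    return total_value

def fit_linear_surrogate(X, y):
    """Ordinary least squares via normal equations (stdlib only)."""
    n, d = len(X), len(X[0])
    A = [[1.0] + row for row in X]          # prepend intercept column
    m = d + 1
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(m)]
           for i in range(m)]
    Aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting on (A^T A) beta = A^T y.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, m):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, m):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (Aty[r] - sum(AtA[r][c] * beta[c]
                                for c in range(r + 1, m))) / AtA[r][r]
    return beta  # [intercept, coef_1, ..., coef_d]

random.seed(0)
values, weights, capacity = [10.0, 6.0, 4.0], [5.0, 4.0, 3.0], 8.0

# Sample perturbed instances around the instance of interest and record
# the heuristic's objective value for each.
X, y = [], []
for _ in range(200):
    perturbed = [v * (1 + random.uniform(-0.1, 0.1)) for v in values]
    X.append(perturbed)
    y.append(greedy_knapsack(perturbed, weights, capacity))

beta = fit_linear_surrogate(X, y)
# For this instance the greedy solution always packs items 0 and 2, so the
# surrogate coefficients come out approximately [1.0, 0.0, 1.0].
print([round(b, 2) for b in beta[1:]])
```

The surrogate coefficients act as a local explanation: near this instance, the heuristic's objective is sensitive to the values of items 0 and 2 (both selected) and insensitive to item 1 (never selected), which is consistent with the structure of the perturbed solutions. The full CLEMO method goes further by requiring explanations of the objective and the decision variables to be coherent with each other and with the model structure.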
Keywords
- Artificial Intelligence
- Machine Learning
- Simulation
Status: accepted