Online presentation.
684. Explainable Time Series Classification
Invited abstract in session MD-32: Fair and explainable models 2, stream Multiple Criteria Decision Analysis.
Area: Decision support
Monday, 14:30-16:00, Room: Virtual Room 32
Authors (first author is the speaker)
1. Yichang Wang, Univ Rennes, Inria, CNRS, IRISA
Abstract
In this talk, we study existing methods for explaining the decisions of time series classification models. We argue that, in the case of time series, the best explanations take the form of sub-series (also called shapelets), since shapelets constitute a "pattern language" familiar to time series practitioners (a minimal illustration of the shapelet idea is sketched below).
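A minimal sketch of the shapelet idea, assuming the usual definition of the shapelet-to-series distance as the minimum Euclidean distance over all same-length windows (this illustration is not taken from the talk itself):

```python
import numpy as np

def shapelet_distance(x: np.ndarray, s: np.ndarray) -> float:
    """Smallest Euclidean distance between shapelet s and any window of series x."""
    windows = np.lib.stride_tricks.sliding_window_view(x, len(s))
    return float(np.min(np.linalg.norm(windows - s, axis=1)))

# A small distance means the pattern s occurs somewhere in x, which is why a
# discriminative shapelet can serve directly as an explanation of the decision.
x = np.sin(np.linspace(0, 6 * np.pi, 100))
s = x[40:55]                      # a sub-series of x itself
print(shapelet_distance(x, s))    # ~0.0: the shapelet is present in x
```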
We review state-of-the-art classification methods that jointly learn a shapelet-based representation of the series in the dataset and classify the series according to this representation. However, although the learned shapelets are discriminative, they are not always similar to pieces of a real series in the dataset, which makes them hard to use as explanations of the classifier's decisions. We use a simple convolutional network for the time series classification task and introduce an adversarial regularization that constrains the model to learn meaningful shapelets.
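The following sketch illustrates the kind of architecture described above. It is an assumption-laden outline, not the authors' implementation: the module names, hyper-parameters, and the exact form of the adversarial loss are placeholders. Learned shapelets act as 1-D templates whose minimum window distances feed a linear classifier, while a discriminator trained to separate learned shapelets from real sub-series provides the adversarial penalty.

```python
import torch
import torch.nn as nn

class ShapeletNet(nn.Module):
    def __init__(self, n_shapelets=20, shapelet_len=30, n_classes=2):
        super().__init__()
        # each row is one learned shapelet
        self.shapelets = nn.Parameter(torch.randn(n_shapelets, shapelet_len) * 0.1)
        self.classifier = nn.Linear(n_shapelets, n_classes)

    def transform(self, x):                       # x: (batch, series_len)
        windows = x.unfold(1, self.shapelets.shape[1], 1)        # (batch, n_win, L)
        # squared distance between every window and every shapelet
        d = ((windows.unsqueeze(2) - self.shapelets) ** 2).mean(-1)  # (batch, n_win, K)
        return d.min(dim=1).values                # min over windows: (batch, K)

    def forward(self, x):
        return self.classifier(self.transform(x))

# Discriminator for the adversarial regularization: it sees either a real
# sub-series cut from a training series or a learned shapelet of the same length.
discriminator = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 1))

def adversarial_regularizer(model: ShapeletNet) -> torch.Tensor:
    """Penalty pushing shapelets towards what the discriminator considers 'real'."""
    logits = discriminator(model.shapelets)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits)
    )

# Total training loss (sketch): cross-entropy + lambda * adversarial penalty,
# with the discriminator updated in alternation on real sub-series vs. shapelets.
```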
Our classification results on many univariate time series benchmark datasets are comparable with those obtained by state-of-the-art shapelet-based classification algorithms. However, by comparing with other black-box explanation methods, we show that our adversarially regularized method learns shapelets that are, by design, better suited to explaining decisions.
Keywords
- Machine Learning
Status: accepted