684. Explainable Time Series Classification
Area: Decision support
Monday, 14:30-16:00
Room: Virtual Room 32
Authors (first author is the speaker)
Univ Rennes, Inria, CNRS, IRISA
In this talk, we will study different existing methods that can be used to explain the decisions made by time series classification models. We argue that, in the case of time series, the best explanations should take the form of sub-series (also called shapelets), since shapelets are a "pattern language" familiar to time series users.
We review state-of-the-art classification methods that jointly learn a shapelet-based representation of the series in the dataset and classify the series according to this representation. However, although the learned shapelets are discriminative, they do not always resemble sub-series of real series in the dataset, which makes them difficult to use when explaining the classifier's decisions. We tackle the time series classification task with a simple convolutional network and introduce an adversarial regularization that constrains the model to learn meaningful shapelets.
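To make the idea of a shapelet-based representation concrete, here is a minimal sketch: each series is mapped to one feature per shapelet, the minimum Euclidean distance between that shapelet and any sliding window of the series. The function name and the choice of plain Euclidean distance are illustrative assumptions, not the authors' exact model.

```python
import math

def shapelet_features(series, shapelets):
    """Map a series to its shapelet-based representation:
    one feature per shapelet, equal to the minimum Euclidean
    distance between the shapelet and any sliding window of
    the series. Illustrative sketch, not the talk's exact model."""
    feats = []
    for s in shapelets:
        L = len(s)
        # distance from the shapelet to every window of length L
        best = min(
            math.sqrt(sum((series[i + j] - s[j]) ** 2 for j in range(L)))
            for i in range(len(series) - L + 1)
        )
        feats.append(best)
    return feats

# Toy example: the shapelet [1, 2, 1] occurs exactly inside the series,
# so its feature (minimum distance) is 0.
series = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
print(shapelet_features(series, [[1.0, 2.0, 1.0]]))  # -> [0.0]
```

A classifier trained on these features makes its decision through distances to named sub-series, which is what makes shapelets attractive for explanation, provided the shapelets themselves look like plausible pieces of real series.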
Our classification results on many univariate time series benchmark datasets are comparable to those obtained by state-of-the-art shapelet-based classification algorithms. Moreover, by comparison with other black-box explanation methods, we show that our adversarially regularized method learns shapelets that are, by design, better suited to explaining decisions.
- Machine Learning