1884. Making Machine Learning Explanations Truthful and Intelligible
Area: Decision support
Monday, 14:30-16:00
Room: Virtual Room 32
Authors (first author is the speaker)
University of Bristol
Predictive models come with a myriad of well-defined performance metrics that guide us through their development, validation and deployment. While the multiplicity of these measurements poses challenges in itself, the lack of agreed-upon evaluation criteria in machine learning explainability creates even more fundamental issues. For one, the transparency of predictive algorithms tends to be elusive and notoriously difficult to measure. Without universal and objective interpretability metrics, our evaluation of such systems may be subject to personal preferences expressed through an "I know it when I see it" attitude, and to human cognitive biases such as the illusory truth effect and confirmation bias. Resorting to user studies -- considered the field's gold standard -- may not be of much help either when the assumptions of the test and deployment environments are misaligned.
Shall we take machine learning explanations at face value? What should we do when we are shown multiple, possibly conflicting, explanations? What prior (technical) knowledge do we need to appreciate their insights and limitations? With all of these questions and not many definitive answers, how do we go beyond naive compliance with legal frameworks such as the GDPR? In this talk I will show how to identify obscure assumptions and overcome inherent limitations of black-box explainers in order to generate truthful and intelligible insights that can be harnessed to satisfy our scientific curiosity and create business value.
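To make the abstract's point about conflicting explanations concrete, here is a minimal illustrative sketch (not taken from the talk): two common black-box explanation strategies -- finite-difference sensitivity and occlusion against a baseline -- applied to the same toy model and instance can rank the features in opposite orders, because each explainer bakes in a different hidden assumption about what "removing" or "perturbing" a feature means. The model, the instance and both explainers are assumptions chosen for the example.

```python
def black_box(x1, x2):
    """Toy opaque model: nonlinear in x1, linear in x2."""
    return x1 ** 2 + x2

def gradient_importance(f, x, h=1e-6):
    """Sensitivity via finite differences: how fast the output
    changes as each feature is nudged slightly."""
    base = f(*x)
    return [(f(*(x[:i] + (x[i] + h,) + x[i + 1:])) - base) / h
            for i in range(len(x))]

def occlusion_importance(f, x, baseline=0.0):
    """Sensitivity via occlusion: how much the output changes when
    a feature is 'removed', i.e. replaced by a baseline value."""
    base = f(*x)
    return [base - f(*(x[:i] + (baseline,) + x[i + 1:]))
            for i in range(len(x))]

x = (0.9, 1.0)
grad = gradient_importance(black_box, x)   # ranks x1 above x2 (about 1.8 vs 1.0)
occl = occlusion_importance(black_box, x)  # ranks x2 above x1 (0.81 vs 1.0)
```

The disagreement is not a bug in either explainer: the finite-difference view asks about local slope, while the occlusion view asks about distance to an arbitrary baseline, and those are genuinely different questions about the model.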
- Artificial Intelligence
- Machine Learning