3191. iSee: The Challenge of Designing Explanation Strategies That Users Trust and Understand
Invited abstract in session TB-26: Explainability for decision support tools: The what, why and how, stream Making an Impact: The Practitioners' Stream 1.
Tuesday, 10:30-12:00, Room: Maurice Keyworth 1.33
Authors (first author is the speaker)
1. Anne Liret, R&T, British Telecom
Abstract
Explainability is the ability of an AI system to explain why it took a certain decision. By introducing AI within OR systems, we expect to improve robustness, but we also introduce non-deterministic behaviour, caused for instance by AI's learning-from-experience aspect.
The aim of this session is to give attendees insights into the challenges of building a convincing explanation in Operational Research-based decision-making systems, examples and tools that can help, and lessons learnt from practitioners and for practitioners. Time for panel discussion, questions and answers will be reserved so that attendees can share their thoughts and take away useful knowledge.
First, a workshop will detail the challenge of explaining the outcomes of decision-making systems, including an overview of the iSee project, the explanation-experience-sharing platform built within the project (isee4xai.com), and case studies.
We will hear about how to respond to the need to explain the results of AI- and OR-based systems, the difficult task of building fit-for-user explanations for decision-making models in business, when an explanation actually makes the best impact, and whether bridging AI and OR could help explain decision support tools and their results.
Hands-on examples will demonstrate how to define explanation needs for a given model, how to activate recommended strategies for generating explanations based on past explanation experience, and how to evaluate explanation quality from a user-experience point of view.
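As a rough illustration of what such a hands-on step can look like (a minimal sketch using scikit-learn's permutation importance as a generic post-hoc explanation strategy for a tabular decision model; it is not the iSee platform's API, whose interface is not described in this abstract), consider:

# Illustrative sketch only: a generic post-hoc explanation for a tabular
# decision model, NOT the iSee / isee4xai.com API (assumption: scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# 1. Train a decision-making model whose outcomes need explaining.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2. One explanation need: which inputs drive the model's decisions overall?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for feature, importance in ranked[:5]:
    print(f"{feature}: {importance:.3f}")

# 3. Evaluating quality from a user-experience point of view would then mean
#    collecting user feedback on such explanations (e.g. a 1-5 rating),
#    which is a survey/UX step rather than something the library provides.

In the session, this kind of strategy selection and user-centred evaluation is what the iSee platform is intended to support end to end.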
Keywords
- Practice of OR
- Artificial Intelligence
Status: accepted