EURO 2025 Leeds
Abstract Submission

2107. Clues on building convincing explanations for your OR results or systems: Knowledge sharing and panel discussion

Invited abstract in session TB-26: Explainability for decision support tools: The what, why and how, stream Making an Impact: The Practitioners' Stream 1.

Tuesday, 10:30-12:00
Room: Maurice Keyworth 1.33

Authors (first author is the speaker)

1. Anne Liret
R&T, British Telecom
2. Inès Saad
Laboratory MIS (UPJV), ESC Amiens
3. Ilker Birbil
Business Analytics, University of Amsterdam

Abstract

Explainability is the ability of an AI system to explain why it made a certain decision. By introducing AI within OR systems, we expect to improve robustness, but we also add non-deterministic behaviour, caused for instance by the learning-from-experience aspect of AI.

The aim of this session is to provide attendees with insights into the challenges of building a convincing explanation in Operational Research-based decision-making systems, examples and tools that can help, and lessons learnt from practitioners, for practitioners. Time for panel discussion and questions and answers will be reserved so that attendees can share their thoughts and take away useful knowledge.

In the second part of the session, time will be dedicated to a panel discussion. Attendees and subject matter experts will be able to discuss questions such as: What makes algorithms work with human users? Do we need human-in-the-loop OR for this? Do we even have the technical ability to provide explanations? How do we know that we can trust a decision-making system?

We will hear thoughts and advice from the following panellists:

- Prof. Inès Saad, professor of Information Systems and researcher at the MIS laboratory, University of Picardie Jules Verne, France. Her research focuses on knowledge management, information systems, and multiple-criteria decision making. She has published in numerous international conferences and journals, and is a reviewer for international journals and conferences such as EJOR, IEEE Transactions on Systems, Man, and Cybernetics, and Decision Support Systems. Inès will share her thoughts on the need to capture decision makers' explanation goals, on how to use such knowledge to adapt the explanation content and format to user preferences, and on how to generate trust in multi-criteria recommendation tools.

- Prof. Ilker Birbil, professor of AI and Optimisation Techniques for Business and Society at the University of Amsterdam, Netherlands. Since February 2022, he has headed the Business Analytics section of the University of Amsterdam. His research interests centre on optimization methods in data science and decision making, interpretable machine learning, and data privacy in operations research. Ilker will speak about the need for a coherent explanation approach aligned with the behaviour of exact and approximate solving methods.

- Dr. Anne Liret, research manager and principal research lead in sustainable resource optimisation at British Telecom in Paris, France. Anne will speak about the iSee European project on explainable AI: first, how to elevate decision-support tools with the ability to generate explanations that meet users' intent and are meaningful in form and content; second, the need for a repository that helps practitioners reuse explanation methods for their needs, evaluate their quality from the user's perspective, and build an adaptable explanation strategy.

Keywords

Status: accepted
