718. Beyond the Black Box: Unraveling the Role of Explainability in Human-AI Collaboration
Invited abstract in session TA-33: Decision Analysis and Artificial Intelligence (AI), stream Decision Analysis.
Tuesday, 8:30-10:00, Room: Maurice Keyworth 1.31
Authors (first author is the speaker)
1. Caner Canyakmaz, TBS Education
2. Tamer Boyaci, ESMT Berlin
3. Francis de Véricourt, Management Science, ESMT Berlin
Abstract
Explainable AI models have been increasingly studied as a way to bring transparency to decision-making processes and to alleviate users' inappropriate reliance on AI inputs. Despite this promise, evidence from recent empirical studies has been quite mixed. We develop an analytical model that captures the limited but flexible nature of human cognition, together with imperfect machine recommendations and explanations that reflect the quality of those predictions. We investigate the impact of explainability on decision accuracy, underreliance, and overreliance, as well as on cognitive effort. We find that while low levels of explainability have no impact on accuracy or reliance, they lessen users' cognitive burden. Higher levels of explainability, by contrast, enhance accuracy by reducing overreliance, albeit at the expense of higher underreliance. Furthermore, the incremental impact of explainability is greater when the decision-maker is more cognitively constrained and the decision task is sufficiently complex. Surprisingly, we find that higher levels of explainability can escalate the overall cognitive burden, especially when the decision-maker is pressed for time to complete a complex task and initially doubts the machine's quality, precisely the scenarios where explanations are most needed. By characterizing the comprehensive effects of explainability, our study contributes to our understanding of how to design effective human-AI systems in diverse decision-making environments.
Keywords
- Decision Analysis
- Artificial Intelligence
Status: accepted