1979. Cognitive Costs and Ethical Choices: Fairness in Human-Machine Decision Systems
Invited abstract in session MD-28: Human-AI Collaboration and Ethics, stream Decision Support Systems.
Monday, 14:30-16:00, Room: Maurice Keyworth 1.03
Authors (first author is the speaker)
1. Seyedmohammadreza Shahsaheni, Haskayne School of Business, University of Calgary
2. Osman Alp, Haskayne School of Business, University of Calgary
3. Justin Weinhardt, Haskayne School of Business, University of Calgary
4. Alireza Sabouri, University of Calgary
Abstract
Advances in machine learning have led to the widespread use of algorithms in high-stakes decision-making domains such as healthcare, finance, and human resources. This paper applies a rational inattention framework to examine how human cognitive limits shape decisions made with biased algorithmic outputs. The study compares settings in which decision-makers are aware of group-specific bias with those in which they are not. When bias is overlooked, assuming uniform error rates across sensitive groups can reinforce existing inequalities, favoring groups that are already advantaged. Awareness of these biases leads decision-makers to adjust their evaluations, which may help reduce disparities. The research identifies specific belief thresholds that influence outcomes, noting that moderate qualification rates tend to disadvantage underrepresented groups while more extreme prior beliefs lead to different outcomes. The findings can assist policymakers, organizational leaders, and practitioners in designing fairer decision systems that balance human judgment with machine outputs.
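To make the bias-awareness mechanism concrete, the toy sketch below (not the paper's model; all rates and priors are hypothetical) shows how a Bayesian decision-maker's posterior belief about a candidate with a negative algorithmic score shifts once a group-specific false-negative rate is acknowledged rather than assumed uniform across groups.

```python
# Toy illustration with hypothetical numbers: posterior probability that a
# candidate is qualified after a negative algorithmic score, comparing a
# decision-maker who assumes uniform error rates with one aware that the
# algorithm misses more qualified candidates from group B.

def posterior_qualified_given_negative(prior, tpr, fpr):
    """P(qualified | negative score) via Bayes' rule."""
    numerator = (1.0 - tpr) * prior                # qualified but scored negative
    evidence = numerator + (1.0 - fpr) * (1.0 - prior)
    return numerator / evidence

prior = 0.5          # hypothetical prior belief that a candidate is qualified
fpr = 0.2            # false-positive rate, assumed equal for both groups
tpr_assumed = 0.8    # true-positive rate the unaware decision-maker applies to everyone
tpr_group_b = 0.6    # actual, lower true-positive rate for group B (more false negatives)

unaware = posterior_qualified_given_negative(prior, tpr_assumed, fpr)  # applied to all groups
aware_b = posterior_qualified_given_negative(prior, tpr_group_b, fpr)  # adjusted for group B

print(f"Posterior assuming uniform error rates: {unaware:.2f}")   # 0.20
print(f"Posterior aware of group-B bias:        {aware_b:.2f}")   # 0.33
```

In this hypothetical setup, the bias-aware decision-maker discounts negative scores for group B, since those scores are less informative for that group, which illustrates how acknowledging group-specific error rates can narrow the disparity produced by taking the algorithm at face value.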
Keywords
- Artificial Intelligence
- Decision Support Systems
- Ethics
Status: accepted