2595. Shaping Programmer Practices: Effective Strategies for Mitigating Bias in Machine Learning Algorithm Development
Invited abstract in session TD-7: Behaviour and decision support, stream Behavioural OR.
Tuesday, 14:30-16:00, Room: 1019 (building: 202)
Authors (first author is the speaker)
1. Seyedmohammadreza Shahsaheni, Haskayne School of Business, University of Calgary
2. Osman Alp, Haskayne School of Business, University of Calgary
3. Justin Weinhardt, Haskayne School of Business, University of Calgary
4. Alireza Sabouri, University of Calgary
Abstract
In recent years, increased awareness of gender bias in machine learning has led developers to focus more on gender equality in model outcomes. This study investigates how fairness norms and accuracy thresholds affect developers' model choices and reporting decisions. The experimental design replicates the machine learning algorithm development process. Results from 604 programmers participating in the experiment show that fairness norms led to the selection of fairer models (Hypothesis H1). Moreover, developers in companies that valued accuracy less prioritized fairness when informed of the fairness norm (Hypothesis H2). Interestingly, we found that high accuracy thresholds combined with fairness norms encouraged participants to prioritize fairness, even at the expense of personal benefits. As an exploratory analysis, we examine the performance reports provided by the programmers and the effect of their backgrounds and demographics on their models' performance. This study highlights the importance of fairness norms in guiding machine learning development and reporting, supporting gender equality in AI applications.
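The trade-off at the heart of the study, choosing between a more accurate candidate model and a fairer one, can be sketched with a simple group-fairness metric. The metric shown here (demographic parity difference), the toy labels, the group assignments, and both candidate models are purely illustrative assumptions, not data or methods from the experiment.

```python
# Illustrative sketch: comparing two candidate models on accuracy and a
# simple group-fairness metric (demographic parity difference).
# All values below are made up for illustration, not from the study.

def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-prediction rates between groups A and B."""
    def rate(g):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        return sum(preds) / len(preds)
    return abs(rate("A") - rate("B"))

# Toy true labels and protected-group membership (hypothetical)
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

model_1 = [1, 0, 1, 1, 0, 0, 0, 0]  # more accurate, less fair
model_2 = [1, 0, 0, 0, 0, 1, 0, 0]  # less accurate, perfectly balanced

for name, preds in [("model_1", model_1), ("model_2", model_2)]:
    print(name,
          "accuracy:", accuracy(y_true, preds),
          "DP diff:", demographic_parity_diff(preds, group))
# model_1 accuracy: 0.875 DP diff: 0.75
# model_2 accuracy: 0.75 DP diff: 0.0
```

Under a fairness norm, a developer in this toy setting might prefer model_2 despite its lower accuracy; under a strict accuracy threshold alone, model_1 wins.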
Keywords
- Machine Learning
- Ethics
- Decision Analysis
Status: accepted