1602. Fairness stability and its connections with explainability
Invited abstract in session TA-49: Fair and Interpretable Machine Learning, stream Analytics.
Tuesday, 8:30-10:00, Room: Parkinson B10
Authors (first author is the speaker)
1. Pablo Casas, Decision Analytics and Risk, University of Southampton
2. Huan Yu, University of Southampton
3. Christophe Mues, Southampton Business School, University of Southampton
Abstract
Ensuring fairness in credit scoring models, in the context of regulatory frameworks such as the AI Act, has attracted considerable attention in the literature. To date, however, the temporal stability of fairness, and the factors that affect it, remain poorly understood. This is especially relevant for nonlinear models, where data shifts can cause counterintuitive results.
To bridge this gap, we investigate the temporal evolution of fairness for XGBoost models with fairness and stability enhancements, focusing on the interplay, under data drift, between fairness stability metrics and the stability of the model's explanations. In doing so, we establish a connection between declines in fairness and the stability of proxy variables. To measure this, we employ the Population Stability Index (PSI) to quantify changes in Shapley value distributions, providing insight into how feature interpretations evolve over time and how this affects fairness. Furthermore, we address the limitations of traditional fairness metrics by introducing generalized measures that accommodate diverse data types and nonlinear fairness concerns, building upon the principles of separation and independence.
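The separation and independence principles referred to above are the standard formal criteria: independence requires the model output Ŷ to be statistically independent of the protected attribute A (Ŷ ⊥ A), whereas separation requires that independence only conditionally on the true outcome Y (Ŷ ⊥ A | Y), i.e. equal error rates across groups.

The sketch below illustrates the monitoring step in its simplest form: the Population Stability Index between a reference and a comparison sample, PSI = sum_i (p_i - q_i) * ln(p_i / q_i), where p and q are the binned proportions of the two samples. It is a minimal sketch, not the authors' implementation; the function name psi, the quantile-binning choice, and the Shapley-value arrays shap_t0 and shap_t1 in the usage note are assumptions for illustration.

```python
import numpy as np

def psi(reference, comparison, n_bins=10, eps=1e-6):
    """Population Stability Index between two samples, e.g. one
    feature's Shapley values in a reference and a later time window.

    PSI = sum_i (p_i - q_i) * ln(p_i / q_i), where p and q are the
    binned proportions of the reference and comparison samples.
    """
    # Quantile bin edges from the reference window, so each bin holds
    # roughly equal reference mass; np.unique guards against duplicate
    # edges when the sample contains many tied values.
    edges = np.unique(np.quantile(reference, np.linspace(0, 1, n_bins + 1)))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values

    p = np.histogram(reference, bins=edges)[0] / len(reference)
    q = np.histogram(comparison, bins=edges)[0] / len(comparison)

    # Clip empty bins so the logarithm stays finite.
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum((p - q) * np.log(p / q)))

# Hypothetical usage: shap_t0 and shap_t1 are (n_samples, n_features)
# arrays of Shapley values for two scoring windows; a per-feature PSI
# profile then shows which explanations are drifting.
# drift = [psi(shap_t0[:, j], shap_t1[:, j]) for j in range(shap_t0.shape[1])]
```

A common rule of thumb flags PSI above 0.25 as a significant shift (and above 0.1 as moderate); applied per feature to Shapley values rather than to raw inputs, it tracks drift in the explanations themselves, including those of potential proxy variables.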
By quantifying the effects of data shifts on both fairness and model interpretability, this research aims to provide practical methodologies for monitoring and maintaining equitable credit scoring systems.
Keywords
- Finance and Banking
- Ethics
- Machine Learning
Status: accepted