625. Fair mixed effects support vector machine
Invited abstract in session TA-28: Fairness and responsible AI, stream Advancements of OR-analytics in statistics, machine learning and data science.
Tuesday, 8:30-10:00, Room: 065 (building: 208)
Authors (first author is the speaker)
1. João Vitor Pamplona, Economic and Social Statistics Department, Trier University
2. Jan Pablo Burgard, Trier University
Abstract
When using machine learning for automated prediction, it is important to account for fairness in the predictions. Fairness in machine learning aims to ensure that biases in the data and model inaccuracies do not lead to discriminatory decisions. For example, predictions from fair machine learning models should not discriminate based on sensitive attributes such as sexual orientation or ethnicity.
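As an illustration of the fairness notion described above, the following sketch computes the demographic parity gap, one common fairness criterion: the difference in positive-prediction rates between the two groups defined by a binary sensitive attribute. This is a generic example for clarity, not the specific fairness constraint used in the authors' algorithm.

```python
def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the
    groups defined by a binary sensitive attribute (0/1 labels)."""
    group1 = [p for p, s in zip(y_pred, sensitive) if s == 1]
    group0 = [p for p, s in zip(y_pred, sensitive) if s == 0]
    rate1 = sum(group1) / len(group1)
    rate0 = sum(group0) / len(group0)
    return abs(rate1 - rate0)

# Toy predictions that favour the group with s = 1:
y_pred    = [1, 1, 1, 0, 0, 0, 0, 1]
sensitive = [1, 1, 1, 1, 0, 0, 0, 0]
gap = demographic_parity_gap(y_pred, sensitive)
print(gap)  # 0.5: group 1 is predicted positive at 75%, group 0 at 25%
```

A perfectly "fair" classifier under this criterion would yield a gap of zero; fair machine learning methods typically constrain or penalize such a gap during training.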
A fundamental assumption in machine learning is the independence of observations. However, this assumption often does not hold for data describing social phenomena, where observations are frequently clustered into groups. Hence, if the machine learning model does not account for the cluster correlations, the results may be biased. The bias is especially pronounced when the cluster assignment is correlated with the variable of interest.
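The violation of independence described above can be made concrete with a small simulation (a hypothetical illustration, not the simulation study from the paper): when each cluster has its own random intercept, observations within a cluster are correlated, which can be quantified by the intra-cluster correlation coefficient (ICC).

```python
import random
from statistics import mean, pvariance

random.seed(0)
n_clusters, per_cluster = 50, 20

# Simulate y = u_c + e, with a cluster random effect u_c ~ N(0, 2^2)
# and independent noise e ~ N(0, 1).
by_cluster = []
for _ in range(n_clusters):
    u = random.gauss(0, 2.0)  # shared by all observations in the cluster
    by_cluster.append([u + random.gauss(0, 1.0) for _ in range(per_cluster)])

# ICC: share of total variance explained by the cluster-level effect.
between = pvariance([mean(ys) for ys in by_cluster])
within = mean(pvariance(ys) for ys in by_cluster)
icc = between / (between + within)
print(round(icc, 2))  # close to the theoretical 4 / (4 + 1) = 0.8
```

An ICC this far from zero means the observations are strongly dependent within clusters; a model that ignores this structure treats 1,000 correlated points as 1,000 independent ones, which is what mixed effects models are designed to correct.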
We present a fair mixed effects support vector machine algorithm that can handle both problems simultaneously. In a reproducible simulation study, we demonstrate the impact of clustered data on the quality of fair machine learning predictions.
Keywords
- Machine Learning
- Analytics and Data Science
- Programming, Quadratic
Status: accepted