EURO 2024 Copenhagen
Abstract Submission


625. Fair mixed effects support vector machine

Invited abstract in session TA-28: Fairness and responsible AI, stream Advancements of OR-analytics in statistics, machine learning and data science.

Tuesday, 8:30-10:00
Room: 065 (building: 208)

Authors (first author is the speaker)

1. João Vitor Pamplona
Economic and Social Statistics Department, Trier University
2. Jan Pablo Burgard
Trier University

Abstract

When using machine learning for automated prediction, it is important to account for fairness in the prediction. Fairness in machine learning aims to ensure that biases in the data and model inaccuracies do not lead to discriminatory decisions. For example, predictions from fair machine learning models should not discriminate on the basis of sensitive attributes such as sexual orientation or ethnicity.

A fundamental assumption in machine learning is the independence of observations. However, this assumption often does not hold for data describing social phenomena, where data points are frequently clustered. Hence, if a machine learning model does not account for the cluster correlations, its results may be biased. The bias is especially pronounced when the cluster assignment is correlated with the variable of interest.
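
The bias mechanism described above can be sketched in a few lines of NumPy (an illustrative simulation, not the authors' study design): when a cluster-level effect is correlated with a covariate, a pooled estimator absorbs the cluster effect into the slope, while a within-cluster (demeaned) estimator does not.

```python
import numpy as np

rng = np.random.default_rng(42)
beta = 1.0                                       # true slope
n_clusters, n_per = 30, 40
u = rng.normal(size=n_clusters)                  # cluster-level effects
g = np.repeat(np.arange(n_clusters), n_per)      # cluster assignment
x = u[g] + rng.normal(size=g.size)               # covariate correlated with the cluster effect
y = beta * x + u[g] + rng.normal(scale=0.5, size=g.size)

# Pooled OLS slope: ignores the cluster structure and is biased upward,
# because the cluster effect u is correlated with x.
xc, yc = x - x.mean(), y - y.mean()
slope_pooled = (xc @ yc) / (xc @ xc)

# Within-cluster slope: demeaning by cluster removes the cluster effect,
# recovering a slope close to the true beta.
xm = x - np.array([x[g == j].mean() for j in range(n_clusters)])[g]
ym = y - np.array([y[g == j].mean() for j in range(n_clusters)])[g]
slope_within = (xm @ ym) / (xm @ xm)
```

Here the pooled slope lands well above the true value of 1, while the within-cluster slope stays near it, illustrating why cluster-aware models matter.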

We present a fair mixed effects support vector machine algorithm that can handle both problems simultaneously. In a reproducible simulation study, we demonstrate the impact of clustered data on the quality of fair machine learning predictions.
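
The fairness side of the approach can be sketched with a linear SVM trained by subgradient descent, adding a covariance-based fairness penalty as a disparate-impact proxy. This is an assumed, simplified formulation for illustration only: the penalty form and all parameters are choices made here, and the authors' algorithm additionally models the mixed (cluster) effects that this sketch omits.

```python
import numpy as np

def fair_linear_svm(X, y, s, lam=1e-2, mu=0.0, lr=0.05, epochs=800):
    """Linear SVM via subgradient descent on hinge loss + L2 penalty,
    plus mu * |cov(s, decision value)| as an illustrative fairness
    penalty (an assumed disparate-impact proxy, not the authors' model)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    s_c = s - s.mean()                              # centered sensitive attribute
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                        # points violating the margin
        g_w = lam * w - (y[active, None] * X[active]).sum(0) / n
        g_b = -y[active].sum() / n
        cov = s_c @ (X @ w + b) / n                 # covariance with decision values
        g_w += mu * np.sign(cov) * (s_c @ X) / n    # subgradient of mu * |cov|
        w -= lr * g_w
        b -= lr * g_b
    return w, b

# Synthetic data: the sensitive attribute s is correlated with the label,
# and one feature directly encodes s.
rng = np.random.default_rng(0)
n = 400
y = rng.choice([-1.0, 1.0], n)
s = np.where(rng.random(n) < 0.75, (y + 1) / 2, (1 - y) / 2)
X = np.column_stack([y + rng.normal(size=n),                  # legitimate feature
                     2 * s - 1 + 0.5 * rng.normal(size=n)])   # feature encoding s

w0, b0 = fair_linear_svm(X, y, s, mu=0.0)   # unconstrained SVM
w1, b1 = fair_linear_svm(X, y, s, mu=5.0)   # fairness-penalized SVM

def dec_cov(w, b):
    """Absolute covariance between s and the decision values."""
    return abs((s - s.mean()) @ (X @ w + b) / n)
```

With the penalty active, the decision values decorrelate from the sensitive attribute while the informative feature still drives classification.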

Keywords

Status: accepted

