Fair Regression

Fair regression aims to develop machine learning models that produce accurate predictions while mitigating bias against sensitive groups (e.g., groups defined by race or gender). Current research pursues fairness through several routes: post-processing techniques, such as Wasserstein barycenters that adjust the distribution of model outputs, and counterfactual approaches that estimate outcomes independently of sensitive attributes, often using double machine learning. These advances are crucial for ensuring equitable outcomes in high-stakes applications such as loan approval and risk assessment, promoting both fairness and accountability in algorithmic decision-making.
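To make the Wasserstein-barycenter post-processing idea concrete, the sketch below illustrates the one-dimensional case: a base regressor's scores are mapped, per group, onto the barycenter of the group-conditional score distributions by averaging the groups' quantile functions. This is a minimal illustrative implementation using empirical quantiles on a calibration set; the function name, variable names, and data-handling choices are assumptions for this example, not an API from any particular paper or library.

```python
import numpy as np

def barycenter_postprocess(cal_scores, cal_groups, new_scores, new_groups):
    """Post-process regression outputs toward demographic parity.

    Each raw score is replaced by the value of the (1-D) Wasserstein
    barycenter of the group-conditional score distributions at the same
    quantile level. Quantiles are estimated empirically from a held-out
    calibration set (cal_scores, cal_groups).
    """
    uniq, counts = np.unique(cal_groups, return_counts=True)
    weights = counts / counts.sum()  # group proportions used as barycenter weights
    sorted_by_group = {g: np.sort(cal_scores[cal_groups == g]) for g in uniq}

    adjusted = np.empty(len(new_scores), dtype=float)
    for i, (y, g) in enumerate(zip(new_scores, new_groups)):
        # Quantile level of the raw score within its own group (empirical CDF).
        ref = sorted_by_group[g]
        q = np.clip(np.searchsorted(ref, y, side="right") / len(ref), 0.0, 1.0)
        # Barycenter value: weighted average of each group's quantile function at q.
        adjusted[i] = sum(w * np.quantile(sorted_by_group[s], q)
                          for s, w in zip(uniq, weights))
    return adjusted

# Illustrative usage on synthetic data with a group-dependent score shift.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=2000)
scores = rng.normal(loc=groups * 1.5, scale=1.0)  # group 1 scored higher on average
fair_scores = barycenter_postprocess(scores, groups, scores, groups)
print(fair_scores[groups == 0].mean(), fair_scores[groups == 1].mean())  # roughly equal
```

After post-processing, the score distributions of the two groups coincide (approximate demographic parity), at the cost of a controlled shift away from the original predictions.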

Papers