Fair Regression
Fair regression aims to develop machine learning models that produce accurate predictions while mitigating biases against sensitive groups (e.g., based on race or gender). Current research centers on two families of methods: post-processing techniques, such as Wasserstein-barycenter projections that adjust a fitted model's outputs so their distribution is aligned across groups, and counterfactual approaches that estimate outcomes independent of sensitive attributes, often via double machine learning. These advances are important for ensuring equitable outcomes in high-stakes applications such as loan approval and risk assessment, promoting both fairness and accountability in algorithmic decision-making.
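To make the post-processing idea concrete, the sketch below illustrates one common recipe for Wasserstein-barycenter post-processing of a one-dimensional regressor: each group's prediction is mapped through its own empirical CDF and then to a weighted mixture of all groups' quantile functions, so that adjusted predictions share (approximately) the same distribution across groups. This is a minimal illustrative sketch, assuming scalar predictions, a held-out calibration set, and a barycenter computed from empirical quantiles; the function names (e.g., fit_barycenter_postprocessor) are hypothetical and not taken from any specific paper or library.

```python
import numpy as np

def fit_barycenter_postprocessor(scores, groups):
    """Fit a 1-D Wasserstein-barycenter post-processor from calibration
    predictions `scores` and sensitive-group labels `groups`.

    Returns a function mapping (new_scores, new_groups) to adjusted
    predictions whose distribution is roughly identical across groups.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()                       # group proportions
    sorted_scores = {g: np.sort(scores[groups == g]) for g in labels}

    def empirical_cdf(g, x):
        # F_g(x): fraction of group g's calibration scores at or below x
        s = sorted_scores[g]
        return np.searchsorted(s, x, side="right") / len(s)

    def group_quantile(g, q):
        # Q_g(q): empirical q-quantile of group g's calibration scores
        return np.quantile(sorted_scores[g], np.clip(q, 0.0, 1.0))

    def transform(new_scores, new_groups):
        new_scores = np.asarray(new_scores, dtype=float)
        new_groups = np.asarray(new_groups)
        out = np.empty_like(new_scores)
        for i, (x, g) in enumerate(zip(new_scores, new_groups)):
            q = empirical_cdf(g, x)                       # rank within own group
            # map to the barycenter: weighted mix of all groups' quantiles
            out[i] = sum(w * group_quantile(h, q)
                         for h, w in zip(labels, weights))
        return out

    return transform
```

Used as a post-processing step, the unconstrained regressor is trained as usual, the post-processor is fit on calibration predictions and group labels, and the returned transform is then applied to new predictions together with their group memberships. The accuracy cost of this adjustment is the Wasserstein distance between each group's prediction distribution and the barycenter, which is why this construction is a natural optimum for demographic-parity-style constraints in regression.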