Fair Representation Learning
Fair representation learning aims to produce data representations that minimize bias with respect to sensitive attributes (e.g., race, gender) while preserving utility for downstream tasks. Current research addresses limitations of existing approaches, such as instability in adversarial training and overfitting to proxy tasks, often drawing on techniques like variational autoencoders, contrastive learning, and normalizing flows. The field is central to mitigating algorithmic discrimination across applications, informing both the development of fairer machine learning models and the broader understanding of bias in data and algorithms.
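To make the core idea concrete, the following is a minimal illustrative sketch (not taken from any specific paper above) of one of the simplest fair-representation baselines: linearly projecting out the direction along which the two sensitive-attribute groups differ, so the resulting representation carries far less first-order information about the sensitive attribute. All data and variable names here are synthetic assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
s = rng.integers(0, 2, size=n)        # synthetic sensitive attribute (0/1)
X = rng.normal(size=(n, d))           # synthetic features
X[:, 0] += 2.0 * s                    # feature 0 deliberately "leaks" s

# Direction along which the group means differ
w = X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)
w /= np.linalg.norm(w)

# "Fair" representation: remove the component of X along w
Z = X - np.outer(X @ w, w)

corr_before = abs(np.corrcoef(X[:, 0], s)[0, 1])
corr_after = max(abs(np.corrcoef(Z[:, j], s)[0, 1]) for j in range(d))
print(f"max |corr| with s: before={corr_before:.3f}, after={corr_after:.3f}")
```

This linear projection removes only mean differences between groups; the adversarial, VAE-based, and flow-based methods surveyed above aim to remove higher-order statistical dependence on the sensitive attribute while explicitly trading off downstream utility.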