Fair Representation Learning
Fair representation learning aims to create data representations that minimize bias with respect to sensitive attributes (e.g., race, gender) while preserving utility for downstream tasks. Current research focuses on addressing the limitations of existing approaches, such as instability in adversarial training and overfitting to proxy tasks, often employing techniques like variational autoencoders, contrastive learning, and normalizing flows. The field is crucial for mitigating algorithmic discrimination across applications, informing both the development of fairer machine learning models and the broader understanding of bias in data and algorithms.
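To make the representation-level debiasing goal concrete, here is a minimal sketch using a simple linear-projection baseline: fit the direction in feature space most correlated with the sensitive attribute, then project it out. This is not one of the adversarial, VAE, or flow-based methods mentioned above, just an illustrative toy; all variable names and the synthetic data are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
s = rng.integers(0, 2, n).astype(float)  # sensitive attribute (toy, binary)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * s                       # feature 0 leaks the sensitive attribute

# Center data and attribute
Xc = X - X.mean(axis=0)
sc = s - s.mean()

# Direction in feature space most correlated with s
w = Xc.T @ sc
w /= np.linalg.norm(w)

# "Fair" representation: remove the component along that direction
Z = Xc - np.outer(Xc @ w, w)

# Per-coordinate |correlation| with s, before and after projection
corr_before = np.abs(Xc.T @ sc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(sc))
corr_after = np.abs(Z.T @ sc) / (np.linalg.norm(Z, axis=0) * np.linalg.norm(sc) + 1e-12)
print("max |corr| before:", corr_before.max())
print("max |corr| after:", corr_after.max())
```

After the projection, every coordinate of `Z` is exactly uncorrelated with `s` (up to floating point), while the remaining variance stays available for downstream tasks. The adversarial and generative methods surveyed above pursue the same goal for nonlinear dependence, which a single linear projection cannot remove.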