Fair Representation
Fair representation learning aims to produce data representations that are unbiased with respect to sensitive attributes such as race or gender while preserving predictive accuracy on downstream tasks. Current research develops algorithms and model architectures (including GANs, diffusion models, and graph neural networks) that mitigate bias through techniques such as adversarial learning, information bottleneck methods, and data pre-processing, often navigating the inherent trade-off between fairness and accuracy. The field is crucial for ensuring equitable outcomes in AI applications, with direct impact on areas like loan approval, hiring, and criminal justice, and it drives methodological advances in machine learning and causal inference.
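To make the pre-processing idea concrete, here is a minimal sketch of one simple linear debiasing baseline: projecting out the direction along which the two sensitive groups' feature means differ. This is an illustrative toy (assuming a binary sensitive attribute and synthetic data), not the method of any paper listed below; the function name is ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 5 features; the sensitive attribute s shifts the feature means.
n, d = 200, 5
s = rng.integers(0, 2, size=n)                 # binary sensitive attribute (illustrative)
X = rng.normal(size=(n, d)) + 2.0 * s[:, None] # group-dependent mean shift

def remove_sensitive_direction(X, s):
    """Project out the group mean-difference direction (a simple linear debiasing baseline)."""
    v = X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0)
    v = v / np.linalg.norm(v)
    # Subtract each row's component along v, leaving the orthogonal complement.
    return X - np.outer(X @ v, v)

Z = remove_sensitive_direction(X, s)

# The group-mean gap is large before the projection and (exactly) zero after,
# since the mean difference lies entirely along the removed direction v.
gap_before = np.linalg.norm(X[s == 1].mean(axis=0) - X[s == 0].mean(axis=0))
gap_after = np.linalg.norm(Z[s == 1].mean(axis=0) - Z[s == 0].mean(axis=0))
```

Note that this only removes *first-order* (mean) dependence on the sensitive attribute; the adversarial and information-bottleneck methods in the papers below target higher-order and nonlinear dependence as well, typically at some cost in downstream accuracy.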
Papers
Rényi Fair Information Bottleneck for Image Classification
Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
Probabilistic Rotation Representation With an Efficiently Computable Bingham Loss Function and Its Application to Pose Estimation
Hiroya Sato, Takuya Ikeda, Koichi Nishiwaki
Fair Interpretable Representation Learning with Correction Vectors
Mattia Cerrato, Alesia Vallenas Coronel, Marius Köppel, Alexander Segner, Roberto Esposito, Stefan Kramer
Learning fair representation with a parametric integral probability metric
Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, Yongdai Kim