Model Fairness
Model fairness addresses the ethical concern of algorithmic bias in machine learning, aiming to ensure that models perform equitably across demographic groups. Current research focuses on developing and comparing bias-mitigation methods at each stage of model development: data preprocessing, model training (e.g., contrastive learning, fairness regularization, and multi-task learning), and post-processing, often employing graph neural networks and large language models. This work is crucial for building trustworthy AI systems; in fields such as healthcare, criminal justice, and education, it promotes equitable outcomes and reduces discriminatory practices.
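To make the in-processing (training-time) approach concrete, below is a minimal sketch of fairness regularization: a demographic parity penalty added to the task loss. It assumes PyTorch, a binary sensitive attribute, and toy data; the names (demographic_parity_penalty, lambda_fair) are illustrative and not drawn from any paper listed below.

```python
# Minimal sketch of in-processing bias mitigation via a fairness
# regularizer, assuming PyTorch and a binary sensitive attribute.
# All names and data here are illustrative, not from a specific paper.
import torch
import torch.nn as nn

def demographic_parity_penalty(probs, sensitive):
    """Absolute gap in mean predicted positive rate between the two groups."""
    rate_a = probs[sensitive == 0].mean()
    rate_b = probs[sensitive == 1].mean()
    return (rate_a - rate_b).abs()

# Toy data: 8 samples, 3 features, binary labels, binary group membership.
torch.manual_seed(0)
X = torch.randn(8, 3)
y = torch.randint(0, 2, (8,)).float()
s = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])

model = nn.Sequential(nn.Linear(3, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()
lambda_fair = 0.5  # trade-off between task accuracy and parity

for _ in range(100):
    optimizer.zero_grad()
    logits = model(X).squeeze(-1)
    probs = torch.sigmoid(logits)
    # Task loss plus the fairness penalty: a larger lambda_fair pushes
    # the two groups' positive-prediction rates closer together.
    loss = bce(logits, y) + lambda_fair * demographic_parity_penalty(probs, s)
    loss.backward()
    optimizer.step()
```

Tuning lambda_fair trades predictive performance against group parity; preprocessing and post-processing methods instead adjust the training data or the decision thresholds rather than the objective.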
Papers
Diagnosing failures of fairness transfer across distribution shift in real-world medical settings
Jessica Schrouff, Natalie Harris, Oluwasanmi Koyejo, Ibrahim Alabdulmohsin, Eva Schnider, Krista Opsahl-Ong, Alex Brown, Subhrajit Roy, Diana Mincu, Christina Chen, Awa Dieng, Yuan Liu, Vivek Natarajan, Alan Karthikesalingam, Katherine Heller, Silvia Chiappa, Alexander D'Amour
Fairness of Machine Learning Algorithms in Demography
Ibe Chukwuemeka Emmanuel, Ekaterina Mitrofanova