Model Fairness
Model fairness addresses algorithmic bias in machine learning, aiming to ensure that models perform equally well across demographic groups. Current research focuses on developing and comparing bias-mitigation methods at each stage of model development: data preprocessing, model training (e.g., contrastive learning, regularization, and multi-task learning), and post-processing, often in the context of graph neural networks and large language models. This work is crucial for building trustworthy AI systems; in fields such as healthcare, criminal justice, and education, it promotes equitable outcomes and reduces discriminatory practices.
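The mitigation stages above are typically evaluated with group-fairness metrics such as demographic parity and equal opportunity. A minimal sketch of both (the function names and binary-prediction setup are illustrative assumptions, not taken from any specific paper):

```python
def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rate between groups.

    Assumes binary predictions (0/1) and at least one sample per group.
    """
    rates = []
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)


def equal_opportunity_difference(y_true, y_pred, groups):
    """Gap in true-positive rate (recall on y_true == 1) between groups.

    Assumes each group contains at least one positive-labelled sample.
    """
    tprs = []
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        hits = [p for t, p in pairs if t == 1]
        tprs.append(sum(hits) / len(hits))
    return max(tprs) - min(tprs)


# Toy example: group "a" receives positive predictions far more often.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))          # 0.5
print(equal_opportunity_difference(y_true, y_pred, groups))   # 0.5
```

A value of 0 on either metric means parity between groups; mitigation methods at any of the three stages aim to shrink these gaps without sacrificing overall accuracy.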
Papers
[Paper listing: 20 entries dated February 18, 2024 through November 8, 2024; titles not preserved.]