Model Fairness
Model fairness addresses algorithmic bias in machine learning, aiming to ensure that models perform equitably across demographic groups. Current research focuses on developing and comparing bias-mitigation methods at every stage of model development: data preprocessing, model training (e.g., via contrastive learning, regularization, or multi-task learning), and post-processing, often in the context of graph neural networks and large language models. This work is crucial for building trustworthy AI systems in fields such as healthcare, criminal justice, and education, where it promotes equitable outcomes and reduces discriminatory practices.
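To make the in-training mitigation idea concrete, here is a minimal, self-contained sketch of fairness regularization on toy data (the dataset, penalty weight, and `train` helper are illustrative assumptions, not taken from any of the surveyed papers). It trains a logistic regression with an added penalty on the demographic-parity gap, the difference in mean predicted score between two groups, and compares the resulting gap with and without the penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical): 2-d features x, binary label y, and a binary
# protected-group attribute g whose distribution correlates with the features.
n = 400
g = rng.integers(0, 2, size=n)
x = rng.normal(loc=g[:, None] * 0.8, scale=1.0, size=(n, 2))
y = (x[:, 0] + 0.5 * rng.normal(size=n) > 0.4).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, epochs=300, lr=0.1):
    """Logistic regression with a demographic-parity regularizer.

    lam scales a penalty on |mean score(g=1) - mean score(g=0)|.
    Returns the absolute parity gap of the trained model.
    """
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(x @ w + b)
        # Gradient of the binary cross-entropy loss.
        grad_w = x.T @ (p - y) / n
        grad_b = np.mean(p - y)
        # Demographic-parity gap: difference in mean predicted score by group.
        gap = p[g == 1].mean() - p[g == 0].mean()
        s = p * (1 - p)  # derivative of sigmoid w.r.t. its input
        ind = (g == 1) / (g == 1).sum() - (g == 0) / (g == 0).sum()
        # Gradient of lam * |gap| via the chain rule.
        grad_w += lam * np.sign(gap) * (x.T @ (s * ind))
        grad_b += lam * np.sign(gap) * np.sum(s * ind)
        w -= lr * grad_w
        b -= lr * grad_b
    p = sigmoid(x @ w + b)
    return abs(p[g == 1].mean() - p[g == 0].mean())

print(f"parity gap, no penalty:   {train(lam=0.0):.3f}")
print(f"parity gap, with penalty: {train(lam=2.0):.3f}")
```

Because the features correlate with group membership, the unregularized model scores the two groups differently; the penalty trades a little accuracy for a smaller gap. Preprocessing methods instead reweight or transform the data before training, and post-processing methods adjust decision thresholds per group after training.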