Model Fairness
Model fairness addresses the ethical concern of algorithmic bias in machine learning, aiming to ensure that models perform comparably well across demographic groups. Current research focuses on developing and comparing bias-mitigation methods at each stage of model development: data preprocessing, model training (e.g., contrastive learning, fairness regularization, and multi-task learning), and post-processing, with growing use of graph neural networks and large language models. This work is crucial for building trustworthy AI systems; in fields such as healthcare, criminal justice, and education, it promotes equitable outcomes and reduces discriminatory practices.
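To make the mitigation stages above concrete, here is a minimal sketch of one group-fairness metric (the demographic-parity gap) and one simple post-processing mitigation (per-group decision thresholds). All data, function names, and the threshold heuristic are invented for illustration; real systems would use a fairness toolkit and validated metrics.

```python
# Hypothetical illustration: a group-fairness metric plus a simple
# post-processing mitigation. Data and function names are invented.

def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates between groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

def per_group_thresholds(scores, groups, target_rate):
    """Choose one threshold per group so each group's positive rate
    approximates target_rate (a basic post-processing mitigation)."""
    thresholds = {}
    for g in set(groups):
        s = sorted((x for x, gg in zip(scores, groups) if gg == g),
                   reverse=True)
        k = max(1, round(target_rate * len(s)))
        thresholds[g] = s[k - 1]  # admit the top-k scores in this group
    return thresholds

# Toy scores for two groups with different score distributions.
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.5, 0.3, 0.2]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# A single global threshold favors group A here.
naive_preds = [1 if s >= 0.5 else 0 for s in scores]
print(demographic_parity_gap(naive_preds, groups))  # prints 0.25

# Per-group thresholds equalize the positive rates.
th = per_group_thresholds(scores, groups, target_rate=0.5)
fair_preds = [1 if s >= th[g] else 0 for s, g in zip(scores, groups)]
print(demographic_parity_gap(fair_preds, groups))  # prints 0.0
```

Note the trade-off this sketch exposes: equalizing positive rates across groups can change individual decisions, which is why in-training approaches (e.g., adding a fairness penalty to the loss) are often compared against post-processing in the literature summarized above.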