Model Fairness
Model fairness addresses the ethical concern of algorithmic bias in machine learning, aiming to ensure that models perform equally well across different demographic groups. Current research focuses on developing and comparing methods to mitigate bias at different stages of model development: data preprocessing, model training (using techniques such as contrastive learning, regularization, and multi-task learning), and post-processing, often in the context of graph neural networks and large language models. This research is crucial for building trustworthy AI systems; it impacts fields like healthcare, criminal justice, and education by promoting equitable outcomes and reducing discriminatory practices.
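As a concrete illustration of the in-training (regularization) approach mentioned above, the sketch below adds a demographic-parity penalty to a plain logistic regression loss. This is a minimal, self-contained example, not drawn from any specific paper in this collection: the function names, the choice of logistic regression, and the penalty weight `lam` are all assumptions made for illustration.

```python
import numpy as np

def demographic_parity_gap(scores, groups):
    """Absolute gap between the mean predicted score of group 0 and group 1."""
    return abs(scores[groups == 0].mean() - scores[groups == 1].mean())

def train_fair_logreg(X, y, groups, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression trained with a fairness regularizer (a sketch).

    Minimizes: cross-entropy + lam * |E[score | g=0] - E[score | g=1]|,
    i.e. the demographic-parity gap is penalized during training.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        # Gradient of the cross-entropy term.
        grad_w = X.T @ (p - y) / len(y)
        grad_b = (p - y).mean()
        # Gradient of the parity penalty: sign of the gap times the
        # difference of the group-wise mean-score gradients.
        s = p * (1.0 - p)                        # derivative of the sigmoid
        m0, m1 = groups == 0, groups == 1
        sign = np.sign(p[m0].mean() - p[m1].mean())
        grad_w += lam * sign * (X[m0].T @ s[m0] / m0.sum()
                                - X[m1].T @ s[m1] / m1.sum())
        grad_b += lam * sign * (s[m0].mean() - s[m1].mean())
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Training once with `lam=0` and once with a positive `lam` on data where a feature correlates with group membership shows the usual trade-off: the regularized model shrinks the gap in mean predicted scores between groups, typically at some cost in raw accuracy.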