Algorithmic Unfairness
Algorithmic unfairness describes how algorithms, despite appearing objective, can perpetuate and amplify existing societal biases, producing discriminatory outcomes. Current research focuses on identifying and mitigating these biases through improved data collection, model design, and post-processing techniques, with growing emphasis on the interplay between data biases and model behavior across the entire machine learning pipeline. This work is crucial for ensuring fairness and equity in AI systems, shaping both the development of responsible AI and the design of ethical, inclusive technologies in sectors such as education, finance, and social media.
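Identifying bias in practice usually starts from a quantitative fairness metric. As a minimal sketch (with illustrative toy data, not drawn from any real system), the snippet below computes the demographic parity difference: the gap in positive-prediction rates between two groups, one of the simplest measures used to flag disparate outcomes.

```python
# Minimal sketch: demographic parity difference between two groups.
# A value of 0 means both groups receive positive predictions at the
# same rate; larger values indicate a bigger disparity.

def demographic_parity_difference(preds, groups):
    """Absolute difference in positive-prediction rate between groups 0 and 1."""
    rates = {}
    for g in (0, 1):
        member_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(member_preds) / len(member_preds)
    return abs(rates[0] - rates[1])

# Toy data: group 0 gets positive predictions 75% of the time,
# group 1 only 25% of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Post-processing mitigation techniques often work directly on a metric like this one, for example by adjusting per-group decision thresholds until the gap falls below a chosen tolerance.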