Fairness Objective
Fairness objectives in machine learning aim to mitigate biases in algorithms and datasets, ensuring equitable outcomes across different demographic groups. Current research focuses on developing and comparing various fairness metrics (e.g., demographic parity, equalized odds), exploring the trade-offs between fairness and accuracy, and designing algorithms (including those based on optimization, meta-learning, and game theory) to achieve fairness goals in diverse applications like clustering, classification, and recommendation systems. This work is crucial for building trustworthy and responsible AI systems, addressing societal concerns about algorithmic bias and promoting equitable access to AI-driven services.
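As a concrete illustration of the group-fairness metrics named above, the sketch below computes a demographic parity difference and an equalized odds difference for a binary classifier on synthetic data. The function names, the two-group encoding of the protected attribute, and the random data are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates between the two groups."""
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        mask_a = (group == 0) & (y_true == label)
        mask_b = (group == 1) & (y_true == label)
        gaps.append(abs(y_pred[mask_a].mean() - y_pred[mask_b].mean()))
    return max(gaps)

# Synthetic example (hypothetical data): a classifier that slightly favors group 1.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)         # protected attribute (0 or 1)
y_true = rng.integers(0, 2, size=1000)        # ground-truth labels
y_pred = rng.binomial(1, 0.5 + 0.1 * group)   # positive rate 0.5 for group 0, 0.6 for group 1

print("Demographic parity difference:", demographic_parity_difference(y_pred, group))
print("Equalized odds difference:", equalized_odds_difference(y_true, y_pred, group))
```

A value of zero for either quantity indicates the corresponding fairness criterion is satisfied exactly; fairness-aware algorithms typically trade some accuracy to push these gaps toward zero.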