Fairness Measure
Fairness measures in machine learning and automated decision-making quantify bias so it can be mitigated, with the goal of equitable outcomes across demographic groups. Current research focuses on developing metrics that account for uncertainty in predictions, handle multiple sensitive attributes, and address the incompatibilities among different fairness criteria, often employing manifold-based approaches or statistical equivalence testing. These advances are crucial for building trustworthy AI systems and promoting fairness in high-stakes applications such as loan approvals, hiring, and biometric authentication, ultimately shaping the ethical deployment of AI in society.
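As a concrete illustration, the sketch below computes two widely used group-fairness measures, the demographic parity difference and the equal opportunity difference. These particular metrics are chosen as assumptions for illustration; the surveyed work covers a broader and partly incompatible family of criteria.

```python
# Minimal illustrative sketch (not from the surveyed papers): two common
# group-fairness measures for a binary classifier and a binary sensitive attribute.
import numpy as np


def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups 0 and 1."""
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)


def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates (recall) between groups 0 and 1."""
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return abs(tprs[0] - tprs[1])


if __name__ == "__main__":
    # Synthetic data purely for demonstration.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)    # binary sensitive attribute
    y_true = rng.integers(0, 2, size=1000)   # ground-truth labels
    y_pred = rng.integers(0, 2, size=1000)   # binary model decisions
    print("Demographic parity gap:", demographic_parity_difference(y_pred, group))
    print("Equal opportunity gap:", equal_opportunity_difference(y_true, y_pred, group))
```

A value of 0 for either gap indicates parity between the two groups on that criterion; in general the two criteria cannot be satisfied simultaneously except in degenerate cases, which is one reason research on reconciling fairness criteria continues.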