Fairness Measure
Fairness measures in machine learning and decision-making aim to quantify and mitigate bias so that outcomes are equitable across demographic groups. Current research focuses on metrics that account for uncertainty in predictions, handle multiple sensitive attributes, and address the incompatibility between different fairness criteria, often through manifold-based approaches or statistical equivalence testing. These advances are crucial for building trustworthy AI systems and for the ethical deployment of AI in high-stakes applications such as loan approvals, hiring, and biometric authentication.
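To make the idea of quantifying bias concrete, here is a minimal sketch of two widely used group-fairness metrics, demographic parity difference and equal opportunity difference, computed on toy data. The function names and the toy predictions are illustrative assumptions, not drawn from any specific paper above.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy labels and predictions for two demographic groups (0 and 1).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))        # 0.0: both groups get positives at rate 1/2
print(equal_opportunity_difference(y_true, y_pred, group)) # ~0.33: TPR is 2/3 vs. 1.0
```

Note that the two metrics can disagree on the same predictions, a small instance of the incompatibility between fairness criteria mentioned above.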