Fairness Condition
Fairness conditions in machine learning and resource allocation aim to ensure equitable outcomes across different groups, mitigating biases that may disadvantage certain populations. Current research focuses on developing and analyzing fairness metrics, exploring the trade-offs between fairness and accuracy (e.g., through multi-exit frameworks in image recognition), and investigating how strategic behavior by individuals affects the effectiveness of fairness-enhancing algorithms. By identifying and addressing implicit assumptions within existing fairness frameworks, this work is crucial for building trustworthy AI systems and promoting fairness in applications ranging from medical diagnosis to social resource distribution.
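As a concrete illustration of one widely used group fairness metric (a general example, not drawn from any specific paper listed here), the sketch below computes the demographic parity difference: the gap in positive-prediction rates between two groups. The function name and toy data are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1), shape (n,)
    group:  binary group membership (0/1), shape (n,)
    A value of 0 means the predictions satisfy demographic parity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # P(y_hat = 1 | group = 0)
    rate_b = y_pred[group == 1].mean()  # P(y_hat = 1 | group = 1)
    return abs(rate_a - rate_b)

# Toy example: a classifier whose positive rate is higher for group 1
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.25
```

Analogous gaps can be computed for other criteria (e.g., equalized odds, which conditions the comparison on the true label), and the measured gap is what fairness-accuracy trade-off analyses typically constrain or penalize.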