Fairness Benchmark
Fairness benchmarks are tools for evaluating, and ultimately improving, the fairness of machine learning models, with the goal of mitigating biases with respect to sensitive attributes such as race or gender. Current research focuses on building comprehensive benchmarks that span diverse data types (images, text, time series, graphs) and model architectures (including LLMs, GNNs, and foundation models), typically reporting multiple fairness metrics and examining issues such as the fairness-accuracy trade-off and the influence of dataset characteristics. Such benchmarks enable rigorous comparison of fairness-enhancing techniques and support the development of more equitable AI systems across a wide range of applications.
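To make the notion of "fairness metrics" concrete, the sketch below computes two common group-fairness quantities, the demographic parity difference and the equalized odds difference, for a binary classifier with a binary sensitive attribute. It is a minimal, self-contained illustration; the function names and toy data are assumptions for this example and are not taken from any specific benchmark suite.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_difference(y_true, y_pred, sensitive):
    """Largest gap in true-positive or false-positive rates between the two groups."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    gaps = []
    for label in (1, 0):  # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        rate_a = y_pred[mask & (sensitive == 0)].mean()
        rate_b = y_pred[mask & (sensitive == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

if __name__ == "__main__":
    # Toy data (hypothetical): a model that is accurate overall but favors group 0.
    y_true    = np.array([1, 0, 1, 0, 1, 0, 1, 0])
    y_pred    = np.array([1, 1, 1, 0, 1, 0, 0, 0])
    sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print("Demographic parity difference:", demographic_parity_difference(y_pred, sensitive))
    print("Equalized odds difference:    ", equalized_odds_difference(y_true, y_pred, sensitive))
```

In benchmark settings, metrics like these are reported alongside accuracy so that the fairness-accuracy trade-off mentioned above can be compared across mitigation techniques.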
Papers
19 papers, dated from April 21, 2023 to November 5, 2024.