Fairness Benchmark

Fairness benchmarks are tools for evaluating and improving the fairness of machine learning models, with the goal of mitigating biases with respect to sensitive attributes such as race or gender. Current research focuses on developing comprehensive benchmarks across diverse data types (images, text, time series, graphs) and model architectures (including LLMs, GNNs, and foundation models), often incorporating multiple fairness metrics and addressing issues such as the fairness-accuracy trade-off and the impact of data characteristics. These benchmarks enable rigorous comparisons of fairness-enhancing techniques and promote the development of more equitable AI systems across a range of applications.
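
To make "multiple fairness metrics" concrete, below is a minimal sketch of two group-fairness metrics that benchmarks in this area commonly report: demographic parity difference and equal opportunity difference. The function names and the toy data are illustrative assumptions, not part of any specific benchmark's API.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true, y_pred, sensitive):
    """Absolute gap in true-positive rates (recall) between the two groups."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    tprs = []
    for group in (0, 1):
        mask = (sensitive == group) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Toy binary predictions for two demographic groups (hypothetical data).
y_true    = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred    = np.array([1, 0, 1, 0, 0, 1, 1, 1])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_difference(y_pred, sensitive))
print("Equal opportunity difference: ", equal_opportunity_difference(y_true, y_pred, sensitive))
```

A benchmark would typically report several such metrics alongside accuracy, since a model can score well on one fairness criterion while violating another, which is part of what the fairness-accuracy trade-off studies examine.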

Papers