Distributional Inequality Metric
Distributional inequality metrics quantify differences between data distributions, addressing the limitations of traditional metrics that rely on point-wise comparisons or ignore the overall shape of a distribution. Current research applies these metrics across diverse settings, including evaluating the fairness of machine learning models, assessing the performance of natural language generation systems, and analyzing the impact of recommendation algorithms. This work matters because it offers more nuanced and robust ways to measure disparities and model performance, supporting improved fairness, accuracy, and understanding of complex systems across fields.
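To make the contrast with point-wise comparisons concrete, the sketch below compares model scores for two groups using a simple mean gap versus two distribution-level measures. This is a minimal illustration, not a method from any of the papers listed below; the choice of metrics (Gini coefficient and 1-D Wasserstein distance) and the synthetic Beta-distributed scores are assumptions made purely for demonstration.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def gini_coefficient(values):
    """Gini coefficient of a non-negative 1-D sample (0 = perfect equality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    total = x.sum()
    if n == 0 or total == 0:
        return 0.0
    # Standard sorted-values identity for the Gini coefficient.
    index = np.arange(1, n + 1)
    return float((2 * np.sum(index * x) - (n + 1) * total) / (n * total))


# Hypothetical model scores for two demographic groups (synthetic data).
rng = np.random.default_rng(0)
scores_group_a = rng.beta(2, 5, size=1000)  # skewed toward low scores
scores_group_b = rng.beta(5, 2, size=1000)  # skewed toward high scores

# A point-wise summary (difference in means) collapses each group to one number...
mean_gap = abs(scores_group_a.mean() - scores_group_b.mean())

# ...whereas distributional metrics compare the full shape of the distributions.
w_dist = wasserstein_distance(scores_group_a, scores_group_b)
gini_a = gini_coefficient(scores_group_a)
gini_b = gini_coefficient(scores_group_b)

print(f"mean gap: {mean_gap:.3f}, Wasserstein distance: {w_dist:.3f}")
print(f"Gini (group A): {gini_a:.3f}, Gini (group B): {gini_b:.3f}")
```

Two distributions can share a mean yet differ sharply in spread or skew; distribution-level measures such as these are designed to expose exactly that kind of disparity.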
Papers
October 23, 2023
February 28, 2023
January 31, 2023
September 15, 2022
June 12, 2022
February 3, 2022