Community Fairness
Community fairness in machine learning is concerned with ensuring that algorithmic systems perform equitably not only across protected groups (e.g., defined by race or gender) but also across communities, where disparities often show up as differences in model accuracy or outcomes from one community to another. Current research emphasizes post-processing methods, frequently formulated as linear programs, that aim to satisfy group fairness and community fairness simultaneously while preserving model utility. This work matters for mitigating bias and promoting equitable outcomes in domains such as healthcare, finance, and criminal justice, where algorithmic decisions affect diverse populations.
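To make the post-processing idea above concrete, the sketch below is a minimal, hypothetical illustration: it searches over per-group decision thresholds to shrink the gap in selection rates between groups while keeping accuracy high. It uses a simple grid search rather than the linear programs used in the literature, and the function name `fit_group_thresholds`, the accuracy-versus-disparity weighting, and the synthetic data are all assumptions for this example, not any specific paper's method.

```python
import numpy as np
from itertools import product

def fit_group_thresholds(scores, groups, labels, grid=None):
    """Toy fairness post-processing: pick one decision threshold per group
    so selection rates are roughly equal across groups while keeping
    overall accuracy high. Illustrative only, not a published method."""
    scores, groups, labels = map(np.asarray, (scores, groups, labels))
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    group_ids = np.unique(groups)
    best_objective, best_thresholds = -np.inf, None
    # Enumerate threshold combinations (tractable for a handful of groups).
    for combo in product(grid, repeat=len(group_ids)):
        preds = np.zeros_like(labels)
        rates = []
        for g, t in zip(group_ids, combo):
            mask = groups == g
            preds[mask] = (scores[mask] >= t).astype(labels.dtype)
            rates.append(preds[mask].mean())
        accuracy = (preds == labels).mean()
        disparity = max(rates) - min(rates)       # demographic-parity gap
        objective = accuracy - 2.0 * disparity    # assumed trade-off weight
        if objective > best_objective:
            best_objective = objective
            best_thresholds = dict(zip(group_ids, combo))
    return best_thresholds

# Synthetic usage: scores from some upstream model, a binary group label,
# and ground-truth outcomes (all fabricated for illustration).
rng = np.random.default_rng(0)
scores = rng.random(500)
groups = rng.integers(0, 2, 500)
labels = (scores + 0.1 * groups > 0.55).astype(int)
print(fit_group_thresholds(scores, groups, labels))
```

In the same spirit, the methods described above typically replace this grid search with an optimization (e.g., a linear program) over group- or community-specific adjustments, trading a small amount of utility for reduced disparities.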