Fair Clustering

Fair clustering aims to partition data into groups while mitigating biases tied to sensitive attributes such as race or gender, so that protected groups are equitably represented across clusters. Current research focuses on incorporating fairness constraints into established clustering methods such as k-means and spectral clustering, often via integer programming, exponential tilting, or matrix factorization. This work addresses concerns about algorithmic bias in applications ranging from resource allocation to personalized federated learning, and it advances the understanding of fairness in unsupervised machine learning. The field is also actively examining different fairness metrics, their trade-offs with clustering quality, and the robustness of fair clustering algorithms to adversarial attacks.
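One fairness notion frequently studied in this line of work is per-cluster balance, i.e., how evenly a protected group is represented within each cluster. The sketch below is a minimal illustration rather than any specific paper's method: assuming scikit-learn and a synthetic dataset with a hypothetical binary sensitive attribute, it runs standard k-means and reports the resulting balance score, the kind of quantity a fairness constraint would aim to improve.

```python
# Minimal sketch: measure the "balance" fairness of a vanilla k-means
# clustering on synthetic data with a hypothetical binary sensitive attribute.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

rng = np.random.default_rng(0)
X, _ = make_blobs(n_samples=400, centers=3, random_state=0)
group = rng.integers(0, 2, size=len(X))  # illustrative sensitive attribute (0/1)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

def balance(labels, group):
    """Worst-case ratio between the two groups within any cluster
    (1.0 = every cluster is perfectly balanced)."""
    scores = []
    for c in np.unique(labels):
        g = group[labels == c]
        n0, n1 = np.sum(g == 0), np.sum(g == 1)
        if n0 == 0 or n1 == 0:
            return 0.0  # a cluster missing one group entirely is maximally unfair
        scores.append(min(n0 / n1, n1 / n0))
    return min(scores)

print(f"balance of vanilla k-means: {balance(labels, group):.2f}")
```

A fairness-constrained variant would trade some clustering objective (e.g., k-means cost) for a higher balance score, which is the trade-off the metrics discussed above try to quantify.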

Papers