Paper ID: 2207.08336

When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes

Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu, Ali Payani, Yuan Hong, Kai Shu

Machine learning models have demonstrated promising performance in many areas. However, concerns that they can be biased against specific demographic groups hinder their adoption in high-stakes applications. Thus, it is essential to ensure fairness in machine learning models. Most previous efforts require direct access to sensitive attributes to mitigate bias. Nonetheless, it is often infeasible to obtain users' sensitive attributes at scale because of users' privacy concerns during data collection. Privacy mechanisms such as local differential privacy (LDP) are widely applied to sensitive information at the data collection stage, driven by legal compliance and people's increasing awareness of privacy. Therefore, a critical problem is how to make fair predictions under privacy. We study a novel and practical problem of fair classification in a semi-private setting, where most of the sensitive attributes are private and only a small number of clean ones are available. To this end, we propose a novel framework FairSP that can achieve Fair prediction under the Semi-Private setting. First, FairSP learns to correct the noise-protected sensitive attributes by exploiting the limited clean sensitive attributes. Then, it jointly models the corrected and clean data in an adversarial way for debiasing and prediction. Theoretical analysis shows that the proposed model can ensure fairness under mild assumptions in the semi-private setting. Extensive experimental results on real-world datasets demonstrate the effectiveness of our method for making fair predictions under privacy while maintaining high accuracy.
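To make the semi-private setting concrete, the sketch below simulates the first stage the abstract describes: sensitive attributes released under LDP, plus a small clean subset used to learn a correction of the noisy reports. This is not the authors' implementation; it assumes randomized response as the LDP mechanism for a binary sensitive attribute and a simple logistic-regression corrector, and all names and parameters are illustrative.

```python
# Hypothetical sketch of the semi-private setup: most sensitive attributes are
# perturbed by an LDP mechanism (randomized response assumed here), and a small
# clean subset is used to learn to correct the noisy reports.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def randomized_response(a, epsilon):
    """Report a binary attribute truthfully with prob. e^eps / (e^eps + 1), else flip."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = rng.random(a.shape) < p_keep
    return np.where(keep, a, 1 - a)

# Synthetic data: features X and a binary sensitive attribute a correlated with X.
n, d, epsilon = 5000, 10, 1.0
X = rng.normal(size=(n, d))
a = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Semi-private split: a small clean subset; the rest is observed only under LDP.
n_clean = 200
a_private = randomized_response(a[n_clean:], epsilon)

# Correction step: since the LDP mechanism is known, simulate noisy reports for
# the clean subset, then learn P(a | X, noisy report) and use it to correct the
# privatized attributes of the remaining samples.
corrector = LogisticRegression(max_iter=1000)
clean_reports = randomized_response(a[:n_clean], epsilon)
corrector.fit(np.hstack([X[:n_clean], clean_reports[:, None]]), a[:n_clean])
a_corrected = corrector.predict(np.hstack([X[n_clean:], a_private[:, None]]))

print("agreement of noisy attributes with truth:    ", (a_private == a[n_clean:]).mean())
print("agreement of corrected attributes with truth:", (a_corrected == a[n_clean:]).mean())
```

The corrected attributes would then feed the second stage described in the abstract, where corrected and clean data are modeled jointly with an adversary for debiasing and prediction; that stage is omitted from this sketch.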

Submitted: Jul 18, 2022