Algorithmic Fairness
Algorithmic fairness focuses on developing and deploying machine learning models that avoid perpetuating or amplifying existing societal biases, aiming for equitable outcomes across different demographic groups. Current research emphasizes understanding and mitigating bias throughout the entire machine learning pipeline, from data collection and preprocessing to model training and deployment, often employing techniques such as adversarial learning and constrained optimization to improve fairness while maintaining accuracy. This work is crucial for the responsible and ethical use of AI in high-stakes applications such as healthcare, criminal justice, and education, where it affects both the trustworthiness of AI systems and the fairness of societal decision-making.
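As an illustration of the kind of group-fairness criterion that such mitigation techniques optimize or constrain, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between two demographic groups. This is a minimal example with hypothetical data; the function name and the binary group encoding are assumptions, not taken from any specific paper listed here.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) from a classifier.
    group:  binary group membership (0/1) for each example.
    A value of 0 means both groups receive positive predictions
    at the same rate (demographic parity).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions and group labels for eight examples
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # |0.75 - 0.25| = 0.5
```

Constrained-optimization approaches typically bound a metric like this during training, while adversarial approaches train a second model to predict the group from the classifier's outputs and penalize its success.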
Papers
Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness
Anaelia Ovalle, Arjun Subramonian, Vagrant Gautam, Gilbert Gee, Kai-Wei Chang
Fairness-aware Differentially Private Collaborative Filtering
Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao, Yiming Ying