Procedural Fairness
Procedural fairness in artificial intelligence concerns whether AI systems reach decisions equitably across demographic groups, mitigating biases that can otherwise produce discriminatory outcomes. Current research emphasizes developing and evaluating fairness-aware algorithms and models, including approaches based on adversarial learning, data-augmentation techniques such as mixup, and distributionally robust optimization, in applications such as healthcare, process analytics, and recommender systems. This work is central to building trustworthy AI systems and addressing societal concerns about algorithmic bias, informing both the development of ethical-AI guidelines and the practical deployment of AI in sensitive domains.
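To make the data-augmentation line of work concrete, the idea behind mixup-style fairness interventions is to train on convex combinations of examples drawn from different demographic groups, smoothing the model's behavior between groups. The sketch below is a minimal, hypothetical illustration (the function name, group data, and mixing parameter are assumptions, not taken from any of the listed papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def fair_mixup(x_a, x_b, alpha=0.2):
    """Interpolate feature batches from two demographic groups.

    Draws a mixing weight lam ~ Beta(alpha, alpha) and returns the
    element-wise convex combination of the two batches (truncated to
    the shorter batch), along with lam.
    """
    lam = rng.beta(alpha, alpha)
    n = min(len(x_a), len(x_b))
    return lam * x_a[:n] + (1 - lam) * x_b[:n], lam

# Toy feature batches for two groups (synthetic data for illustration).
group_a = rng.normal(0.0, 1.0, size=(8, 4))
group_b = rng.normal(1.0, 1.0, size=(8, 4))
mixed, lam = fair_mixup(group_a, group_b)
```

In a training loop, the mixed batch (and a correspondingly mixed label or a fairness regularizer evaluated on it) would replace or supplement the original per-group batches; adversarial and distributionally-robust approaches instead modify the loss rather than the inputs.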
Papers
Fairness: from the ethical principle to the practice of Machine Learning development as an ongoing agreement with stakeholders
Georgina Curto, Flavio Comim
Predicting and Enhancing the Fairness of DNNs with the Curvature of Perceptual Manifolds
Yanbiao Ma, Licheng Jiao, Fang Liu, Maoji Wen, Lingling Li, Wenping Ma, Shuyuan Yang, Xu Liu, Puhua Chen
Fairness Improves Learning from Noisily Labeled Long-Tailed Data
Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks
Arpita Biswas, Jackson A. Killian, Paula Rodriguez Diaz, Susobhan Ghosh, Milind Tambe
FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling
Wei-Yin Ko, Daniel D'souza, Karina Nguyen, Randall Balestriero, Sara Hooker