Procedural Fairness
Procedural fairness in artificial intelligence concerns ensuring that AI systems make decisions equitably across demographic groups, mitigating biases that can lead to discriminatory outcomes. Current research emphasizes developing and evaluating fairness-aware algorithms, including approaches based on adversarial learning, data augmentation techniques such as mixup, and distributionally robust optimization, in applications ranging from healthcare to process analytics and recommender systems. This work underpins trustworthy AI: it informs ethical AI guidelines and shapes the practical deployment of AI in sensitive domains where algorithmic bias raises societal concern.
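Since the overview names group-fairness goals without spelling one out, here is a minimal sketch of one of the most common diagnostics, the demographic parity gap: the absolute difference in positive-prediction rates between two groups. The function name and toy data are illustrative only and are not drawn from any of the papers listed below.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    A gap of 0 means both groups receive positive outcomes at the same rate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy example: a classifier that favors group 1.
preds = [1, 0, 0, 1, 1, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.5 -> 0.25
```

Fairness-aware methods such as the adversarial and mixup-based approaches mentioned above can be viewed as different ways of driving gaps like this one toward zero during training, each with its own trade-off against accuracy.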
Papers
A Survey on Group Fairness in Federated Learning: Challenges, Taxonomy of Solutions and Directions for Future Research
Teresa Salazar, Helder Araújo, Alberto Cano, Pedro Henriques Abreu
Understanding Decision Subjects' Engagement with and Perceived Fairness of AI Models When Opportunities of Qualification Improvement Exist
Meric Altug Gemalmaz, Ming Yin
FMBench: Benchmarking Fairness in Multimodal Large Language Models on Medical Tasks
Peiran Wu, Che Liu, Canyu Chen, Jun Li, Cosmin I. Bercea, Rossella Arcucci
Towards Fairness and Privacy: A Novel Data Pre-processing Optimization Framework for Non-binary Protected Attributes
Manh Khoi Duong, Stefan Conrad
Evaluating the fairness of task-adaptive pretraining on unlabeled test data before few-shot text classification
Kush Dubey
Positive-Sum Fairness: Leveraging Demographic Attributes to Achieve Fair AI Outcomes Without Sacrificing Group Gains
Samia Belhadj, Sanguk Park, Ambika Seth, Hesham Dar, Thijs Kooi