Procedural Fairness
Procedural fairness in artificial intelligence focuses on ensuring that AI systems make decisions equitably across demographic groups, mitigating biases that can lead to discriminatory outcomes. Current research emphasizes developing and evaluating fairness-aware algorithms and models, including those based on adversarial learning, data augmentation techniques such as mixup, and distributionally robust optimization, in applications such as healthcare, process analytics, and recommender systems. This work is crucial for building trustworthy AI systems and addressing societal concerns about algorithmic bias, informing both ethical AI guidelines and the practical deployment of AI in sensitive domains.
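To make the notion of group-level equity concrete, the sketch below computes a demographic parity gap, one of the simplest group-fairness metrics this line of research evaluates. All function names and data here are illustrative assumptions, not drawn from any specific paper listed on this page.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (exactly two distinct values expected)
    """
    by_group = {}
    for pred, g in zip(predictions, groups):
        by_group.setdefault(g, []).append(pred)
    if len(by_group) != 2:
        raise ValueError("expected exactly two groups")
    rates = [sum(v) / len(v) for v in by_group.values()]
    return abs(rates[0] - rates[1])

# Hypothetical example: group "a" receives positive predictions 75% of the
# time, group "b" only 25% -- a demographic parity gap of 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Fairness-aware training methods such as those surveyed above aim to drive gaps like this toward zero without sacrificing overall accuracy.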
Papers
Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting
Prasanna Sattigeri, Soumya Ghosh, Inkit Padhi, Pierre Dognin, Kush R. Varshney
FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang, Jia Liu, Michinari Momma, Yi Sun
Equality of Effort via Algorithmic Recourse
Francesca E. D. Raimondi, Andrew R. Lawrence, Hana Chockler
Fairness Increases Adversarial Vulnerability
Cuong Tran, Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck
Bursting the Burden Bubble? An Assessment of Sharma et al.'s Counterfactual-based Fairness Metric
Yochem van Rosmalen, Florian van der Steen, Sebastiaan Jans, Daan van der Weijden