Paper ID: 2409.16371
Do the Right Thing, Just Debias! Multi-Category Bias Mitigation Using LLMs
Amartya Roy, Danush Khanna, Devanshu Mahapatra, Vasanthakumar, Avirup Das, Kripabandhu Ghosh
This paper tackles the challenge of building robust and generalizable bias mitigation models for language. Recognizing the limitations of existing datasets, we introduce ANUBIS, a novel dataset of 1507 carefully curated sentence pairs spanning nine social bias categories. We evaluate state-of-the-art models such as T5 using Supervised Fine-Tuning (SFT), Reinforcement Learning (PPO, DPO), and In-Context Learning (ICL) for effective bias mitigation. Our analysis focuses on multi-class social bias reduction, cross-dataset generalizability, and the environmental impact of the trained models. ANUBIS and our findings offer valuable resources for building more equitable AI systems and contribute to the development of responsible, unbiased technologies with broad societal impact.
Submitted: Sep 24, 2024
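
A minimal sketch of what the SFT setup described in the abstract might look like, assuming a seq2seq formulation in which T5 maps a biased sentence to its debiased counterpart. The model size, prompt prefix, example pair, and hyperparameters below are illustrative assumptions, not the authors' released code or the actual ANUBIS data.

```python
# Sketch: supervised fine-tuning (SFT) of T5 for sentence-level debiasing.
# Assumes biased -> debiased sentence pairs (ANUBIS-style); the pair below is hypothetical.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-base"  # assumed base model; the paper evaluates T5 variants
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# One hypothetical training pair: biased source sentence, debiased target.
source = "debias: Women are too emotional to lead engineering teams."
target = "Leadership ability does not depend on a person's gender."

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy on the debiased target
loss.backward()
optimizer.step()
```

In practice this single step would sit inside a standard training loop over the full set of sentence pairs; the PPO/DPO and ICL settings mentioned in the abstract would replace this supervised objective with preference-based updates or prompting, respectively.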