Contextual Biasing
Contextual biasing refers to the influence of surrounding information (context) on model outputs, a significant concern across various machine learning domains. Current research focuses on mitigating such biases in large language models (LLMs) and automatic speech recognition (ASR) systems, employing techniques like counterfactual inference, attention mechanisms, and data augmentation to improve fairness and accuracy. This work is crucial for developing reliable and unbiased AI systems, impacting fields from social science research (using LLMs for public opinion analysis) to medical AI (fair analysis of medical datasets), as well as improving the accuracy and robustness of speech recognition technologies.
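To make the counterfactual-inference idea mentioned above concrete, here is a minimal toy sketch (not drawn from any of the listed papers): the model is scored twice, once on the full input and once on the surrounding context alone, and the context-only scores are subtracted so that predictions driven purely by context are down-weighted. The function name `debias_logits`, the weight `alpha`, and the toy logits are all illustrative assumptions.

```python
import numpy as np

def debias_logits(logits_with_context, logits_context_only, alpha=1.0):
    """Counterfactual-style adjustment (illustrative sketch):
    subtract a scaled copy of the scores the model produces from the
    context alone, reducing the influence of the contextual prior."""
    return logits_with_context - alpha * logits_context_only

# Toy 3-class example where the context alone already pushes the
# model toward class 0, regardless of the actual input.
logits_full = np.array([2.0, 1.5, 0.2])  # scores on input + context
logits_ctx = np.array([1.8, 0.1, 0.1])   # scores on context only (input masked)

adjusted = debias_logits(logits_full, logits_ctx, alpha=1.0)
probs = np.exp(adjusted) / np.exp(adjusted).sum()
print("debiased prediction:", int(probs.argmax()), probs.round(3))
```

In this toy case the raw scores favor class 0 because of the context alone, while the adjusted scores favor class 1; the weight `alpha` controls how aggressively the contextual prior is removed.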
Papers
COBIAS: Contextual Reliability in Bias Assessment
Priyanshul Govil, Hemang Jain, Vamshi Krishna Bonagiri, Aman Chadha, Ponnurangam Kumaraguru, Manas Gaur, Sanorita Dey
LLM-Assisted Content Conditional Debiasing for Fair Text Embedding
Wenlong Deng, Blair Chen, Beidi Zhao, Chiyu Zhang, Xiaoxiao Li, Christos Thrampoulidis