Contextual Biasing
Contextual biasing refers to the influence of surrounding information (context) on model outputs, a significant concern across machine learning domains. Current research focuses on mitigating such biases in large language models (LLMs) and on handling context in automatic speech recognition (ASR) systems, employing techniques such as counterfactual inference, attention mechanisms, and data augmentation to improve fairness and accuracy. This work is crucial for developing reliable, unbiased AI systems, with impact ranging from social science research (using LLMs for public opinion analysis) to medical AI (fair analysis of medical datasets) and more accurate, robust speech recognition.
Papers
Contextual Biasing with the Knuth-Morris-Pratt Matching Algorithm
Weiran Wang, Zelin Wu, Diamantino Caseiro, Tsendsuren Munkhdalai, Khe Chai Sim, Pat Rondon, Golan Pundak, Gan Song, Rohit Prabhavalkar, Zhong Meng, Ding Zhao, Tara Sainath, Pedro Moreno Mengibar
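The Knuth-Morris-Pratt algorithm named in this paper's title matches a pattern against a sequence in linear time by precomputing a failure table. As a rough, hedged sketch (the example phrase and decoding setup below are hypothetical, not taken from the paper), one can imagine applying standard KMP matching over token sequences to detect biasing phrases in a decoding hypothesis:

```python
def kmp_failure(pattern):
    """Longest proper prefix-suffix (failure) table for KMP."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_find(text, pattern):
    """Return all start indices where pattern occurs in text.

    Works on any sequence of comparable items (characters or tokens).
    """
    if not pattern:
        return []
    fail = kmp_failure(pattern)
    hits, k = [], 0
    for i, item in enumerate(text):
        while k > 0 and item != pattern[k]:
            k = fail[k - 1]
        if item == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]  # continue searching for overlapping matches
    return hits

# Hypothetical example: locate a biasing phrase (a token sequence)
# inside a decoded ASR hypothesis.
hypothesis = ["call", "pedro", "moreno", "mengibar", "now"]
phrase = ["pedro", "moreno", "mengibar"]
print(kmp_find(hypothesis, phrase))  # [1]
```

The failure table lets the matcher skip re-examining tokens after a mismatch, which is what gives KMP its linear-time guarantee.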
Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering
Han Zhou, Xingchen Wan, Lev Proleev, Diana Mincu, Jilin Chen, Katherine Heller, Subhrajit Roy
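The core intuition behind batch calibration is that an in-context prompt can systematically inflate the score of some classes for every input; estimating that contextual bias from a batch of predictions and subtracting it recovers fairer decisions. The following is a simplified sketch of that idea (the mean-centering scheme and the scores shown are illustrative assumptions, not the paper's exact formulation):

```python
def batch_calibrate(batch_scores):
    """Mean-center per-class scores across a batch.

    batch_scores: list of per-example class scores (e.g. log-probabilities),
    shape [N examples][C classes]. The contextual bias is estimated as the
    mean score of each class over the batch and subtracted, so no class is
    preferred purely because of the prompt or in-context examples.
    """
    n = len(batch_scores)
    c = len(batch_scores[0])
    bias = [sum(row[j] for row in batch_scores) / n for j in range(c)]
    return [[row[j] - bias[j] for j in range(c)] for row in batch_scores]

# Hypothetical scores where the context inflates class 0 for every example:
scores = [[-0.1, -2.3], [-0.2, -1.6], [-0.3, -0.4]]
calibrated = batch_calibrate(scores)
predictions = [max(range(2), key=lambda j: row[j]) for row in calibrated]
print(predictions)  # [0, 0, 1]
```

Without calibration, the inflated class-0 scores would win on all three examples; after centering, the third example is correctly assigned to class 1 relative to the batch-level bias.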