Position Bias
Position bias in large language models (LLMs) refers to the tendency of these models to weight information disproportionately according to its location in the input sequence, typically favoring content near the beginning or end of the context while under-weighting material in the middle. Current research focuses on mitigating this bias through techniques such as parameter-efficient fine-tuning, data augmentation, and unsupervised debiasing methods that leverage the models' own responses. Addressing position bias is crucial for improving the reliability and accuracy of LLMs in long-context applications such as question answering and document retrieval, where the correct answer may appear anywhere in the input.
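To make the phenomenon concrete, a common way to probe position bias is to insert one relevant fact at different depths in a long context of distractors and measure whether answer accuracy varies with the fact's position. The sketch below illustrates this idea; the fact, question, distractors, and the ask_model function are all hypothetical placeholders, not taken from any particular paper, and ask_model would need to be replaced with a real LLM call.

```python
"""Minimal position-bias probe: accuracy vs. depth of a key fact in context.

Assumes a hypothetical ask_model(prompt) -> str stand-in for an actual LLM call.
"""
import random
from collections import defaultdict

# Illustrative "needle" fact, question, and filler distractor sentences.
FACT = "The access code for the archive room is 4721."
QUESTION = "What is the access code for the archive room?"
ANSWER = "4721"
DISTRACTORS = [f"Note {i}: routine maintenance was logged on day {i}." for i in range(200)]


def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real model call
    # (an API client or a local model's generate method).
    raise NotImplementedError("plug in your model here")


def build_prompt(fact_depth: float) -> str:
    """Place the key fact at a relative depth (0.0 = start, 1.0 = end) of the context."""
    docs = DISTRACTORS.copy()
    idx = int(fact_depth * len(docs))
    docs.insert(idx, FACT)
    context = "\n".join(docs)
    return f"{context}\n\nQuestion: {QUESTION}\nAnswer:"


def measure_position_bias(depths=(0.0, 0.25, 0.5, 0.75, 1.0), trials=20) -> dict:
    """Return accuracy per insertion depth; large gaps across depths suggest position bias."""
    accuracy = defaultdict(float)
    for depth in depths:
        correct = 0
        for _ in range(trials):
            random.shuffle(DISTRACTORS)  # vary the surrounding context each trial
            reply = ask_model(build_prompt(depth))
            correct += ANSWER in reply
        accuracy[depth] = correct / trials
    return dict(accuracy)
```

A flat accuracy curve across depths would indicate little position bias, whereas a U-shaped curve (high accuracy only when the fact sits at the start or end) is the pattern the mitigation techniques above aim to remove.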
Papers