Language Prior
Language priors refer to the tendency of language models, particularly in multimodal settings, to rely on textual patterns and biases rather than grounding their outputs in visual or other non-textual information. Current research focuses on mitigating this bias through methods such as image-biased decoding, benchmarks that quantify the influence of language priors (e.g., VLind-Bench), and Mixture-of-Experts architectures that extend multilingual capability while preserving existing knowledge. Addressing language priors is crucial for improving the robustness, reliability, and generalizability of AI models across applications such as visual question answering, image captioning, and cross-lingual tasks.
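As a rough illustration of the image-biased decoding idea mentioned above, the sketch below contrasts a model's image-conditioned next-token logits against its text-only logits, boosting tokens whose likelihood depends on the image. The function name, the `alpha` weight, and the specific contrastive formula are illustrative assumptions, not the method of any particular paper.

```python
import numpy as np

def image_biased_logits(logits_with_image: np.ndarray,
                        logits_text_only: np.ndarray,
                        alpha: float = 1.0) -> np.ndarray:
    """Sketch of contrastive, image-biased decoding (assumed formulation).

    Tokens whose probability rises when the image is visible are amplified,
    down-weighting continuations driven purely by the language prior.
    """
    # Normalize each logit vector into log-probabilities.
    log_p_img = logits_with_image - np.logaddexp.reduce(logits_with_image)
    log_p_txt = logits_text_only - np.logaddexp.reduce(logits_text_only)
    # One common contrastive form:
    # (1 + alpha) * log p(y | image, text) - alpha * log p(y | text)
    return (1.0 + alpha) * log_p_img - alpha * log_p_txt

# Toy usage: the second token gains probability only when the image is seen,
# so the contrastive scores favor it over the prior-driven first token.
with_img = np.array([2.0, 3.5, 0.1])
text_only = np.array([2.0, 1.0, 0.1])
print(image_biased_logits(with_img, text_only, alpha=0.5))
```

In practice the two logit vectors would come from the same vision-language model run with and without (or with a degraded) image input; the `alpha` hyperparameter controls how strongly the language prior is suppressed.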