Language Prior
Language priors refer to the tendency of language models, especially in multimodal contexts, to rely on textual patterns and biases rather than directly processing visual or other non-textual information. Current research focuses on mitigating this bias through image-biased decoding, benchmarks that quantify language-prior influence (e.g., VLind-Bench), and Mixture-of-Experts architectures that improve multilingual capabilities while preserving existing knowledge. Addressing language priors is crucial for improving the robustness, reliability, and generalizability of AI models across applications such as visual question answering, image captioning, and cross-lingual tasks.
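To illustrate the image-biased decoding idea mentioned above, the minimal sketch below contrasts next-token logits computed with and without the image, so that tokens favored purely by the language prior are down-weighted. The model interface (a forward pass accepting optional `pixel_values` and returning `.logits`), the function name, and the weighting scheme are assumptions for illustration, not any specific paper's implementation.

```python
import torch

def image_biased_logits(model, input_ids, pixel_values, alpha=1.0):
    """Sketch of image-biased decoding: amplify the visual signal by
    contrasting logits computed with and without the image.

    Assumes a vision-language model whose forward pass accepts an
    optional `pixel_values` argument (hypothetical interface).
    """
    with torch.no_grad():
        # Next-token logits conditioned on both the image and the text prompt.
        multimodal = model(input_ids=input_ids,
                           pixel_values=pixel_values).logits[:, -1, :]
        # Next-token logits conditioned on the text prompt alone,
        # i.e., what the language prior would predict by itself.
        text_only = model(input_ids=input_ids).logits[:, -1, :]
    # Up-weight tokens whose score rises when the image is present and
    # down-weight tokens favored only by the language prior; alpha controls
    # how strongly the contrast is applied.
    return (1 + alpha) * multimodal - alpha * text_only
```

Decoding then proceeds as usual (greedy or sampled) over the adjusted logits; setting `alpha=0` recovers standard multimodal decoding.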