Cross-Lingual Generalization
Cross-lingual generalization refers to the ability of large language models (LLMs) trained primarily on one language (often English) to perform tasks effectively in other languages. Current research investigates methods to improve this ability, including instruction tuning, preference tuning, and meta-learning, often applied to multilingual models such as mBERT, XLM-R, and BLOOM. This work is crucial for broadening the global accessibility and applicability of LLMs, addressing biases that stem from data imbalances, and promoting fairness and inclusivity in natural language processing.
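As a concrete illustration of one such technique, the sketch below fine-tunes a small multilingual checkpoint on a toy set of English-only instruction/response pairs and then probes whether the instruction-following behavior transfers to a language absent from the tuning data. It is a minimal sketch, not a recipe: the choice of `bigscience/bloom-560m`, the in-line dataset, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of cross-lingual instruction tuning (assumptions: model choice,
# toy data, and hyperparameters are all illustrative, not from any cited paper).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # any small multilingual checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical English-only instruction data; the hypothesis under test is that
# the learned instruction-following behavior generalizes to other languages.
train_pairs = [
    ("Translate to French: Good morning.", "Bonjour."),
    ("Summarize: The cat sat on the mat and slept all day.", "A cat slept on a mat."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for prompt, answer in train_pairs:
        text = f"{prompt}\n{answer}{tokenizer.eos_token}"
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM objective: labels are the input ids themselves.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Probe cross-lingual transfer with an instruction in a language unseen
# during tuning (Spanish here).
model.eval()
probe = tokenizer("Traduce al francés: Buenas noches.", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**probe, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In practice, studies of this setup vary the tuning languages and measure task performance across held-out languages; the single Spanish probe above stands in for that evaluation.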