Multilingual Instruction
Multilingual instruction tuning aims to enable large language models (LLMs) to follow instructions effectively across diverse languages, narrowing the performance gap between English and low-resource languages. Current research focuses on efficient methods for creating multilingual instruction datasets, often leveraging techniques such as reverse instruction generation and cross-lingual transfer learning, and on training recipes built around multilingual architectures such as mT5 combined with parameter-efficient fine-tuning. This work is significant because it strives to make LLMs more equitable and accessible globally, advancing both the scientific understanding of cross-lingual transfer and the practical deployment of LLMs in multilingual contexts.
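To make the recipe above concrete, here is a minimal sketch of parameter-efficient multilingual instruction tuning, assuming the Hugging Face `transformers` and `peft` libraries. The checkpoint name is real (`google/mt5-base`); the LoRA hyperparameters and the single instruction/response pair are purely illustrative, not drawn from any particular paper in this collection.

```python
# Sketch: LoRA fine-tuning of mT5 on a multilingual instruction pair.
# Assumes `pip install transformers peft torch`; hyperparameters illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

model_name = "google/mt5-base"  # multilingual encoder-decoder backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# LoRA trains small low-rank adapters instead of all base weights.
lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # adapter rank (illustrative choice)
    lora_alpha=32,
    target_modules=["q", "v"],  # mT5 attention projection module names
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of parameters

# One hypothetical instruction/response pair; the real training loop,
# batching, and optimizer are elided for brevity.
instruction = "Traduce al inglés: 'La ciencia abierta beneficia a todos.'"
target = "Open science benefits everyone."
inputs = tokenizer(instruction, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
loss.backward()  # gradients flow only through the LoRA adapters
```

Because only the adapter weights receive gradients, the same frozen multilingual backbone can host one lightweight adapter per language or task, which is one reason parameter-efficient methods recur in this line of work.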