Multilingual Instruction Tuning
Multilingual instruction tuning aims to improve the ability of large language models (LLMs) to follow instructions across many languages, addressing the current dominance of English in training data. Research focuses on building high-quality, diverse multilingual instruction datasets, often by translating existing English instruction data and incorporating N-shot learning or reinforcement learning from human feedback (RLHF) to improve model performance and consistency across languages. This work matters because it broadens LLMs' accessibility and utility worldwide, advancing both the scientific understanding of cross-lingual generalization and practical applications such as multilingual chatbots and question-answering systems.
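The translation-based dataset construction mentioned above can be sketched as follows. This is a minimal illustration, not any specific paper's pipeline: the `translate` function is a hypothetical stub backed by a small lookup table, standing in for a real machine-translation model or API, and the field names (`instruction`, `output`, `lang`) are assumed conventions.

```python
# Sketch: expanding an English seed instruction dataset into a multilingual
# instruction-tuning dataset by translating the instruction field.

# Seed data in English (illustrative example only).
SEED_DATA = [
    {"instruction": "Summarize the following text.", "output": "..."},
]

# Hypothetical stub translator: a real pipeline would call an MT system.
TRANSLATIONS = {
    ("Summarize the following text.", "es"): "Resume el siguiente texto.",
    ("Summarize the following text.", "de"): "Fasse den folgenden Text zusammen.",
}

def translate(text: str, target_lang: str) -> str:
    # Fall back to the original text when no translation is available
    # (e.g. for the source language itself).
    return TRANSLATIONS.get((text, target_lang), text)

def build_multilingual_dataset(seed, languages):
    # Cross each seed example with each target language.
    dataset = []
    for example in seed:
        for lang in languages:
            dataset.append({
                "lang": lang,
                "instruction": translate(example["instruction"], lang),
                "output": example["output"],
            })
    return dataset

data = build_multilingual_dataset(SEED_DATA, ["en", "es", "de"])
```

In practice the outputs would also be translated or regenerated in the target language, and quality filtering would be applied to remove low-quality machine translations before fine-tuning.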
Papers
November 7, 2024
October 10, 2024
July 13, 2024
July 1, 2024
June 18, 2024
June 13, 2024
June 4, 2024
April 6, 2024
February 21, 2024
February 9, 2024
September 16, 2023
July 29, 2023
June 7, 2023