Multilingual Task
Multilingual task research focuses on enabling language models to effectively process and generate text across multiple languages, aiming to overcome limitations imposed by the dominance of English in training data. Current efforts concentrate on improving model performance across diverse languages, particularly low-resource ones, through techniques like instruction fine-tuning, cross-lingual transfer learning, and the development of novel multilingual datasets and evaluation metrics. These advancements are crucial for bridging the language gap in various applications, including machine translation, question answering, and sentiment analysis, ultimately fostering greater inclusivity and accessibility in natural language processing.
Papers
Are More LLM Calls All You Need? Towards Scaling Laws of Compound Inference Systems
Lingjiao Chen, Jared Quincy Davis, Boris Hanin, Peter Bailis, Ion Stoica, Matei Zaharia, James Zou
Breaking the Language Barrier: Can Direct Inference Outperform Pre-Translation in Multilingual LLM Applications?
Yotam Intrator, Matan Halfon, Roman Goldenberg, Reut Tsarfaty, Matan Eyal, Ehud Rivlin, Yossi Matias, Natalia Aizenberg
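The direct-inference-versus-pre-translation question studied in the second paper can be sketched as two pipelines: pre-translation routes a non-English query through machine translation into English, runs the model in English, and translates the answer back, while direct inference prompts the model in the source language. The snippet below is a minimal illustration, assuming toy `translate` and `llm_answer` stand-ins rather than any real MT or LLM API.

```python
# Sketch of the two multilingual inference strategies. `translate` and
# `llm_answer` are hypothetical stand-ins (a lookup table and a canned
# responder), not a real translation or LLM API.

def translate(text: str, src: str, tgt: str) -> str:
    """Toy translator: a tiny lookup table standing in for an MT system."""
    table = {
        ("¿Cuál es la capital de Francia?", "es", "en"):
            "What is the capital of France?",
        ("Paris", "en", "es"): "París",
    }
    return table.get((text, src, tgt), text)

def llm_answer(prompt: str, lang: str) -> str:
    """Toy LLM: returns a canned answer in the requested language."""
    answers = {"en": "Paris", "es": "París"}
    return answers[lang]

def pre_translation(query: str, lang: str) -> str:
    """Translate query to English, infer in English, translate back."""
    english_query = translate(query, src=lang, tgt="en")
    english_answer = llm_answer(english_query, lang="en")
    return translate(english_answer, src="en", tgt=lang)

def direct_inference(query: str, lang: str) -> str:
    """Prompt the model directly in the source language."""
    return llm_answer(query, lang=lang)

query = "¿Cuál es la capital de Francia?"
print(pre_translation(query, "es"))   # answer via the English round-trip
print(direct_inference(query, "es"))  # answer in the source language
```

In practice the comparison hinges on whether translation errors in the round-trip outweigh the model's weaker generation in the source language, which is the trade-off the paper evaluates across languages.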