Multilingual Knowledge
Multilingual knowledge research focuses on enabling artificial intelligence models to understand and use information across many languages, aiming to overcome the limitations of English-centric models. Current efforts concentrate on improving multilingual large language models (LLMs) through techniques such as mixture-of-experts architectures, knowledge distillation, and targeted neuron manipulation for knowledge editing and cross-lingual transfer. This research is crucial for bridging the language gap in information access and processing; it impacts fields such as machine translation, question answering, and cross-lingual knowledge graph completion, and ultimately fosters greater inclusivity and accessibility of information worldwide.
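As a concrete illustration of one technique named above, the sketch below shows a generic knowledge-distillation loss of the kind used when compressing a multilingual teacher model into a smaller student. It is a minimal example, not the method of any paper listed here; the temperature and alpha values, tensor shapes, and the standalone function name distillation_loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-label distillation with ordinary supervised loss.

    student_logits, teacher_logits: (batch, vocab) tensors
    labels: (batch,) gold class/token ids
    temperature, alpha: illustrative hyperparameters, not from any cited paper
    """
    # Soft targets: the student matches the teacher's tempered distribution
    # via KL divergence, scaled by T^2 to keep gradient magnitudes comparable.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Hard targets: standard cross-entropy against the gold labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss


if __name__ == "__main__":
    # Toy example: random logits standing in for a multilingual teacher
    # and a smaller student sharing the same vocabulary.
    batch, vocab = 4, 32_000
    teacher_logits = torch.randn(batch, vocab)
    student_logits = torch.randn(batch, vocab, requires_grad=True)
    labels = torch.randint(0, vocab, (batch,))

    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

In practice, the teacher and student would be actual multilingual models and the loss would be averaged over batches drawn from many languages; how the student is initialized and which languages are sampled are exactly the design questions the distillation papers below investigate.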
Papers
The Privileged Students: On the Value of Initialization in Multilingual Knowledge Distillation
Haryo Akbarianto Wibowo, Thamar Solorio, Alham Fikri Aji
Multilingual Knowledge Editing with Language-Agnostic Factual Neurons
Xue Zhang, Yunlong Liang, Fandong Meng, Songming Zhang, Yufeng Chen, Jinan Xu, Jie Zhou
μPLAN: Summarizing using a Content Plan as Cross-Lingual Bridge
Fantine Huot, Joshua Maynez, Chris Alberti, Reinald Kim Amplayo, Priyanka Agrawal, Constanza Fierro, Shashi Narayan, Mirella Lapata
Condensing Multilingual Knowledge with Lightweight Language-Specific Modules
Haoran Xu, Weiting Tan, Shuyue Stella Li, Yunmo Chen, Benjamin Van Durme, Philipp Koehn, Kenton Murray
Polyglot or Not? Measuring Multilingual Encyclopedic Knowledge in Foundation Models
Tim Schott, Daniel Furman, Shreshta Bhat