Multilingual Language Model
Multilingual language models (MLLMs) are AI systems designed to understand and generate text across many languages, overcoming the limitations of English-centric models. Current research focuses on improving MLLM performance in low-resource languages, mitigating biases towards dominant languages, and developing techniques for efficient knowledge editing and unlearning to address privacy and ethical concerns. These advancements are crucial for broadening access to AI-powered tools and fostering more equitable and inclusive natural language processing applications globally.
Papers
One For All & All For One: Bypassing Hyperparameter Tuning with Model Averaging For Cross-Lingual Transfer
Fabian David Schmidt, Ivan Vulić, Goran Glavaš
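The model averaging named in this title can be illustrated with a minimal sketch: several checkpoints of the same architecture, fine-tuned under different hyperparameter settings, are combined by element-wise parameter averaging into a single model, sidestepping the choice of one best configuration. The snippet below is an illustrative PyTorch sketch, not the paper's code; the checkpoint paths are hypothetical placeholders.

```python
import torch

def average_state_dicts(state_dicts):
    """Element-wise average of parameters from same-architecture checkpoints."""
    avg = {}
    for key in state_dicts[0]:
        # Stack the corresponding tensors from every checkpoint and take the mean,
        # casting back to the original dtype (e.g., for integer buffers).
        stacked = torch.stack([sd[key].float() for sd in state_dicts])
        avg[key] = stacked.mean(dim=0).to(state_dicts[0][key].dtype)
    return avg

# Hypothetical checkpoints fine-tuned with different hyperparameter settings.
paths = ["run_lr1e-5.pt", "run_lr3e-5.pt", "run_lr5e-5.pt"]
state_dicts = [torch.load(p, map_location="cpu") for p in paths]

averaged = average_state_dicts(state_dicts)
# model.load_state_dict(averaged)  # load into the shared architecture
```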
Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models
Jirui Qi, Raquel Fernández, Arianna Bisazza
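Cross-lingual consistency of factual knowledge can be probed, in a toy fashion, by posing the same factual cloze query in two languages and comparing the model's top predictions. The sketch below only measures top-k overlap with a standard Hugging Face fill-mask pipeline; it is an assumption-laden illustration, not the paper's own consistency metric, and the model name and prompts are illustrative choices.

```python
from transformers import pipeline

# Any BERT-style multilingual masked LM works here; mBERT is one common choice.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

def top_k_tokens(prompt, k=10):
    """Return the model's top-k candidate tokens for the [MASK] slot."""
    return {pred["token_str"] for pred in fill(prompt, top_k=k)}

# The same factual query expressed in English and Italian (hypothetical prompts).
en = top_k_tokens("The capital of France is [MASK].")
it = top_k_tokens("La capitale della Francia è [MASK].")

# Crude consistency score: Jaccard overlap of the top-k candidate sets.
overlap = len(en & it) / len(en | it)
print(f"top-k overlap: {overlap:.2f}")
```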
Investigating Bias in Multilingual Language Models: Cross-Lingual Transfer of Debiasing Techniques
Manon Reusens, Philipp Borchert, Margot Mieskes, Jochen De Weerdt, Bart Baesens
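One commonly studied debiasing technique in this line of work is counterfactual data augmentation (CDA), which augments the fine-tuning corpus with gender-swapped copies of each sentence. The sketch below shows a naive English word-swap variant under that assumption; whether and how the paper applies this exact variant before transferring to other languages is not something the sketch claims.

```python
import re

# Hypothetical English gender-term pairs for counterfactual data augmentation.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "man": "woman", "woman": "man"}

def counterfactual(sentence):
    """Swap gendered terms to produce a counterfactual training example."""
    def swap(match):
        word = match.group(0)
        repl = SWAPS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

# Train on both the original sentence and its counterfactual counterpart.
print(counterfactual("He said the doctor gave him the notes."))
```

A known limitation of this naive swap, and one reason cross-lingual transfer of debiasing is non-trivial, is that word-level substitution ignores morphology and grammatical gender, which matter far more in languages other than English.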