Multilingual Language Model
Multilingual language models (MLLMs) are AI systems designed to understand and generate text across multiple languages, overcoming the limitations of English-centric models. Current research focuses on improving MLLM performance in low-resource languages, mitigating biases towards dominant languages, and developing efficient techniques for knowledge editing and unlearning to address privacy and ethical concerns. These advances are crucial for broadening access to AI-powered tools and fostering more equitable and inclusive natural language processing applications globally.
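As a rough illustration of the core capability described above, the sketch below runs the same masked-word prediction task in three languages with a single multilingual model. It uses the Hugging Face transformers library and the publicly available xlm-roberta-base checkpoint, chosen here purely as an example; it is a minimal sketch and is not tied to any of the papers listed below.

    from transformers import pipeline

    # One multilingual model handles the same task in several languages
    # without per-language fine-tuning. xlm-roberta-base is used here
    # only as an illustrative checkpoint.
    fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

    # XLM-R uses "<mask>" as its mask token.
    prompts = [
        "The capital of France is <mask>.",           # English
        "La capital de Francia es <mask>.",           # Spanish
        "Die Hauptstadt von Frankreich ist <mask>.",  # German
    ]

    for text in prompts:
        top = fill_mask(text, top_k=1)[0]
        print(f"{text} -> {top['token_str']} (score={top['score']:.2f})")

In practice, performance on such prompts tends to degrade for low-resource languages, which is one of the gaps the research directions above aim to close.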
Papers
Applying Multilingual Models to Question Answering (QA)
Ayrton San Joaquin, Filip Skubacz
Cross-lingual Similarity of Multilingual Representations Revisited
Maksym Del, Mark Fishel
Languages You Know Influence Those You Learn: Impact of Language Characteristics on Multi-Lingual Text-to-Text Transfer
Benjamin Muller, Deepanshu Gupta, Siddharth Patwardhan, Jean-Philippe Fauconnier, David Vandyke, Sachin Agarwal