Multilingual Language Model
Multilingual language models (MLLMs) are AI systems designed to understand and generate text across many languages, overcoming the limitations of English-centric models. Current research focuses on improving MLLM performance in low-resource languages, mitigating biases towards dominant languages, and developing techniques for efficient knowledge editing and unlearning to address privacy and ethical concerns. These advancements are crucial for broadening access to AI-powered tools and fostering more equitable and inclusive natural language processing applications globally.
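To make the idea concrete, below is a minimal sketch of querying a multilingual masked language model through the Hugging Face transformers library. The xlm-roberta-base checkpoint and the example prompts are illustrative choices, not drawn from the papers listed on this page.

    from transformers import pipeline

    # Load a multilingual masked language model; XLM-R is pretrained on text
    # from roughly 100 languages with a single shared vocabulary.
    fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

    # The same model completes masked tokens in different languages,
    # with no language-specific configuration or fine-tuning.
    print(fill_mask("The capital of France is <mask>.")[0]["token_str"])
    print(fill_mask("La capital de Francia es <mask>.")[0]["token_str"])  # Spanish

In practice, the quality of such predictions varies sharply between high- and low-resource languages, which is precisely the gap much of the research below targets.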
Papers
Beyond Static Models and Test Sets: Benchmarking the Potential of Pre-trained Models Across Tasks and Languages
Kabir Ahuja, Sandipan Dandapat, Sunayana Sitaram, Monojit Choudhury

On the Economics of Multilingual Few-shot Learning: Modeling the Cost-Performance Trade-offs of Machine Translated and Manual Data
Kabir Ahuja, Monojit Choudhury, Sandipan Dandapat
April 27, 2022