Multilingual Language Model
Multilingual language models (MLLMs) are AI systems designed to understand and generate text across multiple languages, overcoming the limitations of English-centric models. Current research focuses on improving MLLM performance in low-resource languages, mitigating biases toward dominant languages, and developing techniques for efficient knowledge editing and unlearning to address privacy and ethical concerns. These advances are crucial for broadening access to AI-powered tools and fostering more equitable and inclusive natural language processing applications globally.
Papers
Beyond Static Models and Test Sets: Benchmarking the Potential of Pre-trained Models Across Tasks and Languages
Kabir Ahuja, Sandipan Dandapat, Sunayana Sitaram, Monojit Choudhury
On the Economics of Multilingual Few-shot Learning: Modeling the Cost-Performance Trade-offs of Machine Translated and Manual Data
Kabir Ahuja, Monojit Choudhury, Sandipan Dandapat