Multilingual Model
Multilingual models aim to process and generate text across many languages, overcoming the limitations of monolingual approaches and expanding access to natural language processing (NLP) beyond high-resource languages. Current research focuses on improving these models' performance, particularly for low-resource languages, using transformer-based architectures (e.g., BERT, mT5) and techniques such as instruction tuning, knowledge distillation, and targeted multilingual adaptation. This work matters because it addresses the English-centric biases of most existing models and broadens access to NLP tools and applications across diverse linguistic communities.
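To make the setting concrete, here is a minimal sketch of loading and querying an off-the-shelf multilingual model with the Hugging Face transformers library. The checkpoint, example input, and generation settings are illustrative assumptions, not taken from the papers below.

```python
# Minimal sketch: querying a pretrained multilingual model (assumes the
# Hugging Face transformers library is installed; model name is illustrative).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-small"  # mT5: multilingual T5, pretrained on ~101 languages
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Note: mT5 is pretrained with a span-corruption objective, so its raw
# generations are not task outputs; in practice the checkpoint is first
# fine-tuned or instruction-tuned on the target task and languages.
inputs = tokenizer("Terjemahkan ke bahasa Inggris: Selamat pagi.",  # Indonesian input
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same tokenizer and model classes load other multilingual checkpoints by name, which is why much of the adaptation work surveyed below (instruction tuning, distillation, targeted adaptation) can start from a shared pretrained base.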
Papers
Mix Data or Merge Models? Optimizing for Diverse Multi-Task Learning
Aakanksha, Arash Ahmadian, Seraphina Goldfarb-Tarrant, Beyza Ermis, Marzieh Fadaee, Sara Hooker
Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts
Guorui Zheng, Xidong Wang, Juhao Liang, Nuo Chen, Yuping Zheng, Benyou Wang
The Same But Different: Structural Similarities and Differences in Multilingual Language Modeling
Ruochen Zhang, Qinan Yu, Matianyu Zang, Carsten Eickhoff, Ellie Pavlick
Enhancing Indonesian Automatic Speech Recognition: Evaluating Multilingual Models with Diverse Speech Variabilities
Aulia Adila, Dessi Lestari, Ayu Purwarianti, Dipta Tanaya, Kurniawati Azizah, Sakriani Sakti
From N-grams to Pre-trained Multilingual Models For Language Identification
Thapelo Sindane, Vukosi Marivate
AraDiCE: Benchmarks for Dialectal and Cultural Capabilities in LLMs
Basel Mousi, Nadir Durrani, Fatema Ahmad, Md. Arid Hasan, Maram Hasanain, Tameem Kabbani, Fahim Dalvi, Shammur Absar Chowdhury, Firoj Alam
Cross-lingual transfer of multilingual models on low resource African Languages
Harish Thangaraj, Ananya Chenat, Jaskaran Singh Walia, Vukosi Marivate