Multilingual Model
Multilingual models aim to process and generate text across multiple languages, overcoming the limitations of monolingual approaches and expanding access to natural language processing (NLP) for low-resource languages. Current research focuses on improving performance, particularly for low-resource languages, building on transformer-based architectures (e.g., BERT, mT5) and exploring techniques such as instruction tuning, knowledge distillation, and targeted multilingual adaptation. This work matters because it addresses the biases inherent in predominantly English-centric models and broadens access to NLP tools and applications across diverse linguistic communities.
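To make one of the named techniques concrete, below is a minimal, generic sketch of a knowledge-distillation objective (the temperature-softened KL divergence between a teacher's and a student's output distributions, as in Hinton et al.). It is illustrative only and not the method of any specific paper listed here; the function name and toy tensors are hypothetical.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL(teacher || student) on temperature-softened output distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

if __name__ == "__main__":
    # Toy example: a batch of 4 positions over a 10-class vocabulary.
    teacher = torch.randn(4, 10)
    student = torch.randn(4, 10, requires_grad=True)
    loss = distillation_loss(student, teacher)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

In a multilingual setting, the same loss is typically applied with a large multilingual teacher and a smaller (or language-adapted) student, optionally combined with a standard task loss on labeled data.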
Papers
Transfer Learning of Transformer-based Speech Recognition Models from Czech to Slovak
Jan Lehečka, Josef V. Psutka, Josef Psutka
Multilingual Clinical NER: Translation or Cross-lingual Transfer?
Xavier Fontaine, Félix Gaschi, Parisa Rastin, Yannick Toussaint
XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations
Yusen Zhang, Jun Wang, Zhiguo Wang, Rui Zhang