Multilingual Model
Multilingual models aim to process and generate text across many languages, overcoming the limitations of monolingual approaches and expanding access to natural language processing (NLP) for low-resource languages. Current research focuses on improving the performance of these models, particularly for low-resource languages, using transformer-based architectures (e.g., BERT, mT5) and exploring techniques such as instruction tuning, knowledge distillation, and targeted multilingual adaptation. This work matters because it addresses biases inherent in predominantly English-centric models and extends NLP tools and applications to diverse linguistic communities.
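Since the summary names transformer-based multilingual models such as BERT and mT5, a minimal sketch of the "one model, many languages" idea may help. It assumes the Hugging Face transformers library and the public multilingual BERT checkpoint bert-base-multilingual-cased, neither of which is prescribed by the papers below; it simply illustrates that a single pretrained checkpoint can answer masked-word prompts in several languages without retraining.

```python
# Minimal sketch: querying one multilingual masked language model in several
# languages. Assumes `pip install transformers torch` and the public mBERT
# checkpoint (an assumption for illustration; any multilingual MLM works).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

# The same checkpoint serves prompts in different languages, which is the
# core appeal of multilingual models for low-resource settings.
prompts = [
    "Paris is the capital of [MASK].",        # English
    "Paris ist die Hauptstadt von [MASK].",   # German
    "Paris es la capital de [MASK].",         # Spanish
]
for prompt in prompts:
    top = fill_mask(prompt, top_k=1)[0]
    print(f"{prompt} -> {top['token_str']} (score={top['score']:.2f})")
```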
Papers
Speaking Multiple Languages Affects the Moral Bias of Language Models
Katharina Hämmerl, Björn Deiseroth, Patrick Schramowski, Jindřich Libovický, Constantin A. Rothkopf, Alexander Fraser, Kristian Kersting
Findings of the Covid-19 MLIA Machine Translation Task
Francisco Casacuberta, Alexandru Ceausu, Khalid Choukri, Miltos Deligiannis, Miguel Domingo, Mercedes García-Martínez, Manuel Herranz, Guillaume Jacquet, Vassilis Papavassiliou, Stelios Piperidis, Prokopis Prokopidis, Dimitris Roussis, Marwa Hadj Salah
Combining Contrastive Learning and Knowledge Graph Embeddings to develop medical word embeddings for the Italian language
Denys Amore Bondarenko, Roger Ferrod, Luigi Di Caro
Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar
Intriguing Properties of Compression on Multilingual Models
Kelechi Ogueji, Orevaoghene Ahia, Gbemileke Onilude, Sebastian Gehrmann, Sara Hooker, Julia Kreutzer
Federated Multilingual Models for Medical Transcript Analysis
Andre Manoel, Mirian Hipolito Garcia, Tal Baumel, Shize Su, Jialei Chen, Dan Miller, Danny Karmon, Robert Sim, Dimitrios Dimitriadis