Multilingual Large Language Model
Multilingual large language models (MLLMs) extend the capabilities of large language models to many languages, improving cross-lingual understanding and generation. Current research focuses on improving performance for low-resource languages through techniques such as continued pre-training on large multilingual corpora and parameter-efficient fine-tuning with knowledge graphs, as well as on mitigating bias and improving factual accuracy across languages. These advances are significant for bridging the language gap in AI applications, fostering inclusivity, and enabling more equitable access to advanced language technologies worldwide.
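As a concrete illustration of the parameter-efficient fine-tuning mentioned above, the sketch below uses LoRA adapters (via Hugging Face's peft library) to adapt a multilingual base model to a new language while keeping the base weights frozen. The model name, target modules, and hyperparameters are illustrative assumptions, not the setup of any paper listed here.

```python
# A minimal sketch of parameter-efficient fine-tuning with LoRA for adapting
# a multilingual base model to a low-resource language. All hyperparameters
# and the choice of base model are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "bigscience/bloom-560m"  # assumed multilingual base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small low-rank adapter matrices injected into the attention
# projections, so only a tiny fraction of parameters is updated.
lora_config = LoraConfig(
    r=16,                                # adapter rank (assumed)
    lora_alpha=32,                       # scaling factor (assumed)
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

The frozen base preserves the multilingual knowledge acquired in pre-training, while the adapters absorb the target-language signal, which is what makes this approach attractive when target-language data is scarce.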
Papers
Role-Play Zero-Shot Prompting with Large Language Models for Open-Domain Human-Machine Conversation
Ahmed Njifenjou, Virgile Sucal, Bassam Jabaian, Fabrice Lefèvre
PharmaGPT: Domain-Specific Large Language Models for Bio-Pharmaceutical and Chemistry
Linqing Chen, Weilei Wang, Zilong Bai, Peng Xu, Yan Fang, Jie Fang, Wentao Wu, Lizhi Zhou, Ruiji Zhang, Yubin Xia, Chaobo Xu, Ran Hu, Licong Xu, Qijun Cai, Haoran Hua, Jing Sun, Jin Liu, Tian Qiu, Haowen Liu, Meng Hu, Xiuwen Li, Fei Gao, Yufu Wang, Lin Tie, Chaochao Wang, Jianping Lu, Cheng Sun, Yixin Wang, Shengjie Yang, Yuancheng Li, Lu Jin, Lisha Zhang, Fu Bian, Zhongkai Ye, Lidong Pei, Changyang Tu
Preference Tuning For Toxicity Mitigation Generalizes Across Languages
Xiaochen Li, Zheng-Xin Yong, Stephen H. Bach
Crosslingual Capabilities and Knowledge Barriers in Multilingual Large Language Models
Lynn Chua, Badih Ghazi, Yangsibo Huang, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Amer Sinha, Chulin Xie, Chiyuan Zhang
Exploring Design Choices for Building Language-Specific LLMs
Atula Tejaswi, Nilesh Gupta, Eunsol Choi
An Analysis of Multilingual FActScore
Kim Trong Vu, Michael Krumdick, Varshini Reddy, Franck Dernoncourt, Viet Dac Lai
Selected Languages are All You Need for Cross-lingual Truthfulness Transfer
Weihao Liu, Ning Wu, Wenbiao Ding, Shining Liang, Ming Gong, Dongmei Zhang