Cross-Lingual Knowledge Transfer
Cross-lingual knowledge transfer aims to leverage knowledge learned from high-resource languages to improve natural language processing (NLP) models in low-resource languages. Current research focuses on adapting large language models (LLMs) and other architectures through techniques such as parameter-efficient fine-tuning, knowledge editing, and data augmentation methods, including pseudo-semantic data augmentation and translation-based approaches. This line of work matters because it addresses the imbalance of language resources in NLP, enabling more inclusive and globally accessible applications across diverse languages.
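To make the parameter-efficient fine-tuning idea concrete, below is a minimal, self-contained PyTorch sketch: a frozen (toy) multilingual encoder is augmented with small trainable low-rank (LoRA-style) adapters and a classifier head, fine-tuned only on high-resource-language data, and then applied unchanged to low-resource-language inputs that share the same representation space. The toy encoder, dimensions, and random data are illustrative assumptions, not drawn from the papers listed here; in practice the backbone would be a pretrained multilingual model such as XLM-R.

```python
# Sketch of parameter-efficient cross-lingual transfer with LoRA-style adapters.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + scale * B(A x)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # backbone weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)     # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Toy "multilingual encoder" (illustrative stand-in for a pretrained model).
hidden = 64
encoder = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
for p in encoder.parameters():
    p.requires_grad = False

# Inject the adapter and add a small task head; only these are updated.
adapted = nn.Sequential(LoRALinear(encoder[0]), nn.Tanh(), nn.Linear(hidden, 2))

# Fine-tune on synthetic high-resource-language examples only.
x_hi = torch.randn(32, hidden)                 # stand-in for English sentence embeddings
y_hi = torch.randint(0, 2, (32,))
opt = torch.optim.Adam([p for p in adapted.parameters() if p.requires_grad], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(adapted(x_hi), y_hi)
    loss.backward()
    opt.step()

# Zero-shot transfer: the same adapted model scores low-resource-language
# inputs, relying on the shared multilingual representation space.
x_lo = torch.randn(8, hidden)                  # stand-in for low-resource-language embeddings
print(adapted(x_lo).argmax(dim=-1))
```

Only the adapter and head parameters are optimized, which keeps the per-language memory footprint small; the same pattern extends to real pretrained encoders by wrapping their attention or feed-forward projections instead of the toy layer above.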
Papers
TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills
Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao
Cross-lingual Knowledge Transfer and Iterative Pseudo-labeling for Low-Resource Speech Recognition with Transducers
Jan Silovsky, Liuhui Deng, Arturo Argueta, Tresi Arvizo, Roger Hsiao, Sasha Kuznietsov, Yiu-Chang Lin, Xiaoqiang Xiao, Yuanyuan Zhang