Low-Resource
Low-resource settings in natural language processing and related fields pose significant challenges because both data and computational resources are limited. Current research focuses on adapting existing large language models (LLMs) and other deep learning architectures, such as U-Net and transformer models, to these settings. Common techniques include parameter-efficient fine-tuning, data augmentation (including back-translation and synthetic data generation), and cross-lingual transfer learning, applied to tasks such as machine translation, speech recognition, and sentiment analysis for under-resourced languages; a brief sketch of one such technique follows below. These advances are crucial for bridging the digital divide and extending AI-powered tools and services to a wider range of languages and communities.
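As a concrete illustration of one of these techniques, the sketch below shows parameter-efficient fine-tuning with LoRA using the Hugging Face transformers and peft libraries. The base model, target modules, and hyperparameters are illustrative assumptions, not taken from any paper listed here.

```python
# Minimal sketch of parameter-efficient fine-tuning (LoRA) for a
# low-resource language, using Hugging Face transformers + peft.
# Model name, target modules, and hyperparameters are assumptions
# chosen for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "bigscience/bloom-560m"  # assumed small multilingual base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA injects small trainable low-rank matrices into selected
# attention projections while the base weights stay frozen, so only
# a tiny fraction of parameters is updated -- a good fit for the
# small corpora typical of low-resource languages.
config = LoraConfig(
    r=8,                                 # rank of the low-rank update
    lora_alpha=16,                       # scaling factor for the update
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, train with a standard transformers Trainer on the small
# target-language corpus; only the lightweight LoRA adapters need to
# be saved and shared.
```

Because only the adapter weights are trained and stored, this approach keeps both the compute and storage costs low, which is precisely the constraint that motivates its use in low-resource settings.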
Papers
On the Transferability of Pre-trained Language Models for Low-Resource Programming Languages
Fuxiang Chen, Fatemeh Fard, David Lo, Timofey Bryksin
A Complementary Joint Training Approach Using Unpaired Speech and Text for Low-Resource Automatic Speech Recognition
Ye-Qian Du, Jie Zhang, Qiu-Shi Zhu, Li-Rong Dai, Ming-Hui Wu, Xin Fang, Zhou-Wang Yang