Low Resource
Low-resource settings in natural language processing and related fields present significant challenges due to limited data and computational resources. Current research focuses on adapting existing large language models (LLMs) and other deep learning architectures, such as U-Net and transformer models, to these settings. Common techniques include parameter-efficient fine-tuning, data augmentation (including back-translation and synthetic data generation), and cross-lingual transfer learning, applied to improve performance on tasks such as machine translation, speech recognition, and sentiment analysis for under-resourced languages. These advancements are crucial for bridging the digital divide and enabling access to AI-powered tools and services for a wider range of languages and communities.
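As a concrete illustration of one of these techniques, the sketch below shows parameter-efficient fine-tuning with LoRA using the Hugging Face transformers and peft libraries; the base model, target modules, and hyperparameters are illustrative assumptions rather than choices taken from the papers listed on this page.

```python
# Minimal LoRA sketch for parameter-efficient fine-tuning, assuming the
# Hugging Face `transformers` and `peft` libraries are installed. Model name,
# target modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model_name = "bigscience/bloom-560m"  # small multilingual base model (assumed choice)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small trainable low-rank matrices into the attention projections,
# so only a small fraction of the weights are updated during fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                # rank of the low-rank update
    lora_alpha=16,      # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the (small) share of trainable weights

# The wrapped model can then be trained with a standard Trainer loop on a small
# in-language corpus for the low-resource target language.
```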
Papers
Bridge-Coder: Unlocking LLMs' Potential to Overcome Language Gaps in Low-Resource Code
Jipeng Zhang, Jianshu Zhang, Yuanzhe Li, Renjie Pi, Rui Pan, Runtao Liu, Ziqiang Zheng, Tong Zhang
LLMs for Extremely Low-Resource Finno-Ugric Languages
Taido Purason, Hele-Andra Kuulmets, Mark Fishel
Monolingual and Multilingual Misinformation Detection for Low-Resource Languages: A Comprehensive Survey
Xinyu Wang, Wenbo Zhang, Sarah Rajtmajer
Together We Can: Multilingual Automatic Post-Editing for Low-Resource Languages
Sourabh Deoghare, Diptesh Kanojia, Pushpak Bhattacharyya
PETAH: Parameter Efficient Task Adaptation for Hybrid Transformers in a resource-limited Context
Maximilian Augustin, Syed Shakib Sarwar, Mostafa Elhoushi, Sai Qian Zhang, Yuecheng Li, Barbara De Salvo
A Survey on LLM-based Code Generation for Low-Resource and Domain-Specific Programming Languages
Sathvik Joel, Jie JW Wu, Fatemeh H. Fard
Improving Arabic Multi-Label Emotion Classification using Stacked Embeddings and Hybrid Loss Function
Muhammad Azeem Aslam, Wang Jun, Nisar Ahmed, Muhammad Imran Zaman, Li Yanan, Hu Hongfei, Wang Shiyu, Xin Liu