Low Resource
Low-resource settings in natural language processing and related fields present significant challenges due to limited data and computational resources. Current research focuses on adapting existing large language models (LLMs) and other deep learning architectures, such as U-Net and transformer models, to these settings. Common techniques include parameter-efficient fine-tuning, data augmentation (including back-translation and synthetic data generation), and cross-lingual transfer learning, applied to tasks such as machine translation, speech recognition, and sentiment analysis for under-resourced languages. These advances are crucial for bridging the digital divide and extending AI-powered tools and services to a wider range of languages and communities.
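To make the parameter-efficient fine-tuning idea concrete, the sketch below wraps a small multilingual seq2seq model with LoRA adapters, so that only a tiny fraction of weights are trained on scarce in-language data. This is a minimal illustration, assuming the Hugging Face `transformers` and `peft` libraries; the model name (`google/mt5-small`) and the LoRA hyperparameters are illustrative choices, not prescriptions from any of the papers listed here.

```python
# Minimal LoRA sketch for low-resource fine-tuning (assumes `transformers` and `peft`).
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model for a low-resource machine-translation setup.
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

# LoRA injects small trainable low-rank matrices into selected attention
# projections; the base model's original weights remain frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q", "v"],  # mT5's attention query/value projections
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The wrapped model can then be trained with an ordinary training loop or `transformers.Trainer`; because only the adapter weights update, the approach fits the small datasets and modest compute budgets typical of low-resource work.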
Papers
MParrotTTS: Multilingual Multi-speaker Text to Speech Synthesis in Low Resource Setting
Neil Shah, Vishal Tambrahalli, Saiteja Kosgi, Niranjan Pedanekar, Vineet Gandhi
Language-universal phonetic encoder for low-resource speech recognition
Siyuan Feng, Ming Tu, Rui Xia, Chuanzeng Huang, Yuxuan Wang
Language-Universal Phonetic Representation in Multilingual Speech Pretraining for Low-Resource Speech Recognition
Siyuan Feng, Ming Tu, Rui Xia, Chuanzeng Huang, Yuxuan Wang
Errors are Useful Prompts: Instruction Guided Task Programming with Verifier-Assisted Iterative Prompting
Marta Skreta, Naruki Yoshikawa, Sebastian Arellano-Rubach, Zhi Ji, Lasse Bjørn Kristensen, Kourosh Darvish, Alán Aspuru-Guzik, Florian Shkurti, Animesh Garg
SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization
Yi-Syuan Chen, Yun-Zhu Song, Hong-Han Shuai
Few-shot learning approaches for classifying low resource domain specific software requirements
Anmol Nayak, Hari Prasad Timmapathini, Vidhya Murali, Atul Anil Gohad
SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains
Koustava Goswami, Lukas Lange, Jun Araki, Heike Adel