Much Natural Language Processing (NLP) research focuses on improving the efficiency, accuracy, and applicability of large language models (LLMs) across diverse tasks. Current efforts concentrate on enhancing LLMs' information extraction capabilities, integrating them with external knowledge sources such as relational databases, and developing more efficient architectures and fine-tuning methods, including prompt engineering and model compression. These advances are crucial for extending NLP into resource-constrained environments and specialized domains, ultimately improving applications ranging from question answering and code generation to sentiment analysis and hate speech detection.
Papers
Adapting to the Low-Resource Double-Bind: Investigating Low-Compute Methods on Low-Resource African Languages
Colin Leong, Herumb Shandilya, Bonaventure F. P. Dossou, Atnafu Lambebo Tonja, Joel Mathew, Abdul-Hakeem Omotayo, Oreen Yousuf, Zainab Akinjobi, Chris Chinenye Emezue, Shamsudeen Muhammad, Steven Kolawole, Younwoo Choi, Tosin Adewumi
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
Xingwei He, Zhenghao Lin, Yeyun Gong, A-Long Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen