Low-Resource Text Classification
Low-resource text classification focuses on building effective text classifiers from limited labeled training data, a central challenge in natural language processing for many languages and domains. Current research emphasizes leveraging large language models (LLMs) through techniques such as parameter-efficient fine-tuning (PEFT) and prompt engineering, alongside active learning strategies that select the most informative examples for annotation and methods that incorporate external knowledge sources. These advances improve the accuracy and efficiency of classifiers in resource-constrained settings, with applications ranging from clinical text analysis to social media monitoring, and they enable NLP development for under-resourced languages.
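To make the PEFT idea concrete, the sketch below fine-tunes a small encoder for classification using LoRA adapters on a deliberately tiny labeled subset. It assumes the Hugging Face transformers, peft, and datasets libraries; the roberta-base checkpoint, the IMDB dataset, the 256-example subsample, and all hyperparameters are illustrative choices, not the method of any particular paper.

```python
# Minimal PEFT (LoRA) sketch for low-resource text classification.
# Assumes: transformers, peft, datasets installed; roberta-base and IMDB
# are placeholder choices to simulate a small labeled budget.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model
from datasets import load_dataset

model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Wrap the base model with LoRA adapters so only a small fraction of the
# parameters is trained -- the core idea behind PEFT in low-resource settings.
lora_config = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # usually only a small percentage of the full model

# Simulate the low-resource regime by subsampling a small labeled training set.
dataset = load_dataset("imdb")
small_train = dataset["train"].shuffle(seed=0).select(range(256))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

small_train = small_train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="peft-lowres",
                           per_device_train_batch_size=8,
                           num_train_epochs=5,
                           learning_rate=2e-4),
    train_dataset=small_train,
)
trainer.train()
```

Because only the low-rank adapter weights are updated while the backbone stays frozen, this style of fine-tuning tends to overfit less and train faster than full fine-tuning when labels are scarce.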