Weakly Supervised Text Classification
Weakly supervised text classification aims to train accurate text classifiers with little or no labeled data, relying instead on readily available signals such as class names or seed words. Current research focuses on leveraging large language models (LLMs) to generate and refine pseudo-labels, often combining prompting, rule-based systems, and retrieval-augmented training to improve classification accuracy. The field matters because it cuts the substantial cost and effort of manual annotation, enabling text classification in domains where labeled data is scarce, such as healthcare and scientific literature analysis.
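To make the setup concrete, the sketch below shows one common weak-supervision recipe: documents are pseudo-labeled by matching user-provided seed words, and an ordinary supervised classifier is then trained on those pseudo-labels. This is a minimal illustration, not the method of any particular paper; the class names, seed words, and documents are hypothetical placeholders, and an LLM-based variant would simply replace the pseudo_label function with a prompted model call.

```python
# Minimal sketch of seed-word pseudo-labeling (one form of weak supervision).
# All classes, seed words, and documents are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

SEED_WORDS = {
    "sports":   {"game", "team", "score", "season"},
    "politics": {"election", "senate", "policy", "vote"},
}

unlabeled_docs = [
    "The team clinched the season title with a late score",
    "The senate vote delayed the new election policy",
    "A quiet afternoon with no particular topic",
]

def pseudo_label(doc: str) -> str | None:
    """Assign the class whose seed words appear most often; abstain if none match."""
    tokens = set(doc.lower().split())
    hits = {cls: len(tokens & seeds) for cls, seeds in SEED_WORDS.items()}
    best = max(hits, key=hits.get)
    return best if hits[best] > 0 else None

# Keep only documents that received a pseudo-label; uncovered ones are dropped.
labeled = [(d, y) for d in unlabeled_docs if (y := pseudo_label(d)) is not None]
docs, labels = zip(*labeled)

# Train a standard supervised classifier on the pseudo-labeled subset.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

print(clf.predict(vectorizer.transform(["Who won the game last night?"])))
```

In practice, refinement steps discussed in this line of work (e.g., filtering low-confidence pseudo-labels or iteratively re-labeling with the trained model) would sit between the pseudo-labeling and training stages.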
Papers
June 17, 2024
March 5, 2024
February 29, 2024
October 31, 2023
August 11, 2023
June 24, 2023
June 12, 2023
June 5, 2023
May 24, 2023
May 22, 2023
April 4, 2023
December 11, 2022
October 27, 2022
October 13, 2022
June 24, 2022
May 25, 2022
May 13, 2022