Active Learning
Active learning is a machine learning paradigm that improves data labeling efficiency by strategically selecting the most informative samples for annotation from a larger unlabeled pool. Current research emphasizes novel acquisition functions and data pruning strategies that reduce the computational cost of working with large datasets, the integration of active learning with various model architectures (including deep neural networks, Gaussian processes, and language models), and challenges such as privacy preservation and open-set noise. The approach holds significant promise for reducing the substantial cost and effort of data labeling across diverse fields, from image classification and natural language processing to materials science and healthcare.
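To make the selection loop concrete, below is a minimal sketch of pool-based active learning with an entropy-based acquisition function. The choice of scikit-learn, a logistic regression model, and entropy as the acquisition score are illustrative assumptions for this sketch, not methods taken from the papers listed below.

```python
# Minimal pool-based active learning sketch (assumptions: scikit-learn,
# logistic regression, entropy acquisition; real systems vary).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data standing in for a large unlabeled pool.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

def entropy_acquisition(probs):
    """Score each pool sample by predictive entropy (higher = more informative)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

rounds, batch_size = 10, 20  # annotation rounds and queries per round
for _ in range(rounds):
    # Retrain on the current labeled set, then score the remaining pool.
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    scores = entropy_acquisition(model.predict_proba(X[unlabeled]))
    # Query the most uncertain samples and move them to the labeled set,
    # simulating an oracle that supplies their true labels.
    query_positions = np.argsort(scores)[-batch_size:]
    for pos in sorted(query_positions, reverse=True):
        labeled.append(unlabeled.pop(pos))

print(f"Labeled set size after {rounds} rounds: {len(labeled)}")
```

In practice the acquisition function is the main research lever: the same loop accommodates margin sampling, expected model change, diversity-aware batch selection, or LLM-generated annotations in place of the simulated oracle.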
Papers
LLMs in the Loop: Leveraging Large Language Model Annotations for Active Learning in Low-Resource Languages
Nataliia Kholodna, Sahib Julka, Mohammad Khodadadi, Muhammed Nurullah Gumus, Michael Granitzer
Hallucination Diversity-Aware Active Learning for Text Summarization
Yu Xia, Xu Liu, Tong Yu, Sungchul Kim, Ryan A. Rossi, Anup Rao, Tung Mai, Shuai Li