Active Learning
Active learning is a machine learning paradigm focused on optimizing data labeling efficiency by strategically selecting the most informative samples for annotation from a larger unlabeled pool. Current research emphasizes developing novel acquisition functions and data pruning strategies to reduce computational costs associated with large datasets, exploring the integration of active learning with various model architectures (including deep neural networks, Gaussian processes, and language models), and addressing challenges like privacy preservation and handling open-set noise. This approach holds significant promise for reducing the substantial cost and effort of data labeling in diverse fields, ranging from image classification and natural language processing to materials science and healthcare.
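The selection step described above is driven by an acquisition function that scores unlabeled samples by informativeness. As a minimal sketch (not taken from any of the papers listed below), the classic entropy-based uncertainty sampling strategy can be written as follows; the function and variable names are illustrative:

```python
import numpy as np

def entropy_acquisition(probs: np.ndarray, k: int) -> np.ndarray:
    """Select the k pool indices whose predictive class distributions
    have the highest entropy, i.e., where the model is least certain.

    probs: (n_samples, n_classes) array of predicted probabilities
    for the unlabeled pool, e.g., softmax outputs of a classifier.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Highest-entropy samples first; take the top k for annotation.
    return np.argsort(entropy)[::-1][:k]

# Toy unlabeled pool: 4 samples, 3 classes.
pool_probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction
    [0.34, 0.33, 0.33],  # nearly uniform: most uncertain
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
])
query_indices = entropy_acquisition(pool_probs, k=2)
```

In a full active-learning loop, the selected samples would be sent to an annotator, added to the labeled set, and the model retrained before the next round of selection; the acquisition function is the component that the novel strategies surveyed here seek to improve.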
Papers
Multi-Task Consistency for Active Learning
Aral Hekimoglu, Philipp Friedrich, Walter Zimmer, Michael Schmidt, Alvaro Marcos-Ramiro, Alois C. Knoll
M-VAAL: Multimodal Variational Adversarial Active Learning for Downstream Medical Image Analysis Tasks
Bidur Khanal, Binod Bhattarai, Bishesh Khanal, Danail Stoyanov, Cristian A. Linte
Hyperbolic Active Learning for Semantic Segmentation under Domain Shift
Luca Franco, Paolo Mandica, Konstantinos Kallidromitis, Devin Guillory, Yu-Teng Li, Trevor Darrell, Fabio Galasso
Taming Small-sample Bias in Low-budget Active Learning
Linxin Song, Jieyu Zhang, Xiaotian Lu, Tianyi Zhou
Perturbation-Based Two-Stage Multi-Domain Active Learning
Rui He, Zeyu Dai, Shan He, Ke Tang
Parallelized Acquisition for Active Learning using Monte Carlo Sampling
Jesús Torrado, Nils Schöneberg, Jonas El Gammal
atTRACTive: Semi-automatic white matter tract segmentation using active learning
Robin Peretzke, Klaus Maier-Hein, Jonas Bohn, Yannick Kirchhoff, Saikat Roy, Sabrina Oberli-Palma, Daniela Becker, Pavlina Lenga, Peter Neher