Active Learning
Active learning is a machine learning paradigm focused on optimizing data labeling efficiency by strategically selecting the most informative samples for annotation from a larger unlabeled pool. Current research emphasizes developing novel acquisition functions and data pruning strategies to reduce computational costs associated with large datasets, exploring the integration of active learning with various model architectures (including deep neural networks, Gaussian processes, and language models), and addressing challenges like privacy preservation and handling open-set noise. This approach holds significant promise for reducing the substantial cost and effort of data labeling in diverse fields, ranging from image classification and natural language processing to materials science and healthcare.
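To make the core loop concrete, here is a minimal sketch of pool-based active learning with a least-confidence uncertainty acquisition function. The synthetic dataset, logistic-regression model, seed size, and labeling budget are illustrative assumptions and are not taken from any of the papers listed below.

```python
# Minimal sketch: pool-based active learning with uncertainty sampling.
# Dataset, model, and budget are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "unlabeled pool" plus a held-out test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, y_pool = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

# Seed the labeled set with a handful of randomly chosen examples.
labeled = list(rng.choice(len(X_pool), size=10, replace=False))
unlabeled = [i for i in range(len(X_pool)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)

for round_ in range(10):  # labeling budget: 10 rounds of 20 queries each
    model.fit(X_pool[labeled], y_pool[labeled])

    # Acquisition function: least-confidence uncertainty sampling.
    probs = model.predict_proba(X_pool[unlabeled])
    uncertainty = 1.0 - probs.max(axis=1)
    query = np.argsort(uncertainty)[-20:]  # indices of the 20 most uncertain samples

    # "Annotate" the queried samples (labels are already known in this toy setup).
    newly_labeled = [unlabeled[i] for i in query]
    labeled.extend(newly_labeled)
    unlabeled = [i for i in unlabeled if i not in set(newly_labeled)]

    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"round {round_}: {len(labeled)} labels, test accuracy {acc:.3f}")
```

In practice the acquisition function is where most of the research effort goes: the uncertainty score above could be swapped for margin sampling, entropy, expected model change, or a diversity-aware criterion without changing the rest of the loop.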
Papers
Pool-Based Active Learning with Proper Topological Regions
Lies Hadjadj, Emilie Devijver, Remi Molinier, Massih-Reza Amini
Active Learning on Neural Networks through Interactive Generation of Digit Patterns and Visual Representation
Dong H. Jeong, Jin-Hee Cho, Feng Chen, Audun Josang, Soo-Yeon Ji
EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval
Yiyao Yu, Junjie Wang, Yuxiang Zhang, Lin Zhang, Yujiu Yang, Tetsuya Sakai
Towards Free Data Selection with General-Purpose Models
Yichen Xie, Mingyu Ding, Masayoshi Tomizuka, Wei Zhan
Assessment and treatment of visuospatial neglect using active learning with Gaussian processes regression
Ivan De Boi, Elissa Embrechts, Quirine Schatteman, Rudi Penne, Steven Truijen, Wim Saeys
Two-Step Active Learning for Instance Segmentation with Uncertainty and Diversity Sampling
Ke Yu, Stephen Albro, Giulia DeSalvo, Suraj Kothawade, Abdullah Rashwan, Sasan Tavakkol, Kayhan Batmanghelich, Xiaoqi Yin
Comparing Active Learning Performance Driven by Gaussian Processes or Bayesian Neural Networks for Constrained Trajectory Exploration
Sapphira Akins, Frances Zhu