Supervised Learning
Supervised learning, a core machine learning paradigm, trains models to predict outputs from labeled input-output pairs. Current research emphasizes improving model efficiency and robustness, particularly when labels are scarce or noisy, through techniques such as self-supervised pre-training, active learning for data selection, and ensemble methods that boost accuracy and mitigate class imbalance. These advances matter for applications ranging from medical image analysis and infrastructure inspection to natural language processing and targeted advertising, enabling more accurate and reliable predictions with less reliance on large labeled datasets. A minimal illustration of the paradigm follows.
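The sketch below is a generic, illustrative example of supervised learning, assuming scikit-learn is available; the ensemble model and the class-weighting choice reflect two themes mentioned above (ensemble methods, class imbalance) and are not the methods of any paper listed here.

```python
# Minimal supervised-learning sketch (assumes scikit-learn is installed).
# Fit a model on labeled training data, then predict labels for held-out inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic labeled dataset with a 9:1 class imbalance.
X, y = make_classification(
    n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Ensemble classifier; class_weight="balanced" upweights the minority class.
model = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0
)
model.fit(X_train, y_train)

# Evaluate on held-out labeled data.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```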
Papers
A Model-free Closeness-of-influence Test for Features in Supervised Learning
Mohammad Mehrabi, Ryan A. Rossi
Mean-field Analysis of Generalization Errors
Gholamali Aminian, Samuel N. Cohen, Łukasz Szpruch
A Universal Unbiased Method for Classification from Aggregate Observations
Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen
Masking Augmentation for Supervised Learning
Byeongho Heo, Taekyung Kim, Sangdoo Yun, Dongyoon Han