Supervised Learning
Supervised learning, a core machine-learning paradigm, trains models to predict outputs from labeled input data. Current research emphasizes improving model efficiency and robustness, particularly when labeled data are limited or noisy, through techniques such as self-supervised pre-training, active learning for data selection, and ensemble methods that enhance accuracy and address class imbalance. These advances are crucial across applications ranging from medical image analysis and infrastructure inspection to natural language processing and targeted advertising, enabling more accurate and reliable predictions with less reliance on extensive labeled datasets.
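As a concrete illustration of this setup, the minimal sketch below trains an ensemble classifier on labeled data with a skewed class distribution. The use of scikit-learn, the RandomForestClassifier model, and the synthetic make_classification dataset are assumptions made for the example; none of them are drawn from the papers listed here.

    # Minimal supervised-learning sketch: fit a model to labeled data
    # and evaluate on a held-out split. The dataset is synthetic and
    # deliberately imbalanced (assumptions for illustration only).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    # Synthetic labeled data: roughly 90% of samples in class 0, 10% in class 1.
    X, y = make_classification(
        n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0
    )

    # class_weight="balanced" reweights samples inversely to class frequency,
    # one simple ensemble-based remedy for class imbalance.
    model = RandomForestClassifier(
        n_estimators=200, class_weight="balanced", random_state=0
    )
    model.fit(X_train, y_train)  # learn the input -> label mapping
    print(classification_report(y_test, model.predict(X_test)))

Balanced class weights are only one basic remedy for imbalance; the papers below explore more sophisticated strategies, such as weak supervision and active data selection.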
Papers
SuperWarp: Supervised Learning and Warping on U-Net for Invariant Subvoxel-Precise Registration
Sean I. Young, Yaël Balbastre, Adrian V. Dalca, William M. Wells, Juan Eugenio Iglesias, Bruce Fischl
Supervised Learning and Model Analysis with Compositional Data
Shimeng Huang, Elisabeth Ailer, Niki Kilbertus, Niklas Pfister
Proxyless Neural Architecture Adaptation for Supervised Learning and Self-Supervised Learning
Do-Guk Kim, Heung-Chang Lee
Active Learning with Weak Supervision for Gaussian Processes
Amanda Olmin, Jakob Lindqvist, Lennart Svensson, Fredrik Lindsten
Optical Remote Sensing Image Understanding with Weak Supervision: Concepts, Methods, and Perspectives
Jun Yue, Leyuan Fang, Pedram Ghamisi, Weiying Xie, Jun Li, Jocelyn Chanussot, Antonio J. Plaza
Hybrid Feature- and Similarity-Based Models for Joint Prediction and Interpretation
Jacqueline K. Kueper, Jennifer Rayner, Daniel J. Lizotte
A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive with Supervised Learning in Standard Spiking Neural Networks
Thomas F. Tiotto, Jelmer P. Borst, Niels A. Taatgen