Supervised Learning
Supervised learning is a core machine learning paradigm in which models are trained to predict outputs from labeled input data. Current research emphasizes improving model efficiency and robustness, particularly when labeled data are scarce or noisy, exploring techniques such as self-supervised pre-training, active learning for data selection, and ensemble methods to improve accuracy and handle class imbalance. These advances are crucial for applications ranging from medical image analysis and infrastructure inspection to natural language processing and targeted advertising, enabling more accurate and reliable predictions with less reliance on large labeled datasets.
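As a minimal sketch of the paradigm described above (not drawn from any of the papers below), the following example fits a linear model to labeled (input, output) pairs by gradient descent on the mean squared error; the data, learning rate, and epoch count are illustrative choices:

```python
# Minimal supervised learning example: fit y ≈ w*x + b to labeled
# pairs (x, y) by gradient descent on the mean squared error.

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Learn weight w and bias b minimizing MSE on the labeled set."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Prediction errors on the labeled training examples.
        errs = [(w * x + b) - y for x, y in zip(xs, ys)]
        # Gradients of the MSE with respect to w and b.
        grad_w = 2.0 / n * sum(e * x for e, x in zip(errs, xs))
        grad_b = 2.0 / n * sum(errs)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Labeled training data generated from the target relation y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

w, b = fit_linear(xs, ys)
```

After training, `w` and `b` recover the underlying relation closely, illustrating how a supervised learner generalizes the input–output mapping implicit in its labels.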
Papers
Hybrid Feature- and Similarity-Based Models for Joint Prediction and Interpretation
Jacqueline K. Kueper, Jennifer Rayner, Daniel J. Lizotte
A Robust Learning Rule for Soft-Bounded Memristive Synapses Competitive with Supervised Learning in Standard Spiking Neural Networks
Thomas F. Tiotto, Jelmer P. Borst, Niels A. Taatgen
Positive Feature Values Prioritized Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes Classifier for Hierarchical Feature Spaces
Cen Wan
TOV: The Original Vision Model for Optical Remote Sensing Image Understanding via Self-supervised Learning
Chao Tao, Ji Qi, Guo Zhang, Qing Zhu, Weipeng Lu, Haifeng Li
Active Learning with Label Comparisons
Gal Yona, Shay Moran, Gal Elidan, Amir Globerson
Towards efficient representation identification in supervised learning
Kartik Ahuja, Divyat Mahajan, Vasilis Syrgkanis, Ioannis Mitliagkas