Supervised Learning
Supervised learning, a core machine learning paradigm, trains models to predict outputs from labeled input data. Current research emphasizes improving model efficiency and robustness, particularly when labeled data are limited or noisy, through techniques such as self-supervised pre-training, active learning for data selection, and ensemble methods that improve accuracy and mitigate class imbalance. These advances are important for applications ranging from medical image analysis and infrastructure inspection to natural language processing and targeted advertising, enabling more accurate and reliable predictions with less reliance on extensive labeled datasets.
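As a concrete illustration of the labeled-data setup described above, the minimal sketch below fits a classifier on labeled training examples and evaluates its predictions on held-out inputs. The choice of scikit-learn, the synthetic dataset, and the logistic-regression model are illustrative assumptions and are not drawn from the papers listed here.

# Minimal sketch of supervised learning: fit a model on labeled
# (input, output) pairs, then predict labels for unseen inputs.
# scikit-learn and the synthetic data are illustrative choices only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled data: feature matrix X and target labels y.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train on the labeled split, then evaluate on held-out examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, y_pred):.3f}")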
Papers
Evaluating Self-Supervised Learning via Risk Decomposition
Yann Dubois, Tatsunori Hashimoto, Percy Liang
The SSL Interplay: Augmentations, Inductive Bias, and Generalization
Vivien Cabannes, Bobak T. Kiani, Randall Balestriero, Yann LeCun, Alberto Bietti
Explainability in the Service of Knowledge Extraction: An Application to Medical Data (in French: L'explicabilité au service de l'extraction de connaissances : application à des données médicales)
Robin Cugny, Emmanuel Doumard, Elodie Escriva, Haomiao Wang
Text classification in shipping industry using unsupervised models and Transformer based supervised models
Ying Xie, Dongping Song
BTS: Bifold Teacher-Student in Semi-Supervised Learning for Indoor Two-Room Presence Detection Under Time-Varying CSI
Li-Hsiang Shen, Kai-Jui Chen, An-Hung Hsiao, Kai-Ten Feng