Supervised Learning
Supervised learning, a core machine learning paradigm, trains models to predict outputs from labeled input data. Current research emphasizes improving model efficiency and robustness, particularly when data are limited or noisy, through techniques such as self-supervised pre-training, active learning for data selection, and ensemble methods that enhance accuracy and mitigate class imbalance. These advances matter for applications ranging from medical image analysis and infrastructure inspection to natural language processing and targeted advertising, enabling more accurate and reliable predictions with less reliance on extensive labeled datasets.
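As a concrete illustration of the supervised-learning loop described above — fit a model on labeled (input, label) pairs, then predict labels for unseen inputs — here is a minimal sketch using a nearest-centroid classifier. The classifier choice and the toy data are purely illustrative assumptions, not drawn from any of the papers below.

```python
from collections import defaultdict

def fit_centroids(data):
    """data: list of ((x1, x2), label) pairs -> per-class centroids."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (x1, x2), label in data:
        s = sums[label]
        s[0] += x1
        s[1] += x2
        s[2] += 1
    return {lbl: (s[0] / s[2], s[1] / s[2]) for lbl, s in sums.items()}

def predict(centroids, point):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def sqdist(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(centroids, key=lambda lbl: sqdist(centroids[lbl]))

# Labeled training set: two well-separated classes (hypothetical data).
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
model = fit_centroids(train)
print(predict(model, (0.05, 0.1)))  # -> a
print(predict(model, (0.95, 1.0)))  # -> b
```

The same fit/predict interface generalizes to the richer models the research above studies; only the training objective and model family change.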
Papers
An Empirical Study into Clustering of Unseen Datasets with Self-Supervised Encoders
Scott C. Lowe, Joakim Bruslund Haurum, Sageev Oore, Thomas B. Moeslund, Graham W. Taylor
An Axiomatic Approach to Loss Aggregation and an Adapted Aggregating Algorithm
Armando J. Cabrera Pacheco, Rabanus Derr, Robert C. Williamson
Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach
Huy V. Vo, Vasil Khalidov, Timothée Darcet, Théo Moutakanni, Nikita Smetanin, Marc Szafraniec, Hugo Touvron, Camille Couprie, Maxime Oquab, Armand Joulin, Hervé Jégou, Patrick Labatut, Piotr Bojanowski
Cross-Validated Off-Policy Evaluation
Matej Cief, Branislav Kveton, Michal Kompan