Supervised Learning
Supervised learning, a core machine learning paradigm, trains models to predict outputs from labeled input data. Current research emphasizes improving model efficiency and robustness, particularly when labeled data are limited or noisy, and explores techniques such as self-supervised pre-training, active learning for data selection, and ensemble methods to boost accuracy and address class imbalance. These advances matter for applications ranging from medical image analysis and infrastructure inspection to natural language processing and targeted advertising, enabling more accurate and reliable predictions with less reliance on extensive labeled datasets.
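
As a concrete illustration of this setup, the sketch below fits an ensemble classifier to an imbalanced, synthetically labeled dataset with scikit-learn. The dataset, model choice, and parameter values are illustrative assumptions for this page, not drawn from any of the papers listed below.

    # Minimal sketch of the supervised learning setup described above:
    # labeled data, an ensemble classifier, and class-imbalance handling.
    # All data and parameters are synthetic/illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import balanced_accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic labeled data with a 9:1 class imbalance.
    X, y = make_classification(
        n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )

    # An ensemble method (random forest) with balanced class weights
    # as one simple way to counter the class imbalance.
    model = RandomForestClassifier(
        n_estimators=200, class_weight="balanced", random_state=0
    )
    model.fit(X_train, y_train)

    print("balanced accuracy:", balanced_accuracy_score(y_test, model.predict(X_test)))

Balanced accuracy is reported rather than plain accuracy because, on imbalanced labels, a classifier that always predicts the majority class can otherwise look deceptively strong.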
Papers
Self-training of Machine Learning Models for Liver Histopathology: Generalization under Clinical Shifts
Jin Li, Deepta Rajan, Chintan Shah, Dinkar Juyal, Shreya Chakraborty, Chandan Akiti, Filip Kos, Janani Iyer, Anand Sampat, Ali Behrooz
Utilizing Synthetic Data in Supervised Learning for Robust 5-DoF Magnetic Marker Localization
Mengfan Wu, Thomas Langerak, Otmar Hilliges, Juan Zarate
On the Informativeness of Supervision Signals
Ilia Sucholutsky, Ruairidh M. Battleday, Katherine M. Collins, Raja Marjieh, Joshua C. Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, Thomas L. Griffiths
More Speaking or More Speakers?
Dan Berrebbi, Ronan Collobert, Navdeep Jaitly, Tatiana Likhomanenko