Supervised Learning
Supervised learning, a core machine learning paradigm, trains models to predict outputs from labeled input data. Current research emphasizes improving model efficiency and robustness, particularly when data is limited or noisy, exploring techniques such as self-supervised pre-training, active learning for data selection, and ensemble methods that improve accuracy and mitigate class imbalance. These advances matter across applications from medical image analysis and infrastructure inspection to natural language processing and targeted advertising, enabling more accurate and reliable predictions with less reliance on large labeled datasets.
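To make the paradigm concrete, below is a minimal sketch of a supervised pipeline, assuming scikit-learn and synthetic data (neither comes from the papers listed): it fits an ensemble classifier on class-imbalanced labeled data and evaluates on a held-out split, touching two of the themes mentioned above (ensembles and class imbalance).

```python
# Minimal supervised learning sketch (illustrative only; not taken
# from any paper below). Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic labeled data with a 9:1 class imbalance.
X, y = make_classification(
    n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# An ensemble method (random forest) with class weighting to
# counter the label imbalance.
model = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0
)
model.fit(X_train, y_train)      # learn the input-to-label mapping
y_pred = model.predict(X_test)   # predict labels for unseen inputs

# Balanced accuracy is preferred over plain accuracy here, since a
# majority-class predictor would otherwise score ~0.9.
print(f"balanced accuracy: {balanced_accuracy_score(y_test, y_pred):.3f}")
```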
Papers
GPS-SSL: Guided Positive Sampling to Inject Prior Into Self-Supervised Learning
Aarash Feizi, Randall Balestriero, Adriana Romero-Soriano, Reihaneh Rabbany
Evaluating Fairness in Self-supervised and Supervised Models for Sequential Data
Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar
FusDom: Combining In-Domain and Out-of-Domain Knowledge for Continuous Self-Supervised Learning
Ashish Seth, Sreyan Ghosh, S. Umesh, Dinesh Manocha
Benchmarking and Analyzing In-context Learning, Fine-tuning and Supervised Learning for Biomedical Knowledge Curation: a focused study on chemical entities of biological interest
Emily Groves, Minhong Wang, Yusuf Abdulle, Holger Kunz, Jason Hoelscher-Obermaier, Ronin Wu, Honghan Wu
Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning
Mayur Patidar, Riya Sawhney, Avinash Singh, Biswajit Chatterjee, Mausam, Indrajit Bhattacharya
Human-in-the-loop: Towards Label Embeddings for Measuring Classification Difficulty
Katharina Hechinger, Christoph Koller, Xiao Xiang Zhu, Göran Kauermann