Label Information
Label information is crucial for supervised machine learning, and researchers are actively investigating how to use it more efficiently, or replace it altogether, in various contexts. Current research focuses on methods that leverage limited or noisy labels, including self-supervised learning, positive-unlabeled learning, and the incorporation of visual prompts or label-enhanced representations into model architectures such as deep predictive coding networks, large language models, and graph neural networks. These advances aim to improve model performance, address ethical concerns about biased labels, and enable applications in fields such as image matting, extreme classification, and federated learning, where labeled data is scarce or expensive to obtain.
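Of the techniques named above, positive-unlabeled (PU) learning is the easiest to illustrate concretely. The minimal sketch below follows the classic Elkan-Noto recipe, which is one standard PU approach (not the method of any paper listed here): fit a classifier to separate labeled from unlabeled examples, estimate the labeling frequency c = P(s=1 | y=1) on held-out labeled positives, and divide the classifier's scores by c to recover the true positive posterior. The synthetic data and variable names are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical toy data for the PU setting: s marks which examples carry
# a positive label (s=1); everything else is unlabeled (s=0) and may be
# either positive or negative.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_y = (X[:, 0] + X[:, 1] > 0).astype(int)       # latent ground truth (unseen in practice)
s = true_y * (rng.random(1000) < 0.3).astype(int)  # only ~30% of positives get labeled

# Step 1: fit a "non-traditional" classifier that predicts P(s=1 | x),
# i.e. labeled vs. unlabeled rather than positive vs. negative.
X_train, X_hold, s_train, s_hold = train_test_split(X, s, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_train, s_train)

# Step 2: estimate c = P(s=1 | y=1) as the mean score on held-out
# labeled positives (Elkan & Noto's e1 estimator).
c = clf.predict_proba(X_hold[s_hold == 1])[:, 1].mean()

# Step 3: recover P(y=1 | x) = P(s=1 | x) / c
# (scores can slightly exceed 1 in practice; clip if needed).
posterior = np.clip(clf.predict_proba(X)[:, 1] / c, 0.0, 1.0)
print("Estimated labeling frequency c:", round(c, 3))
```

The key assumption is "selected completely at random": labeled positives are a uniform sample of all positives, which is what makes the single constant c sufficient to correct the scores.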
Papers
Well-calibrated Confidence Measures for Multi-label Text Classification with a Large Number of Labels
Lysimachos Maltoudoglou, Andreas Paisios, Ladislav Lenc, Jiří Martínek, Pavel Král, Harris Papadopoulos
Labels Need Prompts Too: Mask Matching for Natural Language Understanding Tasks
Bo Li, Wei Ye, Quansen Wang, Wen Zhao, Shikun Zhang
Estimating calibration error under label shift without labels
Teodora Popordanoska, Gorjan Radevski, Tinne Tuytelaars, Matthew B. Blaschko