Label Information
Label information is crucial for supervised machine learning, and current research actively investigates how to use it more efficiently, or replace it altogether, in a variety of contexts. One focus is methods that leverage limited or noisy labels, including self-supervised learning, positive-unlabeled learning, and the incorporation of visual prompts or label-enhanced representations into model architectures such as deep predictive coding networks, large language models, and graph neural networks. These advances aim to improve model performance, address ethical concerns around biased labels, and enable applications in diverse fields such as image matting, extreme classification, and federated learning, where labeled data is scarce or expensive to obtain.
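To make one of these techniques concrete, the sketch below illustrates positive-unlabeled learning via the classic Elkan–Noto calibration trick: train a probabilistic classifier to separate labeled positives from the unlabeled pool, then rescale its scores by an estimated label frequency. This is a minimal example assuming scikit-learn and a synthetic toy dataset; the variable names are illustrative and not drawn from any of the papers listed below.

```python
# Minimal positive-unlabeled (PU) learning sketch using the
# Elkan-Noto calibration approach; toy data, illustrative names.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary data; pretend only a fraction of positives are labeled.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
rng = np.random.default_rng(0)
labeled = (y == 1) & (rng.random(len(y)) < 0.3)  # s=1: observed positives
s = labeled.astype(int)                          # s=0: unlabeled pool

X_train, X_hold, s_train, s_hold = train_test_split(
    X, s, test_size=0.25, random_state=0
)

# Step 1: train a probabilistic classifier to separate labeled
# positives (s=1) from the unlabeled examples (s=0).
g = LogisticRegression(max_iter=1000).fit(X_train, s_train)

# Step 2: estimate c = P(s=1 | y=1) as the mean score over
# held-out labeled positives (Elkan & Noto, 2008).
c = g.predict_proba(X_hold[s_hold == 1])[:, 1].mean()

# Step 3: recover calibrated positive-class probabilities:
# P(y=1 | x) = P(s=1 | x) / c.
p_y = np.clip(g.predict_proba(X)[:, 1] / c, 0.0, 1.0)
print(f"estimated c = {c:.3f}; mean P(y=1|x) = {p_y.mean():.3f}")
```

The key design choice is that the unlabeled set is treated as a noisy negative class, and the single scalar c corrects for the fact that only some positives ever receive labels; the same idea underlies many of the limited-label methods surveyed above.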
Papers
Ranked from Within: Ranking Large Multimodal Models for Visual Question Answering Without Labels
Weijie Tu, Weijian Deng, Dylan Campbell, Yu Yao, Jiyang Zheng, Tom Gedeon, Tongliang Liu
Measuring Pre-training Data Quality without Labels for Time Series Foundation Models
Songkang Wen, Vasilii Feofanov, Jianfeng Zhang
Beyond Labels: Aligning Large Language Models with Human-like Reasoning
Muhammad Rafsan Kabir, Rafeed Mohammad Sultan, Ihsanul Haque Asif, Jawad Ibn Ahad, Fuad Rahman, Mohammad Ruhul Amin, Nabeel Mohammed, Shafin Rahman
Training Matting Models without Alpha Labels
Wenze Liu, Zixuan Ye, Hao Lu, Zhiguo Cao, Xiangyu Yue