Label Information
Label information is central to supervised machine learning, and current research investigates both how to use it more efficiently and how to replace it altogether. Active directions include methods that learn from limited or noisy labels, such as self-supervised learning, positive-unlabeled learning, and the incorporation of visual prompts or label-enhanced representations into architectures such as deep predictive coding networks, large language models, and graph neural networks. These advances aim to improve model performance, mitigate ethical concerns arising from biased labels, and enable applications in fields like image matting, extreme classification, and federated learning, where labeled data is scarce or expensive to obtain.
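To make one of these techniques concrete, the following is a minimal sketch of positive-unlabeled learning using the Elkan-Noto correction, which recovers P(y=1 | x) from a classifier trained on labeled-vs-unlabeled data by dividing by the label frequency c = P(s=1 | y=1). All names, the toy 1-D data, the frequency-based probability estimator, and the assumption that c is known (in practice it is estimated from held-out data) are illustrative, not drawn from any of the papers listed below.

```python
# Sketch of positive-unlabeled (PU) learning via the Elkan-Noto
# correction. Toy 1-D data and a crude counting estimator; the
# label frequency c is assumed known here for simplicity.
import random

random.seed(0)

# True positives cluster near +2.0, true negatives near -2.0.
positives = [random.gauss(2.0, 1.0) for _ in range(1000)]
negatives = [random.gauss(-2.0, 1.0) for _ in range(1000)]

# Only a fraction c of positives are labeled (s = 1); the rest go
# into the unlabeled pool together with all negatives.
c = 0.3
n_labeled = int(c * len(positives))
labeled = positives[:n_labeled]
unlabeled = positives[n_labeled:] + negatives

def g(x, width=1.0):
    """Estimate P(s=1 | x) by counting labeled vs. all points near x."""
    near_labeled = sum(1 for v in labeled if abs(v - x) < width)
    near_unlabeled = sum(1 for v in unlabeled if abs(v - x) < width)
    total = near_labeled + near_unlabeled
    return near_labeled / total if total else 0.0

def f(x):
    """Elkan-Noto: P(y=1 | x) = P(s=1 | x) / c, clipped to [0, 1]."""
    return min(g(x) / c, 1.0)

print(f(2.0))   # close to 1: a point near the positive cluster
print(f(-2.0))  # close to 0: a point near the negative cluster
```

The key point the sketch illustrates is that a classifier trained naively on labeled-vs-unlabeled data systematically underestimates positive probability, and the division by c undoes that bias when labeled positives are selected at random from all positives.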
Papers
Beyond Labels: Aligning Large Language Models with Human-like Reasoning
Muhammad Rafsan Kabir, Rafeed Mohammad Sultan, Ihsanul Haque Asif, Jawad Ibn Ahad, Fuad Rahman, Mohammad Ruhul Amin, Nabeel Mohammed, Shafin Rahman
Training Matting Models without Alpha Labels
Wenze Liu, Zixuan Ye, Hao Lu, Zhiguo Cao, Xiangyu Yue
Cross-Modal Self-Training: Aligning Images and Pointclouds to Learn Classification without Labels
Amaya Dharmasiri, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan
Unsupervised Federated Optimization at the Edge: D2D-Enabled Learning without Labels
Satyavrat Wagle, Seyyedali Hosseinalipour, Naji Khosravan, Christopher G. Brinton