Complementary Label Learning
Complementary label learning (CLL) addresses the challenge of training classifiers when each example is annotated only with a class it does *not* belong to, rather than its true class. Current research focuses on robust algorithms and model architectures, such as graph neural networks, that effectively exploit this weak supervision, often handling class imbalance and noisy labels through weighted loss functions or risk-correction techniques. The setting matters because complementary labels are cheaper to collect than ordinary labels, reducing annotation costs in applications such as image classification and natural language processing where complete labeling is expensive or impractical. Developing more accurate and reliable CLL methods is therefore important for advancing weakly supervised learning.
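To make the idea concrete, here is a minimal sketch (not any specific paper's estimator) of one simple CLL surrogate loss: given a complementary label ȳ, penalize the probability mass the model assigns to that forbidden class via -log(1 - p_ȳ). The function names and the NumPy setup below are illustrative assumptions, not an established API.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def complementary_loss(logits, comp_labels):
    """Mean of -log(1 - p_{ybar}): push probability mass away from the
    class each example is known NOT to belong to (a simple CLL surrogate)."""
    p = softmax(logits)
    p_comp = p[np.arange(len(comp_labels)), comp_labels]
    return float(np.mean(-np.log(1.0 - p_comp + 1e-12)))

# Toy check: the loss shrinks as the model moves mass off the complementary class.
logits_bad = np.array([[2.0, 0.0, 0.0]])    # most mass on class 0
logits_good = np.array([[-2.0, 1.0, 1.0]])  # little mass on class 0
ybar = np.array([0])                        # "this example is not class 0"
assert complementary_loss(logits_good, ybar) < complementary_loss(logits_bad, ybar)
```

In practice, methods weight or correct such losses (e.g., per-class weights for imbalance, or risk-correction terms) so that minimizing the complementary risk provably approximates minimizing the ordinary classification risk.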