Negative Learning
Negative learning is a subfield of machine learning that leverages negative information (examples of what a model *should not* predict) to improve performance and robustness. Current research applies it to semi-supervised learning, noisy-label handling, and open-set recognition, often through contrastive learning, adversarial training, or loss functions designed to push predictions away from undesired classes. The approach shows promise for improving model generalization, mitigating catastrophic forgetting in continual learning, and increasing the safety and reliability of AI systems, particularly when labeled data is scarce or labels are noisy.
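To make the loss-function idea concrete, here is a minimal sketch of one widely used formulation of a negative-learning loss. Rather than pulling the predicted probability of the true class toward 1 (the usual positive-learning cross-entropy, -log p_y), it pushes the probability of a *complementary* label k (a class the example is known not to belong to) toward 0 via -log(1 - p_k). The function name and its per-example, pure-Python form are illustrative, not taken from any specific paper in the list below.

```python
import math

def negative_learning_loss(probs, complementary_label, eps=1e-12):
    """Per-example negative-learning loss.

    probs: predicted class probabilities (a softmax output), summing to 1.
    complementary_label: index k of a class this example does NOT belong to.

    L_NL = -log(1 - p_k), which is near 0 when the model already assigns
    low probability to the forbidden class, and grows without bound as
    the model becomes confidently wrong (p_k -> 1).
    """
    p_k = probs[complementary_label]
    # Clamp the argument of log to avoid math domain errors when p_k ~ 1.
    return -math.log(max(1.0 - p_k, eps))

# A confident wrong prediction on the forbidden class is penalized heavily,
# while a prediction that avoids it incurs almost no loss.
low_loss = negative_learning_loss([0.7, 0.2, 0.1], complementary_label=2)
high_loss = negative_learning_loss([0.1, 0.1, 0.8], complementary_label=2)
```

In practice this loss is averaged over a batch and over randomly sampled complementary labels, which is what makes it attractive under label noise: a randomly chosen "not this class" label is far more likely to be correct than a noisy positive label.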
Papers
STAD: Self-Training with Ambiguous Data for Low-Resource Relation Extraction
Junjie Yu, Xing Wang, Jiangjiang Zhao, Chunjie Yang, Wenliang Chen
Noise-Robust Bidirectional Learning with Dynamic Sample Reweighting
Chen-Chen Zong, Zheng-Tao Cao, Hong-Tao Guo, Yun Du, Ming-Kun Xie, Shao-Yuan Li, Sheng-Jun Huang