Unseen Class
Unseen class problems in machine learning address the challenge of classifying data points belonging to categories not encountered during model training. Current research focuses on adapting existing models, including those based on generative adversarial networks, prototype learning, and large language models, so they generalize to these unseen classes, often through techniques such as prompt tuning and contrastive learning. Handling unseen classes is crucial for building robust, adaptable AI systems that cope with real-world scenarios and evolving data distributions, with impact across fields ranging from image recognition and object detection to natural language processing and robotics.
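To make the core idea concrete, here is a minimal sketch of zero-shot classification via semantic embeddings, in the spirit of the prototype-learning approaches mentioned above. All names, the toy attribute vectors, and the `zero_shot_classify` helper are illustrative assumptions, not code from any of the listed papers: an input embedding is assigned to the class whose semantic (attribute or prompt) embedding it is most similar to, so a class never seen in training needs only a semantic description.

```python
import numpy as np

def zero_shot_classify(x, class_embeddings):
    """Assign x to the class whose semantic embedding is most similar.

    An unseen class needs only a semantic embedding (e.g. an attribute
    vector or a text-prompt embedding), not labeled training images.
    """
    names = list(class_embeddings)
    E = np.stack([class_embeddings[n] for n in names])
    # Cosine similarity between the input and each class embedding.
    sims = (E @ x) / (np.linalg.norm(E, axis=1) * np.linalg.norm(x) + 1e-9)
    return names[int(np.argmax(sims))]

# Hypothetical attribute dimensions: [has_stripes, has_mane, is_domestic].
class_embeddings = {
    "zebra": np.array([1.0, 0.0, 0.0]),  # "unseen": described, never trained on
    "horse": np.array([0.0, 1.0, 1.0]),
}
x = np.array([0.9, 0.1, 0.0])  # embedding of a striped-animal image
print(zero_shot_classify(x, class_embeddings))  # -> zebra
```

In practice the class embeddings come from a learned semantic encoder or a text encoder fed with class prompts, and the input embedding from a trained visual backbone; the nearest-neighbor step here stands in for that machinery.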
Papers
SEER-ZSL: Semantic Encoder-Enhanced Representations for Generalized Zero-Shot Learning
William Heyden, Habib Ullah, M. Salman Siddiqui, Fadi Al Machot
Spectral Prompt Tuning: Unveiling Unseen Classes for Zero-Shot Semantic Segmentation
Wenhao Xu, Rongtao Xu, Changwei Wang, Shibiao Xu, Li Guo, Man Zhang, Xiaopeng Zhang