Self-Supervised Representation Learning
Self-supervised representation learning aims to learn meaningful representations from unlabeled data by designing pretext tasks that exploit inherent structure or invariances in the data. Current research focuses on novel pretext tasks and architectures, including contrastive learning, masked modeling, and generative models (such as diffusion models and VAEs), along with variants that incorporate semantic information or temporal consistency, often within transformer-based frameworks. These advances improve performance on downstream tasks such as image classification, speech enhancement, and time series analysis, particularly where labeled data is scarce or expensive to obtain, and the resulting robust, generalizable representations are proving valuable across computer vision, natural language processing, and medical image analysis.
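To make the contrastive-learning idea mentioned above concrete, below is a minimal PyTorch sketch of the InfoNCE / NT-Xent objective used by SimCLR-style methods: two augmented views of the same input are treated as a positive pair, and all other samples in the batch serve as negatives. The `encoder` and augmentation functions referenced in the usage comment are hypothetical placeholders; any embedding network can stand in for them.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    Each pair (z1[i], z2[i]) is a positive; every other sample in the
    concatenated batch acts as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                       # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity from the softmax
    # Row i's positive is the other view of the same input:
    # i < N pairs with i + N, and i >= N pairs with i - N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Stand-in for encoder outputs on two augmentations of the same batch,
    # e.g. loss = info_nce_loss(encoder(aug1(x)), encoder(aug2(x))).
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(z1, z2))  # scalar loss on random embeddings
```

Lowering the temperature sharpens the softmax and penalizes hard negatives more aggressively; masked-modeling approaches replace this objective with reconstruction of hidden input regions, but the "label-free supervision from the data itself" principle is the same.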
Papers
Self-Supervised Representation Learning for CAD
Benjamin T. Jones, Michael Hu, Vladimir G. Kim, Adriana Schulz
Anomaly Detection Requires Better Representations
Tal Reiss, Niv Cohen, Eliahu Horwitz, Ron Abutbul, Yedid Hoshen
End-to-End Integration of Speech Recognition, Dereverberation, Beamforming, and Self-Supervised Learning Representation
Yoshiki Masuyama, Xuankai Chang, Samuele Cornell, Shinji Watanabe, Nobutaka Ono
Training set cleansing of backdoor poisoning by self-supervised representation learning
H. Wang, S. Karami, O. Dia, H. Ritter, E. Emamjomeh-Zadeh, J. Chen, Z. Xiang, D. J. Miller, G. Kesidis