Self-Supervised Representation Learning
Self-supervised representation learning aims to learn meaningful representations from unlabeled data by designing pretext tasks that exploit inherent structure or invariances in the data itself. Current research focuses on novel pretext tasks and architectures, including contrastive learning, masked modeling, and generative models (such as diffusion models and VAEs), along with variants that incorporate semantic information or temporal consistency, often within transformer-based frameworks. These methods improve performance on downstream tasks such as image classification, speech enhancement, and time series analysis, particularly where labeled data is scarce or expensive to obtain, and the resulting robust, generalizable representations are proving valuable across computer vision, natural language processing, and medical image analysis.
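To make the contrastive pretext task mentioned above concrete, here is a minimal sketch of an NT-Xent (SimCLR-style) loss in PyTorch. The batch size, embedding dimension, and temperature are illustrative assumptions, and the random tensors stand in for an encoder's projections of two augmented views; none of this is drawn from the specific papers listed below.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: [N, D] projections of two augmented views of the same batch.
    Positive pairs are (z1[i], z2[i]); all other samples act as negatives.
    """
    n = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D]
    sim = z @ z.t() / temperature                        # [2N, 2N] similarity logits
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity
    # For row i in the first half, its positive sits at index i + N, and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: hypothetical embeddings, with the second view a perturbed copy of the first.
z1 = torch.randn(8, 128)
z2 = z1 + 0.1 * torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())

Pulling positive pairs together while pushing apart all other samples in the batch is what lets the encoder learn from augmentation invariances alone, with no labels involved.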
Papers
Elevating Skeleton-Based Action Recognition with Efficient Multi-Modality Self-Supervision
Yiping Wei, Kunyu Peng, Alina Roitberg, Jiaming Zhang, Junwei Zheng, Ruiping Liu, Yufan Chen, Kailun Yang, Rainer Stiefelhagen
A Study of Forward-Forward Algorithm for Self-Supervised Learning
Jonas Brenig, Radu Timofte
SimFIR: A Simple Framework for Fisheye Image Rectification with Self-supervised Representation Learning
Hao Feng, Wendi Wang, Jiajun Deng, Wengang Zhou, Li Li, Houqiang Li
Identity-Seeking Self-Supervised Representation Learning for Generalizable Person Re-identification
Zhaopeng Dou, Zhongdao Wang, Yali Li, Shengjin Wang