Self-Supervised Representation Learning
Self-supervised representation learning aims to learn meaningful data representations from unlabeled data by designing pretext tasks that exploit inherent data structure or invariances. Current research focuses on novel pretext tasks and architectures, including contrastive learning, masked modeling, and generative models (such as diffusion models and VAEs), along with variants that incorporate semantic information or temporal consistency, often within transformer-based frameworks. These advances improve performance on downstream tasks such as image classification, speech enhancement, and time series analysis, particularly where labeled data is scarce or expensive to obtain. The resulting robust, generalizable representations are proving valuable across computer vision, natural language processing, and medical image analysis.
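To make the idea of a contrastive pretext task concrete, the sketch below shows an InfoNCE-style loss over two augmented views of the same batch: matching view pairs are pulled together while all other pairs in the batch act as negatives. This is a minimal illustration only; the function name, embedding shapes, and temperature value are assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch of a contrastive (InfoNCE-style) pretext objective.
# Assumes an encoder has already mapped two augmentations of the same
# batch to embeddings z1 and z2 (shapes here are illustrative).
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Pull matching view pairs together; push all other pairs apart."""
    z1 = F.normalize(z1, dim=1)            # unit-norm embeddings, shape (N, D)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: embeddings produced by any encoder from two augmented views of the same inputs.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```

Masked-modeling and generative pretext tasks follow the same pattern of deriving a supervisory signal from the data itself, but replace this contrastive loss with a reconstruction or denoising objective.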
Papers
Self-Supervised Representation Learning for Nerve Fiber Distribution Patterns in 3D-PLI
Alexander Oberstrass, Sascha E. A. Muenzing, Meiqi Niu, Nicola Palomero-Gallagher, Christian Schiffer, Markus Axer, Katrin Amunts, Timo Dickscheid
M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation
Fotios Lygerakis, Vedant Dave, Elmar Rueckert
Return of Unconditional Generation: A Self-supervised Representation Generation Method
Tianhong Li, Dina Katabi, Kaiming He
PointMoment: Mixed-Moment-based Self-Supervised Representation Learning for 3D Point Clouds
Xin Cao, Xinxin Han, Yifan Wang, Mengna Yang, Kang Li
PointJEM: Self-supervised Point Cloud Understanding for Reducing Feature Redundancy via Joint Entropy Maximization
Xin Cao, Huan Xia, Xinxin Han, Yifan Wang, Kang Li, Linzhi Su