Self-Supervision
Self-supervised learning (SSL) trains machine learning models on unlabeled data by using the data's inherent structure or internal relationships as supervisory signals, reducing reliance on expensive, time-consuming manual annotation. Current research develops SSL methods for diverse data types, including tabular data, images, videos, and point clouds. These methods often employ architectures such as Joint Embedding Predictive Architectures (JEPAs) and Vision Transformers (ViTs), together with techniques such as contrastive learning and, where the modality permits, data augmentation; a concrete sketch of the contrastive approach follows below. The widespread adoption of SSL promises to improve the efficiency and scalability of machine learning across fields ranging from medical image analysis and robotics to natural language processing and remote sensing.
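To make the contrastive-learning idea concrete, here is a minimal sketch of a SimCLR-style NT-Xent loss in PyTorch: two augmented views of each unlabeled input are embedded by a shared encoder, and matched pairs are pulled together while every other pair in the batch is pushed apart. The toy encoder, the noise-based stand-in for real augmentations, and the temperature value are illustrative assumptions, not drawn from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N inputs.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.T / temperature                         # (2N, 2N) cosine similarities
    # Exclude each embedding's similarity with itself from the softmax.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # Row i's positive is the other view of the same input: (i + N) mod 2N.
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)

# Minimal usage: a toy encoder, with additive noise in place of real augmentations.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x = torch.randn(8, 3, 32, 32)              # unlabeled batch of images
view1 = x + 0.1 * torch.randn_like(x)      # stand-in for augmentation 1
view2 = x + 0.1 * torch.randn_like(x)      # stand-in for augmentation 2
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()                            # gradients flow to the shared encoder
```

No labels appear anywhere: the pairing of the two views is itself the supervisory signal. The temperature controls how sharply the softmax weights the hardest negatives; values in the 0.1 to 0.5 range are common in the SimCLR literature.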
Papers
On Triangulation as a Form of Self-Supervision for 3D Human Pose Estimation
Soumava Kumar Roy, Leonardo Citraro, Sina Honari, Pascal Fua
PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision
Kehong Gong, Bingbing Li, Jianfeng Zhang, Tao Wang, Jing Huang, Michael Bi Mi, Jiashi Feng, Xinchao Wang
Nested Collaborative Learning for Long-Tailed Visual Recognition
Jun Li, Zichang Tan, Jun Wan, Zhen Lei, Guodong Guo