Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL matters because it can exploit the vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling. This makes it particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
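To make the idea of a pretext task concrete, below is a minimal sketch of one common SSL objective: a SimCLR-style contrastive loss (NT-Xent) in PyTorch, where two augmented views of the same unlabeled image must agree in representation space. The function name, batch size, and temperature here are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch of a contrastive pretext task (SimCLR-style NT-Xent loss).
# Illustrative only: names and hyperparameters are assumptions for this sketch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) projections of two augmentations of the same N images.
    Matching rows are positives; every other row in the batch is a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # a sample is never its own negative
    # For row i, the positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: encode two random augmentations of the same unlabeled batch with any
# encoder and minimize this loss; no labels are involved at any point.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

The key design point is that supervision comes entirely from the data itself: the pairing of the two views defines the targets, so the encoder learns transferable features without any manual labels.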
Papers
Exploring the Mutual Influence between Self-Supervised Single-Frame and Multi-Frame Depth Estimation
Jie Xiang, Yun Wang, Lifeng An, Haiyang Liu, Jian Liu
ContrastMotion: Self-supervised Scene Motion Learning for Large-Scale LiDAR Point Clouds
Xiangze Jia, Hui Zhou, Xinge Zhu, Yandong Guo, Ji Zhang, Yuexin Ma
Domain Adaptable Self-supervised Representation Learning on Remote Sensing Satellite Imagery
Muskaan Chopra, Prakash Chandra Chhipa, Gopal Mengi, Varun Gupta, Marcus Liwicki
CMID: A Unified Self-Supervised Learning Framework for Remote Sensing Image Understanding
Dilxat Muhtar, Xueliang Zhang, Pengfeng Xiao, Zhenshi Li, Feng Gu
Realistic Data Enrichment for Robust Image Segmentation in Histopathology
Sarah Cechnicka, James Ball, Hadrien Reynaud, Callum Arthurs, Candice Roufosse, Bernhard Kainz
SelfAct: Personalized Activity Recognition based on Self-Supervised and Active Learning
Luca Arrotta, Gabriele Civitarese, Samuele Valente, Claudio Bettini
The Second Monocular Depth Estimation Challenge
Jaime Spencer, C. Stella Qian, Michaela Trescakova, Chris Russell, Simon Hadfield, Erich W. Graf, Wendy J. Adams, Andrew J. Schofield, James Elder, Richard Bowden, Ali Anwar, Hao Chen, Xiaozhi Chen, Kai Cheng, Yuchao Dai, Huynh Thai Hoa, Sadat Hossain, Jianmian Huang, Mohan Jing, Bo Li, Chao Li, Baojun Li, Zhiwen Liu, Stefano Mattoccia, Siegfried Mercelis, Myungwoo Nam, Matteo Poggi, Xiaohua Qi, Jiahui Ren, Yang Tang, Fabio Tosi, Linh Trinh, S. M. Nadim Uddin, Khan Muhammad Umair, Kaixuan Wang, Yufei Wang, Yixing Wang, Mochu Xiang, Guangkai Xu, Wei Yin, Jun Yu, Qi Zhang, Chaoqiang Zhao
Tempo vs. Pitch: understanding self-supervised tempo estimation
Giovana Morais, Matthew E. P. Davies, Marcelo Queiroz, Magdalena Fuentes
A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation
Florian Bordes, Samuel Lavoie, Randall Balestriero, Nicolas Ballas, Pascal Vincent
Self-supervision for medical image classification: state-of-the-art performance with ~100 labeled training samples per class
Maximilian Nielsen, Laura Wenderoth, Thilo Sentker, René Werner