Self-Supervised Learning
Self-supervised learning (SSL) trains models on unlabeled data by designing pretext tasks that encourage them to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in leveraging vast amounts of readily available unlabeled data, which improves performance on downstream tasks and reduces reliance on expensive, time-consuming manual labeling, with particular impact on fields like medical imaging, speech processing, and autonomous driving.
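To make the pretext-task idea concrete, below is a minimal sketch of one widely used SSL objective: contrastive instance discrimination with an InfoNCE-style loss (in the spirit of SimCLR). The tiny MLP encoder and Gaussian-noise "augmentations" are illustrative placeholders, not taken from any paper listed below; real pipelines use domain-specific augmentations and larger encoders.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss: each embedding in z1 is pulled toward
    its augmented counterpart in z2 and pushed away from all other samples."""
    z1 = F.normalize(z1, dim=1)            # unit-length embeddings
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(z1.size(0))     # positive pairs lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a placeholder encoder applied to two augmented views of a batch.
encoder = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16)
)
x = torch.randn(8, 32)                     # unlabeled batch
view1 = x + 0.1 * torch.randn_like(x)      # stand-in for a real augmentation
view2 = x + 0.1 * torch.randn_like(x)
loss = info_nce_loss(encoder(view1), encoder(view2))
loss.backward()                            # gradients flow into the encoder; no labels needed
```

Note that no labels appear anywhere: the supervisory signal comes entirely from knowing which pairs of views originate from the same sample.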
Papers
MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision
Ruicheng Wang, Sicheng Xu, Cassie Dai, Jianfeng Xiang, Yu Deng, Xin Tong, Jiaolong Yang
Understanding Players as if They Are Talking to the Game in a Customized Language: A Pilot Study
Tianze Wang, Maryam Honari-Jahromi, Styliani Katsarou, Olga Mikheeva, Theodoros Panagiotakopoulos, Oleg Smirnov, Lele Cao, Sahar Asadi
AC-Mix: Self-Supervised Adaptation for Low-Resource Automatic Speech Recognition using Agnostic Contrastive Mixup
Carlos Carvalho, Alberto Abad
Self-supervised contrastive learning performs non-linear system identification
Rodrigo González Laiz, Tobias Schmidt, Steffen Schneider
Domain Adaptive Safety Filters via Deep Operator Learning
Lakshmideepakreddy Manda, Shaoru Chen, Mahyar Fazlyab
SAda-Net: A Self-Supervised Adaptive Stereo Estimation CNN For Remote Sensing Image Data
Dominik Hirner, Friedrich Fraundorfer
Self-Supervised Scene Flow Estimation with Point-Voxel Fusion and Surface Representation
Xuezhi Xiang, Xi Wang, Lei Zhang, Denis Ombati, Himaloy Himu, Xiantong Zhen
End-to-End Integration of Speech Emotion Recognition with Voice Activity Detection using Self-Supervised Learning Features
Natsuo Yamashita, Masaaki Yamamoto, Yohei Kawaguchi
Self-Supervised Learning of Disentangled Representations for Multivariate Time-Series
Ching Chang, Chiao-Tung Chan, Wei-Yao Wang, Wen-Chih Peng, Tien-Fu Chen
Enhancing Speech Emotion Recognition through Segmental Average Pooling of Self-Supervised Learning Features
Jonghwan Hyeon, Yung-Hwan Oh, Ho-Jin Choi
On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning
Bokun Wang, Yunwen Lei, Yiming Ying, Tianbao Yang
SegGrasp: Zero-Shot Task-Oriented Grasping via Semantic and Geometric Guided Segmentation
Haosheng Li, Weixin Mao, Weipeng Deng, Chenyu Meng, Rui Zhang, Fan Jia, Tiancai Wang, Haoqiang Fan, Hongan Wang, Xiaoming Deng