Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs for diverse modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to exploit vast amounts of readily available unlabeled data: it improves performance on downstream tasks and reduces reliance on expensive, time-consuming manual labeling, with particular impact in medical imaging, speech processing, and autonomous driving.
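To make the pretext-task idea concrete, the sketch below implements a minimal SimCLR-style contrastive objective in PyTorch: two random augmentations of the same unlabeled image are treated as a positive pair, while all other images in the batch act as negatives. This is an illustrative assumption of one common SSL setup, not the method of any paper listed below; the encoder choice, projection head, and hyperparameters (ResNet-18, 128-dim projection, temperature 0.5) are hypothetical.

# Minimal SimCLR-style contrastive pretext task (illustrative sketch only).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SimCLRModel(nn.Module):
    """ResNet-18 encoder plus a small projection head (hypothetical choices)."""
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # keep only the representation
        self.encoder = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, proj_dim),
        )

    def forward(self, x):
        return self.projector(self.encoder(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss: pull two views of the same image together,
    push apart all other images in the batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, D)
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))            # ignore self-similarity
    # The positive for sample i is its other augmented view (index i+n or i-n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch: view1 and view2 stand in for two random augmentations of the
# same batch of unlabeled images (in practice produced by torchvision transforms).
model = SimCLRModel()
view1, view2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
loss = nt_xent_loss(model(view1), model(view2))
loss.backward()

After pretraining with such an objective, the projection head is typically discarded and the encoder's representations are evaluated on a labeled downstream task (e.g., linear probing or fine-tuning), which is how the papers below measure representation quality.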
Papers
Multi-IMU with Online Self-Consistency for Freehand 3D Ultrasound Reconstruction
Mingyuan Luo, Xin Yang, Zhongnuo Yan, Junyu Li, Yuanji Zhang, Jiongquan Chen, Xindi Hu, Jikuan Qian, Jun Cheng, Dong Ni
Semantic Positive Pairs for Enhancing Visual Representation Learning of Instance Discrimination methods
Mohammad Alkhalefi, Georgios Leontidis, Mingjun Zhong
DUET: 2D Structured and Approximately Equivariant Representations
Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella
Multi-network Contrastive Learning Based on Global and Local Representations
Weiquan Li, Xianzhong Long, Yun Li
Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset
Yongjia Xu, Xinzheng Lu, Yifan Fei, Yuli Huang
SpeechGLUE: How Well Can Self-Supervised Speech Models Capture Linguistic Knowledge?
Takanori Ashihara, Takafumi Moriya, Kohei Matsuura, Tomohiro Tanaka, Yusuke Ijima, Taichi Asami, Marc Delcroix, Yukinori Honma
SMC-UDA: Structure-Modal Constraint for Unsupervised Cross-Domain Renal Segmentation
Zhusi Zhong, Jie Li, Lulu Bi, Li Yang, Ihab Kamel, Rama Chellappa, Xinbo Gao, Harrison Bai, Zhicheng Jiao