Visual Representation Learning
Visual representation learning aims to create effective numerical representations of images, enabling computers to "understand" and process visual information. Current research focuses heavily on self-supervised learning methods that leverage architectures such as Vision Transformers (ViTs) and convolutional neural networks (CNNs), often combining contrastive learning, masked image modeling, and techniques such as prompt tuning to improve representation quality. By providing more robust and generalizable visual features, these advances are driving progress in diverse applications, including image classification, object detection, medical image analysis, and robotic manipulation.
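To make the contrastive-learning idea mentioned above concrete, here is a minimal NumPy sketch of an NT-Xent-style loss (the normalized temperature-scaled cross-entropy objective popularized by SimCLR-type methods). Each image yields two augmented views; their embeddings form a positive pair, and all other embeddings in the batch serve as negatives. The function name and parameters are illustrative, not from any paper listed here.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over a batch of positive pairs.

    z1, z2: (n, d) arrays; z1[i] and z2[i] are embeddings of two
    augmented views of the same image (a positive pair).
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # Row i's positive sits at index i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy: pull positives together, push all others apart.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views of each image embed near each other, the loss is low; when pairings are random, it is high. This pressure, applied across large batches, is what yields the transferable features the survey paragraph describes.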
Papers
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang
Visual Robotic Manipulation with Depth-Aware Pretraining
Wanying Wang, Jinming Li, Yichen Zhu, Zhiyuan Xu, Zhengping Che, Yaxin Peng, Chaomin Shen, Dong Liu, Feifei Feng, Jian Tang