Pre-Trained Representations
Pre-trained representations leverage the knowledge encoded in models trained on massive datasets to improve performance on downstream tasks, reducing the need for extensive task-specific data and training time. Current research focuses on adapting these representations to applications in robotics, natural language processing, and computer vision, often using transformer-based architectures and self-supervised objectives such as contrastive learning and masked autoencoding. By providing strong initial representations, this approach advances areas such as visual reinforcement learning and few-shot learning, improving sample efficiency and generalization. The resulting models show greater robustness and stronger performance across a range of tasks, benefiting both scientific understanding and practical deployment.
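As a rough illustration of how such representations are typically reused, the sketch below freezes a backbone pre-trained on a large dataset and trains only a small task-specific head (a linear probe). This is a minimal sketch assuming PyTorch and torchvision are available; the label count and the dummy batch are hypothetical placeholders for a real downstream dataset, not a method from any of the papers listed below.

```python
# Minimal sketch: adapting a pre-trained representation to a downstream task
# by freezing the backbone and learning only a small task head.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical downstream label count

# Load a backbone pre-trained on a large dataset (ImageNet weights here).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained representation so only the new head is updated.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classifier with a task-specific linear head (trainable by default).
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for real data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Full fine-tuning (unfreezing the backbone with a smaller learning rate) is the common alternative when more downstream data is available; the linear-probe setup above is the cheaper option when data is scarce.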
Papers
Manipulate by Seeing: Creating Manipulation Controllers from Pre-Trained Representations
Jianren Wang, Sudeep Dasari, Mohan Kumar Srirama, Shubham Tulsiani, Abhinav Gupta
Leveraging Pretrained Representations with Task-related Keywords for Alzheimer's Disease Detection
Jinchao Li, Kaitao Song, Junan Li, Bo Zheng, Dongsheng Li, Xixin Wu, Xunying Liu, Helen Meng