Task-Agnostic Representation
Task-agnostic representation learning aims to learn feature representations that are useful across a wide range of downstream tasks, minimizing the need for task-specific model architectures. Current research focuses on self-supervised learning methods, leveraging models such as transformers and structured state space models, and employing techniques such as contrastive learning and weight-space embedding. By reducing the need for extensive task-specific training data and model adjustments, this approach promises improved efficiency and generalization in applications including computer vision, natural language processing, and robotics. The resulting task-agnostic representations also offer potential benefits for continual learning and robust model development.
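To make the contrastive-learning idea mentioned above concrete, here is a minimal sketch of an InfoNCE-style (NT-Xent) objective, the loss family commonly used in self-supervised representation learning. It is illustrative only: the function name, temperature value, and toy embeddings are assumptions, not taken from any of the papers listed below, and the embeddings would normally come from an encoder such as a transformer backbone.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE / NT-Xent loss over two batches of embeddings.

    z1, z2: (batch, dim) embeddings of two augmented views of the
    same inputs. Positive pairs are (z1[i], z2[i]); every other
    pairing in the batch serves as a negative.
    """
    # Cosine similarity via L2-normalized dot products.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # (batch, batch) similarity matrix
    # Positives sit on the diagonal, so the target class is the row index.
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetrized cross-entropy: each view predicts its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    # Toy usage with random embeddings standing in for encoder outputs.
    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    print(info_nce_loss(z1, z2).item())
```

Because the loss depends only on the embeddings, not on any task labels, the same objective can pretrain a single encoder whose representations are then reused across downstream tasks.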
Papers
Do BERTs Learn to Use Browser User Interface? Exploring Multi-Step Tasks with Unified Vision-and-Language BERTs
Taichi Iki, Akiko Aizawa
Contrastive Learning of Sociopragmatic Meaning in Social Media
Chiyu Zhang, Muhammad Abdul-Mageed, Ganesh Jawahar
Task-Agnostic Robust Representation Learning
A. Tuan Nguyen, Ser Nam Lim, Philip Torr