Task-Agnostic Representation
Task-agnostic representation learning aims to produce feature representations that are useful across a wide range of downstream tasks, minimizing the need for task-specific model architectures. Current research focuses on self-supervised learning methods, building on models such as transformers and structured state space models, and on techniques such as contrastive learning and weight-space embeddings. By reducing the need for extensive task-specific training data and model adjustments, this approach promises better efficiency and generalization across applications including computer vision, natural language processing, and robotics. Task-agnostic representations also offer potential benefits for continual learning and robust model development.
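To make the contrastive learning technique mentioned above concrete, the following is a minimal PyTorch sketch of a SimCLR-style NT-Xent objective, one common way to train task-agnostic encoders from two augmented views of the same inputs. The function name, embedding dimensions, and temperature are illustrative assumptions, not drawn from any of the papers listed below.

```python
# Minimal sketch of a contrastive (SimCLR-style NT-Xent) objective.
# Illustrative only; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N inputs."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    # The positive for sample i is its other view: index i+N (or i-N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors stand in for augmented encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

In practice, z1 and z2 would come from an encoder applied to two random augmentations of each input; the encoder trained this way yields the reusable, task-agnostic representation.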
Papers
Abnormality-Driven Representation Learning for Radiology Imaging
Marta Ligero, Tim Lenz, Georg Wölflein, Omar S.M. El Nahhas, Daniel Truhn, Jakob Nikolas Kather
VisualLens: Personalization through Visual History
Wang Bill Zhu, Deqing Fu, Kai Sun, Yi Lu, Zhaojiang Lin, Seungwhan Moon, Kanika Narang, Mustafa Canim, Yue Liu, Anuj Kumar, Xin Luna Dong