Representation Learner
Representation learning aims to create models that automatically extract meaningful features from data, improving downstream task performance. Current research focuses on developing novel architectures and algorithms, including diffusion models, transformer-based approaches, and contrastive learning methods, often within self-supervised or continual learning frameworks. These advancements are driving improvements in various applications, such as image classification, object detection, and multi-document summarization, by enabling more robust and efficient learning from complex data. The resulting high-quality representations are proving valuable for both scientific understanding and practical deployment of AI systems.
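As a concrete illustration of the contrastive learning methods mentioned above, the sketch below implements a minimal InfoNCE-style loss in NumPy: two augmented "views" of the same sample form a positive pair, all other pairings in the batch act as negatives, and the loss is low when matching views have similar embeddings. This is a hedged sketch, not any specific paper's formulation; the function name and the toy perturbation are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (sketch, not a reference impl)."""
    # Normalize embeddings so similarities are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by temperature
    logits = z_a @ z_b.T / temperature
    # Row-wise log-softmax (with max subtraction for numerical stability)
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: view i of sample i should match view i
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Aligned views (a sample and a slightly perturbed copy) give a low loss;
# unrelated embeddings give a high one.
loss_aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
```

In a real self-supervised pipeline the two views would come from data augmentation and the embeddings from a trained encoder, but the loss structure is the same.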