Generic Representation
Generic representation learning aims to produce feature representations that transfer across diverse tasks and domains, reducing the need for task-specific training. Current research focuses on robust pre-trained models that generate these generic representations for various data modalities (images, text, audio, EEG), often leveraging techniques such as contrastive learning, multi-teacher distillation, and transformer architectures. This work matters because it promises more efficient and generalizable machine learning models, with impact in fields ranging from anomaly detection and brain-computer interfaces to robotics and user interface design.
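As a concrete illustration of the contrastive approach mentioned above, the sketch below implements a symmetric InfoNCE-style loss, a common objective for learning modality-agnostic embeddings. This is a minimal sketch assuming PyTorch; the function name `info_nce_loss`, the tensor shapes, and the temperature value are illustrative choices, not taken from any specific paper in this collection.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    z_a, z_b: (batch, dim) embeddings of two views of the same inputs
    (e.g. two augmentations, or two modalities of the same example).
    Matching rows are positives; all other rows serve as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # Cosine-similarity matrix between every pair in the batch.
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Average both directions so the loss is symmetric in the two views.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage: embeddings could come from any encoder (image, text, audio, EEG).
batch, dim = 32, 128
loss = info_nce_loss(torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```

Pulling matched pairs together while pushing apart everything else in the batch is what encourages the encoder to capture features that remain useful beyond any single downstream task.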