Unified Pre-Training
Unified pre-training aims to learn a single, generalizable representation from diverse data sources so that one large-scale pre-trained model can be reused across many downstream tasks. Current research focuses on unified frameworks that handle multiple modalities (e.g., images, text, audio, and graph data) and on adapting pre-trained models to specific tasks with lightweight techniques such as prompt tuning and task hypergraphs. By reducing the need for task-specific training data and bespoke model architectures, this approach promises better efficiency and generalization in fields such as computer vision, natural language processing, and recommender systems.
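As an illustration of the kind of lightweight adaptation mentioned above, the sketch below shows soft prompt tuning in PyTorch: the pre-trained backbone is frozen, and only a small set of prompt embeddings plus a task head are trained for the downstream task. The `PretrainedEncoder`, its dimensions, and names such as `n_prompt_tokens` are hypothetical placeholders for illustration, not taken from any specific paper in this collection.

```python
# Minimal sketch of soft prompt tuning, assuming a generic frozen transformer
# backbone; all class and parameter names here are illustrative.
import torch
import torch.nn as nn


class PretrainedEncoder(nn.Module):
    """Hypothetical stand-in for a large pre-trained transformer backbone."""

    def __init__(self, vocab_size=1000, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_embeddings):
        return self.encoder(token_embeddings)


class PromptTunedClassifier(nn.Module):
    """Freeze the backbone; train only soft prompt vectors and a task head."""

    def __init__(self, backbone, n_prompt_tokens=8, d_model=64, n_classes=2):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # keep pre-trained weights fixed
            p.requires_grad = False
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, input_ids):
        tok = self.backbone.embed(input_ids)                       # (B, T, D)
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        hidden = self.backbone(torch.cat([prompt, tok], dim=1))    # prepend prompts
        return self.head(hidden.mean(dim=1))                       # pooled logits


if __name__ == "__main__":
    model = PromptTunedClassifier(PretrainedEncoder())
    logits = model(torch.randint(0, 1000, (4, 16)))                # dummy batch
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(logits.shape, "trainable params:", trainable)
```

The point of the design is the parameter count printed at the end: only the prompt embeddings and the task head are updated, so the same frozen backbone can serve many tasks with a small per-task footprint.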
Papers
18 papers on this topic, published between March 24, 2022 and October 18, 2024.