Downstream Forecasting Tasks
Downstream forecasting tasks leverage pre-trained models or learned representations to improve predictions in specific application domains, ranging from healthcare and energy to speech processing and computer vision. Current research emphasizes robust and efficient methods for adapting foundation models, such as large language models (LLMs) and self-supervised models, to diverse downstream tasks, often through contrastive learning, transfer learning, and the design of effective interfaces between pre-trained backbones and task-specific prediction heads. These efforts aim to improve prediction accuracy, reduce reliance on large labeled datasets, and address fairness and robustness concerns, ultimately yielding more effective and reliable predictive systems.
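As a minimal sketch of the adaptation pattern described above, the PyTorch snippet below freezes a pre-trained encoder and trains only a lightweight task-specific forecasting head on top of it. The class names (DownstreamForecaster, ForecastingHead), the stand-in encoder, and all dimensions are illustrative assumptions, not the method of any paper listed here.

```python
import torch
import torch.nn as nn

class ForecastingHead(nn.Module):
    """Task-specific head mapping pooled encoder features to a forecast horizon."""
    def __init__(self, feature_dim: int, horizon: int):
        super().__init__()
        self.proj = nn.Linear(feature_dim, horizon)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.proj(features)

class DownstreamForecaster(nn.Module):
    """Wraps a frozen pre-trained encoder with a small trainable prediction head."""
    def __init__(self, encoder: nn.Module, feature_dim: int, horizon: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # transfer learning: only the head is trained
        self.head = ForecastingHead(feature_dim, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.encoder(x)      # (batch, seq_len, feature_dim)
        pooled = feats.mean(dim=1)       # mean-pool as a simple encoder-to-head interface
        return self.head(pooled)         # (batch, horizon)

# Stand-in encoder; in practice this would be a pre-trained foundation model.
encoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 64))
model = DownstreamForecaster(encoder, feature_dim=64, horizon=12)
out = model(torch.randn(4, 32, 8))  # 4 series, 32 past steps, 8 channels -> (4, 12)
```

Because the encoder stays frozen, only the head's few parameters require labeled data, which is one way such pipelines reduce reliance on large labeled datasets.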
Papers
Multiscale Video Pretraining for Long-Term Activity Forecasting
Reuben Tan, Matthias De Lange, Michael Iuzzolino, Bryan A. Plummer, Kate Saenko, Karl Ridgeway, Lorenzo Torresani
On the Connection between Pre-training Data Diversity and Fine-tuning Robustness
Vivek Ramanujan, Thao Nguyen, Sewoong Oh, Ludwig Schmidt, Ali Farhadi