Downstream Task
A "downstream task" refers to a secondary machine learning task that leverages the knowledge learned by a pre-trained model (often a large language model or foundation model) on a primary task. Current research focuses on improving the performance and robustness of these downstream tasks, addressing issues like bias propagation, efficient fine-tuning (e.g., using adapters or low-rank methods), and ensuring generalizability across diverse datasets and domains. This area is significant because it determines the practical applicability of powerful foundation models, impacting fields ranging from medical image analysis and natural language processing to remote sensing and materials science.
Papers
Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs
Lu Yin, Ajay Jaiswal, Shiwei Liu, Souvik Kundu, Zhangyang Wang
Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj