Task-Specific
Task-specific adaptation of large language models (LLMs) and other foundation models is a major focus of current research, aiming to improve efficiency and performance on diverse downstream tasks without retraining the entire model. Work in this area develops techniques such as parameter-efficient fine-tuning (e.g., low-rank adaptation, knowledge distillation), data-efficient coreset selection, and the generation of synthetic task-specific datasets. These advances are crucial for deploying LLMs in resource-constrained environments and for tailoring models to specific application domains, with impact on fields ranging from robotics and healthcare to natural language processing and computer vision.
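To make the idea of parameter-efficient fine-tuning concrete, here is a minimal sketch of low-rank adaptation: instead of updating a full weight matrix W, one trains a small low-rank update B @ A that is added to the frozen W. This is an illustrative toy in pure Python, not code from any of the listed papers; all function and variable names are invented for the example.

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_effective_weight(W, A, B, alpha=1.0):
    """Return W + (alpha / r) * (B @ A), where r is the shared low rank.

    W: frozen (d_out x d_in) pretrained weight.
    B: trainable (d_out x r) matrix, typically initialized to zeros.
    A: trainable (r x d_in) matrix.
    Only A and B are updated during fine-tuning, i.e. r * (d_out + d_in)
    parameters instead of all d_out * d_in entries of W.
    """
    r = len(A)  # number of rows of A = the low rank
    delta = matmul(B, A)
    return [[W[i][j] + (alpha / r) * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

# With B initialized to zeros the adapted weight equals the original,
# so fine-tuning starts exactly from the pretrained model.
W = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0], [0.0]]   # d_out x r, with r = 1
A = [[0.0, 0.0]]     # r x d_in
assert lora_effective_weight(W, A, B) == W
```

Because only A and B are trained, many task-specific adapters can be stored and swapped for a single frozen base model, which is what makes the approach attractive in resource-constrained settings.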
Papers
LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging
Ke Wang, Nikolaos Dimitriadis, Alessandro Favero, Guillermo Ortiz-Jimenez, Francois Fleuret, Pascal Frossard
Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration
Qintong Li, Jiahui Gao, Sheng Wang, Renjie Pi, Xueliang Zhao, Chuan Wu, Xin Jiang, Zhenguo Li, Lingpeng Kong