Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, yielding better performance at far lower cost than training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), along with techniques that address catastrophic forgetting and calibration issues, often employing bilevel optimization or adaptive noise allocation to improve performance and privacy. This work matters because it enables powerful LLMs to be deployed across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
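To make the LoRA idea concrete: instead of updating a full weight matrix W, one freezes W and trains a low-rank correction B·A with rank r much smaller than the layer dimensions. The following is a minimal NumPy sketch of that idea (not the implementation from any paper listed below); the class name, dimensions, and scaling convention are illustrative assumptions.

```python
import numpy as np

class LoRALinear:
    """Frozen dense weight W plus a trainable low-rank update scale * (B @ A)."""

    def __init__(self, d_in, d_out, r=2, alpha=4, seed=0):
        rng = np.random.default_rng(seed)
        # Pre-trained weight: frozen during fine-tuning.
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
        # Low-rank factors: A gets a small random init, B starts at zero,
        # so the LoRA path contributes nothing before training begins.
        self.A = 0.01 * rng.standard_normal((r, d_in))
        self.B = np.zeros((d_out, r))
        self.scale = alpha / r

    def __call__(self, x):
        # y = x W^T + scale * x A^T B^T
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=16, d_out=8, r=2)
x = np.ones((1, 16))
# B is zero-initialized, so the output initially equals the frozen layer's:
assert np.allclose(layer(x), x @ layer.W.T)
```

The payoff is in parameter count: here the full weight has 16 × 8 = 128 entries, while the trainable LoRA factors have only r × (16 + 8) = 48, and the gap widens sharply at transformer scale.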
Papers
Efficient Data-Sketches and Fine-Tuning for Early Detection of Distributional Drift in Medical Imaging
Yusen Wu, Hao Chen, Alex Pissinou Makki, Phuong Nguyen, Yelena Yesha
5%>100%: Breaking Performance Shackles of Full Fine-Tuning on Visual Recognition Tasks
Dongshuo Yin, Leiyi Hu, Bin Li, Youqun Zhang, Xue Yang
SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training
Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, Yunchao Wei
API-guided Dataset Synthesis to Finetune Large Code Models
Zongjie Li, Daoyuan Wu, Shuai Wang, Zhendong Su
S-SAM: SVD-based Fine-Tuning of Segment Anything Model for Medical Image Segmentation
Jay N. Paranjape, Shameema Sikder, S. Swaroop Vedula, Vishal M. Patel
Can We Rely on LLM Agents to Draft Long-Horizon Plans? Let's Take TravelPlanner as an Example
Yanan Chen, Ali Pesaranghader, Tanmana Sadhu, Dong Hoon Yi
A New Pipeline For Generating Instruction Dataset via RAG and Self Fine-Tuning
Chih-Wei Song, Yu-Kai Lee, Yin-Te Tsai