Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, improving performance at a fraction of the cost of training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), along with techniques that address challenges like catastrophic forgetting and calibration, often employing bilevel optimization or adaptive noise allocation to improve performance and privacy. This work is significant because it enables powerful LLMs to be deployed across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
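To make the LoRA idea concrete, here is a minimal sketch in PyTorch: instead of updating a pre-trained weight matrix W, LoRA freezes W and learns a rank-r update scaled by alpha/r, so only the small factors A and B are trained. The class name LoRALinear and the hyperparameter values below are illustrative choices, not code from any of the listed papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    h = W x + (alpha / r) * B (A x), with A of shape (r, in) and B of shape (out, r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B is zero-initialized so the model is unchanged at the start of fine-tuning
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap an existing layer; only the low-rank factors receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]  # just lora_A and lora_B
```

Because only the rank-r factors are optimized, the number of trainable parameters drops from in*out to r*(in+out), which is why LoRA-style methods are attractive when compute or memory is constrained.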
Papers
Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review
Masatoshi Uehara, Yulai Zhao, Tommaso Biancalani, Sergey Levine
Can Open-Source LLMs Compete with Commercial Models? Exploring the Few-Shot Performance of Current GPT Models in Biomedical Tasks
Samy Ateia, Udo Kruschwitz
Probing the Efficacy of Federated Parameter-Efficient Fine-Tuning of Vision Transformers for Medical Image Classification
Naif Alkhunaizi, Faris Almalik, Rouqaiah Al-Refai, Muzammal Naseer, Karthik Nandakumar
Exploring connections of spectral analysis and transfer learning in medical imaging
Yucheng Lu, Dovile Juodelyte, Jonathan D. Victor, Veronika Cheplygina
Automatic Pruning of Fine-tuning Datasets for Transformer-based Language Models
Mohammadreza Tayaranian, Seyyed Hasan Mozafari, Brett H. Meyer, James J. Clark, Warren J. Gross
Investigating Public Fine-Tuning Datasets: A Complex Review of Current Practices from a Construction Perspective
Runyuan Ma, Wei Li, Fukai Shang
Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization
Jinlong Li, Dong Zhao, Zequn Jie, Elisa Ricci, Lin Ma, Nicu Sebe
Adversarial-MidiBERT: Symbolic Music Understanding Model Based on Unbias Pre-training and Mask Fine-tuning
Zijian Zhao
AnyTaskTune: Advanced Domain-Specific Solutions through Task-Fine-Tuning
Jiaxi Cui, Wentao Zhang, Jing Tang, Xudong Tong, Zhenwei Zhang, Amie, Jing Wen, Rongsheng Wang, Pengfei Wu
Learn and Don't Forget: Adding a New Language to ASR Foundation Models
Mengjie Qian, Siyuan Tang, Rao Ma, Kate M. Knill, Mark J. F. Gales