Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, achieving strong task performance at a fraction of the cost of training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), along with techniques that address catastrophic forgetting and calibration, often via bilevel optimization (to resist overfitting) or adaptive noise allocation (to preserve privacy). This work is significant because it enables powerful LLMs to be deployed across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
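Since several of the papers below build on LoRA, a minimal sketch of the core idea may help: the pre-trained weight matrix W is frozen, and only a low-rank update is learned, giving W + (alpha/r)·BA with rank r much smaller than the layer width. The sketch below assumes PyTorch; the LoRALinear class, its initialization, and the hyperparameters r and alpha are illustrative defaults, not the implementation from any specific paper listed here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    h = W x + (alpha / r) * B A x, with A of shape (r, d_in) and B of shape (d_out, r)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        # A gets a small random init; B starts at zero, so the update
        # is zero at initialization and the model is unchanged before training.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap a projection from a pre-trained model, then optimize only lora_A / lora_B.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
x = torch.randn(2, 768)
print(layer(x).shape)  # torch.Size([2, 768])
```

Because only A and B are trained, the number of trainable parameters drops from d_out·d_in to r·(d_in + d_out), which is the source of LoRA's efficiency; methods such as BiLoRA and AdaFish in the list below modify how these low-rank factors are optimized.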
Papers
What explains the success of cross-modal fine-tuning with ORCA?
Paloma García-de-Herreros, Vagrant Gautam, Philipp Slusallek, Dietrich Klakow, Marius Mosbach
HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models
Wenqiao Zhang, Tianwei Lin, Jiang Liu, Fangxun Shu, Haoyuan Li, Lei Zhang, He Wanggui, Hao Zhou, Zheqi Lv, Hao Jiang, Juncheng Li, Siliang Tang, Yueting Zhuang
FissionFusion: Fast Geometric Generation and Hierarchical Souping for Medical Image Analysis
Santosh Sanjeev, Nuren Zhaksylyk, Ibrahim Almakky, Anees Ur Rehman Hashmi, Mohammad Areeb Qazi, Mohammad Yaqub
Adaptive Ensembles of Fine-Tuned Transformers for LLM-Generated Text Detection
Zhixin Lai, Xuesheng Zhang, Suiyao Chen
Technical Report: Competition Solution For BetterMixture
Shuaijiang Zhao, Xiaoquan Fang
SIFT-DBT: Self-supervised Initialization and Fine-Tuning for Imbalanced Digital Breast Tomosynthesis Image Classification
Yuexi Du, Regina J. Hooley, John Lewin, Nicha C. Dvornek
AdaFish: Fast low-rank parameter-efficient fine-tuning by using second-order information
Jiang Hu, Quanzheng Li
Quantifying uncertainty in lung cancer segmentation with foundation models applied to mixed-domain datasets
Aneesh Rangnekar, Nishant Nadkarni, Jue Jiang, Harini Veeraraghavan
Generalizable and Stable Finetuning of Pretrained Language Models on Low-Resource Texts
Sai Ashish Somayajula, Youwei Liang, Abhishek Singh, Li Zhang, Pengtao Xie
BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models
Rushi Qiang, Ruiyi Zhang, Pengtao Xie
Fine-Tuning Pre-trained Language Models to Detect In-Game Trash Talks
Daniel Fesalbon, Arvin De La Cruz, Marvin Mallari, Nelson Rodelas
LASPA: Latent Spatial Alignment for Fast Training-free Single Image Editing
Yazeed Alharbi, Peter Wonka
FOCIL: Finetune-and-Freeze for Online Class Incremental Learning by Training Randomly Pruned Sparse Experts
Murat Onur Yildirim, Elif Ceren Gok Yildirim, Decebal Constantin Mocanu, Joaquin Vanschoren
An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model
Yuxin Tian, Mouxing Yang, Yunfan Li, Dayiheng Liu, Xingzhang Ren, Xi Peng, Jiancheng Lv