Fine-Tuning
Fine-tuning adapts pre-trained large language models (LLMs) to specific tasks, reaching strong task performance at a fraction of the cost of training from scratch. Current research emphasizes parameter-efficient methods such as low-rank adaptation (LoRA), alongside techniques that address catastrophic forgetting and calibration issues, often using bilevel optimization or adaptive noise allocation to improve performance and preserve privacy. This work is significant because it enables powerful LLMs to be deployed across diverse applications, from medical diagnosis to visual editing, while mitigating resource constraints and privacy concerns.
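To make the LoRA idea concrete, here is a minimal sketch of a low-rank adapter around a frozen linear layer, following the standard LoRA formulation rather than any specific paper listed below; the class name `LoRALinear` and the hyperparameters (`r`, `alpha`) are illustrative assumptions, not code from these works.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = x W^T + (alpha / r) * x A^T B^T, where A is (r x in) and B is (out x r)."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pretrained weights stay frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # B starts at zero, so the adapted model initially matches the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Only the low-rank factors receive gradients, so the trainable parameter
# count is 2 * r * d instead of d * d for a square layer.
layer = LoRALinear(768, 768, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288 trainable parameters vs. 589824 in the full weight matrix
```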
Papers
Comparative Analysis of Different Efficient Fine Tuning Methods of Large Language Models (LLMs) in Low-Resource Setting
Krishna Prasad Varadarajan Srinivasan, Prasanth Gumpena, Madhusudhana Yattapu, Vishal H. Brahmbhatt
Mining the Explainability and Generalization: Fact Verification Based on Self-Instruction
Guangyao Lu, Yulin Liu
Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process
Ermo Hua, Biqing Qi, Kaiyan Zhang, Yue Yu, Ning Ding, Xingtai Lv, Kai Tian, Bowen Zhou
Rethinking Overlooked Aspects in Vision-Language Models
Yuan Liu, Le Tian, Xiao Zhou, Jie Zhou
FeTT: Continual Class Incremental Learning via Feature Transformation Tuning
Sunyuan Qiang, Xuxin Lin, Yanyan Liang, Jun Wan, Du Zhang
EnterpriseEM: Fine-tuned Embeddings for Enterprise Semantic Search
Kamalkumar Rathinasamy, Jayarama Nettar, Amit Kumar, Vishal Manchanda, Arun Vijayakumar, Ayush Kataria, Venkateshprasanna Manjunath, Chidambaram GS, Jaskirat Singh Sodhi, Shoeb Shaikh, Wasim Akhtar Khan, Prashant Singh, Tanishq Dattatray Ige, Vipin Tiwari, Rajab Ali Mondal, Harshini K, S Reka, Chetana Amancharla, Faiz ur Rahman, Harikrishnan P A, Indraneel Saha, Bhavya Tiwary, Navin Shankar Patel, Pradeep T S, Balaji A J, Priyapravas, Mohammed Rafee Tarafdar
Unveiling Key Aspects of Fine-Tuning in Sentence Embeddings: A Representation Rank Analysis
Euna Jung, Jaeill Kim, Jungmin Ko, Jinwoo Park, Wonjong Rhee
TriLoRA: Integrating SVD for Advanced Style Personalization in Text-to-Image Generation
Chengcheng Feng, Mu He, Qiuyu Tian, Haojie Yin, Xiaofang Zhao, Hongwei Tang, Xingqiang Wei
LoRA Learns Less and Forgets Less
Dan Biderman, Jacob Portes, Jose Javier Gonzalez Ortiz, Mansheej Paul, Philip Greengard, Connor Jennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, John P. Cunningham
SA-FedLora: Adaptive Parameter Allocation for Efficient Federated Learning with LoRA Tuning
Yuning Yang, Xiaohong Liu, Tianrun Gao, Xiaodong Xu, Guangyu Wang
A safety realignment framework via subspace-oriented model fusion for large language models
Xin Yi, Shunfan Zheng, Linlin Wang, Xiaoling Wang, Liang He
MEDVOC: Vocabulary Adaptation for Fine-tuning Pre-trained Language Models on Medical Text Summarization
Gunjan Balde, Soumyadeep Roy, Mainack Mondal, Niloy Ganguly
Refining Joint Text and Source Code Embeddings for Retrieval Task with Parameter-Efficient Fine-Tuning
Karim Galliamov, Leila Khaertdinova, Karina Denisova