Parameter-Efficient Fine-Tuning
Parameter-efficient fine-tuning (PEFT) adapts large pre-trained models to specific downstream tasks while training only a small fraction of their parameters, reducing computational cost and memory requirements. Current research focuses on improving the efficiency and effectiveness of PEFT methods through techniques such as low-rank matrix and tensor decompositions (e.g., LoRA and its variants), selective layer training, and novel parameter initialization strategies. These advances are significant because they enable the deployment of large language models and other foundation models on resource-constrained devices and support more efficient, sustainable model adaptation across diverse applications.
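To make the low-rank idea concrete, below is a minimal sketch of a LoRA-style adapter, assuming PyTorch; the class name LoRALinear and the hyperparameters r and alpha are illustrative choices for this sketch, not taken from any specific paper listed here.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pre-trained weights stay frozen
        self.scaling = alpha / r
        # A gets small random values and B starts at zero, so the adapted layer
        # initially behaves exactly like the pre-trained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Only the low-rank factors are trainable, a small fraction of the total parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")

In this sketch only the two rank-8 factors (about 12K parameters) are updated during fine-tuning, while the roughly 590K frozen base weights are left untouched, which is the core trade-off PEFT methods exploit.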
Papers
Hate Speech Detection and Target Identification in Devanagari Languages via Parameter Efficient Fine-Tuning of LLMs
Rushendra Sidibomma, Pransh Patwa, Parth Patwa, Aman Chadha, Vinija Jain, Amitava Das
Semantic Hierarchical Prompt Tuning for Parameter-Efficient Fine-Tuning
Haowei Zhu, Fangyuan Zhang, Rui Qin, Tianxiang Pan, Junhai Yong, Bin Wang
Parameter-efficient Fine-tuning for improved Convolutional Baseline for Brain Tumor Segmentation in Sub-Saharan Africa Adult Glioma Dataset
Bijay Adhikari, Pratibha Kulung, Jakesh Bohaju, Laxmi Kanta Poudel, Confidence Raymond, Dong Zhang, Udunna C Anazodo, Bishesh Khanal, Mahesh Shakya
Refining Salience-Aware Sparse Fine-Tuning Strategies for Language Models
Xinxin Liu, Aaron Thomas, Cheng Zhang, Jianyi Cheng, Yiren Zhao, Xitong Gao
Mixture of Physical Priors Adapter for Parameter-Efficient Fine-Tuning
Zhaozhi Wang, Conghu Li, Qixiang Ye, Tong Zhang
LoRA Diffusion: Zero-Shot LoRA Synthesis for Diffusion Model Personalization
Ethan Smith, Rami Seid, Alberto Hojel, Paramita Mishra, Jianbo Wu
A Comprehensive Evaluation of Large Language Models on Aspect-Based Sentiment Analysis
Changzhi Zhou, Dandan Song, Yuhang Tian, Zhijing Wu, Hao Wang, Xinyu Zhang, Jun Yang, Ziyi Yang, Shuhao Zhang