LLM Adaptation
LLM adaptation, the process of tailoring large language models (LLMs) to specific tasks or user preferences, aims to improve performance and efficiency across diverse applications. Current research focuses on parameter-efficient fine-tuning techniques, such as low-rank adaptation and methods employing mixtures of experts or attention-head modifications, which minimize computational cost and memory overhead while preserving accuracy. These advances are crucial for deploying LLMs on resource-constrained devices and for mitigating the risks of adapting models with potentially malicious data. The resulting gains in efficiency and controllability matter both for the scientific understanding of LLMs and for their practical deployment across industries.
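To make the parameter-efficient idea concrete, the sketch below illustrates low-rank adaptation (LoRA) in generic PyTorch: the pretrained weight matrix is frozen and only a small low-rank correction is trained. This is a minimal illustration, not the method of any paper listed here; the class name, rank, and scaling values are illustrative choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay fixed
        # Low-rank factors: A projects down to rank r, B projects back up (zero-initialized).
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction; only lora_a and lora_b get gradients.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Example: adapting a single 768x768 projection trains ~12k parameters
# instead of the ~590k in the frozen base layer.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")
```

In practice such adapters are typically inserted into the attention projections of a transformer, so the trainable parameter count stays a small fraction of the full model while the frozen backbone is shared across tasks.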
Papers
SouLLMate: An Application Enhancing Diverse Mental Health Support with Adaptive LLMs, Prompt Engineering, and RAG Techniques
Qiming Guo, Jinwen Tang, Wenbo Sun, Haoteng Tang, Yi Shang, Wenlu Wang
SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation
Junyu Luo, Xiao Luo, Xiusi Chen, Zhiping Xiao, Wei Ju, Ming Zhang