Adaptation Concern
Adaptation in machine learning concerns efficiently tailoring large pre-trained models to specific tasks or domains without retraining the entire model. Current research heavily emphasizes low-rank adaptation (LoRA) and its variants, typically applied to transformer-based models such as LLMs and diffusion models, to achieve parameter efficiency while maintaining or improving task performance. The area matters because it addresses the computational cost and memory limitations of fine-tuning massive models, enabling their broader application and deployment across diverse tasks and resource-constrained environments. Bias mitigation and improved adaptation strategies within these frameworks are also actively investigated.
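Since the summary centers on low-rank adaptation, the sketch below illustrates the core LoRA idea: the pre-trained weight matrix is frozen and a trainable low-rank update BA is added alongside it, so only r·(d_in + d_out) parameters are learned per layer. This is a minimal, generic sketch assuming PyTorch; the class name LoRALinear and the rank and alpha hyperparameters are illustrative choices, not taken from any of the papers listed below.

```python
# Minimal LoRA sketch (illustrative, not from any listed paper):
# freeze a pre-trained linear layer and learn only a low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay fixed
        # Trainable low-rank factors: B starts at zero so the adapted
        # model initially matches the frozen base model.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank correction x (BA)^T.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing layer; only A and B receive gradients.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(2, 768))
```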
Papers
Stochastic Dynamic Power Dispatch with High Generalization and Few-Shot Adaption via Contextual Meta Graph Reinforcement Learning
Bairong Deng, Tao Yu, Zhenning Pan, Xuehan Zhang, Yufeng Wu, Qiaoyi Ding
Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition
Yu Yu, Chao-Han Huck Yang, Tuan Dinh, Sungho Ryu, Jari Kolehmainen, Roger Ren, Denis Filimonov, Prashanth G. Shivakumar, Ankur Gandhe, Ariya Rastrow, Jia Xu, Ivan Bulyko, Andreas Stolcke
How Much Is Hidden in the NAS Benchmarks? Few-Shot Adaptation of a NAS Predictor
Hrushikesh Loya, Łukasz Dudziak, Abhinav Mehrotra, Royson Lee, Javier Fernandez-Marques, Nicholas D. Lane, Hongkai Wen
HiPA: Enabling One-Step Text-to-Image Diffusion Models via High-Frequency-Promoting Adaptation
Yifan Zhang, Bryan Hooi
Low-Rank Adaptation for Multilingual Summarization: An Empirical Study
Chenxi Whitehouse, Fantine Huot, Jasmijn Bastings, Mostafa Dehghani, Chu-Cheng Lin, Mirella Lapata
SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation
Yinuo Wang, Kai Chen, Weimin Yuan, Cai Meng, XiangZhi Bai