Adaptation Concern
Adaptation in machine learning concerns efficiently tailoring large pre-trained models to specific tasks or domains without retraining the entire model. Current research heavily emphasizes low-rank adaptation (LoRA) techniques and their variants, often applied to transformer-based models such as LLMs and diffusion models, to achieve parameter efficiency and improved performance. This research area is significant because it addresses the computational cost and memory limitations of fine-tuning massive models, enabling broader application and deployment of advanced AI systems across diverse tasks and resource-constrained environments. Investigations into bias mitigation and improved adaptation strategies within these frameworks are also actively pursued.
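The low-rank adaptation idea referenced above can be illustrated with a minimal sketch: the frozen pretrained weight W is augmented by a low-rank product B·A, and only A and B are trained. The function name, dimensions, and scaling convention below are illustrative assumptions, not code from any of the listed papers.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Linear layer with a LoRA update.

    W (d_out x d_in) is the frozen pretrained weight. A (r x d_in) and
    B (d_out x r) are the only trainable parameters; their product is a
    rank-<=r delta. Scaling by alpha / r follows the common LoRA convention.
    """
    r = A.shape[0]
    delta = (alpha / r) * (B @ A)  # low-rank weight update
    return x @ (W + delta).T

# Tiny example: d_in=4, d_out=3, rank r=2
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))        # frozen pretrained weight
A = rng.standard_normal((2, 4)) * 0.01 # trainable down-projection
B = np.zeros((3, 2))                   # trainable up-projection, init to zero
x = rng.standard_normal((5, 4))

# With B initialized to zero the delta vanishes, so the adapted layer
# reproduces the frozen model exactly at the start of fine-tuning.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only A and B are updated, the trainable parameter count scales with r·(d_in + d_out) rather than d_in·d_out, which is where the memory savings come from for large layers.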
Papers
Federated Low-Rank Adaptation with Differential Privacy over Wireless Networks
Tianqu Kang, Zixin Wang, Hengtao He, Jun Zhang, Shenghui Song, Khaled B. Letaief
Uncertainty-Aware Test-Time Adaptation for Inverse Consistent Diffeomorphic Lung Image Registration
Muhammad F. A. Chaudhary, Stephanie M. Aguilera, Arie Nakhmani, Joseph M. Reinhardt, Surya P. Bhatt, Sandeep Bodduluri
Dynamic Detection of Relevant Objectives and Adaptation to Preference Drifts in Interactive Evolutionary Multi-Objective Optimization
Seyed Mahdi Shavarani, Mahmoud Golabi, Richard Allmendinger, Lhassane Idoumghar
Variational Low-Rank Adaptation Using IVON
Bai Cong, Nico Daheim, Yuesong Shen, Daniel Cremers, Rio Yokota, Mohammad Emtiyaz Khan, Thomas Möllenhoff
Enhancing Bronchoscopy Depth Estimation through Synthetic-to-Real Domain Adaptation
Qingyao Tian, Huai Liao, Xinyan Huang, Lujie Li, Hongbin Liu
Robust and Efficient Fine-tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation
Vaibhav Seth, Arinjay Pathak, Ayan Sengupta, Natraj Raman, Sriram Gopalakrishnan, Tanmoy Chakraborty
Vocal Sandbox: Continual Learning and Adaptation for Situated Human-Robot Collaboration
Jennifer Grannen, Siddharth Karamcheti, Suvir Mirchandani, Percy Liang, Dorsa Sadigh
Dynamic Weight Adjusting Deep Q-Networks for Real-Time Environmental Adaptation
Xinhao Zhang, Jinghan Zhang, Wujun Si, Kunpeng Liu
Offline Reinforcement Learning and Sequence Modeling for Downlink Link Adaptation
Samuele Peri, Alessio Russo, Gabor Fodor, Pablo Soldati
Towards Robust and Efficient Federated Low-Rank Adaptation with Heterogeneous Clients
Jabin Koo, Minwoo Jang, Jungseul Ok
Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation
Daehee Lee, Minjong Yoo, Woo Kyung Kim, Wonje Choi, Honguk Woo