Model Adaptation
Model adaptation focuses on efficiently modifying pre-trained models so they perform well on new, unseen data or tasks, avoiding the cost of retraining from scratch. Current research emphasizes techniques such as meta-learning, adapter modules (e.g., SE/BN adapters), and prompt tuning to achieve parameter-efficient adaptation, often addressing challenges such as concept drift, distribution shift, and limited target data; a minimal sketch of one representative approach appears below. These advances are crucial for improving the robustness and generalizability of machine learning models across diverse real-world applications, including autonomous driving, image recognition, and natural language processing, while minimizing computational cost and data requirements.
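The common thread among these techniques is updating only a small, carefully chosen subset of parameters on target data while keeping the backbone frozen. The sketch below illustrates one representative form of this idea: test-time adaptation that tunes only BatchNorm affine parameters by minimizing prediction entropy on unlabeled target batches. It is a generic PyTorch illustration under assumed names (`collect_bn_params`, `adapt_on_batch`), not the specific method of the papers listed here.

```python
# Minimal sketch of parameter-efficient test-time adaptation:
# freeze the backbone, update only BatchNorm affine parameters
# by minimizing prediction entropy on unlabeled target batches.
import torch
import torch.nn as nn
import torchvision.models as models

def collect_bn_params(model: nn.Module):
    """Return only the BatchNorm affine parameters (weight, bias) for adaptation."""
    params = []
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            params += [module.weight, module.bias]
    return params

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictions (unsupervised adaptation loss)."""
    log_probs = logits.log_softmax(dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

# Stand-in backbone; in practice, load your own pre-trained weights here.
model = models.resnet18(weights=None)
model.train()  # keep BN layers re-estimating running statistics on target data

# Freeze everything except the BN affine parameters.
for p in model.parameters():
    p.requires_grad_(False)
bn_params = collect_bn_params(model)
for p in bn_params:
    p.requires_grad_(True)

optimizer = torch.optim.SGD(bn_params, lr=1e-3, momentum=0.9)

def adapt_on_batch(x: torch.Tensor) -> torch.Tensor:
    """One test-time adaptation step on an unlabeled target batch; returns logits."""
    logits = model(x)
    loss = entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()

# Example: adapt on a random batch standing in for distribution-shifted target data.
dummy_batch = torch.randn(16, 3, 224, 224)
preds = adapt_on_batch(dummy_batch).argmax(dim=1)
```

Because only the BN weights and biases receive gradients, the number of adapted parameters is a tiny fraction of the model, which keeps per-batch adaptation cheap and reduces the risk of forgetting the source-domain behavior.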
Papers
BMD: A General Class-balanced Multicentric Dynamic Prototype Strategy for Source-free Domain Adaptation
Sanqing Qu, Guang Chen, Jing Zhang, Zhijun Li, Wei He, Dacheng Tao
Efficient Test-Time Model Adaptation without Forgetting
Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, Mingkui Tan